LWF 2.2 Poll! Yes or No!


  • #16
    Originally posted by kalografik
    But is it photorealistic?
    Maybe not. And you're totally right: films are very far from being linear; it's only computers that behave "linearly". But it's so much easier to work, comp, and do post when you keep your workflow linear, and once you have a full-float output it doesn't take much to regain that "log" look.
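The point about keeping the comp linear can be illustrated with a toy example (plain Python, made-up pixel values, not V-Ray code): adding light in linear space and gamma-encoding once at the end behaves differently from adding already gamma-encoded values.

```python
def encode_gamma(v, gamma=2.2):
    """Apply a display gamma curve to a linear-light value in [0, 1]."""
    return v ** (1.0 / gamma)

# Two linear-light contributions, e.g. two render elements being comped
a_lin = 0.18
b_lin = 0.18

# Linear workflow: add in linear space, encode once for display
linear_comp = encode_gamma(a_lin + b_lin)

# Non-linear: encode each value first, then add the display values
encoded_comp = encode_gamma(a_lin) + encode_gamma(b_lin)

print(linear_comp)   # physically consistent sum
print(encoded_comp)  # brighter than it should be, and can exceed 1.0
```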



    • #17
      I'm still in the undecided camp. I spent weeks researching LWF, getting my head around it, but just didn't get very good results in the end. I'm yet to be convinced to make the switch.
      James Hall
      Technical Director
      New Zealand
      www.jhall.nz



      • #18
        Originally posted by kalografik
        This happens only in archviz and is far from realism or photorealism
        ...don't forget, digital arch viz isn't only about photorealism; it is heavily influenced by traditional arch viz, which has thousands of years of precedents behind it.



        • #19
          I find it very useful. But I still have no idea why it is specifically set to 2.2. I mean, I know all about the curves, the LCD/CRT standards and whatnot, but I find many screens react differently to this specific value, and when I started reading about screen calibration things got even more confusing.

          The way I see it is: Maxwell has a gamma value, and Maxwell produces stunning images. As I started using LWF in V-Ray, my renderings started looking more like Maxwell ones. 'Nuff said.
          Dusan Bosnjak
          http://www.dusanbosnjak.com/



          • #20
            Originally posted by studioDIM
            It still has a 2.2 gamma applied, which someone referred to as LWF (not me, mind you).
            Along the lines of what I was curious about from the original poster: is he referring to applying a 2.2 gamma to a linear color space and saving it as a floating-point format, or is he applying a 2.2 gamma to a linear color space and saving as an 8-bit format?

            The latter, of course, creates an image that is similar to a linear one with a gamma of 1, but without the extra info it would have if it were a float image.

            ...hell, maybe I am not using the term floating point correctly.
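The difference being asked about can be shown with a small sketch (plain Python, made-up pixel value): baking the 2.2 gamma into an 8-bit file quantizes the result to 256 levels, while a float file keeps the full value.

```python
def to_8bit(v):
    """Quantize a [0, 1] value to 8 bits and back (256 possible levels)."""
    return round(v * 255) / 255

linear = 0.01234                   # a dark linear-light pixel
gamma_float = linear ** (1 / 2.2)  # gamma 2.2 burned in, kept as float
gamma_8bit = to_8bit(gamma_float)  # same curve, but saved to 8 bits

# Undoing the gamma shows what survived the round trip
back_float = gamma_float ** 2.2    # recovers the original value
back_8bit = gamma_8bit ** 2.2      # snapped to the nearest 8-bit step
```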



            • #21
              Well, with Chris Nichols' recent Gnomonology video I saw that there is a difference between applying the gamma correction on screen while working with a float image, and doing it the other way, so I went back to the non-float method and it does wonders for my noise issues. Mind you, that will all change once we get the new version of V-Ray.



              • #22
                Even with all of the many threads, videos, and PDF docs on the topic, I have consistently been overwhelmed by the LWF workflow, though I see that it clearly works wonders for so many of you. Every so often I give it a try, get all sorts of unpredictable results, and shamefully return to gamma 1.0. *Unpredictable to me, of course. Too much science for my feeble mind, apparently.

                Granted, I'm generally not aiming for photorealism as such.



                • #23
                  Originally posted by Sawyer
                  Well, with Chris Nichols' recent Gnomonology video I saw that there is a difference between applying the gamma correction on screen while working with a float image, and doing it the other way, so I went back to the non-float method and it does wonders for my noise issues. Mind you, that will all change once we get the new version of V-Ray.
                  Hmmm... I guess I will completely thread-jack.

                  So working with LWF, but saving with a 2.2 gamma, reduces your noise when compared to true float?

                  ...or does working with a non-float image, but a linear color space and a gamma of 1.0, reduce your noise? That would make perfect sense, since you would need to bounce a lot more light around the scene to get a similar light level. Your scene would take longer to render this way, but probably less time than trying to reduce the noise with the rQMC sampler.



                  • #24
                    Originally posted by crazy homeless guy
                    ...or does working with a non-float image, but a linear color space and a gamma of 1.0, reduce your noise? That would make perfect sense, since you would need to bounce a lot more light around the scene to get a similar light level. Your scene would take longer to render this way, but probably less time than trying to reduce the noise with the rQMC sampler.
                    There is a new option in the next release that fixes this issue (which I never knew I had; thanks, Chris). We will have to wait.



                    • #25
                      We jumped on the bandwagon a couple of weeks ago with a few new jobs.

                      We ain't going back.



                      • #26
                        "We jumped on the bandwagon a couple of weeks ago with a few new jobs.

                        We ain't going back."


                        Same here!



                        • #27
                          OK, I'll try another shot.
                          Here are some thoughts and arguments of mine you may find useful:

                          - Max, V-Ray, Photoshop, monitor settings, VGA drivers, printers: all this stuff is designed to reproduce light as close as possible to its physical characteristics as known to science, of course with many restrictions because of hardware imperfections and historically developed limits.

                          - Max and V-Ray are not perfect, and maybe there are tons of algorithms still to be added to bring their behavior closer to real-life light distribution. Remember how excited we were by the "realistic" balls of 3D Studio 4 for DOS. Despite this, they are adjusted to be accurate to the physical characteristics of light. Gamma 2.2 could be implemented as a default setting, but it isn't for now.

                          - When you apply a gamma different from 1.0, you push devices to act differently from the way they were designed and matched to each other. Then you leave the science that is trying to imitate light. It's like putting a "curve algorithm filter glass" in front of your properly designed lens. The funny thing is that this method was called "linear" because of V-Ray's color mapping method name. It's not linear. Look at the color swatches in Max: all the darks are pushed nonlinearly to the borders. You get wide midtone areas there, and this is exactly how light and shadow spread in your scene. You get to override overburns, but you get pushed-out shadows too, similar to Photoshop's "Shadows/Highlights" or "Curves".

                          - You get excellent results because we are adaptable by nature and want to please our clients.

                          - Archviz: I'm making the same images as the rest of the archvizers, and I accept all the "easy way" stuff, "cheating tips & tricks" etc., because I want to please my clients too, and all the deadlines have already passed. I do not waste time achieving a "photorealistic" look; it's not needed or appreciated when working in archviz. But I know what "realistic" is, and LWF/2.2 is not the method to achieve it. Better to use the old CG lighting techniques, or at least a gamma relative to the current scene and your personal taste when in a hurry.

                          - LWF is excellent for tests during modeling and mapping, because you can see all the details in the dark, but not when texturing and final lighting are required.

                          - When well controlled, darks and burns are not a problem, and neither are noisy shadows. Smooth images are appreciated mainly by other CG craftsmen.

                          - Render times: of course LWF is faster, because sampling shadows takes time. If you then darken the scene despite LWF, your render times will be similar to an equivalent lighting setup without LWF (I haven't really tested these guesses of mine). LWF changes the lighting of the scene and is not a post process like color mapping; this is the reason for the low render times. When using LWF, shadows generally stay only between very close surfaces, dramatically reduced by the drastic 2.2 gamma. Midtones prevail, so the engine mostly has to calculate diffuse lighting.

                          - 2.2 is extreme: it pushes shadows deep into the corners. Maybe 1.4-1.8 if needed; 1.0 preferred.
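The claim above about shadows being pushed out is easy to see numerically; a quick sketch (plain Python, arbitrary sample values) shows how much more a 2.2 gamma lifts darks than mids or highlights:

```python
# A 2.2 display gamma lifts dark linear values far more than bright ones,
# which is why shadows appear pulled up out of the corners.
for linear in (0.01, 0.1, 0.5, 1.0):
    display = linear ** (1 / 2.2)
    print(f"linear {linear:>5} -> display {display:.3f}")
```

A linear 0.01 gains roughly a factor of twelve on screen, while 0.5 gains less than half its value and 1.0 does not move at all.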

                          I'm tired of writing in English. It's so difficult.

                          Best regards to all.



                          • #28
                            Originally posted by crazy homeless guy
                            Originally posted by studioDIM
                            It still has a 2.2 gamma applied, which someone referred to as LWF (not me, mind you).
                            Along the lines of what I was curious about from the original poster: is he referring to applying a 2.2 gamma to a linear color space and saving it as a floating-point format, or is he applying a 2.2 gamma to a linear color space and saving as an 8-bit format?

                            The latter, of course, creates an image that is similar to a linear one with a gamma of 1, but without the extra info it would have if it were a float image.

                            ...hell, maybe I am not using the term floating point correctly.
                            Well, when you save an FP image (an unclamped, 32-bit-per-channel image) you can tell it to burn in a gamma curve.
                            The point of FP imagery, though, is that you have plenty of room to work on it later in your post application of choice (be it Nuke, Fusion, Combustion, Shake, etc.).
                            Most times, the 2.2 gamma applied at render time is useful only for the frame buffer display, as you'll want to save the FP image in linear mode (i.e. gamma 1.0) to touch it up later.
                            As for my tutorial, I did not call it LWF.
                            Since the whole point of darkening materials was to speed up rendering and get strong contrast, applying a 2.2 gamma has the only effect of brightening up the very, very dark tones to an eye-pleasing level.
                            Whether that is LWF or not I wouldn't know; I can hardly stand labels.
                            Surely enough, it brightens what was darkened in the first place.

                            Lele
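The display-only gamma described in this post can be sketched like so (plain Python, hypothetical function name, not actual V-Ray code): the saved float file stays linear, and the 2.2 curve exists only in the viewer.

```python
def framebuffer_view(pixel, display_gamma=2.2):
    """Display-only transform: applied for viewing, never baked into the file."""
    return pixel ** (1.0 / display_gamma)

linear_pixel = 0.2                   # unclamped linear value from the renderer
saved_value = linear_pixel           # the float file stays linear (gamma 1.0)
shown_value = framebuffer_view(linear_pixel)  # brightened only on screen
```

Because the saved file is still linear, a post package can apply its own view transform and grade freely without double-correcting.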



                            • #29
                              Originally posted by kalografik
                              - LWF is excellent for tests during modeling and mapping. You can see all the details in the dark. But not when texturing and final lighting are required.
                              Why not?

                              I have to agree with Lele... everything around LWF gets too complicated and confusing, mostly due to semantics, IMO!
                              Nuno de Castro

                              www.ene-digital.com
                              nuno@ene-digital.com
                              00351 917593145



                              • #30
                                Lele,
                                of course it's amazing that the LWF method has been developed and well explained. I highly appreciate LWF; it's always good to have alternatives. I hope you take my words positively.

                                I have some questions I'd be glad to find answers to, if you know them:
                                - How many bits per channel are the VRImg format and the V-Ray frame buffer?
                                - How many bits per channel is the OpenEXR file format?
                                - How many bits per channel is the Max 9 frame buffer?
                                - Why, when converting a VRImg ("clamp" and "sub-pixel mapping" unchecked) to OpenEXR, does the file size drop significantly (sometimes nine to ten times)? Does it depend on some kind of compression, or on the method used to store the floating-point data?
                                - Why, when opened in Photoshop, is the converted OpenEXR not as "dynamic" as the same image in the V-Ray frame buffer?
                                - Are there any bugs when converting VRImg to OpenEXR related to the bit depth?
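On the bit-depth questions, one likely contributor to the size drop can be sketched with the standard library (Python's struct module, made-up pixel value): OpenEXR channels are commonly written as 16-bit "half" floats, while a raw float buffer is 32 bits per channel, so conversion alone can roughly halve the data before EXR's lossless compression even runs. (The exact VRImg and Max 9 buffer depths would need confirming against the official docs.)

```python
import struct

value = 123.4567                       # a bright, unclamped channel value

half_bytes = struct.pack('<e', value)  # 16-bit "half" float (2 bytes)
full_bytes = struct.pack('<f', value)  # 32-bit float (4 bytes)

half_back = struct.unpack('<e', half_bytes)[0]

print(len(half_bytes), len(full_bytes))  # 2 4
print(half_back)                         # some precision is lost vs 123.4567
```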

