Request: Film Response Curves (Comparing Octane to V-Ray)


  • #61
    Yes, of course.
    The camera response curves are a flat ASCII list, so they would require conversion to a (1D) LUT format.
    The request is to add them as a color-mapping choice for the VFB, on the basis that another render engine somehow looks better, and has them.
    However, those findings seem quite flawed, both in method and in results, so I fail to see the benefit of a non-editable, opaque color remap list (albeit one with fancy, exotic camera names).
    Vlado will likely just build a converter from those curves and be done with it.
    We'll then have to deal with more tech-support requests because the renders don't match Octane's color mapping, or something along those lines.
    And this while it's been amply demonstrated that the two engines don't even follow the same approach, and that the curves only skew image brightness (so no tonal embellishment whatsoever).
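    For reference, such a conversion is mechanical. A minimal Python sketch, assuming the source file holds one normalised float per line (the file names, the `TITLE` string, and the replication of the curve across R, G and B are my assumptions, not any shipping tool's behavior):

```python
# Hypothetical file names; the source curve is assumed to be one
# normalised float per line, as in the flat ASCII lists discussed above.
def ascii_curve_to_cube(src_path, dst_path, title="film_response"):
    # Read one float per line, skipping blanks.
    with open(src_path) as f:
        samples = [float(line) for line in f if line.strip()]
    with open(dst_path, "w") as f:
        f.write(f'TITLE "{title}"\n')
        f.write(f"LUT_1D_SIZE {len(samples)}\n")
        # The same value goes to R, G and B: a 1D curve can only
        # remap brightness, never shift hue on its own.
        for v in samples:
            f.write(f"{v:.6f} {v:.6f} {v:.6f}\n")
    return len(samples)
```

    Because the same value lands in all three channels, a LUT built this way can only remap brightness; it cannot introduce a hue skew by itself.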
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #62
      My thoughts exactly. Why would you want to bake that into an image? It can't be undone.

      It's the equivalent of recording an audio signal through a guitar amp because you like the effect - the downside being that once it's done, it's done.

      Workflows (imo) should always aim for the cleanest results possible, giving you greater scope to control creative effects afterwards.
      Check out my (rarely updated) blog @ http://macviz.blogspot.co.uk/

      www.robertslimbrick.com

      Cache nothing. Brute force everything.



      • #63
        Originally posted by ^Lele^ View Post
        Mental Ray, however, wipes the floor with everyone else.
        Once i turned final gather on, it started rendering faster than the speed of light.
        I found my renders done the day before i started them.
        Oh, the irony. What if Mental Ray turns out to be really good and fast? Well, is it?
        https://www.behance.net/Oliver_Kossatz



        • #64
          Eheh, i doubt it'll recover from years of arrested development by itself...
          iRay, however, was pretty good last i tried it.
          Will soon be playing with the NVidia beta, and see what happens with it.
          Lele
          Trouble Stirrer in RnD @ Chaos
          ----------------------
          emanuele.lecchi@chaos.com

          Disclaimer:
          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



          • #65
            Originally posted by ^Lele^ View Post
            Eheh, i doubt it'll recover from years of arrested development by itself...
            iRay, however, was pretty good last i tried it.
            Will soon be playing with the NVidia beta, and see what happens with it.
            I've been playing with iRay for about a year. The quality was among the best and, apparently, I was able to output the most photorealistic renders I've ever done.
            Yet when using V-Ray on the same scene, trying to match settings as closely as possible to iRay's, I simply cannot match that photorealism. Not even to date.
            There's something I'm definitely missing there... and I think it's in how the exposure is balanced, even though the 32-bit linear output (raw image) should be the same. Perhaps the mid-tones are toned differently or something?

            Other than that, iRay was a pain to work with. It missed loads of stuff, it was heavily underdeveloped and slow (at that time; not that slow anymore), and you had to wait for Autodesk to implement any change into Max once a year, when a new Max version was released. Now (this year) Nvidia has finally decided to make the development independent from Autodesk, so we'll hopefully see more updates and bug fixes.
            It's a nice toy to play with, but it will hardly keep up with V-Ray (RT), I think.

            And now that we're speaking of features the VFB provides... well, the VFB is light years ahead of iRay's frame buffer, which is basically the standard Max frame buffer.
            The only things I envy iRay for are its tone mapping and that advanced photorealistic material they have (which V-Ray doesn't have yet). Maybe it's time for V-Ray to develop a new photorealistic shader rather than keep building on the old VRayMtl?
            Last edited by peteristrate; 11-08-2015, 11:12 AM.
            CGI studio: www.moxels.com

            i7-4930K @ 3,4GHz watercooled
            Asus Rampage V Extreme Black Edition Intel X79
            64 GB DDR3 Corsair Vengeance Jet Black 1866 MHz Quad Channel
            8 * GTX Titan X
            960GB SSD Crucial M500 SATA-3
            nVidia drivers: always latest
            Windows 10 up to date



            • #66
              I don't get all the discussions about the realism of V-Ray. It is realistic, period. In the end it all comes down to the artist's skills. The artist makes the image, not the paintbrush (i.e. V-Ray). When used right, V-Ray is as realistic as it can get, as various galleries out there show.
              https://www.behance.net/Oliver_Kossatz



              • #67
                Well, in all my comparisons between iRay and V-Ray RT, I was getting identical images. So until I get an actual example of a difference, I don't see what else I can do.

                Best regards,
                Vlado
                I only act like I know everything, Rogers.



                • #68
                  Originally posted by ^Lele^ View Post
                  Besides this, given the camera response curves are mono-tonal (ie, they simply represent a brightness curve which ain't linear, but kinky due to the patchy response of chemical film across the ranges), there is no way applying those to a vray render would produce a hue skew like the one you described for Octane.
                  Actually, gamma correction does shift hue. Throw a linear EXR into Nuke and compare a pixel before and after a gamma node: its hue will shift. That's why it's best to adjust saturation in gamma-corrected space rather than in linear space; you then saturate the hue of the gamma-corrected image, not that of the linear image.
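                  This is quick to verify without Nuke. A small Python sketch (the pixel values are arbitrary; `colorsys` is used for the HSV hue readout):

```python
import colorsys

def hue_deg(rgb):
    """HSV hue of an RGB triple, in degrees."""
    return colorsys.rgb_to_hsv(*rgb)[0] * 360.0

def apply_gamma(rgb, g=2.2):
    # Per-channel power function, as a gamma node applies it.
    return tuple(c ** (1.0 / g) for c in rgb)

linear = (0.50, 0.25, 0.10)       # a desaturated orange, linear values
encoded = apply_gamma(linear)

# The per-channel curve changes the channel ratios, so the HSV hue moves:
# hue_deg(linear) is 22.5 degrees, hue_deg(encoded) about 28.8 degrees.

gray = apply_gamma((0.5, 0.5, 0.5))
# Equal channels stay equal under any per-channel curve, so a gray
# pixel keeps zero saturation and its hue cannot shift.
```

                  Note the gray case: equal channels stay equal after the curve, so only colored pixels see the hue shift.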

                  I also disagree with your vehement objection to subjective image response curves. Film might be dead, but Kodak spent 100 years on their 'color science' and it looks gorgeous. It should always be an option but I love the look of filmconvert on Red Dragon footage for instance. While you "should" take a render into nuke or some other product for your final color, if you aren't then I can completely see why it would be desirable to have a subjectively superior image 'out of the camera'. After all Alexa is far less linear and accurate than RED but cinematographers have been picking Arri over RED because of "color".

                  Also we shouldn't be too obsessed with "correctness" since after all we aren't even doing spectral rendering. RGB is already quite the hack and has its blind spots.
                  Gavin Greenwalt
                  im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
                  Straightface Studios



                  • #69
                    Originally posted by kosso_olli View Post
                    I don't get all the discussions about the realism of V-Ray. It is realistic, period. In the end it all comes down to the artist's skills. The artist makes the image, not the paintbrush (i.e. V-Ray). When used right, V-Ray is as realistic as it can get, as various galleries out there show.
                    So probably the biggest thing all of this has revealed to me is that the artist's skills solely determine the result.
                    I've simply done TONS more testing in Octane. After spending hours doing the same thing, I've come to the conclusion that none of this really matters anyway. I've managed to reproduce the "look" Octane was giving simply by adjusting shaders and light intensity. It's just a matter of learning a trick or two to create some visual differences, then shader and lighting work as usual.
                    Here is an example of where I'm at comparing the two.
                    There are so many subtle differences between the shaders at this point that it's pointless to even call this a comparison, but it does show that you can reproduce the look of something (artistically) with zero understanding of the science of radiometric equations :P
                    Also, maybe I'm being too harsh on V-Ray RT's speed because, once again, I haven't tested it heavily, but I'm definitely going to give things a try now! Some of the lacking features are obvious to me though, and it would be amazing if RT worked the same as the progressive render! One day...
                    The V-Ray FB is amazing though, and I would be more than happy to simply instagram the renders with LUTs instead of a dedicated feature.
                    After all this testing, I can honestly say I prefer V-Ray now...
                    admin@masteringcgi.com.au

                    ----------------------
                    Mastering CGI
                    CGSociety Folio
                    CREAM Studios
                    Mastering V-Ray Thread



                    • #70
                      Originally posted by im.thatoneguy View Post
                      Actually, gamma correction does shift hue. Throw a linear EXR into Nuke and compare a pixel before and after a gamma node: its hue will shift. That's why it's best to adjust saturation in gamma-corrected space rather than in linear space; you then saturate the hue of the gamma-corrected image, not that of the linear image.
                      Not if the input is gray.
                      In the above example, the light is white and the shader is a white diffuse.
                      Where would gamma change hues the way Octane does?

                      I also disagree with your vehement objection to subjective image response curves. Film might be dead, but Kodak spent 100 years on their 'color science' and it looks gorgeous.
                      "Looks gorgeous" is subjective. And I may counter that those 100 years of Kodak film science are in the past, no more than a foundation for what we do today.

                      It should always be an option but I love the look of filmconvert on Red Dragon footage for instance. While you "should" take a render into nuke or some other product for your final color, if you aren't then I can completely see why it would be desirable to have a subjectively superior image 'out of the camera'. After all Alexa is far less linear and accurate than RED but cinematographers have been picking Arri over RED because of "color".
                      More subjectivity. People can choose to lemming their way to the bottom of the ocean,
                      and god knows I've seen that time and again in VFX, with or without digital cameras.
                      Hype, or personal preference, has no place in science.
                      Choice is free; the results of maths, in other words, aren't.

                      Also we shouldn't be too obsessed with "correctness" since after all we aren't even doing spectral rendering. RGB is already quite the hack and has its blind spots.
                      I think we very well should.
                      Correctness here means the ability of a renderer to precisely match a (normalised!) reference under the same display conditions.
                      Do spectral all you want: your display device will still be RGB, and so, most likely, was the capture device of your reference image (or the telecine thereof).
                      Some specific effects are only possible within a spectral color space, but ultimately everything is converted back to RGB for saving (outside of custom formats and usage in other apps) and, necessarily, for display.
                      Material scanning, on the other hand, is producing some shockingly accurate matches of scanned materials versus their sources through a "simple" RGB color space (see Chaos Group's patent on the material scanner...).
                      Notice that those scanned materials work exclusively in LWF, and when they do, the rendered color is EXACTLY the surface color times the lighting, with no "artistic" hue skews from the renderer's internal color mapping.
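                      That last claim is easy to state as code. A minimal linear-workflow sketch (the albedo and light values are arbitrary numbers for illustration; `srgb_encode` is the standard sRGB transfer function, applied only at display time):

```python
def srgb_encode(c):
    # Standard sRGB transfer function, applied only for display.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# One diffuse pixel in a linear workflow: the working-space result is
# exactly albedo * incoming light, with no hidden tone curve.
albedo = (0.18, 0.30, 0.45)                       # linear surface color
light = 2.0                                       # scalar white irradiance
linear_pixel = tuple(a * light for a in albedo)   # (0.36, 0.60, 0.90)

# Only the copy sent to the monitor is encoded; the data stays linear.
display_pixel = tuple(srgb_encode(c) for c in linear_pixel)
```

                      The working data never passes through a curve; any display transform is a view on the result, not part of it.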

                      As for hypothetical, subtle differences between engines: nope.
                      As Vlado said about iRay, and as I stated before, all the major engines produce IDENTICAL results under identical lighting, shading, and color-mapping conditions (none of the tested ones is spectral, by the way; they are wholly LWF-oriented).
                      I'd very much love anyone who has proof of the contrary to come forward with a testable pair of scenes. In three months of work dedicated entirely to benchmarking and comparing engines, the differences I found were at most in shading models (hair, specular, SSS, and so on) and/or in speed, but certainly not in the way they transport light, and I'd love to understand where I went conceptually and, quite stubbornly, repeatedly wrong.
                      Lele
                      Trouble Stirrer in RnD @ Chaos
                      ----------------------
                      emanuele.lecchi@chaos.com

                      Disclaimer:
                      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                      • #71
                        Come on Lele, try not to be bitter about it, haha! You seem like you're on your last legs.
                        Do you consider yourself an artist or a technician? In the film industry I can see how matching a filmed backplate with 100% accuracy, so a compositor can do the color grading to an exact science, is the ultimate goal. But what about the guys who work with retouchers? We skew and warp the absolute crap out of everything, mostly in the hope of making something more visually pleasing. It's absolutely instagramming things, at a professional level.
                        If V-Ray wants to cater to the science crowd it can, but there's a massive market of people who simply don't give a shit about pure realism and want methods to make things look interesting and new. If some adjusted color curves can help with that, I don't see why they wouldn't be a beneficial addition to the software.
                        If I'm making a game poster or a car render, like I've said, realism and what looks good are totally separate.

                        I take back what I said about Octane "looking better"; it was purely lack of experience comparing the two that brought me to that conclusion, and I'm more than happy to load LUTs inside the VFB and leave the discussion at that. I know now that render engines vary in UI and speed and that's about it, which is actually great, because my long-standing belief that some render engines are more realistic than others is finally gone.
                        But where does V-Ray go from here, then? After all your testing comparing the engines, what was the conclusion? What does V-Ray need added or changed?
                        admin@masteringcgi.com.au

                        ----------------------
                        Mastering CGI
                        CGSociety Folio
                        CREAM Studios
                        Mastering V-Ray Thread



                        • #72
                          Originally posted by ^Lele^ View Post
                          Not if the input is gray.
                          I thought we were talking about the red in the Iron Man suit. You are correct.

                          Do spectral all you want, your display device will still be RGB, and so was the capture device of your reference image, most likely (or telecine thereof.)
                          Your display device will be RGB, which is fine, but a capture device is analogous to a renderer, and it is a spectral device. A human eye, a camera sensor, or a film emulsion is not sensitive to "red", "green" and "blue"; the sensitivities are all bell curves. When a camera sees red it also sees some green at the same time, and probably even a little blue. Many cameras see a tiny bit into the infrared as well. So a texture that reads as "red" could be near-infrared in the real world, or it could be a very pure orange just outside the sensitivity of the green receptor. If your light's spectrum has an IR spike, like a tungsten bulb's, the IR-reflective red texture would render far brighter than the quasi-orange texture, though both are expressed as "red".

                          Similarly, "white" light could be a computer monitor producing white, or a full-spectrum source like the sun. The same material in the real world will look very different under those two kinds of "white". "Accurately" capturing the real world is pretty subjective: everybody is trying, to some degree, to emulate the average human eye's spectral sensitivity, while at the same time your brain is mucking with everything you see.

                          [Attached images: SB8opNy.png, L_E19.tmp.png]

                          So when you say "a gray render won't be affected by gamma", you're right. But the world is almost never "gray", since "gray" could mean tungsten "white", daylight "white", IR "white", or "RGB white". Since no sensor (chemical, analog, or digital) is a nice neat red, green, and blue sensor, every imaging device is to some degree subjective. So renderers may be consistent, but they're consistently playing by made-up rules. That doesn't make them "right"; it just means they're all following the same arbitrary fakery.

                          How a camera sees the UV haze in a sky as "blue" varies from camera to camera, but renderers ignore it completely: there is no UV filtration setting on a V-Ray Physical Camera. And that's still ignoring even more exotic traits of light, like polarization! I doubt we'll ever simulate polarization in a renderer.

                          That's also why none of these film emulations will ever be effective without spectral considerations. You can't just balance red, green, and blue curves; you need the spectral sensitivity of the red, green, and blue emulsions, and you have to capture the infrared contamination in the red and the ultraviolet contamination in the blue. An RGB LUT also cannot capture the metamerism of a film stock whose "yellow" might fall off quickly to green while another stock's yellow falls off quickly to red. We know how the eye reacts to yellow, and we can stimulate it very well with a standardized Rec. 2020 RGB delivery value, but there are a lot of complex optical interactions before an image is ever captured. Nobody is neutral; at best you share the same biased perspective as every other renderer.
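                          The sensor argument can be sketched numerically. Everything below is an invented illustration, assuming bell-curve sensitivities and made-up lights and reflectances rather than real sensor data; the point is only that two lights which both balance a gray card to (1, 1, 1) still give the same material different balanced RGB values:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)                 # wavelengths, nm

def bell(center, width=35.0):
    # Invented bell-curve channel sensitivity, not real sensor data.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera sensitivities, stacked as (B, G, R) rows.
sens = np.stack([bell(450), bell(550), bell(610)])

flat_white = np.ones_like(wl)                     # equal-energy "white"
warm_white = (wl / 550.0) ** 4                    # tungsten-ish ramp into the red

# A reflectance rising toward long wavelengths (an "IR-leaning" red paint).
red_paint = np.clip((wl - 500.0) / 200.0, 0.0, 1.0)

def balanced_rgb(refl, light):
    raw = sens @ (light * refl)                   # per-channel integration
    white = sens @ light                          # perfect reflector = white card
    return raw / white                            # white-balanced (B, G, R)

rgb_flat = balanced_rgb(red_paint, flat_white)
rgb_warm = balanced_rgb(red_paint, warm_white)
# Both lights balance a gray card to (1, 1, 1), yet the paint's balanced
# RGB differs: the illuminant tilts the weighting inside each band.
```

                          Both `rgb_flat` and `rgb_warm` come from "white-balanced" captures of the same paint, yet they disagree, which is the sense in which no RGB capture is neutral.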

                          Essentially, what you're saying is that "hobbits have hairy feet". Your statement is correct: hobbits do have hairy feet. But they are also a convenient fiction.
                          Last edited by im.thatoneguy; 11-08-2015, 10:18 PM.
                          Gavin Greenwalt
                          im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
                          Straightface Studios



                          • #73
                            Is there any indication when V-Ray RT GPU will support bercon maps? Apart from that I've come to the sad realization that I am completely retarded for not testing it in over a year...It's unbelievable
                            admin@masteringcgi.com.au

                            ----------------------
                            Mastering CGI
                            CGSociety Folio
                            CREAM Studios
                            Mastering V-Ray Thread



                            • #74
                              Originally posted by grantwarwick View Post
                              Is there any indication when V-Ray RT GPU will support bercon maps? Apart from that I've come to the sad realization that I am completely retarded for not testing it in over a year...It's unbelievable
                              Dude, I've been telling you for months that RT GPU rocks, haha

                              Stan
                              3LP Team



                              • #75
                                Originally posted by 3LP View Post
                                Dude, I've been telling you for months that RT GPU rocks, haha

                                Stan
                                It isn't working over here

                                Was it introduced in .01 or .03?
                                Last edited by grantwarwick; 12-08-2015, 01:26 AM.
                                admin@masteringcgi.com.au

                                ----------------------
                                Mastering CGI
                                CGSociety Folio
                                CREAM Studios
                                Mastering V-Ray Thread

