Request - Film Response Curves (Comparing Octane to V-Ray)


  • Originally posted by ^Lele^ View Post
    I am both.
    The IronMan match was uncanny, given the tools at hand.
    And while i may sound an utter fanboy (all the more so i have an official role of sorts), i REALLY try hard to be as impartial as i can be.
    Nothing would damage Chaos more than a biased, or wrong, comparison with a competitor, in my current line of work.
    If someone's being better at something, well, hat tip and let's find out ways to improve ourselves, this is the spirit.
    Octane's unfortunately been showing signs of age for a while now, and my guess is that OTOY's fully concentrated on their next product.
    This said, it may just be that we're doing something wrong, or missing the (non) obvious, and then the issue would be back to a simple UI/user one, rather than conceptual (something i am all too aware of: it of course keeps happening while i am getting comfortable with a new software, before i am confident enough to start benchmarking).
    I won't be spending much more time on this, though, i have other stuff queued, and Octane wasn't on the radar before I stumbled into this thread...
    I'd be glad to see a solution to this, though, so i could actually benchmark the scene properly.
    I'll make sure to thank you in the next lesson, along with anyone else in the thread who took part, as I entered this thread with a biased, flawed and warped sense of what makes a render engine better.
    The moral of the story is: use V-Ray RT from now on, hope bercon gets fixed, and I'll be back on my way!
    Things aren't always as they seem, and I'm sure we all appreciate your 3 months of testing even though 99% of us aren't smart enough to understand 1% of what you said hahahahahaha.

    Cheers man!
    Hopefully 3LP comes through with a method for reproducing things in AFX!
    admin@masteringcgi.com.au

    ----------------------
    Mastering CGI
    CGSociety Folio
    CREAM Studios
    Mastering V-Ray Thread



    • Originally posted by ^Lele^ View Post
      I do know a wee bit of both physics and visual perception, but i fail to see where the color theory above has any visual impact, Gavin: show me a control image, one with and one without invisible colors (like IR contamination, or UV spectrum bits), on a standard RGB display device; then we can maybe compare what the variance is, mathematically, and, with a control group of humans, study the perceptual difference and likeable-ness of the three.
      What I'm saying is that you seem to be taking a rather errr... ideological position on what's "correct". That there shouldn't be color shifts in "White". But what I'm saying is that "White" is a complex function as is color. A red apple will look very different under very different "white" lights. There is no such thing as a purely neutral and "correct" renderer that's based on RGB. That doesn't have anything to do with color precision or color gamut but how different surfaces will react to different light types. Once we abandon an artificial notion that we're doing things perfectly today then we should be able to more readily accept subjective response curves as equally valid. Yes we should maintain energy conservation. But having a white wall push towards red isn't any more wrong than a white wall remaining white.

      For instance, in both of these photos with the same sensor we have the same scene, but under two different light sources white balanced to the same white point on the keyboard. Even though the RGB white balance is "gray" on both, there are obvious deviations between the two "white" light sources that have nothing to do with invisible spectra. If I matched the table hue, the yellows would shift in RGB space, etc. So slavishly maintaining mathematical perfection isn't actually emulating the real world, it's just one biased approximation of the real world, just like a film-like color response is another biased approximation of the real world. The real world is wildly unpredictable. We shouldn't kid ourselves and think that we've "nailed" an accurate approach. And I think a little more humility would help us move away from technical superiority and allow ourselves to more easily accomplish aesthetic superiority.
      [Image: LED-light-test4.jpg]
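
      To put toy numbers on Gavin's point (everything here is invented, and a crude von Kries-style per-channel balance stands in for what a real camera does): balancing both lights to the same neutral on one patch still leaves other surfaces disagreeing.

      ```python
      # Two lights, both white-balanced on the keyboard patch; all RGB
      # readings are made up for illustration.
      import numpy as np

      keyboard_a = np.array([0.42, 0.50, 0.61])   # keyboard under light A
      keyboard_b = np.array([0.55, 0.50, 0.38])   # keyboard under light B

      # Von Kries-style balance: per-channel gains that make each
      # keyboard reading neutral (equal RGB).
      gain_a = keyboard_a.mean() / keyboard_a
      gain_b = keyboard_b.mean() / keyboard_b

      table_a = np.array([0.60, 0.38, 0.22])      # table under light A
      table_b = np.array([0.63, 0.37, 0.28])      # table under light B

      print(gain_a * keyboard_a)  # -> [0.51 0.51 0.51], neutral
      print(gain_b * keyboard_b)  # -> [0.48 0.48 0.48], neutral
      print(gain_a * table_a)     # -> [0.73 0.39 0.18]
      print(gain_b * table_b)     # -> [0.55 0.35 0.35], same table, other hue
      ```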
      Gavin Greenwalt
      im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
      Straightface Studios



      • But that is precisely my point, Gavin!
        What you show is unavoidable measurement bias, under current technology for image capturing.
        You have to deal with all the color balancing jazz PRECISELY because the capturing is rough (sensors, bit depths, and so on) within the visible range.
        Those are (up to the point of "capturing" the image) all non-existent issues in CG: white BE white, no two ways around it.
        If using the term "white" irks you for the above reasons, let's all part with it: call it 1.0.
        All i have been saying so far, is that in CG, if you multiply 1.0 x 1.0 you NEED to get 1.0 as a result, and not any other number, by the renderer's design.
        Any other number, and you are stuffed with an unknown relationship between the parts in a scene.
        It may be better, but for the sake of sanity, i'd stick with the maths i know and can depend on, simplified metaphor of life that it is, until i can see incontrovertible proof.
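
        To put the arithmetic in a nutshell (illustrative numbers only, nothing to do with actual V-Ray internals): in a linear engine the white-on-white product stays at 1.0 through every bounce, while any baked-in skew compounds.

        ```python
        # Linear shading: a 1.0 light on a 1.0 albedo stays 1.0, bounce
        # after bounce, so multi-bounce GI remains predictable.
        light, albedo = 1.0, 1.0

        radiance = light
        for bounce in range(5):
            radiance *= albedo       # 1.0 * 1.0 == 1.0, by design
        print(radiance)              # -> 1.0

        # A hypothetical "response" skewing white to 1.03 inside the loop
        # compounds with every GI bounce instead:
        skew = 1.03
        radiance = light
        for bounce in range(5):
            radiance *= albedo * skew
        print(radiance)              # -> ~1.16 after five bounces
        ```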

        The fact that even ACES does not take CG into account at all (cg -> aces) should provide for more strength to my point.

        I don't know about the last sentence.
        *I* can surely do with stashes more humility, but whether that will make renders go prettier is a bit of a stretch...
        Oh, and are you on the beta, Gavin? i haven't read you there that i can remember...
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



        • To back up your point about consistency: I remember an anecdote from a friend on Superman. There was a problem with VRay or Brazil, I can't remember which, so halfway through the shot they switched renderers and finished rendering. You couldn't notice the difference.

          Doesn't ACES, though, ignore CG because the IDT is in the texture reader, not the renderer? I would also point out that ACES' limitations are exactly the limitations of which I speak. Its gamut is huge (no longer infinite in the latest revision), its precision is superb, but it's impossible for two cameras' IDTs to perfectly match one another. Any time you rely on RGB or XYZ one-dimensional color you're going to get deviations. Kodak Vision's beautiful look relies on oddball metamerisms. There's the opportunity in CG to not just match the quality of digital cinema but to surpass it. After all, we have negative lights! The cinematography world is pushing back against clean precision and is looking for old lenses with weird color coatings; they're running as fast as they can away from what they perceive as digital's sterile precision. I agree that not everything should be baked into the renderer, but there are areas where CG has nearly infinite flexibility... infinite flexibility that's being constrained with a straitjacket. It would kind of be like Super Mario Brothers having regular earth gravity because that's "Correct".
          Last edited by im.thatoneguy; 13-08-2015, 10:10 AM.
          Gavin Greenwalt
          im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
          Straightface Studios



          • The IDT is flawed, and would break energy preservation.
            A white 1.0 would become a 1.03 with a green skew.
            Can't feed that to a GI engine, as there would be no way to back-correct the lighting hue, not to mention emissive diffuse materials.
            Further, which texture should be converted (like for the old LWF approach) and which should be left alone?
            Should one then not also correct the lights' colours, and so on?
            Meaning, you write a plugin to do so for you, or texture map the hell out of your lights with corrected textures.
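
            To put hypothetical numbers on the above (this matrix is invented for illustration, not any real camera's IDT):

            ```python
            # An invented 3x3 "IDT"-style matrix that maps a sensor white
            # of (1, 1, 1) to a super-unit, green-skewed value.
            import numpy as np

            idt = np.array([
                [1.02, -0.02, 0.01],
                [0.01,  1.05, 0.00],
                [0.00, -0.01, 1.00],
            ])

            white = np.array([1.0, 1.0, 1.0])
            print(idt @ white)   # -> [1.01 1.06 0.99]: no longer 1.0, and greenish
            ```

            Fed to a GI engine, every bounce would multiply that skew back in, with no single inverse that corrects direct light, bounced light and emissive textures at once.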

            During Oblivion, i stood firm and avoided (after a month of research, testing, and comparison) using ACES in the CG realm at all. The other office decided for it.
            We didn't have a single issue, and the two batches of shots ended up matching (well, no. ours looked WAY prettier, ofc. 'cause WE did them. ^^) once post was through.
            We just avoided sooo many useless headaches, and to this day i am not sure whether they ended up delivering the ACES-IDT'd shots or the standard LWF ones.

            Further, yes, i entirely agree: the limits are in the technology, they are very evident, and supremely constraining, especially in a few departments.
            But my point wasn't about lying down and being squashed by them, nor about transcending them.
            My point was about speaking PROPERLY the LWF language, for it IS, bar nothing, the best way to go about CG CURRENTLY available, and only after that part is done, introduce the artistic bias.
            Someone wants to save PNGs from the VFB and feel proud it was all done in camera? Very free to do so.
            Is that a method that should be encouraged, in this day and age? Roughly, as far as i am concerned, it's about as advisable as telling people to buy large-format cameras and shoot JPEG.
            Further, those curves are limited, and quite old (12 years is a lifetime).
            The film's response may not have changed, but the ability to get at least 12, 14 or 16 bits of data is today a reality, and i can't fathom why we should be using inferior technology from an old piece of software.
            Rather, there's room for people to prepare nice LUTs to sell to those interested, for instance, much like one would Photoshop actions.
            But then, why not prepare photoshop actions?

            In other words, if one wants to play Mario Kart, one should do so with full right to skewed laws of physics.
            But to have the skewed ones, one first has to know which are the right laws of physics, and still build an engine able to deal with the (skewed) relationships between forces in the CORRECT way (lest at each bend you have to turn a RANDOM amount).
            Further, if the goal is to simulate a race car on a race track, and the name of the game isn't bouncing around on power-ups, gravity, like the rest of the forces, HAS to be set so as to simulate proper earthly behaviour.
            And Vray's not quite been built to be Mario Kart, i reckon, amongst the render engines.
            Lele
            Trouble Stirrer in RnD @ Chaos
            ----------------------
            emanuele.lecchi@chaos.com

            Disclaimer:
            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



            • Re LUTs, well that's what I'm saying. There is the opportunity for a renderer to go well beyond a 3D LUT, just as there is the opportunity for VRay's DOF, bloom, motion blur or glinting to go way beyond what's even possible with deep compositing.

              As to ACES, the good news is they fixed most of the green skew in the primaries for AP1 (ACES 1.0).
              [Image: ACES+Ap1+Color+Primaries.png]

              But the fact that (1.0, 1.0, 1.0) isn't ACES "white" backs up my point that CG "white" (1.0) is a weird quirk that I think is nearing the end of its life. I think spectral white is the next bump up. GI used to be prohibitively expensive, but it's worth the time. I think spectral rendering, even with a 6-primary white, would kill off some of the remaining quirks and make footage matching that much easier in comp. I would also love to see Vray move out of Photoshop-style color corrections into real spectral insanity. For instance, a Cooke lens' coatings are in my opinion gorgeous, but they aren't an RGB tint filter, they are a complex spectral filter. I want to be able to put a "Cooke" lens onto my Vray Physical camera, not just apply a dull yellow filter. That's what I mean when I say that I think CG is limiting itself. We should have much higher standards than the capabilities of Photoshop.
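
              A toy version of the spectral argument, with made-up 4-band spectra: two surfaces that are metamers (identical once integrated to RGB) get pulled apart by a wavelength-dependent coating, which no RGB tint applied to their shared RGB value could reproduce.

              ```python
              # All numbers invented: 4 wavelength bands, camera rows R, G, B.
              import numpy as np

              sens = np.array([
                  [0.0, 0.1, 0.4, 0.5],
                  [0.1, 0.5, 0.4, 0.0],
                  [0.6, 0.4, 0.0, 0.0],
              ])

              surf_a = np.array([0.40, 0.50, 0.40, 0.50])
              surf_b = np.array([0.48, 0.38, 0.53, 0.42])  # metamer of surf_a

              print(sens @ surf_a)   # -> [0.46 0.45 0.44]
              print(sens @ surf_b)   # -> [0.46 0.45 0.44], same RGB

              coating = np.array([0.9, 0.7, 0.95, 0.6])    # spectral filter

              print(sens @ (surf_a * coating))   # -> [0.34 0.36 0.36]
              print(sens @ (surf_b * coating))   # -> [0.35 0.38 0.37], split apart
              ```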
              Gavin Greenwalt
              im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
              Straightface Studios



              • Although a lot of this is above my head, my basic point from a few pages back seems to still hold.
                I asked if shooting with different cams and lenses would end up with the same results, and obviously I asked because I knew it would not be the case.
                I was expecting the LUT to carry that "spectral" information of the lens; I understand now it's not the way to go. But I would love to have some presets of lenses that would mimic that.
                I was hoping to get that as a post effect, so I would be able to apply it in a post soft again and keep my export from Vray linear, but maybe it's just not possible...

                Although I do agree with Lele this should not be fed to the render engine when it's calculating GI and lights and stuff. In real life, two photons with the same trajectory carry the same info (bouncing around etc.) until they hit (go through) the lens and hit the CCD. If those lenses and CCDs are different, the final colour of that photon becomes different digital information, but all the real-world ray tracing is the same no matter what lens/CCD or eye is catching it.

                Instead of LUTs and other post presets then, would it be possible to create something like a lens effect that can mimic the diffraction of light and the sensitivity of CCDs?
                I guess this could be in addition to the already existing lens effects, although I'm guessing this can't be applied as a post effect?
                This would also avoid all the data translation issues as well as the float 1.0 cap issue. All this would be "physically" calculated the same way as all the rest of vray, like glass, IOR, fog colour etc., but for the lens, with coating.

                Does this make sense?
                Cheers
                Stan
                3LP Team



                • Well, yes, a lens/CCD/CMOS/film stock simulator, properly built, would be super cool.
                  However, recovering and analysing the data doesn't look to me like it's going to be trivial.
                  I am thinking of the DxO approach to denoising and aberration fixing, for instance, whereby the moment one loads a RAW, the metadata are read, and the right combination of lens and body is downloaded to the host machine.
                  Each camera-and-lens combination goes from a few hundred kilobytes to ten megabytes or so, and contains the measured noise and lens defect profiles at multiple ISOs and f-stops.
                  Needless to say, these number in the thousands, and they took DxO a good decade to profile and begin to commercialise (the company initially did purely lens and body benchmarking and analysis for third parties).

                  So, while converting to a simple, standard format (say, DV25, DV50 and so on and so forth) is a simple case of (well-known) color and spatial undersampling, doing the same type of work for a batch of (even well-selected) lenses and cameras would require analysis of both, in the first place.
                  And that's a physical experiment, not just a maths one, so the times between iterations grow a lot (i.e. it could well take a year or more to build the right equipment, through trial and error. See the Material Scanner.)

                  This point, coupled with the fact that there are, indeed, such types of plugins for NUKE, AE and so on (I'm thinking of Sapphire and such), points to one possible way to tie the two issues together: adding OFX support to the VFB.
                  Surely more feasible than databasing a gazillion combinations of lenses and cameras, but is it ACTUALLY feasible, both in the effort to put in and within the scope of a VFB?
                  Thankfully, that's definitely not for me to answer. ^^
                  Lele
                  Trouble Stirrer in RnD @ Chaos
                  ----------------------
                  emanuele.lecchi@chaos.com

                  Disclaimer:
                  The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                  • Originally posted by im.thatoneguy View Post
                    I want to be able to put a "Cooke" lens onto my Vray Physical camera not just apply a dull yellow filter. That's what I say when I say that I think CG is limiting itself. We should have much higher standards than the capabilities of Photoshop.
                    How did i miss this bit, Gavin?
                    Correct me if i am wrong, here: Cooke is making soft-focus lenses which behave the way they do chiefly because of their SHAPE, right?
                    For if that is the case, i believe you actually CAN reproduce it quite accurately in VRay: shoot the provided reference, pass the captured image to the lens analyser app, and feed that result into the lens slot of the vray physcam.
                    Activating DoF at that point would take into account the actual lens data, as retrieved from the aforementioned process, and as such it MAY well be able to account for those straighter paths happening well within what would be the CoC for a normal lens.

                    In fact, if you have access to one such lens, get shooting a reference for me, pretty please! ^^
                    Lele
                    Trouble Stirrer in RnD @ Chaos
                    ----------------------
                    emanuele.lecchi@chaos.com

                    Disclaimer:
                    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                    • Originally posted by vlado View Post
                      We can write a tool that converts these to LUT profiles, if you think this will be useful.
                      So, does this offer still stand? Or are you possibly already working on it?



                      • Originally posted by Laserschwert View Post
                        So, does this offer still stand? Or are you possibly already working on it?
                        Will get to it soon, I hope.

                        Best regards,
                        Vlado
                        I only act like I know everything, Rogers.



                        • Hi all. I made some of the most common LUTs in Nuke to use on a Vray linear render (i used the Nuke pattern, so it could be not mathematically precise), but honestly i prefer to do this stuff in post and not during the render phase; in my personal opinion the render should be perfectly linear before post.

                          DOWNLOAD:
                          https://drive.google.com/file/d/0Bys...ew?usp=sharing
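
                          For anyone who'd rather roll their own, a minimal sketch (a hypothetical helper, not the tool above) of applying an already-decoded 3D LUT to a linear render in post, assuming the LUT sits in an (N, N, N, 3) float array:

                          ```python
                          import numpy as np

                          def apply_lut_nearest(img, lut):
                              """Nearest-neighbour 3D LUT lookup on linear RGB in [0, 1]."""
                              n = lut.shape[0]
                              idx = np.clip(np.rint(img * (n - 1)).astype(int), 0, n - 1)
                              return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

                          # Identity LUT as a smoke test: output equals the quantised input.
                          n = 17
                          g = np.linspace(0.0, 1.0, n)
                          r, gg, b = np.meshgrid(g, g, g, indexing="ij")
                          identity = np.stack([r, gg, b], axis=-1)

                          render = np.random.rand(4, 4, 3)   # stand-in for a linear render
                          graded = apply_lut_nearest(render, identity)
                          ```

                          Production tools interpolate (trilinearly or tetrahedrally) rather than snapping to the nearest node, but the principle is the same: the render stays linear on disk, and the look lives in the LUT.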
                          My Spanish tutorial channel: https://www.youtube.com/adanmq
                          Instagram: https://www.instagram.com/3dcollective/

