Request- Film Response Curves (Comparing Octane to V-Ray)


  • #46
    Originally posted by 3LP View Post
    I understand your point
    I also strongly agree that the artistic flair should be in the artist's hands, but you could also have the possibility to switch that on and off, leaving both possibilities to the artist. Like all post options in the VFB, you can switch it on and off.

    You come from a pipeline where the tools used from start to end need to work hand in hand (like me, as I worked 6 years in VFX) to produce what we all see every day in shows and movies. No argument about it not being incredible, it is, and I know there are tools to achieve those results.
    They just imply going from one tool to another, to another, to another, to be able to see the end result.
    I was referring to trying to skip a few of those steps, so you can see faster what the end result would be, inside your 3D software.
    I never said you'd need to bake that into your render; I would still go linear and make those adjustments down the line in my post software, but at least I'd have been able to skip a few steps for the WIPs and previews.

    Why do you think:
    * lens effect was introduced there in the first place?
    * the CC tools are being introduced as well?
    * people are asking to have even more tools in the VFB like those response curves?
    * people are asking (and other software company delivers) to be able to interact more with your workflow inside the frame buffer? (like picking the dof point in the VFB like octane, or having even more post effect done straight in the frame buffer like octane or unreal or unity)

    I know they can all be done in a post package like Nuke or others; it's just more convenient to have them straight in the VFB, because you save time and steps.
    Why not add some other presets to switch quickly between looks? Those would be named "film response curves" now, but they could have another name tomorrow, or just be a bunch of home-made or bought presets.
    Well, users asked for them, they were added.
    It's not strictly a developer's task to decide on the specific merit of a request, and Vlado's been historically very kind with the user base.
    As to how used each of those tools actually is on a day to day basis, i still have my doubts.
    Notice I DO use exposure controls to check my renders while developing them.
    But from there, to stating that the Lens Effects (or camera response curves. Jaysus, it's a flat value list, 1000 values from 0 to 1. Is it seriously Chaos' task to do a conversion to a readable LUT?) in the Vray FB are a must, there's quite a stretch.

    Further, and maybe this isn't quite as apparent as it ought to, LUTs are applied to log space footage.
    I.e. your image clips; your values aren't linear anymore (between UI and render), and there isn't a single thing V-Ray can do about it (nor does saving to EXR help, if you elect to keep the LUT look baked).

    So, here's my qualm with it: we'd be adding more complexity for very marginal, and duplicate (ie. in Post), returns, further running the risk of having the least skilled of users saving out clipped images or sequences, nigh untreatable after the fact, potentially running into more tech support requests, and negative rumor (ie. VRay renders clamped. now, that'd win Chaos plenty clients.).
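To make the clipping concern concrete, here's a minimal sketch (illustrative only, not V-Ray's actual implementation) of what happens when a [0, 1]-domain 1D LUT is baked onto HDR pixel values:

```python
def apply_1d_lut(value, lut):
    """Look up a linear value in a [0,1]-domain 1D LUT, with clamping."""
    clamped = min(max(value, 0.0), 1.0)       # HDR data above 1.0 is lost here
    pos = clamped * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1.0 - frac) + lut[hi] * frac  # linear interpolation

# A toy "film response" table: a gamma-like curve with 1024 entries.
lut = [(i / 1023.0) ** (1.0 / 2.2) for i in range(1024)]

print(apply_1d_lut(0.5, lut))   # mid-grey gets the film look
print(apply_1d_lut(3.0, lut))   # a bright highlight: returns 1.0, detail gone
print(apply_1d_lut(8.0, lut))   # same: every value above 1.0 maps to the same output
```

Every value above 1.0 lands on the same table entry, which is exactly the "nigh untreatable after the fact" problem: once saved that way, the highlight detail cannot be recovered in post.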


    As for the fact of a red not rendering pure red in your VFB, I thought it could have something to do with the lens (or CCD?).

    I might be wrong, but take a stage with a white floor and a big 1m pure red ball.
    Shoot it with a RED Scarlet, a RED Epic Dragon, an Arri Alexa and a Sony CineAlta, with Cooke or Arri/Zeiss lenses.
    Overlay all that footage on top of each other: are you really getting a pixel-perfect 1:1 match across all those (let's say) 8 different shots?
    Of course not: that is why one has to white balance a shot, of course, and that's before taking into account chroma subsampling and format compression where those come into play.
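For illustration, the simplest form of that white balancing is a per-channel (von Kries style) scale: each channel is divided by the reading of a patch known to be white. The camera readings below are made-up numbers, not measurements from any of those cameras:

```python
# Minimal sketch of per-channel white balancing (von Kries style):
# scale each channel so a patch known to be white becomes neutral.
# Different sensors/lenses render the "same" scene differently, and
# neutralising a reference white is the first step in lining shots up.

def white_balance(pixel, reference_white):
    """Scale RGB channels so reference_white maps to (1, 1, 1)."""
    return tuple(c / w for c, w in zip(pixel, reference_white))

# Hypothetical readings of the same white floor from two cameras:
cam_a_white = (0.92, 0.88, 0.95)   # slight magenta cast
cam_b_white = (0.85, 0.90, 0.82)   # slight green cast

red_ball_a = (0.80, 0.10, 0.12)
balanced_a = white_balance(red_ball_a, cam_a_white)
print(balanced_a)  # the "same" red reads differently once each shot is neutralised
```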

    Not everything is pure, straight maths, because real life is not pure and straight. Otherwise people would not be adding dirt everywhere, lens distortion, chromatic aberration, film grain and whatever other impurity, all of which means that your final pixel is not the pixel you would get from a purely mathematical render.
    I'd have thought by now Science made a strong enough case for mathematics being a delightful key to repeatably take measurements of the universe we live in, but perhaps not.

    Again, I understand you want to have absolute power and be able to achieve this in post, but some things are just not feasible in post, or require a lot of workarounds. For example, it's only with deep rendering (which has been "reasonably" recently introduced) that it has become possible to do correct/true DoF in post. Before that, it was just not feasible straight out of the box in one pass. You needed to deconstruct your render into several passes to be able to get correct DoF.
    I was doing this stuff with max 2.5, paint*, effects* and the RPF format.
    And it kinda worked better than deeps.
    But hey, progress.

    Well, nowadays, even if deep rendering is possible, it's still easier to enable DoF in your cam, and done.
    The same goes for settings like blades, anisotropy, center bias etc. Again, all that comes from "real life" cameras, and it's only recently that the V-Ray cam lost its Physical name, as before it was called "VRayPhysicalCam".
    Aside from color mapping (ie. exposure/white balance/vignetting), direct access to DoF and MoBlur parameters (present elsewhere too.), and lens modelling (pretty, but to all practical effects quite useless.), what's physical about the VRay camera?
    You can get the same effect with a standard camera, and the VRay controls present in other parts of the interface.

    So if there was a way to get a more "realistic" render out of V-Ray easily, why not let you enable that feature, like having a bit of blue or green in your supposed-to-be 100% pure red? By the way, wasn't GGX also introduced because it was better suited and gave more "real looking" end results? Even if you were able to achieve the same result with 5 layered VRay blend materials?
    The lobe-to-tail ratios of GGX are unique, as are the geometric masking terms in use by the model.
    No, you couldn't get the same looking speculars with however many other layers of different BRDFs (phong, blinn and ward) VRay already had.
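For the curious, the tail difference can be sketched numerically with the standard textbook microfacet normal distribution formulas (not any renderer's actual shading code; `alpha` is a roughness parameter):

```python
import math

# GGX falls off polynomially away from the peak, while Beckmann (roughly
# the shape behind Blinn-style highlights) falls off exponentially. That
# heavier tail is what gives GGX speculars their distinctive glow, and it
# is why no stack of Blinn/Phong/Ward layers reproduces it exactly.

def d_ggx(cos_t, alpha):
    a2 = alpha * alpha
    denom = cos_t * cos_t * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def d_beckmann(cos_t, alpha):
    tan2 = (1.0 - cos_t * cos_t) / (cos_t * cos_t)
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha ** 2 * cos_t ** 4)

alpha = 0.2
for deg in (0, 20, 40, 60):
    c = math.cos(math.radians(deg))
    print(deg, d_ggx(c, alpha), d_beckmann(c, alpha))
# Both peak at the same height, but far from the peak the GGX value
# dominates by many orders of magnitude: that is the "tail" in question.
```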

    Now, with your background, I can also understand that you don't like to read "Adobe" and "post" in the same sentence. And I get that, but that doesn't mean other people wouldn't like to use those techniques; that's a personal preference for each person out there, and neglecting their wishes just because yours are different doesn't make them unworthy of consideration.
    I don't like Adobe because it's seriously inconsistent in its workflows, and because up to, and including, Photoshop CC 14 it was still able to load an EXR and save it out broken (16bpc integer. 1980s all over again. let's all go work in prepress.), or forcefully upgraded to 32bpc (and this after, in 2009, one of the PS devs got a mythical pasting on how EXRs should be treated, from anyone who was anyone in the field of imaging.).
    I don't like adobe because their image maths is opaque, and only very recently, and utterly slowly, is coming up to match the bare minimum requirements of modern image manipulation.
    And sure as hell i don't want to be a user of theirs for the next ten years, which they'll likely take to make the tools actually usable.

    Stan (who doesn't like to be the devil's advocate )
    What do you mean you don't like it?
    Don't you find debates like this quite entertaining? :P
    Don't take my curt writing style for anger, or distress lol...
    And keep them coming, it's the whole point!
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #47
      Nice mate, I like all this!
      It's just that often, the way writing can be read or interpreted is completely not the way I'd like the message to come across.
      I do like constructive debate, and I learn so much through it, but I don't want my opinions or thoughts to be taken the wrong way.
      It would be so much easier if we could talk about all this over a beer! I should probably think about going to one of those gatherings like EUE someday.

      Cheers mate, and amen to RPF dof!

      Stan
      3LP Team



      • #48
        Ah!
        Well, we're old enough (damn.) to take it all with a pinch of salt (or a kilo, when dealing with me.).
        So, peace out indeed, but let's keep training swords and shields up and clattering! ^^

        As for the whole arbitrary thing, check the following pictures, and let me know how you would proceed in Octane to neutralise the hue skew a white light produces on a white plane under linear color mapping.
        Unfortunately, in Octane, to not have the render clamped to 1.0 float one has to turn off color mapping entirely (in Max, that's all I could do to save unclamped renders; so it's not that I'm saying it's a feature, mind you. I hope not, at least.).
        Further, could anyone elaborate on how to get a directly visible light source to display the same value that was entered in the light's UI?
        You may notice Octane displays that light (with CM off entirely; otherwise it's 1.000 everywhere) with a float value of 192.xxx, with the xxx showing a hue skew (while the light had a power of 1 unit and a purely white colour.).
        After the three Octane renders (in order: default CM, linear CM, CM off so as to pick the light values), there's a V-Ray one (without DoF) of the same scene: same light size, color and power, same white diffuse shader, under linear workflow.
        [Attached renders: Octane_Default_CM.jpg, Octane_Linear_CM.jpg, Octane_No_CM.jpg, VRay_Linear_CM.jpg]
        It all points to some kind of whacky internal color space, and it merits the adjective precisely because of what I showed above.
        I'm all ears on how to get Octane to work linearly, or failing that, on why it's suddenly OK to forego LWF.
        Not to worry, for those who feel like parting with it.
        I'll be at the door waving, and that's where you'll find me long after I'll have turned to ash.
        I will take rigorous maths over this type of arbitrariness any time.
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



        • #49
          Originally posted by ^Lele^ View Post
          Ah!
          Well, we're old enough (damn.) to take it all with a pinch of salt (or a kilo, when dealing with me.).
          So, peace out indeed, but let's keep training swords and shields up and clattering! ^^

          As for the whole arbitrary thing, check the following pictures, and let me know how you would proceed in Octane to neutralise the hue skew a white light produces on a white plane under linear color mapping.
          Unfortunately, in Octane, to not have the render clamped to 1.0 float one has to turn off color mapping entirely (in Max, that's all I could do to save unclamped renders; so it's not that I'm saying it's a feature, mind you. I hope not, at least.).
          Further, could anyone elaborate on how to get a directly visible light source to display the same value that was entered in the light's UI?
          You may notice Octane displays that light (with CM off entirely; otherwise it's 1.000 everywhere) with a float value of 192.xxx, with the xxx showing a hue skew (while the light had a power of 1 unit and a purely white colour.).
          After the three Octane renders (in order: default CM, linear CM, CM off so as to pick the light values), there's a V-Ray one (without DoF) of the same scene: same light size, color and power, same white diffuse shader, under linear workflow.
          [ATTACH=CONFIG]25453[/ATTACH][ATTACH=CONFIG]25454[/ATTACH][ATTACH=CONFIG]25455[/ATTACH][ATTACH=CONFIG]25456[/ATTACH]
          It all points to some kind of whacky internal color space, and it merits the adjective precisely because of what I showed above.
          I'm all ears on how to get Octane to work linearly, or failing that, on why it's suddenly OK to forego LWF.
          Not to worry, for those who feel like parting with it.
          I'll be at the door waving, and that's where you'll find me long after I'll have turned to ash.
          I will take rigorous maths over this type of arbitrariness any time.
          Thanks for all your input, Lele. It's probably best for me to once again point out that I am far from a mathematical person; I like to think I get by purely on artistic improvisation!
          So basically I agree with you, Octane doesn't behave like other renderers but the reason I created this thread was purely to open this discussion up.
          For the past 5-6 years, all my "fun" personal work has been chucked into Octane, loaded with a film response curve and then adjusted in realtime until it looks photoreal, and the ONLY reason I feel I haven't been able to achieve the same level of realism in V-Ray is that I haven't had such ridiculous realtime feedback.
          So now I'm at a point, as an artist, not a technician, where I'm simply trying to reproduce the look and feel of Octane renders in V-Ray, disregarding what is mathematically and physically correct. There are artists like us out there, and we probably don't deserve to work in the film industry in a crowded pipeline, but as an individual on a print job, I can say right now that what is mathematically correct and what looks good are two entirely different things!
          As for your point about my findings being only rumor and hearsay, I agree completely too. I'd love to have a direct side-by-side comparison (and that is what I tried to do earlier in this thread with the reflective sphere / torus knot scene etc.), but the problem is, the light values don't line up in Octane, the colors don't line up, and the textures DEFINITELY don't line up. So when I get to advanced stuff like calculating the dirt maps, I have had to literally go through one by one and make those adjustments, and the end result is not going to be identical, because on a shader 30 nodes deep the problems compound. My goal was to get things as close as possible, though, and then share that scene with users so they could decide for themselves whether they preferred working with these altered settings (reduced GI saturation, contrast etc.).
          I've been doing so much testing that I'm a bit burned out on this whole topic, but I'll do my best to follow the discussion going on.
          Appreciate all your work!
          Last edited by grantwarwick; 10-08-2015, 04:03 PM.
          admin@masteringcgi.com.au

          ----------------------
          Mastering CGI
          CGSociety Folio
          CREAM Studios
          Mastering V-Ray Thread



          • #50
            Also, I think it's worth noting that if every render engine were identical, there would be no competition, discussion, or purpose in evaluating which one to use. The fact is, different render engines behave very differently unless extreme care is taken in re-aligning values to match at a mathematical level, and that simply isn't realistic.
            It would also close off the debate as to which renders faster, if they all calculated the math behind each pixel the same way.
            So far, and this is by no means scientific, I believe Octane renders initial passes faster and takes longer to resolve high-res textures, whereas V-Ray, with its heavy bias, can drastically reduce the overall render time at the expense of initial previewing speed.
            V-Ray RT GPU simply gets demolished by Octane.
            Somebody needs to do some comparisons, I don't think I have the energy atm haha./
            admin@masteringcgi.com.au

            ----------------------
            Mastering CGI
            CGSociety Folio
            CREAM Studios
            Mastering V-Ray Thread



            • #51
              Well, Blagovest from the GPU dev team said that RT GPU and Octane have nearly the same speed when tested under comparable conditions. That was half a year ago; maybe Octane has had some improvements since, but so has V-Ray.
              https://www.behance.net/Oliver_Kossatz



              • #52
                Originally posted by grantwarwick View Post
                V-Ray RT GPU simply gets demolished by Octane.
                Somebody needs to do some comparisons, I don't think I have the energy atm haha./
                Yes, that's part of my daily tasks for Chaos Group, and it's an ongoing process which, as you well said, is quite rife with pitfalls and false conclusions, due to the vastly different approaches to rendering (forget BRDF models, it's ALL skewed, like I showed above and as you found out. Odd internal color space...).
                It's for internal use, and doesn't just include Octane.

                However, Grant, someone of your caliber and fame flatly stating that
                "V-Ray RT GPU simply gets demolished by Octane"
                isn't very helpful.
                My tests say quite differently, and have all been conducted under the most rigorous of controlled conditions, feature by feature, measured noise level by measured noise level, number of samples by number of samples.
                RT is still missing a few features, agreeably, which are being implemented, but what is there, as far as my testing goes, is of very high quality, and -so far- unbeaten speed.
                By anyone else.
                I shall ask the (very busy, with Siggraph ongoing) powers that be if i can share something here.
                Lele
                Trouble Stirrer in RnD @ Chaos
                ----------------------
                emanuele.lecchi@chaos.com

                Disclaimer:
                The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                • #53
                  Yep, I was also surprised to read that RT GPU was beaten by Octane.
                  I haven't tried Octane extensively, but I do use RT GPU quite a lot, and it's just incredibly fast.
                  That's exactly why I was asking for a fair fight/comparison between the two a few replies back, as I would really want to see how Octane beats RT GPU.

                  Definitely looking forward to anyone who can make a comparison.

                  Stan
                  3LP Team



                  • #54
                    Originally posted by ^Lele^ View Post
                    No. I specifically tested iRay (0x1 version. Am on the NVidia beta, will soon switch to it, with identical results.), Arnold, Renderman (19 and 20) and Corona.
                    Hey Lele, have you tested Redshift? It's available in beta for 3ds Max. I'm really impressed with the speed, and the logical way of working with materials and render settings.

                    Just coming from my workflow, I'd rather have Chaos Group's time spent working on light-transport efficiency, leaning out the code, or adding new techniques (SSS, volume rendering / fog), especially as it's SIGGRAPH 2015 now, than on adding features to the V-Ray frame buffer. I think that would be more beneficial.
                    Last edited by jenya.andersson; 11-08-2015, 02:56 AM.



                    • #55
                      I personally think mental Ray is the best...
                      Maxscript made easy....
                      davewortley.wordpress.com
                      Follow me here:
                      facebook.com/MaxMadeEasy

                      If you don't MaxScript, then have a look at my blog and learn how easy and powerful it can be.



                      • #56
                        Yes, I tested Redshift quite extensively, and it's by far the closest to RT for speed (in general tracing speed, not everywhere.), plus it has a few features still being worked on in RT (volumes, additive blend) which work quite well and speedily.
                        However, it too seems to cut just a few too many corners (HEAVY clamping by default, for good reasons. Nothing can go above 1000f.), and some of its stuff is seriously quite odd.
                        Lights change intensity based on their HUE, for instance.
                        Filters like Lanczos and Mitchell are included with their negative components, and produce ringing around high-contrast areas.
                        The specular BRDF seems to be a simple half-vector reflection with an isotropic gloss cone, nothing remotely close to a complex BRDF (like GGX, or even Phong, for that matter!), and it falls apart quickly when it isn't looking ugly.
                        However it's very quick (Of course.).
                        Hair, instead, renders around three and a half times faster in RT, for a VERY similar model (as in to say: the Chaos devs did a seriously optimised job.).
                        SSS, well, it's splotchy (only interpolated; no matter the settings used, it will always miss some geometric features and blend the wrong samples.), and with interpolated SSS, progressive V-Ray is faster than RS (can't compare RT, as it does SSS raytraced only. Progressive V-Ray was faster for the same sampling level and close-enough SSS settings, on an AMD FX-8350 versus a GTX 980...) while sporting fewer artifacts.

                        All in all, it's not unpleasant to use (unlike, f.e., rMan in Maya), but I found the lack of specular models (and glossy reflections!), the global clamp and the GI capped at 15 bounces to be quite annoying.

                        Mental Ray, however, wipes the floor with everyone else.
                        Once i turned final gather on, it started rendering faster than the speed of light.
                        I found my renders done the day before i started them.
                        Last edited by ^Lele^; 11-08-2015, 03:58 AM.
                        Lele
                        Trouble Stirrer in RnD @ Chaos
                        ----------------------
                        emanuele.lecchi@chaos.com

                        Disclaimer:
                        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                        • #57
                          So... bottom line, from what I've got so far from this topic, is that there's nothing that should be changed in V-Ray, except requesting the introduction of some new features, like OCIO and more advanced LUT controls (?), and that the GI, and how gamma is handled, are no different from other rendering engines?
                          CGI studio: www.moxels.com

                          i7-4930K @ 3,4GHz watercooled
                          Asus Rampage V Extreme Black Edition Intel X79
                          64 GB DDR3 Corsair Vengeance Jet Black 1866 MHz Quad Channel
                          8 * GTX Titan X
                          960GB SSD Crucial M500 SATA-3
                          nVidia drivers: always latest
                          Windows 10 up to date



                          • #58
                            Arnold, Renderman, Corona, iRay and a slew of other non-GPU renderers perform EXACTLY in the same way as far as the radiometric equations are concerned.
                            The differentiators are speed (Arnold and rMan being 3 to 7 times slower across the board, iRay being close if not ever so slightly quicker for the GI part; Corona I'll test soon for performance.) when using purely brute-force methods, and feature quality where non-brute-force approaches are available (say, the LC, importons, FinalGather, UHD Cache and so on).
                            There is no difference in Linear Workflow in any of those, and i wouldn't see how it could be any different.
                            Linear Workflow is grounded in solid Logic before being mathematically sound.
                            Any deviation from that, at the time of writing this, is an optimistic, and careless, attempt.

                            Scene setup, under LWF, is also an exact science, within the bounds of which creativity can be expressed; failing to correctly account for the needs of an LWF approach to LnR (pure blacks, for instance, or arbitrary gamma in textures) will indeed produce skewed results.
                            But the render engine just did (correctly) what it was (wrongly) instructed to do.
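As a concrete illustration of that bookkeeping, here is a minimal sketch using the standard IEC 61966-2-1 sRGB transfer functions (nothing renderer-specific): textures authored in sRGB must be linearised before rendering, and the linear render encoded back for display.

```python
# Standard sRGB <-> linear transfer functions (IEC 61966-2-1).
# Feeding the renderer the raw 0.5 texel instead of the linearised
# value is exactly the "arbitrary gamma in textures" mistake above.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

texel = 0.5                      # mid-grey as stored in an sRGB texture
lin = srgb_to_linear(texel)      # ~0.214: what the renderer should actually see
back = linear_to_srgb(lin)       # round-trips to 0.5 for display
print(lin, back)
```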

                            OCIO is supported fully; it has been for a good year now publicly, and privately a lot longer still.
                            Whether it's anybody's preference to convert their whole image library to it, to get exactly the same result as without, escapes me.
                            It likewise wholly escapes me why anyone would want to load an image AND an OCIO descriptor for each texture, just to display in sRGB and end up rendering in LWF anyway.
                            However, it's there, with all its convoluted, and overbearing, color transform matrices and tools.

                            As far as LUT "controls" go, I don't see which ones you'd like to have, aside from loading one.
                            LUTs aren't normally handled by humans, and much less so after they have been created and approved.

                            *I* personally stay the hell away from anything that may contribute (either by design, or by user mistake) to skew the results of the Radiometrically Linear light transport equations, and relegate any such skew to compositing (where tools to deal with color spaces are hopefully proper, along with the chance of trying out one's work on the correct display device, or emulator thereof.).

                            Converting those linked camera profiles to a 1D LUT should be an easy job for anyone with a Nuke or a Fusion lying around.
                            Making sure the lin-log, log-log, or lin-lin internal data conversion is respected will require a tiny bit more color-science skill.
                            All the points I made before about implementing them as a standard feature for V-Ray remain valid: I can't stop anyone from doing whatever they please, but I'd rather not force a non-standard, and provenly limited (see clamping.), approach to LnR onto the wider user base, especially as the case made seems to me very much rife with arbitrariness and not very strong at all.
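As a sketch of that conversion, assuming the common .cube text format for 1D LUTs (the file name and toy curve below are made up; a real camera profile may need per-channel curves and the log-space handling just mentioned):

```python
# Turn a flat response-curve value list (e.g. 1000 floats in 0..1)
# into a 1D LUT in the .cube text format: a LUT_1D_SIZE header
# followed by one "r g b" triple per entry. The same curve is applied
# to all three channels here; real profiles may differ per channel.

def write_cube_1d(values, path, title="response curve"):
    with open(path, "w") as f:
        f.write(f'TITLE "{title}"\n')
        f.write(f"LUT_1D_SIZE {len(values)}\n")
        for v in values:
            f.write(f"{v:.6f} {v:.6f} {v:.6f}\n")

# A toy curve standing in for a camera response list:
curve = [(i / 999.0) ** 0.9 for i in range(1000)]
write_cube_1d(curve, "response.cube")
```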

                            Now, it's a good thing for everyone i am not the one making these choices (or we'd have thrashRay by now.).
                            I am merely pointing out the whys and why nots of it as clearly as I can.
                            I have yet to see, ultimately, LWF+minimal post falling short of any other in-camera method, so I'd rather concentrate on making full use of the LWF tools around which modern LnR (and VRay.) revolves, rather than chasing camera curves to do instagramming at rendertime.
                            Lele
                            Trouble Stirrer in RnD @ Chaos
                            ----------------------
                            emanuele.lecchi@chaos.com

                            Disclaimer:
                            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                            • #59
                              Originally posted by ^Lele^ View Post
                              instagramming
                              You should write a book Lele.
                              A.

                              ---------------------
                              www.digitaltwins.be



                              • #60
                                I haven't read this whole thread but can't you just load a LUT into the frame buffer? The latest version of photoshop also allows you to load LUTs.
                                Check out my (rarely updated) blog @ http://macviz.blogspot.co.uk/

                                www.robertslimbrick.com

                                Cache nothing. Brute force everything.

