V-Ray Scene Optimisation Script


  • #16
Thanks for the encouragement, James, I'm glad it is of some use to you!

Having done pretty much only movies for the past 8 years or so, I tended to choose some pretty aggressive settings.
The reality of my routine is that a huge farm has to deal with a huge number of 2K or 4K frames, sometimes with motion blur, and nearly always with terabytes (literally) of sim data, often multiple times for the same shots/sequences, for reasons outside of the artist's (or even the company's) control.
A good benchmark for a job well done is to keep the per-FRAME rendertimes within the hour.
No matter in which year the movie gets made, no matter the hardware, no matter the contents of the scenes: go above one hour per frame on average, and people first get very skittish, then Comp starts complaining about frames not coming in, and finally Supervisors start looking at you (well, me, as an LnR TD) with quizzing eyes and more than their share of disappointment.

All my talk of trying different settings for the glossies was so that you could MATCH the look you had on the original shot.
Having seen the kind of (flabbergastingly beautiful, at that) work you do, I have come up with a new preset (aptly named) that MAY work well enough for ArchViz/ProductViz, where renders are akin to paintings people tend to stare at for a long time (as opposed to a movie frame that's gone from your sight in 1/24th of a second).
That said, it is my belief that there is huge scope, in any viz market, for an approach which isn't "fire, and find out later how long it took to render".

I think about it this way: in screen space, we have only so many pixels for a given resolution, so the variance in rendertimes HAS to stay within some boundaries, no matter how complex a shot looks when finished.
To this effect, notice how Vlado and the guys developed a number of screen-space, rather than world-space, calculations: the LC (at 1 sample per pixel, in screen space) is never a 10-hour affair: it always completes in a matter of minutes, if that.
The same applies to displacement, the IRMap, and so on.
Of course, there will always be a variance which depends on scene complexity, but it's been my task as a TD to strip out the less contributing effects to try and maintain a steady flow of material from LnR to Comp.
For instance: cutting glossy bounces, using textured area lights in place of dozens of individual ones, baking lighting which isn't principal, and so on and so forth.
Then again, those companies hire the likes of me specifically for that part of the job, so that the artists don't have to worry about it.
Being able, however, to go to a Supervisor and let him/her know precisely how long a sequence will take to complete, before it has fully rendered even once, is extremely well received.

The aim of this script is to give any V-Ray user similar capabilities: you know your render will sample that many times within a pixel (say, 1024 rays), times the number of lights that hit that pixel, times the light samples.
That is something the script allows you to control very finely (albeit, admittedly, with the risk of ending up with a noisy image), whereas any kind of universal-settings scenario has TIME as the great variable in reaching a given S/N ratio, which in most fast-paced productions I have been in is not a viable option (grab 50 slaves to render a 300-frame sequence at 16 hours per frame, and you'll positively incur the grief of just about everyone else in the office, bordering on the bodily kind, under duress).
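
To put numbers on it, here's a back-of-the-envelope MAXScript sketch of that multiplication (illustrative only: the function name, and the assumption of one full set of light samples per material sample, are mine, not the script's):

Code:
-- illustrative only: a fixed (non-adaptive) per-pixel ray budget.
-- material samples, times lights hitting the pixel, times samples per light
fn raysPerPixel materialSPP lightCount lightSPP = (
    materialSPP * lightCount * lightSPP
)
raysPerPixel 1024 3 16 -- 1024 material rays, 3 lights at 16 samples each: 49152 rays

A figure like that, known in advance, is what makes the rendertime predictable before the first frame completes.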

Anyhow, blubbering aside, give the new version of the script a whirl (maybe not right away on the 16-hour shot...), and let me know if it works any better for the stuff you do.
    Last edited by ^Lele^; 13-08-2014, 04:45 AM. Reason: script removed. grab it from the first post.
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #17
Hehe, your posts are always a good read. Truth be told, I have to re-read them several times in an attempt to comprehend the technicality of it :P

I grew sick of waiting on the 16-hour scene to re-render with your settings, so I did a simpler sun+sky scene to see how it went.

Unfortunately it resulted in a bit of a blowout... There are also some noticeable noise issues in glass-behind-glass situations.
I wonder if my render times sky-rocketed this time around because my scene already had heavily neutered settings, such as a very low reflection max depth and cutoff values as high as 0.01. Obviously your settings would produce a theoretically higher-quality image.

One question: does your script work with standard lights? I try to avoid using them, but I find the target-spot type so quick and easy to set up when doing things like street lights and uplighting. I have some recent aerials employing hundreds of these lights...

I imagine some of my scenes are particularly hard to get clean. They're usually very dark, employing IBL along with lots of small sphere lights and target-spot standard lights, and plenty of super-glossy materials (wet-look paving, glossy tiles, etc.).

I'm pondering where the advantages lie between the 'new universal method' (I'll refer to it as 'NUM' from here on) and RenderMate. Although the NUM is super easy and tends to get good results relatively quickly, I think having the global subdivs multiplier at 0 means that if the scene looks largely fine except for one particular light or material, those cannot be overridden manually, so the entire scene has to be adjusted to compensate for these minor artifacts.

What I like about a script-based solution is that, as you say, settings can then be further tweaked from there. My only fear is my incomplete understanding of your methodology. Your settings seem so far from what I've become used to that if I change a few here and there to further tweak a scene and mess it all up, I could reach a point where I don't know how to recover it. Anyway, my ill-formed, idle mutterings are probably mostly redundant...

      And thank you for the compliment!

I think my next test will be using your LC methodology on a scene set up with this NUM, to see what the differences are.



      testing, testing, testing...
      James Burrell www.objektiv-j.com
      Visit my Patreon patreon.com/JamesBurrell



      • #18
Not right now, as I first filter by class, and that has to be a VRayLight.
I'll see what I can do so that it just grabs anything which has a "subdivs" parameter, including standard lights with VRay shadows, and IES lights.
It may be trivial, or a bit more annoying, but I have to look into the code, as that part was written millions of years ago, or thereabouts.
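
Something along these lines, in MAXScript (a rough sketch of the idea, not the actual script code; the helper name is made up, and the shadow-generator check is an assumption):

Code:
-- rough sketch, not the script's code: instead of filtering by class...
vrayLightsOnly = for lt in lights where (classof lt == VRayLight) collect lt
-- ...collect any light exposing a "subdivs" parameter, either on the light
-- itself or on its (VRay) shadow generator
fn hasSubdivs lt = (
    local own = isProperty lt #subdivs
    local gen = if (isProperty lt #shadowGenerator) then lt.shadowGenerator else undefined
    own or (gen != undefined and isProperty gen #subdivs)
)
subdivLights = for lt in lights where (hasSubdivs lt) collect lt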

In principle, no, scene complexity should not have an effect on the speed gains; however, there may be a number of cases I have not foreseen where the approach falls apart.

I suppose I would greatly benefit from taking a look at one of the scenes, in that I'd learn how you set them up, and would try to see if there's something the script isn't doing quite right.
On the other hand, I'd fully understand if you preferred not to divulge the material, or trade secrets.

Give me a shout in private, if you want: my Sunday afternoon looks very bleak right now, with a mother of a storm coming. ^^
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



        • #19
So it turned out to be fairly trivial to change.
It'll now act on any type of light which supports a VRay shadow with the "subdivs" parameter in it, be it photometric, standard, or any of the V-Ray light types, while treating dome lights as "GI" lights (so they'll take the parameter for GI/IBL, not for direct lighting).
          Last edited by ^Lele^; 13-08-2014, 04:45 AM. Reason: script removed. grab it from the first post
          Lele
          Trouble Stirrer in RnD @ Chaos
          ----------------------
          emanuele.lecchi@chaos.com

          Disclaimer:
          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



          • #20
I managed to talk to Vlado, who very kindly allowed me to post here the images for the shot benchmarked in the first post.
Notice the preset was still not ideal, due to the low min AA setting (hence some discernible geometric inaccuracy on the thin details; expected, and cured by using the VFX-Std preset, which should also render marginally faster).

            A refresh of the results (the images attached show render elements for the faster of the renders, as set up by the script):
            • 23:39 - rendered as sent
            • 24:49 - 2 stop preset
            • 40:09 - awsm preset
• 30:04 - universal: default vray, BF + LC, 1-64, 1.0 adaptive, 0.01 noise threshold
            • 22:20 - universal w/ embree on (go embree!)
• 3:13:34 - universal w/ 0.001 noise threshold w/ embree on
One thing is worth noting: Vlado MUCH prefers the approach he devised for rendering, which, to all intents and purposes, is exactly like the one I am after with the script: raising the "min shading rate" effectively feeds the AA sampler a cleaner image, making the faster, specialised rays work harder before the AA has a chance to intervene.
The only difference is that it's scene-wide, although there is nothing stopping you from changing individual shaders or lights (provided you don't set the GSM to 0, of course).
The script is meant for extreme optimisation, and to gauge, as stated, the rendertime before the renderer starts, as the number of SPP (samples per pixel) is known in advance, and there is a (roughly) linear progression between rendertime and SPP (i.e. 256 SPP will take half the time of 512, and so on).
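That linearity is the whole trick; as a sketch (my own illustration, not a readout from the script):

Code:
-- roughly linear rendertime vs. SPP, given one timed render as a reference
fn predictMinutes knownMinutes knownSPP targetSPP = (
    knownMinutes * targetSPP / knownSPP
)
predictMinutes 30.0 256 512 -- a 30-minute render at 256 SPP -> ~60 minutes at 512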
            Attached Files
            Last edited by ^Lele^; 03-08-2014, 06:25 AM.
            Lele
            Trouble Stirrer in RnD @ Chaos
            ----------------------
            emanuele.lecchi@chaos.com

            Disclaimer:
            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



            • #21
I'm happy to send you a scene at some point, but I'm afraid you'll facepalm when you see my settings :P

              And trust me there are no "trade secrets" in my scenes :P

I'll find some time to strip down a file and do a test with your script against my settings, and if the render times still skyrocket, I'll send something over to you.
              James Burrell www.objektiv-j.com
              Visit my Patreon patreon.com/JamesBurrell



              • #22
                Ah thanks!

In the meanwhile, I trained the script a bit more on the scene for which I posted the images, and for a 20% increase in rendertime got a much, much cleaner image, in both glossies and GI, particularly around the thin geo detail like the rightmost wall, and the lamps' nooks and crannies (the former was noisy, the latter had light leaks, due to the LC subdivs and sizes not being quite optimal for the shot).
I'll post a new version of the script tomorrow, with a dedicated preset for when one has such issues.
                Lele
                Trouble Stirrer in RnD @ Chaos
                ----------------------
                emanuele.lecchi@chaos.com

                Disclaimer:
                The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                • #23
Interestingly, I did a test using your LC methodology and saw a 15% decrease in total rendertime. I do, however, see a tiny bit more noise, a slight shift towards the green and red tones in my samplerate element, and a very slight brightening of bounced light.
                  James Burrell www.objektiv-j.com
                  Visit my Patreon patreon.com/JamesBurrell



                  • #24
                    I am assuming that the script is for v.3. I still have v.2.4 and I am getting an error.
                    Bobby Parker
                    www.bobby-parker.com
                    e-mail: info@bobby-parker.com
                    phone: 2188206812

                    My current hardware setup:
                    • Ryzen 9 5900x CPU
                    • 128gb Vengeance RGB Pro RAM
                    • NVIDIA GeForce RTX 4090
                    • ​Windows 11 Pro



                    • #25
                      Err, yeah, it is named Vray3RenderMate, Bobby :P

James, indeed I edited the post above, where I talked about the 1 sample per pixel, to include the cases where it is beneficial to have it higher than that (double or quadruple the subdivs, and maybe do a test render or two with LC/LC to check how good its distribution and filtering are).
Notice I prefer the PREfiltering to the rendertime filtering, chiefly because the former is done once, and not per pixel, so it's quite a bit faster.
The drawback is that if you have thin-sheet geo, or very, very fine geometric detail, you may get light leaks and splotches which would be absent, for the same filtering number, if done at rendertime.
Normally I prefer to fix the geo and get the LC to look right with prefiltering, but I am not quite current as to the rendertime filtering speed: Vlado may have made it faster (without saying, which would be typical... :P).
                      Lele
                      Trouble Stirrer in RnD @ Chaos
                      ----------------------
                      emanuele.lecchi@chaos.com

                      Disclaimer:
                      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                      • #26
So I'm trying to wrap my head around your methodology a bit more, and in the process I've managed to cut over 60% of the render time off my hellish 16-hours-to-render scene (which just goes to show how little time I have to even test-render).

Even so, I've re-read your posts many times over, and I think I'm starting to understand... a bit :P If I'm right in my understanding, you're able to quite accurately predict render times based on what you call Samples Per Pixel, and your script gives you a readout of this figure pre-render. Are you saying that if you know what the SPP is in advance, you can accurately predict render time regardless of a pixel's content?

Secondly, are you balancing the SPP of each effect, or just using whatever figure brings about a clean pass for each effect?
When not using a fixed AA (4/4 or 8/8, for example) the SPP figures obviously have a range based on adaptivity, so should I be aiming for 512 SPP as a minimum or as a maximum for each 'effect', or is this just all part of the game (as above)?

I'm trying to keep this charade of being learned going, but I fear it's all falling apart at the seams :P I feel like any questions I have can simply be answered by doing lots of my own testing... Like you said before, playing with the script shows rather quickly what is happening under the hood.

One question I would like to ask about the ArchViz preset concerns the AA min/max. Your defaults are 2/4; why have you chosen these, in particular the minimum of 2? Would you recommend I find the min/max AA settings that clean up my alpha and texture edges and stick with those, or do your settings play an integral role in the script's basic workings?

                        On a side note: I think the operation you've got going with the VFB is sort of dangerous. It seems to be doing some scary things with my VFB history under certain circumstances. Any chance you can include a button in your script to turn that feature off?

                        James Burrell www.objektiv-j.com
                        Visit my Patreon patreon.com/JamesBurrell



                        • #27
Wow, James, you're as good an interlocutor as I have ever found.
Why the hell did I not have you as an artist on the movies I TDed? :P

                          Are you saying if you know what the SPP is in advance, you can accurately predict render time regardless of a pixel's content?
I know this sounds VERY counter-intuitive, and that's because your intuition is actually right.
It's right in that, no, NORMALLY a pixel could take, for a given amount of sampling done to it, anything from very little time to forever, or thereabouts.
There OUGHT to be no way to accurately predict how long a render will take when the sampling isn't fixed, and the scene contents, shader settings, and light settings are unknown.

And this is why I am severely limiting stuff like glossy bounces, for instance.
A bit of VERY SIMPLE math may help.
Let's say that a glossy reflection has 8 subdivisions, or 64 SPP.
If the glossy were allowed to bounce off itself (i.e. the double-mirror scenario) five times, without any adaptivity at all, the pixel which sees the fifth bounce would have to trace 64^5 rays.
That'd be around one BILLION rays for that pixel alone.
That's why we ought to thank Vlado twice a day for the adaptive engine, by the way.

My approach, however, TRIES to do without the adaptivity wherever possible, but of course the risk of hitting infinite rendertimes is very high that way.
Hence the limiting of glossies: regardless of the amount of glossiness, or of what is reflected in it, a 2-bounce, 8-subdivs shader will only ever trace 4096 rays ((8^2)^2).
So, in the worst-case scenario, if you had an image fully covered by a 2-bounce glossy shader, the render (of the glossy effect) will ALWAYS take only as much as the image size requires, regardless of anything else.
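
The same arithmetic, written out (just the numbers from above):

Code:
-- uncapped: 8 subdivs = 64 SPP, self-reflecting five times, no adaptivity
64.0^5 -- ~1.07 billion rays for the pixel seeing the fifth bounce
-- capped at 2 bounces: (8^2)^2
64.0^2 -- 4096 rays, a fixed worst case regardless of scene content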

Of course, a real scene is often a LOT more complex than that, and that is why, even in the best of cases, it's a seriously, seriously bad idea to do without adaptivity entirely (well, some OTHER renderer does precisely that, and the results do speak for themselves, lol).
The example above holds for pretty much all the effects (maybe not for raytraced, physical SSS, due to the nature of the beast, but it may soon... err, I never wrote this!).

                          Secondly, are you balancing the SPP of each effect or just whatever figure brings about a clean pass for each effect?
                          When not using a fixed AA (4/4 or 8/8 for example) the SPP figures have a range based on adaptivity obviously, so should I be aiming for 512SPP as a minimum or maximum for each 'effect' or is this just all part of the game (as above)?
Now, which exact subdivs value will give you clean results for a given glossy amount (let's stick with the one example) is practically impossible to know BEFORE the rendering starts: it's down to the glossiness amount, of course, but also to the chance of multiple glossy bounces (a glossy shader reflecting another glossy shader, reflecting another glossy shader, for a 3-bounce max), and to the distance of reflected objects (in that if an object is close, for a given gloss amount, it will be sharper and require fewer samples to clean up than a farther object will).
The same applies to lights, GI (interior versus exterior is the most glaring case), and so on.
However, I have never seen any scene needing more than 2048 SPP to clean up ANY solid-surface or shadow effect.
Feel free to find out which balance of SPP versus noise amount is good for you.

I am actually working with someone else to develop a few ideas I have had in mind for a year now, to implement pre-render heuristics that would take care of just that: automagic sampling for lights, glossies and such. But there is absolutely no guarantee it will ever work, in principle or in practice.
I'll know when we start laying down the prototype (the math, taking care of worst-case scenarios, seems sound, but hey, I'm NOT a Vlado who speaks Italian, so take all this with two pinches of salt).

Ultimately, the script tends to err on the safe side, letting adaptivity decide when it's right to stop tracing, with values based on extensive testing (of which I'll need a wee bit more, as you've seen with the script updates) and empirical results.

So, how, you asked, can I know in advance how long a render will take?
If you got the bolded sentence above right, you'll understand by now that rendering a quick preview with a given (lowish) sampling amount provides the correlation you need to know how long the final render will take.
If a render which had 128 SPP to begin with and two glossy bounces took 1 hour, one with 256 SPP will take roughly twice as long (with an adaptivity of 0.5, counting on the second bounce being sampled a bit less, as it is less "important"), or exactly four times as long with an adaptivity of 0 (as then it's 128*128 vs. 256*256 for the second bounce).
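
As a tiny sketch of that scaling (assuming zero adaptivity, so the second-bounce cost grows with the square of the SPP):

Code:
-- preview at 128 SPP took 1 hour; final at 256 SPP, adaptivity 0, 2 bounces
previewHours = 1.0
scale = (256.0 * 256.0) / (128.0 * 128.0) -- second bounce: 256^2 vs 128^2 = 4x
previewHours * scale -- -> 4.0 hours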

I suppose this is just about the lay of the land, as far as my reasoning behind the script goes.

                          On a side note: I think the operation you've got going with the VFB is sort of dangerous. It seems to be doing some scary things with my VFB history under certain circumstances. Any chance you can include a button in your script to turn that feature off?
As regards the VFB: if you could be more specific as to which odd behaviour it shows with your history (which HAS to be enabled, and with enough disk space to save the current VFB image in vrimg format), I may try to correct it.
Of course, the checkbox will be coming with the next script update regardless (I'm currently busy testing some other stuff, but will get to it this evening, I hope).

                          One question I would like to ask with the ArchViz preset is about AA min/max. Your default settings of 2/4; why have you chosen these? In particular the minimum of 2? Would you recommend I find the min max AA settings that clean up my Alpha and texture edges and stick with those or do your settings play an integral role in the scripts basic workings?
The ArchViz preset with a 2-4 may well be an error on my part: I thought I had left it at 4-4, as you can very well tell that 4 rays per pixel will NOT give you nice, clean alpha edges.
I'll look into it as well, and update it.
On the other hand, I hold no truth whatsoever: you are not only free, but very much encouraged, to find what combination of values floats your boat for a particular scenario.
And maybe then let me know, eheh XD
                          Lele
                          Trouble Stirrer in RnD @ Chaos
                          ----------------------
                          emanuele.lecchi@chaos.com

                          Disclaimer:
                          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                          • #28
A quick test on a new job with the ArchViz setup has given me very nice results: an almost 100% clean render, using HDRI & BF/LC (in excellent time, I should add).
It has introduced some weird fireflies into the GI pass that weren't there before.

I don't quite understand what's going on, and *why* the render is cleaner with subdivs still @ 8 and MSR @ 2. Is the AA doing all the work?

edit: It can't be, with a clr threshold of 0.1, so I'm definitely baffled over here.
                            Last edited by AlexP; 04-08-2014, 08:23 AM.



                            • #29
                              Ahah it's BLACK MAGIC!

                              I turn off "divide shading subdivs", so each AA step MULTIPLIES the base subdivisions.
                              In the case of the Archviz preset, the math is as follow (well, for how it shows up now. not user is identical to what i released, but the math is correct regardless):

8 subdivisions = 64 SPP.
(64 SPP * (4*4) (min AA squared)) * 1.334 GSM * 0.75 (as adaptivity is 0.25, so 75% of the rays are cast regardless) = 1024 SPP (and change).
Should the adaptive routines think more rays are needed, at the min AA level of 4 you'd get 1366 SPP.

The same applies for the max AA, only in the formula above 4*4 becomes 6*6, which leads to between 2304 and 3073 SPP.
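
Or, as one little function (matching the numbers above; the function name is mine):

Code:
-- SPP for a preset: subdivs^2 * AA^2 * GSM * (1 - adaptivity)
fn presetSPP subdivs aa gsm adaptivity = (
    subdivs^2 * aa^2 * gsm * (1.0 - adaptivity)
)
presetSPP 8 4 1.334 0.25 -- ~1024 (min AA; 75% of rays always cast)
presetSPP 8 6 1.334 0.25 -- ~2304 (max AA)
presetSPP 8 4 1.334 0.0  -- ~1366 (if adaptivity calls for everything)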

Let me know if I managed to get you more confused. ^^

Ah, and if you could show me what you mean about the fireflies in the GI (a crop is fine), I may be able to understand where they come from: I definitely turn OFF max ray intensity, so that could be the cause if the HDRI is extremely bright in some areas. Feel free to turn it back on by hand.
                              Lele
                              Trouble Stirrer in RnD @ Chaos
                              ----------------------
                              emanuele.lecchi@chaos.com

                              Disclaimer:
                              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                              • #30
Originally posted by ^Lele^ View Post
                                Ok that makes sense.

Obviously all scenes are different, but I don't understand why this way of sampling is faster than, say, just upping the subdivs on lights & materials, or upping the MSR.

Additionally, I don't understand why 4/6 AA isn't much slower. Also, I don't understand why the edges are clean with the clr threshold @ 0.1.

                                Black magic indeed.

You're right about the fireflies: it was the max ray intensity; if I turn that back on, they go away.

