Render vs Post... what's your take?

  • Render vs Post... what's your take?

    Hey there,

    So it seems I've painted myself into a corner... I'm still working on my personal project with some underwater shots, and while I already rendered the biggest shot using a lot of layers and compositing trickery, I decided to go "all in render" for the next shots.
    The reason for this was mostly some opacity-mapped leaves and dust specks, for which it is notoriously hard to get decent DoF and motion blur in post.

    Another reason was that I simply liked the underwater look with real environment fog compared to faking it with a Z-buffer in post.

    So I started lighting with that in mind and I've gotten pretty far, focusing on getting the light right under these conditions. But now I've turned everything back on for a test render, and VRayDisplacement in particular is totally killing render times.
    Right now I'm looking at almost an hour per frame at 720x405 px... so I guess that will scale up to around 3-4 hours per frame at final resolution (rough scaling sketch at the end of this post).

    The sequence itself isn't too long at only 130 frames, but if anything needs fixing and re-rendering, those render times are very steep.

    So my question here would be:

    What's your take on this?
    Are render times like this acceptable in production?
    I haven't tweaked any V-Ray settings since two or three releases ago; is it maybe time to get back under the hood?
    Or should I take the other pill: recreate my lighting without environment fog, render everything in layers (maybe with the exception of particles) without in-render DoF and motion blur, and try to recreate the look in post?
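
    A rough back-of-envelope check on that scaling guess (my own maths, assuming render time scales roughly with pixel count and ignoring per-frame overhead like displacement tessellation or GI prepasses):

    # Rough per-frame render time scaling by pixel count -- hypothetical numbers.
    test_res = (720, 405)
    test_time_h = 1.0  # ~1 hour per frame at the test resolution

    for final_res in [(1280, 720), (1920, 1080)]:
        scale = (final_res[0] * final_res[1]) / (test_res[0] * test_res[1])
        print(f"{final_res[0]}x{final_res[1]}: ~{test_time_h * scale:.1f} h/frame")

    # 1280x720:  ~3.2 h/frame (roughly the 3-4 h guess, if the final output is 720p-ish)
    # 1920x1080: ~7.1 h/frame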

  • #2
    In another thread someone mentioned that a box with the VRayScatterFog material was faster to render than environment fog, and that a Phoenix grid with no sim but a fully filled box of smoke rendered quicker again - might be worth a look!

    On the render times it depends - volumes are still nasty, but V-Ray is quite quick at them compared to a lot of other renderers. The big thing is whether you're earning enough on the job to justify the times versus your cost of rendering them - you could look at one of the cheap online render farm services like PixelPlow, or maybe take advantage of the V-Ray Cloud beta and get some frames for free?



    • #3
      Thanks for the input. Thing is: I've already substituted the VRayEnvironmentFog with an empty Phoenix grid, and comparing it again showed that it actually renders pretty much as fast as without it (possibly even a bit faster, since a lot in the background is washed out).
      So right now I'm back to rendering in two layers: one with Phoenix fog but without DoF and motion blur, and one with just the particles with DoF and motion blur enabled, but with all VRayDisplacement and Forest Pack objects disabled, since I don't really need them for the particles.
      I can't justify the use of an external farm, since this is a personal project and I doubt I'll make any money directly out of it...

      My next step would probably be to "troubleshoot" the render to see what has the biggest impact on render times and the least impact on visuals, and eliminate or substitute that... which is most likely VRayDisplacement in places that don't really need it...



      • #4
        Originally posted by ben_hamburg View Post
        I can't justify the use of an external farm, since this is a personal project and I doubt I'll make any money directly out of it...
        Which I think is why you should check out the V-Ray Cloud beta, which has free, unlimited rendering for as long as the beta runs. ^^
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



        • #5
          Free unlimited rendering? Sheesh, I wish I were further along with my short film... I'm already dreading the render times in the arctic macro shots, where I will have forests of huge snowflakes with tons of refractions and reflections and no way of faking DoF in post...
          For these particular shots, though, it doesn't really make sense: there are so many proxies, XRef scenes and cached Particle Flows involved that I'm guessing uploading would take longer than rendering, lol...
          Not to mention the external plugins like SmartRefs (can't live without it anymore!) or Forest Pack that would make scene preparation and troubleshooting such a pain...

          I cut my render times by more than half by switching from raytraced to prepass-based SSS2 materials, and now I'm rendering three passes:

          1. Main pass with lots of light select elements (my new best friends!)
          2. Secondary pass with everything matte and the Phoenix grid (set to "volumetric" and with the light cache speedup enabled) for environment fog. I tried faking the fog in post, but the Z-buffer for a large scene like this is too hard to control, and real atmosphere still looks more believable. I tried rendering 1+2 in one pass, but the light cache speedup messes with the light select elements, and without it render times are too high.
          3. Third pass as described above for just the particles, with real DoF and motion blur.

          Again... hard to tell whether render times for one combined pass would be that much higher than rendering these separately with a bit of trickery, but being able to re-render individual layers if needed will make things a lot easier down the road.

          Any additional thoughts?
          I'd still like to know what render times are like in big productions, and whether it's still more of a dozens-of-layers approach rather than an all-in-one approach...



          • #6
            Big on our side for Max and V-Ray is about 4 hours for a responsible 2K frame on a big Marvel environment (we like about 2 hours though); irresponsible is about 12-24 hours a frame for a fully CG interior with lots of glossy everything, potentially lit by mesh lights! For really heavy volumetric jobs at the minute we go for Clarisse instead, as it's got better memory handling for VDB caches. More recent V-Ray builds have proper instancing of VDBs, so we won't get killed on memory when we upgrade!



            • #7
              Originally posted by ^Lele^ View Post
              Which I think is why you should check out the V-Ray Cloud beta, which has free, unlimited rendering for as long as the beta runs. ^^
              Hi Lele,

              I'm relying heavily on rendering quite a lot of cameras via the batchcam 1.12 script to manage and submit all the cameras in my interiors.
              Is there a way to submit to V-Ray Cloud via this script? It would be a huge win, because I don't know any other online render service that offers anything remotely like this.
              I've stopped using online render farms because of this and installed my own render nodes.


              Thanks!
              Pieter
              claar.be



              • #8
                Originally posted by pietervanstee View Post

                Hi Lele,

                I'm relying heavily on rendering quite a lot of cameras via the batchcam 1.12 script to manage and submit all the cameras in my interiors.
                Is there a way to submit to V-Ray Cloud via this script? It would be a huge win, because I don't know any other online render service that offers anything remotely like this.
                I've stopped using online render farms because of this and installed my own render nodes.


                Thanks!
                Interesting, but I thought I had seen this before in Deadline...
                This is surely handy for those without it.
                To modify that script, you'd need the author's permission, I think.
                If there are specific functionalities we should be implementing in our submitter, please ask away!
                Lele
                Trouble Stirrer in RnD @ Chaos
                ----------------------
                emanuele.lecchi@chaos.com

                Disclaimer:
                The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                • #9
                  Originally posted by joconnell View Post
                  Big on our side for Max and V-Ray is about 4 hours for a responsible 2K frame on a big Marvel environment (we like about 2 hours though); irresponsible is about 12-24 hours a frame for a fully CG interior with lots of glossy everything, potentially lit by mesh lights! For really heavy volumetric jobs at the minute we go for Clarisse instead, as it's got better memory handling for VDB caches. More recent V-Ray builds have proper instancing of VDBs, so we won't get killed on memory when we upgrade!
                  Wow...12-24 hours per frame does sound like someone is being called to the supervisor...

                  But may I ask whether it's still more of a layered approach with heavy comp work, or whether it's not unusual to render DoF, motion blur and all FX in one pass and mostly do colour correction in post?



                  • #10
                    Originally posted by ben_hamburg View Post
                    Wow...12-24 hours per frame does sound like someone is being called to the supervisor...
                    It really just depends on the work done in that time: if it's necessary, it's necessary.
                    Sure, it'd better not be a plane and a sphere with AO, but it's ILM environments we're talking about...
                    Lele
                    Trouble Stirrer in RnD @ Chaos
                    ----------------------
                    emanuele.lecchi@chaos.com

                    Disclaimer:
                    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                    • #11
                      Originally posted by ben_hamburg View Post

                      Wow...12-24 hours per frame does sound like someone is being called to the supervisor...
                      I read about similar render times for Pixar's Inside Out, but I think that was for 4K.
                      German guy, sorry for my English.



                      • #12
                        Originally posted by ben_hamburg View Post
                        But may I ask whether it's still more of a layered approach with heavy comp work, or whether it's not unusual to render DoF, motion blur and all FX in one pass and mostly do colour correction in post?
                        Some stuff makes sense to do in 3D and other stuff in 2D. We're doing some underwater stuff right now: we're rendering 3D motion blur (since it isn't that much of an extra hit these days) and doing a volume fog pass too, and comp are doing all the blooming and the depth blurring. For the diffusion, try doing a few different levels of blur and use the depth pass to apply the heavier blurs as you start going back into the shot - ideally in Nuke so you're not getting murdered (rough sketch of the idea below).

                        In terms of your particles, why not stack a few elements of floating stuff in 3D space in your comp and do a normal depth blur on them? I find the magic number for "levels of things" in VFX is about 4 - if you want to make something look huge you've got to have 4 levels of detail in there from big to small, and if you want to sell shallow focus in a shot you'd have about 4 layers of cards of stuff hanging in front of camera, all moving back in focus.
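
                        Just to sketch the depth-driven blur stack (purely for illustration - NumPy/SciPy rather than Nuke, and all the numbers are placeholders):

                        import numpy as np
                        from scipy.ndimage import gaussian_filter

                        def depth_blur(rgb, depth, levels=4, max_sigma=20.0):
                            # rgb: float image (H, W, 3); depth: normalised depth pass (H, W), 0 = near, 1 = far
                            out = rgb.copy()
                            for i in range(1, levels + 1):
                                sigma = max_sigma * i / levels
                                blurred = gaussian_filter(rgb, sigma=(sigma, sigma, 0))
                                # each heavier blur only kicks in past its depth band
                                mask = np.clip((depth - (i - 1) / levels) * levels, 0.0, 1.0)[..., None]
                                out = out * (1.0 - mask) + blurred * mask
                            return out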



                        • #13
                          Oh yeah! Judging by the Facebook V-Ray GPU group and Tomas from Dabarti's posts, GPU environment fog seems really quick now!



                          • #14
                            Right, I just went through the Nuke script of a really big underwater environment here; the volume fog plays a small part in it, but not as major as I'd have thought. The main underwater look is done by two things, all in 2D and driven by the depth pass. The first is the colour drop-off: do a colour correction where you add more blue and pull out the reds, and mask that via your depth pass, since water filters out everything but blue light the further in depth you go. The second is 5 levels of blur, again masked by the depth pass - on a 2K frame we've got 5 blurs between roughly 50 and 400 pixels, all getting gradually weaker as the blur size goes up, from 15% opacity back to 5% opacity. If you're using Nuke, any of the nodes have a "mix" slider so you can blend the effect on and off, with 0% meaning whatever is fed in comes straight out (no effect) and 100% being the fully affected result. There's a rough sketch of the colour drop-off after this post.

                            Again, this is a really wide shot that would be kilometres in depth, so you wouldn't expect shallow depth of field on something this size. If you're doing something more human-scale or macro you'd likely include some DoF!
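
                            To illustrate the colour drop-off and the "mix" idea (my own rough NumPy sketch - the gains and tint are made-up placeholders, not the production values):

                            import numpy as np

                            def underwater_dropoff(rgb, depth, water_tint=(0.0, 0.1, 0.25)):
                                # rgb: float image (H, W, 3); depth: normalised depth pass (H, W)
                                graded = rgb.copy()
                                graded[..., 0] *= 0.2              # pull out the reds
                                graded[..., 2] *= 1.2              # push the blues
                                graded += np.asarray(water_tint)   # lift towards the water colour
                                # per-pixel "mix": 0 = untouched foreground, 1 = fully graded background
                                mix = np.clip(depth, 0.0, 1.0)[..., None]
                                return rgb * (1.0 - mix) + graded * mix

                            The blur stack works the same way, just with the per-level mix dropping from around 0.15 to 0.05 as the blur size goes up.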



                            • #15
                              Well, yeah, of course, for an ILM environment I guess that's OK... If I personally had render times like this, I'd definitely get called to the supervisor...

                              Render times with motion blur and DoF enabled in render still make a pretty big difference here. I don't have the exact numbers in mind anymore, but it was at least three times higher on a complex scene with Forest Pack and VRayDisplacement all over the place. And for me the difference between 1.5 and 4.5 hours still hurts too much, especially for a personal project, so I'm returning to the comp workflow as noted above.

                              I'm going to try to figure out the blurring approach you mentioned, joconnell; so far I've mostly just dealt with the colourisation via the Z-buffer. I'm working with After Effects, but that sounds like it works pretty much the same as in Nuke. You've only mentioned the colourisation and the blur; I've always also used a Z-buffer based contrast correction, lowering the contrast by up to 100% in the distance to simulate limited visibility (rough sketch at the end of this post).
                              Maybe I just need 32-bit Z-buffers, but that part is actually what always seems to look better with 3D environment fog.

                              Apart from that, I've also come to the conclusion that sometimes the scene composition itself is flawed for an underwater conversion. In my example shot I've put way too many elements that are supposed to stay visible into the two thirds of the image closest to the camera, when in a real-world underwater scenario at least a third of that should already be in a much lower visibility range.
                              So in this case I kept tweaking and balancing between showing off my pretty assets and making the underwater look believable, and although it's still a cool shot overall, the general composition just seems off and still somewhat flat:

                              https://vimeo.com/278012939
                              Password: tardigradeWIP

                              I'm probably not going to touch this shot majorly (composition-wise) before I finish everything else, because I've already spent way too much of my free time on it and there are still a lot more shots of this size, but I'd still love some pro feedback, even if it's harsh and I don't like it...


                              After Effects is also pretty bad at adding 2D elements into a 3D workflow, and for all my underwater shots I've made the mistake of including long camera dollies, which makes adding 2D plates increasingly difficult, at least with my limited capabilities.
                              I've tried several approaches, but I haven't managed to get a working 3D camera out of 3ds Max and into After Effects.

                              That's the main reason why I was hoping to do the rest of the underwater shots mostly in render and not in post, especially after early tests looked very promising.
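
                              For reference, the Z-buffer contrast correction I mean is basically just reducing contrast around a mid pivot as depth increases - a tiny sketch, my own rough NumPy version with placeholder numbers:

                              import numpy as np

                              def depth_contrast(rgb, depth, pivot=0.18):
                                  # Contrast factor goes from 1.0 at the camera to 0.0 (fully flat) at full depth,
                                  # i.e. the contrast is lowered by up to 100% in the distance.
                                  contrast = 1.0 - np.clip(depth, 0.0, 1.0)[..., None]
                                  return pivot + (rgb - pivot) * contrast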



