Any way to force further bucket subdivision?

  • #1

    Is there any way to force Vray to subdivide buckets beyond the default? I've always been unclear about how this works beyond assuming it just happens automatically at a certain point dependent on image resolution and the number of CPUs available to Vray.

    I'm also unclear whether, beyond that, this process is connected to the user-defined default bucket size (out of the box it's set to 64; I usually halve this to 32).

    I'm really tired of having to wait for Vray to finish rendering the last couple of buckets of a refraction-heavy scene -- it's kind of ridiculous how long it takes to complete just two buckets. I've had to resort to resumable rendering, killing and restarting the render to force subdivision and try to get these buckets completed faster. However, even when there are only two buckets left, Vray will not subdivide beyond the second level (I've never seen it do otherwise), so I'm limited to two procs and it's still taking forever for these to complete. If I could force further subdivision obviously these buckets would finish sooner.
    Last edited by SonyBoy; 05-10-2021, 08:20 AM.

  • #2
    Yes, this bucket issue is definitely annoying. I've had plenty of cases where the last few buckets take as long as all the rest of the render. I always set a maximum bucket size of 16 now, and I'm using the progressive sampler more and more (though it also uses less CPU towards the end).
    www.mirage-cg.com



    • #3
      Originally posted by SonyBoy View Post
      If I could force further subdivision obviously these buckets would finish sooner.
      That's unlikely.

      The reasons for long-rendering buckets are generally well known (when not talking about a bug, of course), and currently there is nothing the renderer can do to avoid completing the task it's been set.
      Research work is being done, but there is no ETA on it, as it's very much at the edge of current knowledge.

      On the user side, however, there are a number of mitigating actions that can be taken.
      The foremost cause for hard-to-converge pixels (that is what the render is doing while you wait on those buckets/pixels) is generally high energy concentrated in a small spot.
      If the high energy is in a pixel -or small clump thereof-, converging to the noise threshold will take a long time, and it's possible it will *never* converge, no matter how many samples one throws at it, as the variance will always be ginormous for each sample, making "averaging" it to the set threshold hard to impossible.
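      A toy Monte Carlo sketch illustrates why such a pixel may never settle (plain Python; the spike probability and intensity are made-up illustrative values, not anything taken from V-Ray): a rare, hyper-bright sample path keeps the batch-to-batch variance enormous, so the running average never stabilises within any reasonable sample budget.

```python
import random

def estimate(samples, spike_prob=1e-3, spike_value=1e5):
    """Average `samples` light contributions, where a tiny fraction of
    rays hit a hyper-bright source (e.g. a sun reflection)."""
    total = 0.0
    for _ in range(samples):
        if random.random() < spike_prob:
            total += spike_value            # rare, huge-energy path
        else:
            total += random.uniform(0.0, 1.0)  # ordinary path
    return total / samples

random.seed(42)
# Batch-to-batch estimates swing wildly between ~0.5 (no spike hit)
# and ~100+ (one or more spikes hit), so a noise threshold based on
# variance between batches is effectively never met:
print([round(estimate(1024), 2) for _ in range(5)])
```

      Whatever the sample count per batch, a single spike shifts that batch's mean by orders of magnitude, which is exactly the "ginormous variance" situation described above.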

      That this should happen in refraction makes sense, of course.
      But it's common in a number of other cases (geometric discontinuities will make sure of this. And DoF and Moblur too.).

      What can be done to ameliorate it is to change lights and shaders.
      Roughen the specular or refraction, so as to distribute the incoming energy better; check the lights' (or HDRIs'!) intensity and use just as much as you need for the specular part (splitting lighting between the GI/diffuse and the specular/refraction is often very, very helpful in moderating the intensities).

      If neither of the above is doable, for whatever reason, what you're left with is clamping the max AA (raising noise threshold will help nothing under these conditions.), ensuring it won't run too long on those hard pixels.
      Notice you'll be left with half-finished sampling of said high energies, so expect visual issues.

      Of course it's potentially more work, but that has to be weighed against the gains in rendertimes, which are often quite big (i.e. we've witnessed 10++ times quicker to a given noise threshold, after slight massaging of the energies).

      Feel free to share a scene via email, should you wish for it to be studied specifically.
      Lele
      Trouble Stirrer in RnD @ Chaos
      ----------------------
      emanuele.lecchi@chaos.com

      Disclaimer:
      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



      • #4
        Thanks for the detailed explanation. I was aware of some of the reasons why this occurs (e.g. overbright pixels that take a long time to resolve), but not others, and it's good to have it explained in technical terms.

        I'll certainly try some of the mitigations you suggested.

        However, of course some of these will add to render times themselves, such as adding glossiness to refractions and reflections.

        So far as splitting the HDRI lighting -- so in effect that would use one dome light for diffuse and GI, and a second for specular and reflections? Would you suggest identical maps in both, or is there any benefit in editing these images in some way?



        • #5
          Originally posted by SonyBoy View Post
          However, of course some of these will add to render times themselves, such as adding glossiness to refractions and reflections.
          Yes, but this too has to be taken in context: if it shortens the overall time by virtue of not hanging in a specific place, it's a win.

          So far as splitting the HDRI lighting -- so in effect that would use one dome light for diffuse and GI, and a second for specular and reflections? Would you suggest identical maps in both or is there any benefit in editing these images in some way?
          Not just HDR maps in the dome, mind you.

          Let me be more specific, although I'll be greatly simplifying the issue: very bright sources are great with GI and diffuse lighting, because they provide sharp shadows and loads of light to bounce. While there is still a speed-up when the energy is more evenly distributed, the penalty isn't nearly as severe as with reflection/refraction/direct visibility.

          Imagine however that all of a sudden one such hyperbright source (the physical sun disk is in the millions float at midday, f.e.) was to be seen in one reflective pixel.
          Even with the lower than 1 reflectivity (fresnel and reflection color < 1), the pixel may well be tens of thousands of times brighter than its surroundings (we generally aim, after exposure, at most of the "subject" to be between 0.0 and 1.0.), and still only "show" as white.

          Averaging it with the surrounding parts, which are within the Normal range, then becomes an endless task, and one that can -in fact- never be achieved.
          Assuming a flat 9px averaging kernel, 8 pixels at 0.5 float with the one pixel at 10,000 float produce an average of ~1111.6 float; nowhere near Normal ranges.
          In fact, any value within the Normal range for the 8 pixels makes absolutely no discernible difference to the resulting average, showing how a single pixel can wreak havoc on any attempt at anti-aliasing.
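          The arithmetic is easy to verify (a minimal sketch, using the illustrative 0.5 and 10,000 float values from the example above):

```python
# One hyper-bright reflected pixel (e.g. a sun disk at 10,000 in float)
# surrounded by eight pixels in the normal 0..1 range:
kernel = [0.5] * 8 + [10_000.0]
avg = sum(kernel) / len(kernel)
print(avg)  # ~1111.6 -- nowhere near the 0..1 "Normal" range

# The eight well-behaved pixels barely matter: even pushing them all
# to 1.0 moves the average by less than half a unit.
kernel_max = [1.0] * 8 + [10_000.0]
print(sum(kernel_max) / len(kernel_max) - avg)  # ~0.44
```

          The bright texel alone decides the average, which is why no amount of extra sampling of the neighbours can pull the result back into range.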

          It follows that the only way - besides sample clamping - to make convergence happen is to diffuse those energies, and that's why making things rougher will render, in this specific case, quicker.
          In a way, part of the AA is done by the shader roughness.

          So, back to the top:
          For GI, usual lighting practices are fine (and one should really use IES lighting and physically plausible values whenever possible!).
          You will want to have the lighting which deals with reflections, refraction and direct viewing as under control as possible (read: as low as you can get away with without ruining the look.).
          This means colorpicking and clamping/scaling sun disks in HDR maps (always save them to 32BPC anyway, to avoid +inf values), and texturing/lowering the intensity of light fixtures (f.e. using a Softbox with frame active on directly seen light sources, and so on).
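          The sun-disk clamping idea can be sketched like so (NumPy; the map resolution, sun intensity, and clamp value are all illustrative assumptions -- a real map would be loaded from disk and the clamp tuned per scene):

```python
import numpy as np

# Hypothetical HDR environment map: mostly sane values, plus a sun disk
# in the millions float, as a midday physical sun disk can be.
hdr = np.random.default_rng(0).uniform(
    0.0, 4.0, size=(64, 128, 3)).astype(np.float32)
hdr[30:32, 60:62, :] = 2_000_000.0  # the sun disk

# Clamp (or scale) the hottest texels in the copy used for
# reflections/refractions; the GI copy can keep the full range.
sun_clamp = 50.0  # assumption: tune per scene
hdr_spec = np.minimum(hdr, sun_clamp)

print(float(hdr.max()), float(hdr_spec.max()))
```

          The GI dome keeps the original map so shadow sharpness and bounce energy are preserved, while the specular/refraction copy no longer carries texels that are tens of thousands of times brighter than their surroundings.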
          Using the shaders' parameters to influence the look of highlights (for example, hotspot-to-tail ratio, or GTR gamma value) instead of lighting intensity is much preferable, under these premises.

          Finally: this is a hotly debated topic among users, so I would encourage testing, both synthetic and in practice. Do not take my word for it.



          • #6
            Thanks for the followup. It certainly is a complex subject. You seem to be suggesting a non-PBR workflow, however? Not that this is far from how I work -- I do in fact already tend to use shading parameters to get things "looking right".

            One day we'll just be able to hit render and it will all be perfect (just like how clients think it already works).



            • #7
              Oh i mean minute changes. (*)
              Consider, f.e., that some engines already add roughness to surfaces that are out of focus, and lower the shading quality for motion-blurred geo.
              V-Ray too can do similar, but not identical, things.
              F.e. it automatically reduces MSR for thin geo like hair, to favour camera rays, and so on.

              We could be forcing people to not use "straight" reflections and refractions, f.e., with a clamp to min roughness.
              It's not physical to have perfect speculars, and it can get troublesome, but as it's also a performance differentiator, we prefer to leave it to the user.

              Also yes, the reasons for hard convergence are a few more besides what i mentioned, as a few things play into the probabilities/energies modern renderers have to contend with.

              (*) EDIT: Minute changes to shaders. The changes to the reflection/refraction/direct lighting are well more substantial, yes.
              The aim is to be physically based, as the goal is to produce a pleasant image, in production, in a reasonable time.
              Last edited by ^Lele^; 07-10-2021, 07:01 AM.

