Adaptivity clamp

  • Adaptivity clamp

    Hey guys,
    can you please explain what this new feature does?
    https://www.behance.net/Oliver_Kossatz

  • #2
    I'm interested in this one too, and I have three questions:
    1. Will this fix the issue of dancing hot pixels mentioned here:
    https://forums.chaos.com/forum/v-ray...sy-reflections
    2. I remember Lele explaining that using a low Burn value (for sampling only, while keeping the image raw) is not good practice, especially with the denoiser, IIRC. So what's the difference now with this option?
    3. Is this option the same as Corona's Highlight Clamp?
    https://support.chaos.com/hc/en-us/a...mping-3ds-Max-
    -------------------------------------------------------------
    Simply, I love to put pixels together! Sounds easy right : ))
    Sketchbook-1 /Sketchbook-2 / Behance / Facebook



    • #3
      I am very curious about this too. Note that the docs have been updated, and at least from the docs it doesn't sound like it's for fireflies, unless the extra samples at higher values are indeed there to get rid of fireflies.

      https://docs.chaos.com/display/VMAX/Image+Sampler

      Adaptivity clamp – Specifies an intensity limit for the adaptive bucket and progressive samplers to avoid excessive sampling of overexposed areas. Lower values mean a lower limit and potentially noisy overexposed areas. Higher values produce more samples in overexposed areas.

      It sounds more like a setting to use FEWER samples for highlights to cut down on render time. That is great... IF we have solved the firefly issue once and for all?



      • #4
        I've just found something interesting.
        I opened an old scene that was rendered in V-Ray 6.1.2, and the Adaptivity clamp was set to 100, which is much higher than the new default value (1.5). I think this should be mentioned in the docs too, with some examples to show the difference.
        Anyway, I will do my tests once I finish the project I'm working on.

        -------------------------------------------------------------
        Simply, I love to put pixels together! Sounds easy right : ))
        Sketchbook-1 /Sketchbook-2 / Behance / Facebook



        • #5
          The goal of this option is to reduce sampling in overexposed areas of the image. We are working on an optimized way to compute complex light sources and we noticed that V-Ray spends a lot of time on lampshades, highlights and other pixel areas that are very bright but putting more samples there doesn't improve the image too much since tone mapping and/or clamping hides the noise there anyways. Especially in more complicated scenes, sampling these areas of the image could take more time than sampling all the rest of the image.

          Here is a simplified scene where a glossy refractive tube is wrapped around a spherical light source. With the previous settings (adaptivity clamp ~100), V-Ray spends a lot of samples on the glossy surface even though these samples don't actually contribute to the final image too much. The new default of 1.5 renders this faster while keeping the look of the image the same.

          When loading old scenes, the value is set to 100 in order to produce the same result as before, just in case. For new scenes or when the renderer settings are reset, the default is 1.5.

          Note that this option only controls how many samples V-Ray puts in these areas. It is not the same as subpixel color mapping or clamping. As such it will not specifically help with fireflies, strong DOF or other difficult sampling scenarios. In parallel, we are looking into firefly filtering as well and hopefully this will come in one of the next updates (I've tried a few things over the years but none seemed to work very well; I think I finally landed on an approach that I like but it needs more work).
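
          Roughly speaking, and as a simplification rather than the literal implementation (the noise metric and the names below are made up purely for illustration), the clamp only affects the values the sampler uses when it estimates noise, something along these lines:

          import numpy as np

          def pixel_noise(samples, adaptivity_clamp=1.5):
              # Clamp only the values used for the noise estimate; the stored
              # pixel value stays unclamped, so this is not subpixel clamping.
              clamped = np.minimum(samples, adaptivity_clamp)
              return clamped.std() / max(clamped.mean(), 1e-6)

          def needs_more_samples(samples, noise_threshold=0.01):
              # Overexposed pixels hit the clamp, their estimated noise drops,
              # and the sampler stops spending time on them.
              return pixel_noise(samples) > noise_threshold

          With the default of 1.5, everything above that intensity looks equally "bright enough" to the sampler, so it can stop early there, while the beauty result itself stays unclamped.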

          Best regards,
          Vlado
          Attached Files
          I only act like I know everything, Rogers.



          • #6
            Thanks for the explanation. Sounds like a reasonable optimization. I think we have all seen bright areas take a lot of extra sampling like that.

            Looking forward to playing with this one.



            • #7
              Given that explanation, I think there are some files I could check. We run into this situation quite a lot, especially with bright lights behind glossy refractive parts.
              https://www.behance.net/Oliver_Kossatz



              • #8
                Here is a small test.
                Each rear light unit is based on a small sphere light, reflecting in the reflector housing, which in turn is refracted through slightly frosted glass.
                Render time V-Ray 6.00.20: 1h 29min
                Render time V-Ray 6.20.00: 1h 06min

                Quite a bit quicker, nice!

                [Attached image: rueli.jpg]
                https://www.behance.net/Oliver_Kossatz



                • #9
                  Originally posted by kosso_olli:
                  Render time V-Ray 6.00.20: 1h 29min
                  Render time V-Ray 6.20.00: 1h 06min
                  Quite a bit quicker, nice!

                  Let me clarify this a bit on the practical side:
                  Large overbright areas will benefit greatly from this: effectively V-Ray will be able to stop most of the sampling very early.
                  For these areas, the last bucket syndrome (or endless convergence) is gone, finished, ended.

                  High-frequency HDR detail (say, about 1 to 3 px in size, like a grille in headlights) will not benefit from this much, if at all: the neighbouring pixels, within sampling range, will force the bright pixels to be sampled again.
                  This kind of detail will still not converge quickly, and in one of the old scenes you had sent us for benchmarking there was absolutely no appreciable improvement.
                  In these cases, currently the best bet is to clamp max AA (particularly if you're rendering stills) to something lower than the usual 1 gazillion max subdivs, so the render can stop early without appreciable quality loss.
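
                  To put rough numbers on that (subdivs being the square root of the per-pixel AA sample count): a Max subdivs of 24 allows up to 24 x 24 = 576 camera samples per pixel, while dropping it to 12 caps that at 144, i.e. a 4x lower ceiling on those worst, never-converging pixels, at the cost of them staying a touch noisier.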

                  Lele
                  Trouble Stirrer in RnD @ Chaos
                  ----------------------
                  emanuele.lecchi@chaos.com

                  Disclaimer:
                  The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                  • #10
                    So what exactly is the algorithm for the clamp then?

                    If the brightness is greater than the clamp then break?

                    Or more like: if the abs(neighboring pixels - (max( clamp, current pixel))) > threshold then add samples?



                    • #11
                      Vlado is the one to answer on the details.

                      For the practical part, though, the sampler works on a 3x3 px kernel (otherwise what's there to antialias against?).
                      You can see this if you look at the samplerate RE refining and "deactivating" pixels while rendering progressive.
                      Where fine HDR detail is present, the surrounding area is often reactivated and oversampled too.

                      This makes sense: if you only sampled the lone pixel, disregarding the surrounding ones, its final converged value might still produce aliasing against its neighbours (early on, we don't "know" what final value the converged pixel will have), and the sampler wouldn't know unless it checked.
                      So, for a single unconverged pixel, the whole kernel needs rechecking, and often resampling.
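
                      As a very rough sketch of that kernel logic (made-up thresholds and function names, purely for illustration, not the actual code):

                      import numpy as np

                      def kernel_needs_resampling(img, x, y, noise_threshold=0.01, adaptivity_clamp=1.5):
                          # Take the 3x3 kernel around (x, y) and clamp intensities first,
                          # so a large overbright area looks uniform and converges right away.
                          kernel = np.minimum(img[y-1:y+2, x-1:x+2], adaptivity_clamp)
                          # If any pixel in the kernel still differs too much from the kernel
                          # mean, the whole kernel is reactivated and sampled again, which is
                          # why a lone 1-3 px bright detail keeps its neighbours busy too.
                          return np.abs(kernel - kernel.mean()).max() > noise_threshold

                      A big blown-out area sits entirely above the clamp, so after clamping it is uniform and stops early; a single bright pixel next to dark neighbours still differs after clamping, so that kernel keeps getting re-checked.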
                      Lele
                      Trouble Stirrer in RnD @ Chaos
                      ----------------------
                      emanuele.lecchi@chaos.com

                      Disclaimer:
                      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                      • #12
                        Yeah, it needs the surrounding pixel values to do anything useful. I just like to understand how the tools work so I can make the best use of them.



                        • #13
                          I wonder: if we are working with "overexposed" images because they are tone mapped later, I assume an Adaptivity clamp of 1.5 would mean that brightness values above 1.5 get fewer samples. So even though those are good, valid areas that just happen to be bright, they will be getting fewer samples... and thus we should raise the Adaptivity clamp for these shots, right? That's the idea behind it, yes?



                          • #14
                            "...and thus we should raise the Adaptivity clamp for these shots, right? That's the idea behind it, yes?"
                            Yes, this is correct. Although you'd have to adjust the exposure down quite a bit in post to see a difference.
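
                            For example (back-of-the-envelope, assuming a plain exposure multiplier in post): pulling the exposure down by 3 stops divides every value by 2^3 = 8, so a raw value of 12 only comes down to 1.5 on screen. If you plan on that kind of adjustment, raising the clamp to roughly 1.5 x 2^3 = 12 keeps full adaptivity for everything that will still read as detail after the pull.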

                            Best regards,
                            Vlado
                            I only act like I know everything, Rogers.



                            • #15
                              Thanks for this. Turns out the bigger issue in that shot was some flickering caustics. But I did increase the Adaptivity Clamp value a bit to make sure it covers the entire range. Thanks.

