Adaptive Noise really bad with GPU

  • Adaptive Noise really bad with GPU

    Hi guys,

    I tend to avoid using adaptive rendering with the GPU, because the way the GPU handles it at the moment is really bad.
    But it's a really important feature, so I thought I should still report my issue with it.

    When I render without a noise threshold (maxing it out at 9999999) and use only render time to limit my renders, my GPUs render at 100%.
    When I use adaptivity, the GPUs slow down progressively, and at some point they are just sitting there doing nearly nothing.
    Here is a quick screen-grab animation of it in action:
    https://www.youtube.com/watch?v=8PDQJ-hIz9U

    For the same crop, two minutes without adaptivity often gives better results than 10-20 minutes with adaptivity.

    That's why I almost never use adaptivity, even though it's a great way to track noise.
    Unfortunately, adaptivity would be the only way to get rid of some noisy spots, so I would still like to be able to use it.

    Is there a way to optimize it so that the GPUs render at 100% while still using adaptive rendering?

    Cheers

    Stan
    3LP Team

  • #2
    Hello,

    Our developers are aware of this issue. However, the fact that your GPU is loaded at 100% does not mean it renders faster.
    With adaptive sampling, at some point only small spots with higher noise remain; the GPU is then not fully utilized, but it calculates only those noisy spots.
    When you set a time limit, the GPU works at 100% all the time, but it calculates all parts of the image whether there is noise there or not.
    What is your GPU? Can you show us the images you are testing with, and how exactly do you compare the final results and render times?
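    The behaviour described above can be illustrated with a toy simulation (hypothetical threshold and a made-up noise-decay model, not V-Ray's actual sampler): the set of pixels still above the noise threshold shrinks pass after pass, so there is less and less parallel work to keep the GPU busy.

```python
import random

random.seed(0)

NUM_PIXELS = 10_000
THRESHOLD = 0.01                  # hypothetical noise threshold
DECAY_MIN, DECAY_MAX = 0.5, 0.95  # made-up per-pass noise reduction

# Start every pixel with a random per-pixel noise estimate.
noise = [random.uniform(0.05, 1.0) for _ in range(NUM_PIXELS)]

active_fraction = []
for _ in range(20):
    # Only pixels still above the threshold receive more rays.
    active = [i for i, n in enumerate(noise) if n > THRESHOLD]
    active_fraction.append(len(active) / NUM_PIXELS)
    # More samples reduce noise roughly like 1/sqrt(n); model it as a decay.
    for i in active:
        noise[i] *= random.uniform(DECAY_MIN, DECAY_MAX)

print(f"pass 1: {active_fraction[0]:.0%} of pixels active, "
      f"pass 20: {active_fraction[-1]:.0%}")
```

    The fraction of active pixels collapses toward the few stubborn noisy spots, which is consistent with the GPUs sitting nearly idle at the end of an adaptive render.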
    Tsvetan Geshev
    Technical Support Representative

    Chaos Group



    • #3
      I have been observing the same issue, since I am also very dependent on adaptive sampling. I guess the reason the GPU slows down is the nature of path tracing: one ray (or one ray tree, for that matter) can only be calculated by a single thread (GPU core) at a time. If that is true (the latter statement is just speculation on my part), would it mean that increasing the GPU ray bundle size or the GPU rays per pixel would utilize more GPU power in the final adaptation phase?



      • #4
        would it mean that increasing the GPU ray bundle size or the GPU rays per pixel would utilize more GPU power in the final adaptation phase?
        Yes, usually Ray bundle size = 256 and Rays per pixel = 16 is fine (this leads to slower feedback in active shade mode, but better GPU utilization).
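        A rough way to read those two settings (my interpretation of what they control, not Chaos's documentation): their product sets how much work is queued per batch, so larger values keep more rays in flight for better occupancy, at the cost of chunkier feedback.

```python
# Hypothetical arithmetic; the "rays per batch" reading of these two
# settings is my assumption, not an official V-Ray definition.
ray_bundle_size = 256   # pixels grouped into one batch of GPU work
rays_per_pixel = 16     # rays launched per pixel in each pass

rays_per_batch = ray_bundle_size * rays_per_pixel
print(rays_per_batch)   # 4096 rays queued per batch
```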
        Tsvetan Geshev
        Technical Support Representative

        Chaos Group



        • #5
          Ah ha,
          I found what it was:
          it's my PCIe 16x-to-1x extender.

          For a traditional benchmark, or a usual render where you let it run to a max PPP target, PCIe 1x is enough and the GPUs work at 100%. I see absolutely no time difference between PCIe 16x and PCIe 1x.

          As soon as the adaptive sampling kicks in, the data exchange between the cards and the motherboard increases, and PCIe 1x isn't fast enough.
          I just put my cards back on my motherboard, and the render time goes from 3m24 to 2m20 when adaptivity is used.

          The GPUs are still not used at maximum speed, but it's better.
          I guess it's hard to make it behave like the CPU, which is used at 100% no matter what noise level you throw at it.
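          This PCIe observation can be sanity-checked with back-of-the-envelope numbers (assuming a PCIe 2.0 rate of roughly 500 MB/s per lane; real throughput is lower because of protocol overhead): moving a full float framebuffer back to the host for noise analysis is 16x slower over the 1x riser.

```python
LANE_MBPS = 500                       # assumed PCIe 2.0 rate per lane
width, height = 1920, 1080
channels, bytes_per_channel = 4, 4    # float RGBA framebuffer

frame_mb = width * height * channels * bytes_per_channel / 1e6

t_x1 = frame_mb / (1 * LANE_MBPS)     # seconds over a 1x riser
t_x16 = frame_mb / (16 * LANE_MBPS)   # seconds over a full 16x slot

print(f"{frame_mb:.1f} MB frame: ~{t_x1 * 1000:.0f} ms over x1, "
      f"~{t_x16 * 1000:.0f} ms over x16")
```

          If the adaptive sampler triggers such transfers frequently, the x1 link alone can account for tens of milliseconds of stall per update, which would help explain the 3m24-to-2m20 difference measured above.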

          I'll try to see whether one of those PCIe 16x-to-16x risers would work better for moving the GPUs off the board.

          Cheers

          Stan
          3LP Team

