Vray GPU, 3ds max, Displacement memory usage

  • #1

    Hello,

    Here are some notes on displacement with V-Ray GPU. It's a very, very simple scene, yet the memory usage and resource numbers are very strange and seem to be running out of control. Does it really take this many resources to render displacement on the GPU versus the CPU? I reported this issue a few months ago as well.
    Vray 3.60.03 for 3ds max 2017
    Windows 10 home
    Nvidia driver 385.28

    GPU Displace on GTX 1080, 8GB

    3.7 GB occupied, 3.25 GB reported free (~1 GB is missing from V-Ray's memory report, and it seems to go missing as soon as V-Ray starts rendering. Maybe a memory management problem?)
    Note: on render start, ~10 GB of CPU RAM is also used up, for a total of ~13.7 GB of CPU and GPU memory required.


    CPU Displace

    3ds Max uses 2.5 GB to render, of which ~1 GB is for the 3ds Max UI and the loaded file. That means the same render on the CPU needs roughly 1.5 GB of resources, versus ~13.7 GB when rendering on the GPU.


    Scene file: https://www.dropbox.com/s/vuqz9bx8ke...e_bug.zip?dl=0

  • #2
    Forgot to mention, regarding that apparently missing ~1 GB of GPU memory that seems to vanish as soon as the render starts: multiple other GPU memory monitoring apps, such as HWMonitor or AIDA, show that 1 GB of GPU memory as available before rendering. As soon as V-Ray starts rendering, that ~1 GB vanishes and doesn't show up in V-Ray's statistics, but it shows up as 'used memory' in every other app.
    Example:
    GTX 1080, 8GB
    1. GPU usage before render = 80 MB according to HWMonitor and AIDA (GPU not connected to any display).
    2. GPU usage after rendering starts = 3.7 GB used, 3.25 GB free. According to V-Ray's stats, used mem + free mem = 6.95 GB total on a GPU that has 8 GB.
    3. While that 1 GB doesn't show up in V-Ray's statistics, AIDA and HWMonitor report ~4.8 GB of memory usage WHILE V-Ray is rendering. There is that ~1 GB, reported by the other apps.
    4. I close V-Ray and 3ds Max, and AIDA and HWMonitor report the original 80 MB, or 1% GPU memory usage, again.

    5. I prepare a very simple scene: a plane and a dome light. I wanted to see how much memory a super simple scene like this would need. As soon as I start rendering, my initial 80 MB (1%) of GPU memory usage jumps to 14%, even with such a simple scene. That's a 13% (~1 GB) increase as reported by HWMonitor, but not reported by V-Ray's statistics. Is this the elusive missing ~1 GB, and what does it all mean?
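    The bookkeeping in the list above can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers reported in this post (nothing here comes from V-Ray itself):

```python
# Figures observed on a GTX 1080 with 8 GB of VRAM, as reported above.
total_gb = 8.0        # physical GPU memory
vray_used_gb = 3.7    # "used" according to V-Ray's statistics
vray_free_gb = 3.25   # "free" according to V-Ray's statistics

accounted_gb = vray_used_gb + vray_free_gb   # what V-Ray's report adds up to
untracked_gb = total_gb - accounted_gb       # what the report doesn't cover

print(f"accounted for by V-Ray: {accounted_gb:.2f} GB")   # 6.95 GB
print(f"missing from the report: {untracked_gb:.2f} GB")  # ~1.05 GB
```

    That ~1 GB gap matches the "used memory" increase the monitoring apps see.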

    Sorry if my explaining skills aren't very good. Looking forward to your replies.

    Maybe I should create a separate forum topic on these elusive missing gigabytes of GPU memory.



    • #3
      Any news on the RT GPU displacement memory usage? Is anyone else experiencing this? I usually have to use displacement, but since I moved to GPU rendering I can't, and I always struggle to find practical alternatives.
      I found that very low-quality displacement on the GPU is workable, but as soon as I crank the quality up a bit, memory usage runs out of control and scenes crash; the same scenes render butter smooth on the CPU.
      I invested in GPUs and didn't upgrade to a particularly powerful CPU, so I hesitate to fall back to the CPU often.
      I'm curious whether anyone else experiences this, or whether the problem is on my end and I should just do a fresh format of C:



      • #4
        Sorry, I somehow missed that topic. Will check and answer tomorrow.

        Best,
        Blago.
        V-Ray fan.
        Looking busy around GPUs ...
        RTX ON



        • #5
          We have the same issue. Displacement goes mad on the GPU, especially as the resolution of the image gets higher. We basically can't render images over 2K if they have displacement.

          Displacement on the GPU seems to be image-size dependent... is that right? So if we increase the Edge Length enough that the scene won't run out of RAM, the displacement should work, but unfortunately it then looks so terrible that it can't be used... so this is not a solution.

          This is a serious concern, as a lot of cool new scanned shaders such as Megascans rely pretty heavily on displacement.



          • #6
            Hello,

            Displacement on the GPU is indeed handled differently than displacement on the CPU. One major difference is that the GPU currently cannot calculate displacement normals at runtime and has to precalculate surface normals and store them alongside the geometry (this is the Cache Normals option in the VRayDisplacementMod, which for GPUs is always ON). This is the reason for the much higher RAM usage on the GPU.

            To keep displacement from going out of control and creating billions of polygons when the view-dependent option is ON, you have to tweak the Edge Length parameter. It tells V-Ray when a triangle has been subdivided enough and should no longer be subdivided/displaced. It works in terms of the pixels a triangle occupies in your final image, so if you render a 640x480 image and then double the resolution, you can double the Edge Length so that the same amount of geometry is generated. Keep in mind that the Edge Length parameter can take any number, not just integers, which gives quite a lot of control over displacement quality.
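            The resolution rule described above is a simple proportion. A hypothetical helper sketching it (the function name is illustrative, not part of any V-Ray API; it only restates "scale Edge Length with render resolution"):

```python
def scaled_edge_length(base_edge_length: float, base_width: int, new_width: int) -> float:
    """Edge Length is measured in pixels of the final image, so scaling the
    render width by some factor lets the Edge Length scale by the same factor,
    keeping the amount of generated displacement geometry roughly constant."""
    return base_edge_length * (new_width / base_width)

# Tuned at 640 wide with Edge Length 4.0, now rendering at 1280 wide:
print(scaled_edge_length(4.0, 640, 1280))   # -> 8.0 (non-integer values work too)
```

            The same proportion works in reverse: halve the resolution and you can halve the Edge Length for the same geometry budget.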

            As for the 1 GB of untracked memory, it is normal. We allocate and use this memory for utility tasks, in order to make sure the GPUs are kept 100% busy even in small scenes. From the user's standpoint this is free memory: as the scene grows bigger and naturally keeps the GPUs occupied, we decrease this pool to the point where it is completely gone.

            Best regards,
            Alexander Soklev | Team Lead | V-Ray GPU



            • #7
              Thank you for the reply and the advice.



              • #8
                Would it be possible to have an option for V-Ray RT to automatically calculate the Edge Length according to your resolution? So instead of manually raising the Edge Length in the VRayDisplacementMod settings when you go up in resolution, RT would just do it automatically for you?

                I know the Edge Length is basically a quality-level control setting, and automatically adjusting it could affect the quality of the displacement. But maybe there is some way for RT to analyze the displacement map itself and work out the Edge Length required for optimum settings? No idea if that's possible or not... just throwing it out there!



                • #9
                  After some further tests, these are the conclusions I reached regarding V-Ray displacement on the GPU; maybe I'm still missing something.

                  1. V-Ray displacement on the CPU is by far the best method: it uses extremely few resources and is very fast and efficient; in other words, amazing.
                  2. Subdividing your model (for example with TurboSmooth) and applying the regular 3ds Max Displace modifier with the bitmap comes in second place. To reach the quality of V-Ray displacement on the CPU, though, you have to crank the subdivision levels up to an extreme; this method uses a lot of resources whether rendered on the CPU or the GPU.
                  3. Third place is V-Ray displacement on the GPU. It uses a bit more resources (GPU/CPU RAM) than #2 above, which makes me think it might be using a very similar tessellation method (global subdivision, not adaptive). V-Ray displacement on the CPU, if I remember right, uses a selective or adaptive tessellation method, i.e. more geometry where it's actually needed.
                  4. The fourth method is a bit inconvenient, and that's the only reason it doesn't come in second place; from a resource-usage perspective it's very good. It means subdividing your model and displacing it either in 3ds Max or ZBrush, then decimating the model in ZBrush or otherwise optimizing the geometry to lower the poly count as much as you can. While this method is more GPU friendly, it won't be as easy to tweak and update.

                  It would be awesome if, at some point, displacement on the GPU could match the performance and pure awesomeness of displacement on the CPU.



                  • #10
                    Thank god I found this thread. When I upped the resolution of my ocean scene to 1080p, it bombed out of memory and I could not work out why. Raising the Edge Length from 4 to 16 sorted it out on a 1070 Ti + 1080 Ti; now I just have to inch it lower until it starts failing.



                    • #11
                      A few things of note. Pay attention to the detail in your displacement map; tweaking that can reduce the grief. Also watch the map's blur amount: if you go below 0.5 you may get artifacts, depending on the content of your map. I am usually hitting 20 GB for the displacement I have worked with, mostly skin and terrain. If you max out, disable static; the render will take longer, but you will save the hit on memory.
                      As salvadoresz mentions in (4) above, a mixture of mesh displacement and render displacement, with bump on top of that, can be a nice hybrid workflow.

                      Archaine

