  • How to prevent out-of-core rendering while keeping geometry instanced?

    Hi,

    this is something I have been curious about for a while. The help on docs.chaosgroup.com does not do a very good job of explaining what exactly happens, so I am asking here.

    Basically, a long time ago I was doing a complex archviz project in V-Ray, and I watched my scene slow down with growing complexity, right up to the point where render times were absolutely insane (dozens of hours). After a while of troubleshooting, I found that switching the geometry structure from "auto" to "static" made the scene render about 10 times faster. That left me quite shocked, as I would not have expected V-Ray to ever start rendering out of core while system memory was still available, but I was obviously wrong. Lots of V-Ray Proxies may have something to do with it, since according to the help they are always treated as dynamic geometry.

    So ever since, I have used Static mode exclusively to prevent V-Ray from ever rendering out of core and slowing down drastically, right up until last year, when I discovered here on the forums that Static mode does not actually do instancing, so all geometry is unique unless it is distributed with some scattering plugin.

    After finding that out, I reverted to using Auto, hoping that the performance hit I encountered in that large scene was due to the proxies, not to the Auto geometry mode being inefficient.

    But I have now run some tests on a few of my scenes, none of which contain displacement or proxies, and I am still getting about a 10-25% performance hit with the Auto geometry mode compared to Static, even though the scenes are not very complex and are very, very far from filling up my memory (the 3dsmax.exe process usually takes 2 to 6 GB out of about 30 GB of free memory).

    So my questions are:

    1. I expected the dynamic memory limit to be a RAM threshold which, once reached, makes V-Ray switch to out-of-core rendering. According to the help files, that is wrong: it says it is some sort of memory pool for dynamic objects. Should I set it lower or higher for the best possible performance? What exactly does the dynamic memory limit do?

    2. How exactly does Auto mode decide which objects will be compiled as dynamic geometry? What should I do to minimize the amount of dynamic geometry in my scene?

    3. This is the most important one: is there any way to make V-Ray never render out of core, but at the same time not lose instancing capability? Static mode prevents out-of-core rendering, but it also kills instancing, which is very important. I would basically like a mode where instancing works but no geometry is dynamic, so that my scenes do not suffer such a performance hit; if I then run out of memory, so be it, the process crashes. Is that possible? How can I achieve it?

    Thanks in advance.

  • #2
    First of all, let me say that recent builds (V-Ray 3.20.3 and V-Ray 3.30) are much quicker at rendering dynamic geometry, so some of your observations may no longer apply. With that said, below are some answers to your questions.

    Originally posted by Recon442
    1. I expected the dynamic memory limit to be a RAM threshold which, once reached, makes V-Ray switch to out-of-core rendering. According to the help files, that is wrong: it says it is some sort of memory pool for dynamic objects. Should I set it lower or higher for the best possible performance? What exactly does the dynamic memory limit do?
    To make V-Ray never purge dynamic geometry, set the memory limit to 0. Otherwise, as soon as the dynamic memory limit is reached, V-Ray starts deleting geometry from its acceleration structures. The best value is probably a couple of GB lower than the physical RAM of your machine. As an experimental mode, you can set it to a negative value, in which case the limit is the physical RAM of the machine minus the specified amount.
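    The three cases described above (zero, negative, and positive values) can be sketched as follows. This is an illustrative model of the described behavior, not actual V-Ray code; the function name and units are my own.

    ```python
    def effective_dynamic_limit_mb(setting_mb: float, physical_ram_mb: float) -> float:
        """Resolve the dynamic memory limit setting to an effective budget in MB.

        setting_mb == 0  -> no limit: dynamic geometry is never purged
        setting_mb <  0  -> physical RAM minus the specified amount (experimental)
        setting_mb >  0  -> used as-is
        """
        if setting_mb == 0:
            return float("inf")                  # never purge
        if setting_mb < 0:
            return physical_ram_mb + setting_mb  # e.g. -2000 on 32768 MB -> 30768 MB
        return setting_mb

    print(effective_dynamic_limit_mb(4000, 32768))   # default: 4000
    print(effective_dynamic_limit_mb(0, 32768))      # inf (never purge)
    print(effective_dynamic_limit_mb(-2000, 32768))  # 30768
    ```
    
    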

    2. How exactly does Auto mode decide which objects will be compiled as dynamic geometry?
    V-Ray will convert to dynamic geometry any object that, together with all its instances, has more than 300,000 faces.
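    As a minimal sketch of that rule (an illustrative model, not V-Ray's actual implementation), an object's faces are summed over all of its instances and compared against the 300,000-face threshold:

    ```python
    FACE_THRESHOLD = 300_000  # threshold stated above

    def geometry_mode(faces_per_instance: int, instance_count: int) -> str:
        """Classify an object under Auto mode by its total face count."""
        total_faces = faces_per_instance * instance_count
        return "dynamic" if total_faces > FACE_THRESHOLD else "static"

    print(geometry_mode(100_000, 2))  # 200,000 faces total -> static
    print(geometry_mode(100_000, 4))  # 400,000 faces total -> dynamic
    ```

    Note that the threshold applies to the object together with all its instances, so even a modest mesh can become dynamic geometry if it is instanced many times.
    
    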

    What should I do to minimize the amount of dynamic geometry in my scene?
    Technically, you can override the type of geometry a given object generates from its V-Ray object properties. However, in most cases V-Ray does the right thing automatically. Dynamic objects use a different memory layout that saves memory and ultimately allows V-Ray to render larger scenes. Especially in the latest V-Ray builds, there should be no need to worry about this.

    3. This is the most important one: is there any way to make V-Ray never render out of core, but at the same time not lose instancing capability?
    Set the dynamic memory limit to 0, or to a sufficiently large value.

    Best regards,
    Vlado
    I only act like I know everything, Rogers.



    • #3
      I see,

      so I was partially right about the dynamic memory limit after all. What leaves me curious, then, is why it defaults to just 4000 MB. Since pretty much any modern machine has at least 16-64 GB of memory, this would mean a significant performance hit with the default settings, especially in older V-Ray builds, if one used, for example, lots of instancing or proxies in one's scenes?

      Another follow-up question: does setting the memory limit to -2000 mean V-Ray will start deleting geometry from the acceleration structure once it reaches all the free RAM on my PC minus 2 GB, or all the physical memory minus 2 GB? That is, if Windows occupies, say, 3 GB and my PC has 32 GB of RAM installed, will V-Ray calculate the limit from 32 GB, or from the 29 GB of free available RAM?

      Lastly, those 10-25% performance hits comparing the Static vs. Auto structure were measured in the latest 3.30.01 build, which according to you should run faster. I guess the performance hit should then be attributed to instancing itself, or?

      Anyway, thanks for clearing this up.



      • #4
        Originally posted by Recon442
        What leaves me curious, then, is why it defaults to just 4000 MB.
        It's not a bad starting value; e.g., the 28-million-polygon Lucy statue takes only about 1.3 GB to render (as a .vrmesh). Also keep in mind that this is for *unique* polygons; for instances, the memory use is pretty much the same as for a single instance of the object. Displacement/subdivision surfaces are a bit more problematic, as they tend to generate lots of geometry very quickly and may take up more RAM. On the whole, it would be best if V-Ray could determine this value automatically, but it is not a trivial problem. We'll get there, I hope.

        Since pretty much any modern machine has at least 16-64 GB of memory, this would mean a significant performance hit with the default settings, especially in older V-Ray builds, if one used, for example, lots of instancing or proxies in one's scenes?
        Having just instances is probably not a big concern; like I said, only one of the instances counts towards the memory limit.
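        A minimal sketch of why instancing barely moves the memory needle: the mesh data is counted once toward the limit, however many instances exist, while each instance only adds a tiny per-instance cost. The function name and the per-instance overhead figure are my own illustrative assumptions, not measured V-Ray numbers.

        ```python
        def mesh_memory_mb(unique_meshes: list[tuple[float, int]]) -> float:
            """unique_meshes: list of (mesh_size_mb, instance_count) pairs.

            Each mesh is stored once; instances only add a small amount
            (a transform, roughly, which we model as a fixed overhead).
            """
            TRANSFORM_OVERHEAD_MB = 0.001  # assumed per-instance cost
            return sum(size + count * TRANSFORM_OVERHEAD_MB
                       for size, count in unique_meshes)

        # A 1300 MB mesh instanced 100 times still costs ~1300 MB, not 130 GB.
        print(round(mesh_memory_mb([(1300, 100)]), 1))  # 1300.1
        ```
        
        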

        Another follow-up question: does setting the memory limit to -2000 mean V-Ray will start deleting geometry from the acceleration structure once it reaches all the free RAM on my PC minus 2 GB, or all the physical memory minus 2 GB?
        It's the physical memory. If we looked only at the free memory, V-Ray might reach the limit instantly. For example, on my laptop about 80% of the RAM is always taken up by Chrome and other things, but if I start rendering, Windows swaps all of that out to make room for the render.

        Lastly, those 10-25% performance hits comparing the Static vs. Auto structure were measured in the latest 3.30.01 build, which according to you should run faster. I guess the performance hit should then be attributed to instancing itself, or?
        Yes; instancing has some performance penalties after all. When you save RAM, it is usually at the expense of performance (and vice versa).

        Best regards,
        Vlado
        I only act like I know everything, Rogers.



        • #5
          Thank you,

          now I know everything I needed to know.

          I've just tested a bunch of scenes again with the dynamic memory limit set to 0, and render times are now pretty much the same as with Static. Also, the longer the render time, the smaller the difference.

          Thanks again.
