  • Possible to resim to downres a vdb?

    Heya folks!

    I'm just starting on a new show and there'll be a few bits where we're falling through the atmosphere. We've got a tonne of VDB cloud files from Houdini that I'll be playing with, and I'm exploring lots of options for rendering speed and art direction, depending on how close we are and whether there's any interaction.

    One thing I'm trying is a solid cloud core with translucency, with some wispy details scattered along the edges - similar to the technique Damien Nenow used in "Paths of Hate" (https://vimeo.com/28004567). So I'm testing whether we can mesh a large VDB, render it with fast SSS, and then scatter some other volume grids along the edges for soft details. Some of our VDBs are huge for this purpose, so it'd be great to be able to downres them. Phoenix has upres features in the resimulation dialog, but is it possible to downres in any way? Even something like using a grid with a loaded VDB as an input to an overlapping grid with a lower cell count, so we can quickly halve / quarter our data.

    Cheers!

    John

  • #2
    Hello!

    Yes, it is possible to down-sample the cache during resimulation - just give a negative number to the Amp. Resolution parameter, as it is noted in the docs here. You can also use the Load & Start function to down-sample the cache, depending on which suits you better in your specific use case.

    Originally posted by joconnell View Post
    Even something like using a grid with a loaded VDB as an input to an overlapping grid with a lower cell count, so we can quickly halve / quarter our data.
    This is also possible using a cascade simulation, but I wouldn't advise it for this purpose. Going this way would make sense if you want a new Phoenix simulation to interact with the contents of the VDB caches.

    I should also note that rendering of large sparse VDB caches has been greatly improved both in Phoenix 3.11 and in the Volume Grid of V-Ray Next, so if you are using an older version, better grab the latest nightly.
    Nikola Bozhinov, Phoenix FD developer



    • #3
      Originally posted by nikola.bozhinov View Post
      Hello!

      ... as it is noted in the docs here.
      Ugh, I scanned over that line but missed the important bit at the end - thanks for the second set of eyes.

      Originally posted by nikola.bozhinov View Post
      This is also possible using a cascade simulation, but I wouldn't advise it for this purpose. Going this way would make sense if you want a new Phoenix simulation to interact with the contents of the VDB caches.
      This is a high possibility alright - normally we kick these things back to the FX department that made the VDBs, but I'm always up for doing a bit more on my side!


      Originally posted by nikola.bozhinov View Post
      I should also note that rendering of large sparse VDB caches has been greatly improved both in Phoenix 3.11 and in the Volume Grid of V-Ray Next, so if you are using an older version, better grab the latest nightly.
      Great to know. I don't think we can change up to V-Ray Next mid-show, but I'll certainly grab the latest Phoenix and do some render tests to have a look at the improvements.

      Thanks Nikola!



      • #4
        Again, it's totally abusing Phoenix as a VDB processor rather than doing sim things, but the resolution seems to clamp at -0.95, which hurts my OCD senses. If I try a VDB in the latest nightly it needs a velocity channel, which our static caches don't have, so I'll have a look at other VDB processors - I kinda want to just do a dumb Photoshop-style image downres to 1/2, 1/4, 1/8 res of the main VDBs (rough sketch of that idea below). Thanks again for the help!
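
        In case it's useful to anyone else searching later, here's a very rough sketch of that brute-force half-res idea using the pyopenvdb Python bindings and numpy. The module name, grid name, file paths and voxel size are all assumptions, and the dense copy won't be practical for the really big caches:

        # Halve a VDB density grid with a simple 2x2x2 box filter - a sketch only,
        # assuming the active region fits in memory as a dense numpy array.
        import numpy as np
        import pyopenvdb as vdb

        src = vdb.read('cloud_big.vdb', 'density')     # load one grid by name

        # Dense copy of the active bounding box.
        (imin, jmin, kmin), (imax, jmax, kmax) = src.evalActiveVoxelBoundingBox()
        dims = (imax - imin + 1, jmax - jmin + 1, kmax - kmin + 1)
        dense = np.zeros(dims, dtype=np.float32)
        src.copyToArray(dense, ijk=(imin, jmin, kmin))

        # Pad to even dimensions, then average each 2x2x2 block - the "dumb
        # Photoshop downres" of the post above.
        dense = np.pad(dense, [(0, d % 2) for d in dims], mode='edge')
        half = dense.reshape(dense.shape[0] // 2, 2,
                             dense.shape[1] // 2, 2,
                             dense.shape[2] // 2, 2).mean(axis=(1, 3, 5))

        # Write the result out; doubling the voxel size keeps the cloud the same
        # size in world space (0.2 is just a placeholder value, and this assumes
        # the transform attribute is writable on your pyopenvdb build).
        dst = vdb.FloatGrid()
        dst.name = 'density'
        dst.transform = vdb.createLinearTransform(voxelSize=0.2)
        dst.copyFromArray(half, ijk=(imin // 2, jmin // 2, kmin // 2), tolerance=0.0)
        vdb.write('cloud_half.vdb', grids=[dst])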



        • #5
          Ah yes, the resimulation needs the velocity channel in order to work - there is no way around that.
          However, if we are talking about a single cache, not a sequence, perhaps the Load & Start function will do the trick. It re-samples the cache to match the resolution set in the Grid rollout, takes the contents of the file as the initial state of the simulation, and continues from there, so it won't work for a sequence unless you use some scripting to do it file by file (see the rough skeleton below). It is important to set the Smoke dissipation, Smoke buoyancy and Gravity all to zero, so that the state of the simulation doesn't change during that one frame.
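
          Just to illustrate the file-by-file idea, something along these lines driven from pymxs could work. The two Phoenix-specific calls are hypothetical placeholders - the real property and function names would have to come from the Phoenix FD MAXScript docs - and the node name and paths are assumptions:

          import glob
          from pymxs import runtime as rt  # only available inside 3ds Max

          def load_and_start(sim_node, vdb_path):
              """Hypothetical placeholder: point the simulator's input at vdb_path
              and trigger Load & Start for that single cache."""
              raise NotImplementedError

          def export_resampled(sim_node, out_path):
              """Hypothetical placeholder: save the re-sampled single-frame cache."""
              raise NotImplementedError

          sim = rt.getNodeByName('PhoenixFDFire001')    # assumed simulator node name
          for vdb_path in sorted(glob.glob(r'D:\caches\cloud_*.vdb')):  # assumed path
              load_and_start(sim, vdb_path)
              export_resampled(sim, vdb_path.replace('.vdb', '_low.vdb'))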

          Regarding the Amp. Resolution parameter, a value of -0.95 would mean a 20 times reduction along each axis, so the effect is not so small. The formula (for the Fire/Smoke simulator) is that each axis is scaled by 1.0 + the value of Amp. Resolution.
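
          As a quick sanity check of that math (a tiny sketch, assuming the per-axis scaling described above):

          amp_resolution = -0.95
          axis_scale = 1.0 + amp_resolution   # 0.05 -> 1/20 of the cells along each axis
          voxel_scale = axis_scale ** 3       # 0.000125 -> 1/8000 of the total voxel count
          print(axis_scale, voxel_scale)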

          Improving the workflow for doing some basic post-processing of caches is on our todo list, but it is not likely to happen very soon.
          Nikola Bozhinov, Phoenix FD developer



          • #6
            Oh, thank you for the clarification on the resolution! I'll give that another run, as I'm currently getting killed by the memory overhead on about 16 very high-res clouds - the VDB files vary between 20 and 500 MB.
