Questions about upres!


  • #1

    Heya folks!

    I'm running loads of cloud things at the minute, and ideally we'll be flying right by them on a 2K frame, so we need lots of resolution. As per the Chaos workflow I'm running my main sim at around 6 steps per frame, but I'm wondering: is it possible to have the wavelet run at a lower step count for speed? Since I've got a stable main sim, could the wavelet run at 1 step, since it's being guided by the stable velocities of the main sim?

    Also, would it be possible to mask the wavelet upres in some areas? In most clouds you get detailed areas and smooth areas, depending on whether a region is newly formed and still evolving, or hasn't moved in a while and is starting to diffuse. Applying a wavelet puts extra detail across the surface of everything in a seemingly even fashion. Could we do something like a really harsh wavelet threshold to cut off certain areas, or mask the wavelet strength by vorticity, so that we keep smooth areas as they are (with interpolation upres) but add another sub-level of detail in the active areas?
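The vorticity-masking idea can be sketched outside Phoenix. Below is a rough NumPy illustration (Phoenix does not expose such a hook; all names here are hypothetical) of scaling added detail by local vorticity magnitude, so smooth regions receive little extra noise:

```python
import numpy as np

def vorticity_mask(u, v, dx=1.0, threshold=0.0):
    """Per-cell mask in [0, 1] from 2D vorticity magnitude |dv/dx - du/dy|.
    Illustrative only: shows the idea of gating upres detail by rotation."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dx, axis=0)
    vort = np.abs(dv_dx - du_dy)
    return np.clip((vort - threshold) / (vort.max() - threshold + 1e-8), 0.0, 1.0)

# Scale wavelet-style detail by the mask: low-vorticity areas stay smooth.
rng = np.random.default_rng(0)
u = rng.standard_normal((32, 32))        # toy velocity components
v = rng.standard_normal((32, 32))
mask = vorticity_mask(u, v)
detail = rng.standard_normal((32, 32)) * 0.1   # stand-in for wavelet noise
density = np.ones((32, 32))
upres = density + mask * detail
```

The same gating could in principle use any scalar channel (speed, age, temperature) instead of vorticity.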

    Does the wavelet care about history / previous frames? If I'm simming 30 frames to get to a static shape that I like, do I need to process all 30 frames, like running another sim, or can the wavelet skip directly to frame 30 and process just that?

    I'm off to play with particle sources next, so we can look at non-flat-bottomed clouds or stranger shapes - no doubt I'll be back with loads more questions after!

  • #2
    Also, just looking at the displace in the rendering rollout - it does some interesting things! Do you think it'd be possible to have that functionality in the viewports rather than just at render time? Over in Houdini land they do a lot of setups by filling an initial mesh shape with smoke, doing a single-frame advect with some curl noises, and then further distortion using a dumb Perlin noise. Phoenix looks like it can do some of that with its render-time displace; do you think we'd be able to collapse those results out to a VDB? It'd be great to have viewport amplify and then displace functionality, kind of like a volume-modelling kit!

    To be even greedier, it'd be awesome to have some kind of distortion preview in the grid, so we'd have an idea of the scale of whatever procedural we're using!

    I know it's probably not intended for heavy distortion use, but I notice that anything that displaces outside of the grid gets clipped off!

    Cheers,

    John



    • #3
      Hey,

      You could reduce the steps per frame for the wavelet run and you might get away with it, depending on the simulation, but in general, the more the dynamics settings differ between the original simulation and the resimulation, the more potentially horrible-looking stuff you'll get...

      Wavelet resim does indeed care about the order of frames - this is one of the factors behind deformed results when altering dynamics between sim and resim: the velocity field and the density field start to go their own ways over time.
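Why frame order matters can be shown with a toy sketch: detail added by a resim is typically driven by coordinates advected through the base velocity field, so frame N depends on every frame before it. This is a hedged illustration, not the actual Phoenix resim code:

```python
import numpy as np

def advect_coords(coords, vel, dt):
    """One step of coordinate advection: the noise lookup points
    move with the base flow, so each frame depends on the previous one."""
    return coords + dt * vel

# To evaluate the detail at frame 30 you must integrate frames 1..30;
# jumping straight to frame 30 would discard this accumulated history.
coords = np.zeros(3)
vel = np.array([1.0, 0.5, 0.0])   # toy constant velocity
for _ in range(30):
    coords = advect_coords(coords, vel, dt=1.0 / 30)
# coords now encodes 30 frames of motion history
```

For a truly static shape with zero velocity the history degenerates, which is why a single still frame can sometimes look acceptable even when the sequence as a whole would not.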

      Not much masking is possible in Phoenix at the moment - we have some of it going on for FLIP sims, where you can have birth volumes for particles. I'm looking to add more control over this at some future point, but it's gonna take some time...

      Indeed, the preview lacks all functionality related to textures - mapping volumes with textures, displacing, etc. It's a mix of reasons: performance, the abilities of the Max and Maya viewports, and also that texture sampling in Maya is quite slow, so Phoenix engages a V-Ray license to read the textures much faster - which is also something we've got to get rid of in the future, because it's a very bad idea.

      ...and the displacement is render-time only for now. All these things are going onto the TODO list, though (some of them already are).

      Cheers!
      Svetlin Nikolov, Ex Phoenix team lead



      • #4
        Thanks for the feedback Svetlin!

        Maybe saying "viewport" was misleading for the upres? A better 3ds Max-type example, if we pretend our VDB was a mesh: take our existing grid, put a TurboSmooth modifier on it to increase the poly density, put a Displace modifier on it to add detail, and then collapse. In Phoenix this might work as some kind of single-frame post-processing sim?

        I get your point on the texture preview, though - I'm sure it's not quick to go through all that data! In terms of previewing the noise, I'll do a quick test to see if a Particle Flow approach corresponds well, similar to ForceViewer, which used to give you this type of thing:

        [Image: Sample_ForceViewer.jpg]

        Phoenix is already giving us very impressive results, and fast - congrats to you and your team!



        • #5
          Hey,

          I do think we are talking about the same thing, but it has many faces. Increasing the simulation detail per frame would not easily lead to believable animations; this is why the wavelet resim processes the entire sequence, so it can achieve a better and more... "fluid" result. The folks battling the problem of increasing fluid simulation detail with machine learning also currently struggle to get believable upres for animations.

          Per-frame solutions such as render-time displacement can be combined with advecting UVWs by the preceding simulation, so that the texture is not fixed in world space but moves with the fluid, which makes it look better in animation - this is on our TODO list. However, baking caches from render-time displacement, or modifying (sculpting) the volumes in any other way and baking caches with the result, is still something I have to figure out a clear way to do (except, of course, simulating these sequences while varying the source settings or using a mapper, which would take far longer if you just want to create variations of the same sim). I was thinking about extending the cache_converter tool's functionality with such features, but of course it would be friendlier to have that inside the host app's viewport.

          Also, improving the preview so that it could (optionally) show texture modulation or displacement would help the workflow at least a bit. Hopefully in the near future we'll have some of these functions.
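The UVW-advection idea mentioned above can be sketched on a toy 2D grid. This is illustrative only (nearest-neighbour backward lookup, not the Phoenix implementation): each cell fetches its UVW value from upstream along the velocity, so a texture mapped through the UVW channel follows the fluid motion.

```python
import numpy as np

def advect_uvw(uvw, u_vel, v_vel, dt):
    """Semi-Lagrangian advection of a scalar UVW channel on a 2D grid.
    Each cell looks up the value backward along the velocity field."""
    ny, nx = uvw.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    src_x = np.clip(xs - dt * u_vel, 0, nx - 1).astype(int)
    src_y = np.clip(ys - dt * v_vel, 0, ny - 1).astype(int)
    return uvw[src_y, src_x]

# A uniform flow to the right shifts the UVW field (and any texture
# looked up through it) one cell to the right per step.
uvw = np.zeros((4, 8))
uvw[:, 2] = 1.0                       # a stripe in the U channel
vel_x = np.ones((4, 8))               # one cell per step toward +x
moved = advect_uvw(uvw, vel_x, np.zeros((4, 8)), dt=1.0)
```

A production version would use trilinear interpolation and per-channel UVW grids, but the world-space-to-fluid-space behaviour is the same.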

          Cheers!
          Svetlin Nikolov, Ex Phoenix team lead



          • #6
            Yep, I'm definitely thinking single-frame sculpting rather than animation stuff - I take your point on the animation side of it! I'd say you guys have a pretty huge to-do list already from all the clever thinkers at Chaos, never mind user feedback!
