  • Deep renders

    Hi,


    We're investigating deep compositing in Nuke.
    We render EXRs with V-Ray Next in 3ds Max 2020.

    Here are a few questions you may be able to help us with:

    Is it wrong to think that deep fragments are computed for each render sample?

    We have trouble understanding the deep.front and deep.back delta value for a solid surface: is it the result of fragment merging during optimization (VRayOptionRE), keeping the closest and farthest values of the merged fragments?


    How come solid surfaces sometimes produce low-alpha fragments closest to the camera?
    We understand that merging fragments with similar Z-depth values can clean up these "almost transparent" fragments. Is that right?

    Sadly, it seems that depending on the merge threshold these fragments are kept, following a "depth banding" pattern. Could this be avoided? (These patterns can be seen in the linked pictures showing the number of fragments per pixel; the number at the end of each file name is the merge threshold.)

    In Nuke, these "extra" samples are fine for a simple DeepMerge, but the resulting depth channel computed by DeepToImage is corrupted, and many other uses of the deep information go wrong because of that.
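    For reference, here is a minimal Nuke Python sketch of the graph we use (the file paths are placeholders; DeepRead, DeepMerge and DeepToImage are Nuke's standard deep nodes):

    ```python
    import nuke

    # Read two V-Ray deep EXRs (placeholder paths).
    fg = nuke.nodes.DeepRead(file="/path/to/deep_A.exr")
    bg = nuke.nodes.DeepRead(file="/path/to/deep_B.exr")

    # A simple DeepMerge still looks fine despite the "extra" samples...
    merged = nuke.nodes.DeepMerge(inputs=[fg, bg])

    # ...but the depth channel produced when flattening with DeepToImage
    # is corrupted by the low-alpha front fragments.
    flat = nuke.nodes.DeepToImage(inputs=[merged])
    ```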

    What strategy would you recommend?

    Thanks for your support.
    Attached Files
    http://tatprod.com/

  • #2
    Up!
    Is nobody working with a deep workflow?
    http://tatprod.com/



    • #3
      Since deep compositing is unfortunately still only supported in Nuke, I think not many people who frequent this forum use it. I would like to use it, but I'm still pretty much attached to Photoshop. Maybe one day, although I don't see any deep support coming to Photoshop any time soon.
      Last edited by Vizioen; 02-10-2019, 06:17 AM.
      A.

      ---------------------
      www.digitaltwins.be



      • #4
        Hi,

        Deep fragments are computed for each camera sample. The fragments are then merged according to the deep merge mode selected in VRayOptionRE.
        As part of the merge process, deep.front and deep.back are set to the minimum and maximum depth of the fragments being merged.
        Initially the front and back depths may be equal (deep.front = deep.back) or not; for volume samples, for example, the front and back depths depend on the ray marching step size.

        The number of final deep fragments remaining depends on the merge mode.
        It seems that your images were generated using the "By Z-depth" deep merge mode. This mode divides the space in front of the camera into intervals by Z-depth and merges all samples that fall within the same interval.
        For a solid surface, merging by Z-depth usually results in one deep fragment per pixel, except for pixels at the borderline between two intervals; such borderline pixels end up with two samples.
        The "By render ID and Z-depth" merge mode may help to avoid these two-fragment pixels at the cost of higher CPU usage.
        The "by Render ID" merge mode also resolves this problem and is not computationally intensive, but it is not usable for self-overlapping objects.

        As for why the front sample is not fully opaque: this is actually the expected behavior. For a fully opaque pixel that consists of several deep samples, only the last sample is fully opaque.
        The remaining deep samples are (and should be) partially transparent. This allows correct compositing of an object in between these deep samples.
        More than one fully opaque sample in one deep pixel does not make much sense, since the closest one will hide all of the remaining samples behind it.
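        A small sketch of that front-to-back accumulation (again, just an illustration) shows why one opaque last sample is enough:

        ```python
        def flatten_alpha(samples):
            """samples: per-sample alphas ordered front (closest) to back."""
            total = 0.0
            for a in samples:
                # "Over": each sample is attenuated by the coverage in front of it.
                total += a * (1.0 - total)
            return total

        # Two partially transparent front samples plus one opaque back sample
        # already give a fully opaque pixel:
        print(flatten_alpha([0.2, 0.5, 1.0]))  # -> 1.0
        # A second opaque sample would contribute nothing, since the first
        # one already hides everything behind it.
        ```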

        Best Regards,
        Vasil Minkov
        V-Ray for 3ds Max developer



        • #5
          Thank you for your answers, Vasil.

          They seem to confirm that our understanding was correct.
          We'll investigate the "By render ID and Z-depth" solution, but it may merge unwanted fragments, losing a lot of precious details, unless we spend a lot of time setting up IDs...

          The way "By Z-depth" works, with, dividing the space from the camera clearly causes the "Banding" effect we fight against, causing close fragment not to merge as they don't belong to the same "area".

          Might a second "By Z-depth" merge, with slightly different thresholds, have worked better?

          Am I wrong to think another merge rule could be "By Z proximity", with a distance threshold?
          This could preserve a great number of fragments while getting rid of these useless "borderline" fragments.
          Would you consider such an option?
          http://tatprod.com/



          • #6
            Originally posted by pingus
            ....
            We'll investigate the "By render ID and Z-depth" solution, but it may merge unwanted fragments, losing a lot of precious details, unless we spend a lot of time setting up IDs...
            ....
            The "By render ID and Z-depth" merge mode usually preserves more details than Z-Depth merge mode, since it does not use intervals. It rather uses Z-Depth proximity between each two neighbor samples.
            In this mode, the threshold value is roughly in screen based pixel units. The threshold is a trade off between more detail + more samples and less detail + less samples.
            The default of 1.0 provides good detail but may produce some extra samples here and there. You may want to use slightly larger value - maybe 2.0, in order to filter-out all unwanted deep samples.
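            As a rough illustration (not the actual implementation, and using a plain distance threshold instead of screen-based pixel units), proximity merging between neighboring samples works like this:

            ```python
            def merge_by_proximity(depths, threshold):
                """Merge per-pixel sample depths whose neighbors are closer than threshold."""
                merged = []
                for d in sorted(depths):
                    if merged and d - merged[-1][-1] < threshold:
                        merged[-1].append(d)   # absorb into the previous group
                    else:
                        merged.append([d])     # start a new group
                return [(g[0], g[-1]) for g in merged]  # (front, back) per fragment

            depths = [9.98, 10.01, 10.02, 25.0]
            print(merge_by_proximity(depths, threshold=0.1))   # tight: far sample kept separate
            print(merge_by_proximity(depths, threshold=20.0))  # loose: everything collapses
            ```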


            Vasil Minkov
            V-Ray for 3ds Max developer

