Deep compositing with render passes workflow

  • Deep compositing with render passes workflow

    I wanted to start a thread to hash out how deep compositing (EXR 2.0) can be used in a render pipeline. I'll probably have more questions as the thread progresses, but I will start with one:

    First, some background theory: one of the ideas behind a deep compositing workflow is that it lets you render elements separately without needing to worry about holdout mattes. For example, on Avatar they needed to render lots of plants in a jungle, and they also wanted to render their characters. In a deep workflow they could render these separately and then comp the characters' deep render into the jungle's deep render, since both know where they are relative to each other in Z. So far so good. (By the way, if you're not familiar with this, there's a nice explanation of it here.)

    So now the question: in the above workflow, how would you get contact shadows, occlusion, and GI bounce from the characters onto the environment, and vice versa?

    The old workflow would be to render the characters separately, but have the entire environment present in the scene with all of its materials piped through wrapper materials. This gives you GI bounce and shadows, and the AO can be set up to only cast from the characters onto the environment. However, all of that assumes that all of the objects (albeit with wrapper materials on them) are present in a single scene, which defeats the whole purpose of the above workflow: avoiding the need to render everything in a single scene.
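    To make the holdout-free idea concrete, here is a minimal Python sketch of how a deep merge works in principle. The per-pixel sample layout `(z, rgb, alpha)` and the function names are illustrative assumptions on my part, not any particular package's API:

```python
# Minimal sketch of deep merging. Assumed per-pixel format: a list of
# (z_depth, (r, g, b), alpha) samples with premultiplied color.
# These names are illustrative, not an existing library's API.

def merge_deep_pixels(*sample_lists):
    """Interleave samples from separately rendered elements by depth."""
    merged = [s for samples in sample_lists for s in samples]
    merged.sort(key=lambda s: s[0])  # nearest sample composites first
    return merged

def flatten(samples):
    """Composite depth-sorted samples front-to-back with 'over'."""
    r = g = b = a = 0.0
    for _, (sr, sg, sb), sa in samples:
        w = 1.0 - a  # how much shows through what is already in front
        r, g, b, a = r + sr * w, g + sg * w, b + sb * w, a + sa * w
    return (r, g, b), a

# The character sample sits in front of the jungle sample, so it wins
# the comp with no holdout matte: depth ordering lives in the data.
jungle    = [(12.0, (0.0, 0.3, 0.0), 1.0)]  # opaque leaf at z = 12
character = [(5.0, (0.4, 0.2, 0.1), 1.0)]   # opaque character at z = 5
rgb, alpha = flatten(merge_deep_pixels(jungle, character))
```

    With semi-transparent samples (hair, leaves, volumes) the same loop blends them in correct depth order per sample, which is exactly what a single holdout matte cannot do.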

  • #2
    I might be wrong, but I don't think these matter. Deep images only have extra Z information; everything should work the same way. The shadows, for example, will carry the Z of the floor they are projected on.
    Portfolio: http://www.cgifocus.co.uk



    • #3
      Originally posted by Yannick View Post
      I might be wrong, but I don't think these matter. Deep images only have extra Z information; everything should work the same way. The shadows, for example, will carry the Z of the floor they are projected on.
      If rendered out separately as I described (character on one render layer and environment on another), they would not cast shadows on each other, since each is absent from the other's render layer. The same goes for GI and AO.

      Additionally, the wrapper material does not work with a deep render. Specifically, the shadow from the wrapper material produces very noticeable artifacts in comp (deepMerge node). This is easy to replicate with a simple test (a sphere on a ground plane); the same scene rendered as a regular EXR works fine. I believe this has to do with how deep files handle the alpha.
      Last edited by sharktacos; 24-03-2014, 08:23 AM.
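      For what it's worth, here is a hypothetical illustration in plain Python (not V-Ray or Nuke code, and an assumption about the artifact rather than a statement of what the wrapper material actually writes) of one way deep alpha handling can bite: if a black "shadow" sample lands at exactly the same depth as the real ground sample, the result of a depth-sorted over-merge depends on how the tie is broken.

```python
# Hypothetical coincident-depth failure mode: a wrapper-style shadow
# sample and the real ground sample share the same z, so the merge
# result depends entirely on tie-breaking in the depth sort.

def flatten(samples):
    """Front-to-back 'over' of (z, (r, g, b), alpha) samples, sorted by z."""
    out = [0.0, 0.0, 0.0, 0.0]
    for _, (r, g, b), a in sorted(samples, key=lambda s: s[0]):
        w = 1.0 - out[3]  # remaining transparency in front of this sample
        out[0] += r * w
        out[1] += g * w
        out[2] += b * w
        out[3] += a * w
    return tuple(out)

ground = (10.0, (0.5, 0.5, 0.5), 1.0)  # real ground, environment render
shadow = (10.0, (0.0, 0.0, 0.0), 0.5)  # black "shadow" sample, same z

# Python's sort is stable, so input order decides the tie at z = 10.0:
shadow_in_front = flatten([shadow, ground])  # ground dimmed to 50%
shadow_behind   = flatten([ground, shadow])  # shadow has no effect at all
```

      The two results differ, so any per-pixel variation in how the merge breaks the tie flips between them, which would read as exactly the kind of noticeable artifact described above.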



      • #4
        Sorry, I am not following.
        How can you get your character's shadow on your environment without your character?
        Whatever pass you do, if you want the interaction between your character and environment, both need to be present.
        For example, if you do an AO pass for your character, you still need the environment in your render layer with its visibility off. You will have your character with your deep information; the same applies to the environment.
        Portfolio: http://www.cgifocus.co.uk



        • #5
          Originally posted by Yannick View Post
          Sorry, I am not following.
          How can you get your character's shadow on your environment without your character?
          Whatever pass you do, if you want the interaction between your character and environment, both need to be present.
          For example, if you do an AO pass for your character, you still need the environment in your render layer with its visibility off. You will have your character with your deep information; the same applies to the environment.
          Exactly. That's why when I read this:

          EXAMPLE OF USAGE


          Deep compositing is now being used extensively by a few major studios (Weta Digital, Animal Logic, Dr. D). Weta Digital created much of the technology while working on Avatar. The forest scenes were a particular challenge, where the geometry was so dense and the volumetrics so extensive that it was impossible to render shots as one pass. In this specific situation, with all varieties of plants wrapping around each other, tangling with leaves intermingling, it is impossible to render holdout mattes for traditional compositing; the number of required holdout mattes would be staggering and a huge burden on the pipeline and compositing team. Deep compositing allowed Weta to render the forest in pieces as certain components were finalized, and then combine everything together with proper edge filtering.

          I don't get how it makes any sense. How would they get interactive shadows if the other elements were not there?



          • #6
            They probably break it down into elements that don't interact together, like a foreground/background forest, plus some environment in the middle that interacts with the characters.

            have you seen this: http://vimeo.com/37310443
            Portfolio: http://www.cgifocus.co.uk



            • #7
              Originally posted by Yannick View Post
              have you seen this: http://vimeo.com/37310443
              Yes. In that example there are no interactive shadows, GI, or reflections between the merged elements.
