Z Depth / Vray ZDepth / Z Buffer / ...?


  • Z Depth / Vray ZDepth / Z Buffer / ...?

    Hi there,

    I have a stupid question. I just did a test render of 20 teapots to test the z-depth. I save the RGB as an .exr, and in the EXR format settings I add the VRayZDepth under Render Elements (it is also added under Render Elements in the render setup), change the Type to RGB and the Format to Full Float 32. I send this to the compositor and he says V-Ray isn't doing it right. He uses Fusion 6.4. Is there a way to find out who is screwing it up? The compositor uses the EXR in a Boolean node and then puts it into a depth blur node.

    Here is the exr for those of you who want to test it

    www.k-arts.de/FTP/z-channel-test.zip

    Cheers,
    Ralf
    Last edited by k-arts; 19-06-2013, 07:47 AM.

  • #2
    Is it not doing it right because the black/white is flipped, or because the black/white is clamped, as opposed to the Max one? (I haven't used Max for a while, so I'm writing from memory.)

    Thanks, bye.
    CGI - Freelancer - Available for work

    www.dariuszmakowski.com - come and look

    Comment


    • #3
      Thx for your help, but I tried it with those options on and off (clamp zdepth / invert zdepth), but no luck so far. Anyone else? Maybe someone is willing to explain the correct approach to getting the z-depth into Fusion 6. I read some strange explanations about getting a correct z-depth into Fusion using a SamplerInfo map in an ExtraTex element and using the blue channel to extract the Z info?!? What the heck?
      Last edited by k-arts; 19-06-2013, 10:50 AM.

      Comment


      • #4
        Just remember that Z-depth elements CANNOT be anti-aliased, otherwise they get wrong Z-depth values on the smoothed pixels.
        Render your Z-depth with the fixed sampler at a rate of 1.
        I have not used Fusion 6, but in After Effects the best results seem to come from rendering the Z-depth at double resolution [fixed sampler!!] and pre-composing the Z-depth in AE with a 50% scale. This does the resizing internally in AE and gives much better results.

        Hope this helps
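
        As a quick illustration of why smoothed pixels get wrong Z-depth values, here is a minimal Python sketch; the depths and coverage are made up for illustration and are not taken from the test scene:

        Code:
        import numpy as np

        z_foreground = 2.0   # depth of an object edge (metres, hypothetical)
        z_background = 20.0  # depth of whatever is behind it (metres, hypothetical)
        coverage = 0.5       # an anti-aliased edge pixel covered 50% by the foreground object

        # Anti-aliasing blends the two depths into a value that belongs to neither surface:
        z_antialiased = coverage * z_foreground + (1 - coverage) * z_background
        print(z_antialiased)    # 11.0 -> a depth blur treats this edge pixel as mid-ground

        # A point-sampled (non-anti-aliased) render keeps one of the true depths instead:
        z_point_sampled = z_foreground if coverage >= 0.5 else z_background
        print(z_point_sampled)  # 2.0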

        Comment


        • #5
          Be sure you are rendering to a floating-point format (ideally 32-bit float EXR). Then you can set the near and far clipping planes for the Z channel (min and max). Look at your scene, see how far away things get from the camera, and set the far plane a little out past that. I usually leave the near at zero, though you could move it out some if things are far away from your camera. Note that since you are using floating point you really could just leave near at 0 and set far to something big enough to envelop your scene (100 m, etc.).

          In Fusion you will need to scale the Z channel as needed and then use a Depth Blur, or Frischluft LensCare (which looks way better than Fusion's built-in blur). For Fusion's Depth Blur you will have to use the Channel Booleans (Bol) tool to copy the red (or luminance) channel into the Z channel of your image (or adjust the EXR Loader settings). Then you can adjust the Z scale in the Depth Blur node (possibly to 1.0, or even 0.01 or so).

          The Fusion Depth Blur is never going to be great. It will always blur the edges, even of sharp objects. Use LensCare for OFX in Fusion for superior results.

          Your Fusion guy or girl may be complaining because you saved out an RGBA depth buffer. You could change that to mono in the export settings. He or she could simply adjust the EXR loader settings to load one of those channels as the Z channel.

          Don't expect to get beautiful soft DOF... The Fusion Depth Blur will look like crap... You really have to get LensCare for anything more than a tiny blur.
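
          To make the near/far mapping concrete, here is a small Python/NumPy sketch of how a raw floating-point depth gets clamped and remapped to the 0-1 range by a near and far plane; the depth samples and the 100 m far value are hypothetical, not taken from the teapot scene:

          Code:
          import numpy as np

          z_raw = np.array([0.5, 3.0, 12.0, 45.0, 250.0], dtype=np.float32)  # raw float depths (metres)

          z_near = 0.0    # leave near at zero, as suggested above
          z_far = 100.0   # a little past the farthest object in the scene

          # Clamp to the near/far range, then remap so near -> 0.0 and far -> 1.0
          z_clamped = np.clip(z_raw, z_near, z_far)
          z_normalized = (z_clamped - z_near) / (z_far - z_near)
          print(z_normalized)  # [0.005 0.03  0.12  0.45  1.   ]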

          Comment


          • #6
            Anti-aliasing a Z-Depth is a very interesting topic...

            In THEORY it should not be anti-aliased. In practice (and extensive testing and years of experience post blurring) it almost always looks best if it is anti-aliased.

            To get the BEST result you need to render out the Z buffer and the image at 2 or 4 times normal size (yes, BIG, and slow). Then apply the DOF effect AT THAT BIG SIZE (again, big and slow). Then, after the DOF blur, scale everything down.

            In fact, you can even get slightly better results just by scaling up both your (normal sized) image and your Z buffer right before you apply DOF, and then scaling it back down. This is easy in node-based compositors, slightly more convoluted in AE, though still possible. Yes, this softens the image a little bit, but this is usually a good thing with a limited-DOF shot anyway.

            Scaling down a non-anti-aliased Z buffer in AE before using it *is* a form of anti-aliasing it. AE is blending those pixels together like an oversample.
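
            As a rough Python/NumPy illustration of that last point (the numbers are made up): a 50% scale averages each 2x2 block of the oversized, point-sampled Z buffer, which is exactly the kind of blending an anti-aliased render would have produced.

            Code:
            import numpy as np

            # A tiny 4x4 non-anti-aliased Z buffer rendered at double size (hypothetical values):
            # a foreground object at depth 2.0 with a diagonal edge against a background at 20.0.
            z_big = np.array([
                [ 2.0,  2.0,  2.0, 20.0],
                [ 2.0,  2.0, 20.0, 20.0],
                [ 2.0, 20.0, 20.0, 20.0],
                [20.0, 20.0, 20.0, 20.0],
            ], dtype=np.float32)

            # Scaling down by 50% with a box filter = averaging each 2x2 block:
            h, w = z_big.shape
            z_small = z_big.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            print(z_small)
            # [[ 2.  15.5]
            #  [15.5 20. ]]
            # Blocks that straddled the edge averaged to in-between depths, i.e. the
            # downscale anti-aliased the Z buffer, just like AE's pre-compose scaling does.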

            Comment


            • #7
              FWIW, I was able to use Fusion's depth blur on your EXR. It looked as bad as Fusion's depth blur always looks, but it did work just fine. If your compositor (human) is having problems, have them post to the Fusion-L mailing list, where they will get more info.

              Comment


              • #8
                Thank you, I was assuming that the Z depth wasn't sampled at all. But hey, you learn something every day. However, I don't have the option to switch to After Effects. Compositing is handed out to an external freelancer, and he does it in Fusion 6.4. Surely Fusion and V-Ray are compatible? Anyone else?

                Comment


                • #9
                  So it seems my compositor has to invest in a new depth tool. Joelaff, could you tell me the settings of the depth blur you used? When I try to get the blur going, it seems that the whole image is being blurred. Thx again for your massive support.

                  Comment


                  • #10
                    Your ZBuffer in your EXR was not anti-aliased...

                    You can try it both ways, but, as I mention above, in practice anti-aliased depth buffers almost always produce superior results (less edge crawling in the final result). This may vary some based upon which blur filter (node, plugin, etc.) you use.

                    Comment


                    • #11
                      I take that back... yours was anti-aliased, but not clamped.

                      You have to clamp the Z-depth, otherwise the AA remains jaggy against areas where the alpha is zero (because AA blends pixels together, and the depth of pixels with zero alpha is infinite; anything blended with infinity is still infinity). Thus the edges are jaggy where the alpha is zero.
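
                      Here is a small Python/NumPy sketch of that failure mode, with made-up numbers: blending an edge pixel against the infinite depth of an empty (zero-alpha) pixel stays infinite, whereas clamping the empty pixels to the far plane first gives a finite, usable edge value.

                      Code:
                      import numpy as np

                      z_far = 100.0                 # far clip, a little past the farthest object
                      z_object = 5.0                # depth of the object at its silhouette (hypothetical)
                      z_empty = np.float32(np.inf)  # depth where alpha is zero (nothing was hit)
                      coverage = 0.5                # AA edge pixel: half object, half empty background

                      # Unclamped: blending with infinity is still infinity, so the edge stays jaggy.
                      print(coverage * z_object + (1 - coverage) * z_empty)  # inf

                      # Clamped: pull the empty depth down to the far plane before any blending.
                      z_empty_clamped = np.minimum(z_empty, z_far)  # 100.0
                      print(coverage * z_object + (1 - coverage) * z_empty_clamped)  # 52.5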

                      Comment


                      • #12
                        Note that your depth channel was weird... It went from positive to negative. Now, a Fusion artist can still massage those values into something usable, like the attached comp does. It first clamps everything less than the lowest value in there (about -2.5). Then it adds 2.5 to the image with a brightness adjustment in order to bring all values positive. Then it scales the values into the 0-1.0 range with a gain of 0.3.

                        I have included a LensCare node in there as well (requires you have LensCare OFX installed in Fusion).

                        I did not gamma correct these at all. I assume they are actually 1.0.
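
                        For reference, the same clamp / offset / gain massaging written out as a short Python/NumPy sketch; the -2.5 clamp and 0.3 gain are the values used in the attached comp, while the sample depths are made up:

                        Code:
                        import numpy as np

                        z = np.array([-3.1, -2.5, -1.0, 0.0, 0.8], dtype=np.float32)  # made-up raw depths

                        z = np.maximum(z, -2.5)  # clamp everything below the lowest useful value (about -2.5)
                        z = z + 2.5              # brightness offset: bring all values positive
                        z = z * 0.3              # gain: scale into the 0-1.0 range
                        print(z)                 # [0.   0.   0.45 0.75 0.99]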


                        Here is the output from the Fusion blur... Note the halo around the outside of even the sharp teapot... This is why Fusion's depth blur sucks...

                        [Attachment: Fusion.jpg]


                        Here is the same thing from LensCare for OFX (in Fusion). Note that the jaggies on the top of the sharp pot are primarily from the unclamped depth buffer causing there to be no AA right there... Note that you also have to compare moving footage to decide whether you want to AA your depth buffer. On a still, sometimes no AA looks better, but it usually crawls weirdly on animation... YMMV... Usually the areas where the image overlaps other parts of the image look better with AA in the depth buffer, and areas where the image is against the background (alpha) often look better without AA in the depth buffer. I have rendered it both ways and blended them in post using a choked or spread alpha matte. It all depends on your shot, how much DOF you are trying to achieve, how much time you have to mess with the comp, etc.

                        [Attachment: LensCare.jpg]

                        Here is the Fusion comp. You would want to clamp your Z buffer and get it to range from zero to just past the max depth in your scene (by manually setting those values in Max). Once you do that, the CustomTool (which is not strictly necessary at all) and the two BC nodes would not be needed. Note that LensCare likes white to be close to the camera (though it lets you invert it).

                        DepthComp.zip

                        Note that LensCare has its own issues with objects on alpha when you use LensCare's gamma correction... But that is a different story.
                        Last edited by Joelaff; 19-06-2013, 01:18 PM.

                        Comment


                        • #13
                          Thank you so much for clearing this up, great effort. Since my Z channel looks weird, what are your settings to get it out the proper way? Are you rendering it with the RGB, or are you rendering it in a separate render job? Do you leave the defaults on in the VRayZDepth, except the min/max values? Thank you again for your time.
                          Ralf

                          Comment


                          • #14
                            Originally posted by k-arts View Post
                            Thank you so much for clearing this up, great effort. Since my Z channel looks weird, what are your settings to get it out the proper way? Are you rendering it with the RGB, or are you rendering it in a separate render job? Do you leave the defaults on in the VRayZDepth, except the min/max values? Thank you again for your time.
                            Ralf
                            I prefer to render the depth to a separate EXR, rather than the same file. This makes it easier to optimize in the comp, and makes it easier to manipulate the Z Channel in the comp.

                            These are the settings I use... Note that you have to set the max depth based on your scene. That should be just past your farthest object. (Use a tape measure tool, or the camera target distance, to figure it out.) Note that you never want to animate your ZDepth settings, or the blur would change as the settings change.

                            This can be rendered with the RGBA (at the same time). You can also add a second layer without filtering if you want to experiment with that. Max/VRay will gladly create both in one pass along with the RGBA.
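
                            As a quick Python sketch of the "just past your farthest object" idea (the camera and object positions below are hypothetical, just to show the distance math behind the tape measure):

                            Code:
                            import numpy as np

                            camera_pos = np.array([0.0, -350.0, 160.0])  # hypothetical camera position (scene units)
                            object_positions = np.array([                # hypothetical far-away object positions
                                [  50.0,  120.0,  0.0],
                                [-200.0,  400.0, 30.0],
                                [  10.0,  650.0,  0.0],
                            ])

                            distances = np.linalg.norm(object_positions - camera_pos, axis=1)
                            far_clip = distances.max() * 1.05  # pad a little past the farthest object
                            print(distances.round(1), round(far_clip, 1))  # farthest ~1012.8 -> far clip ~1063.4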

                            [Attachment: ZSet.jpg]

                            If you want to save the depth channel in the EXR file with the RGBA, then try these settings. Note that Save Region can optimize loading and processing in Fusion for renders that do not take up the whole frame. Load this into Fusion... then set the Z channel in the Loader's settings (in Fusion).

                            Note that for LensCare you really want the Z depth separate, so that you can pipe it into the LensCare node (it does not read the Fusion Z channel, but rather uses a regular channel for the Z depth). This can be split out with the Channel Booleans (Bol) tool in Fusion if they are all in the same file, but then you are processing the Z channel in the rest of the Fusion flow for no reason. This is why I like a separate file.
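
                            For anyone splitting the depth out by script instead, here is a minimal Python sketch that writes a single-channel 32-bit float Z EXR, assuming the OpenEXR and Imath Python bindings are installed; the array and file name are hypothetical, since normally Max/V-Ray writes this file for you.

                            Code:
                            import numpy as np
                            import OpenEXR
                            import Imath

                            width, height = 640, 480
                            # A stand-in depth array; in practice this would come from the render.
                            z = np.linspace(0.0, 100.0, width * height, dtype=np.float32).reshape(height, width)

                            header = OpenEXR.Header(width, height)
                            float_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
                            header['channels'] = {'Z': float_chan}  # a single 32-bit float Z channel

                            out = OpenEXR.OutputFile('teapots_zdepth.exr', header)
                            out.writePixels({'Z': z.tobytes()})
                            out.close()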

                            [Attachment: EXRSet.jpg]

                            Comment


                            • #15
                              Thx you are da bomb

                              Comment
