zcoverage in g-buffer channels


  • zcoverage in g-buffer channels

    OK, apart from the render passes manager we all dream about...
    I could live without that for a while,
    but... z-coverage is crucial for any postprocessing.
    I still don't understand why it's missing and was never requested before... I'm pretty sure Vlado could fix this in a snap.

  • #2
    Re: zcoverage in g-buffer channels

    Originally posted by theo
    I'm pretty sure Vlado could fix this in a snap.
    Nope, I can't - it's actually very problematic. We'll see though...

    Best regards,
    Vlado
    I only act like I know everything, Rogers.



    • #3
      Re: zcoverage in g-buffer channels

      Originally posted by theo
      OK, apart from the render passes manager we all dream about...
      I could live without that for a while,
      but... z-coverage is crucial for any postprocessing.
      I still don't understand why it's missing and was never requested before... I'm pretty sure Vlado could fix this in a snap.
      What is the difference between z-coverage and "standard" z-depth?

      -Tom



      • #4
        The z-buffer or depth buffer channel stores the distance of objects in the scene from the camera.
        The coverage channel stores anti-aliasing data at the edges of objects in the scene,
        and is therefore essential for any postproduction job using the z-depth channel, material channel, object channel etc...
        Without it there is no proper depth blur, depth fog, texture replacement, material color correction etc. without aliasing artifacts.

        Here's an example of what you get if you don't have a coverage channel
        when color correcting using the material ID channel:

        Scaled 400% so you can see the antialiased pixels surrounding the "characters" not getting color corrected.

        .. and on a side note, coverage could be 8-bit for me.
        Z-depth though should definitely not be limited to 8-bit; you obviously want more than 256 steps of depth in your scene.
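
        A minimal sketch of the idea above, not V-Ray's actual code: the coverage channel is used as a blend weight so a material-ID based color correction fades out correctly on antialiased edge pixels. The channel arrays and the tint_material helper are made up for illustration.

        import numpy as np

        def tint_material(rgb, material_id, coverage, target_id, tint):
            """Tint only the pixels belonging to target_id, edge-aware.

            rgb         : float image, shape (H, W, 3)
            material_id : int IDs, shape (H, W) - dominant object/material per pixel
            coverage    : float 0..1, shape (H, W) - how much of the pixel that
                          dominant ID really covers (1.0 inside, fractional on edges)
            """
            corrected = rgb * np.asarray(tint)
            # A hard switch on the ID mask alone gives the jagged fringe shown
            # in the screenshot above; weighting by coverage fades the correction
            # out exactly where edge pixels mix with the background.
            weight = (material_id == target_id)[..., None] * coverage[..., None]
            return rgb * (1.0 - weight) + corrected * weight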



        • #5
          A question regarding this z-coverage: when you select the RLA file format in Max you can check z-buffer and coverage as well, and then you get these layers. How do you use them in After Effects to remove the horrible aliasing you get when you apply depth of field to your animation? Is there any way to achieve a really nice DOF just using RLA and After Effects with the z-buffer and coverage?

          You can try to hide the artifacts, but that's not the point... Hope you understand what I'm trying to explain... Thanks a lot for any insight.



          • #6
            Sorry for bringing this back up, but has anything changed since this was last posted? Can the coverage channel be output separately so we don't need to use RLA or RPF or whatever?

            ---------------------------------------------------
            MSN addresses are not for newbies or warez users to contact the pros and bug them with
            stupid questions the forum can answer.



            • #7
              Originally posted by Da_elf
              Sorry for bringing this back up, but has anything changed since this was last posted? Can the coverage channel be output separately so we don't need to use RLA or RPF or whatever?
              The information from a coverage channel cannot be stored in a regular image format, as it can contain a varying amount of information for each pixel - e.g. where objects overlap, you get several values per pixel, and where you have a single object - just one. So .rpf and .rla are the only choice.

              Best regards,
              Vlado
              I only act like I know everything, Rogers.
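
              For what it's worth, a tiny illustration of the point above (names and fields made up for the sketch): a regular image format stores exactly one value per pixel per channel, while coverage data is a variable-length list of fragments per pixel.

              from dataclasses import dataclass

              @dataclass
              class Fragment:
                  object_id: int   # which object touches this pixel
                  coverage: float  # fraction of the pixel area it covers (0..1)
                  z: float         # its depth at this pixel

              # Interior pixel: a single object, so a single fragment.
              interior_pixel = [Fragment(object_id=3, coverage=1.0, z=142.7)]

              # Silhouette-edge pixel: two objects overlap, so two fragments -
              # something a fixed-channel format (PNG, TGA...) has no slot for,
              # which is why .rla/.rpf style g-buffer storage is needed.
              edge_pixel = [
                  Fragment(object_id=3, coverage=0.35, z=142.7),
                  Fragment(object_id=7, coverage=0.65, z=503.1),
              ]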



              • #8
                Originally posted by vlado
                The information from a coverage channel cannot be stored in a regular image format, as it can contain a varying amount of information for each pixel - e.g. where objects overlap, you get several values per pixel, and where you have a single object - just one. So .rpf and .rla are the only choice.
                Is that because of the layers in those formats, which would allow the coverage data to be split up?

                Lele



                • #9
                  Originally posted by vlado
                  So .rpf and .rla are the only choice.

                  Vlado
                  What about EXR?
                  My Youtube VFX Channel - http://www.youtube.com/panthon
                  Sonata in motion - My first VFX short film made with VRAY. http://vimeo.com/1645673
                  Sunset Day - My upcoming VFX short: http://www.vimeo.com/2578420



                  • #10
                    EXR images have layers (pardon my ignorance...)?

                    Lele



                    • #11
                      Originally posted by studioDIM
                      EXR images have layers (pardon my ignorance...)?
                      .exr images have layers in the same sense as Photoshop files have layers. The term "layer" has a somewhat different meaning in .exr files compared to the g-buffer and .rpf/.rla files.

                      In theory, one could store the g-buffer layers in an .exr file, yes. However, in that case each object would have to have its own set of layers in the .exr file (for z-depth, material color, normals etc.), even if it takes up only one pixel of the image. So that would be quite a lot of layers, even for a few objects.

                      Best regards,
                      Vlado
                      I only act like I know everything, Rogers.
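
                      To put a rough number on "quite a lot of layers": with the usual "layer.channel" dot-naming convention for .exr channels, every object needs its own full set of channels. The object and channel names below are invented for the sketch.

                      objects = ["floor", "teapot", "window"]  # even a tiny scene
                      per_object = ["R", "G", "B", "A", "Z", "coverage",
                                    "N.x", "N.y", "N.z", "matID"]

                      channels = [obj + "." + ch for obj in objects for ch in per_object]
                      print(len(channels))  # 30 channels for just three objects
                      print(channels[:5])   # ['floor.R', 'floor.G', 'floor.B', 'floor.A', 'floor.Z']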



                      • #12
                        Wouldn't "channels" be a better term to use? Then again, "layers" is also right, I guess, in a way.

                        ---------------------------------------------------
                        MSN addresses are not for newbies or warez users to contact the pros and bug them with
                        stupid questions the forum can answer.



                        • #13
                          Another way to describe the multi-layered G-Buffer is to say that each pixel has a number of layers. Each of these layers has not only the usual color channels (RGB) but may also have special channels like Coverage, Z-Depth, Normal, Transparency, Object ID... You can think of the Coverage and Transparency channels as defining the opacity of the layer in the Photoshop sense. So by combining all layers of a pixel (using Color, Coverage and Transparency) it is possible to calculate the color of the pixel in the final rendered image.

                          The main reason why these G-Buffer layers are not usable as-is (in the Photoshop sense) is that layer 3 of pixel A has no relation to layer 3 of another pixel B. For example, all pixels of a single layer in a Photoshop document have the same Z-Depth: some layers are above it, some below. But in a G-Buffer layer each pixel has a different Z-Depth, and the next G-Buffer layer contains pixels that would be both above and below it in the Photoshop sense. So one G-Buffer layer is not above or below another when looking at them as a whole; that only holds for each pixel on its own.

                          What is possible is to extract specific mattes or objects from the G-Buffer (with proper antialiasing) and convert them to something usable, as Combustion and psd-manager do. But this doesn't preserve the data structure of the G-Buffer as a whole.
                          Daniel Schmidt - Developer of psd-manager
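
                          A minimal sketch of the per-pixel compositing described above, assuming a pixel's layers are sorted nearest-first; the PixelLayer type and its field names are made up for illustration.

                          from dataclasses import dataclass

                          @dataclass
                          class PixelLayer:
                              rgb: tuple           # the layer's color at this pixel
                              coverage: float      # fraction of the pixel it covers (0..1)
                              transparency: float  # 0 = opaque, 1 = fully transparent

                          def resolve_pixel(layers):
                              """Combine one pixel's layers (front to back) into a final color."""
                              out = [0.0, 0.0, 0.0]
                              remaining = 1.0  # portion of the pixel not yet filled
                              for layer in layers:
                                  alpha = layer.coverage * (1.0 - layer.transparency)
                                  w = remaining * alpha
                                  out = [o + w * c for o, c in zip(out, layer.rgb)]
                                  remaining *= 1.0 - alpha
                              return tuple(out)

                          # Edge pixel: a red object covering 40%, opaque blue background behind it.
                          print(resolve_pixel([PixelLayer((1, 0, 0), 0.4, 0.0),
                                               PixelLayer((0, 0, 1), 1.0, 0.0)]))  # (0.4, 0.0, 0.6)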

