Format of render passes

  • Format of render passes

    Hello

    I am new to the forum and new to both 3ds Max and V-Ray. Hopefully there is some kind of documentation that I am unaware of that can answer all of these questions. There are quite a lot of them, and they are all related to the format of the render passes and how they combine to produce the final image. If this is in the wrong forum please let me know.

    VRayNormals - Looking at the images it appears that these are in view space, but how are they to be interpreted? Assuming 8-bit channels (for PNG), is 0 = -1, 255 = 1 and 127/128 = 0?
    VRayZDepth - Setting the parameters as zdepth min = N, zdepth max = F, clamp zdepth = false and invert zdepth = false, I assume that the values can be interpreted as 255 = F, 0 = N and 128 = N + (F-N)/2. Is this correct? Furthermore, is it possible to get greater accuracy using 8-bit channels? Since all channels store the same value anyway, would it be possible to use, say, the R and G channels to get 16-bit accuracy? (A decoding sketch along these lines follows this post.)
    VRayReflection, VRayRawReflection, VRayReflectionFilter - A little background: I rendered a scene first with reflections and these three render elements (let's call the RGB render Final). I then rendered the same scene but turned off reflections (let's call this NoReflection). Based on http://vray.us/vray_documentation/vr...elements.shtml I assumed the following would be true:
    VRayReflection + NoReflection = Final
    VRayRawReflection * VRayReflectionFilter = VRayReflection

    But it is not. What am I doing wrong?

    I found this thread discussing the same thing:
    http://www.chaosgroup.com/forums/vbu...lection+filter
    It is a bit difficult to follow, but it appears that blend materials can cause issues; I am not using these.

    Thanks in advance

    Edit:
    About the ReflectionFilter * RawReflection = Reflection relation: I ran a few tests with different settings for the render elements. Basically, if I render to 32-bit float channels instead of 8-bit and turn off all color mapping for the render elements, then the above equation is correct. Why is this? The problem is that with those settings the RawReflection values sometimes go as high as 4, which will obviously not work for PNG images. Furthermore, I don't quite understand how all these things relate. For example, I tried using linear color mapping with multiplier = 1.0, burn factor = 1.0 and gamma = 2.2. With these settings I expected the following:
    Reflection = (RawReflection^(1/2.2) * ReflectionFilter^(1/2.2))^2.2
    This is not the case. How would I have to alter the values based on the color mapping parameters to get the correct reflection? I really lack a lot of understanding of the details and I can't seem to find detailed documentation.

    Again, thanks in advance
    Last edited by Brunstgnegg; 26-10-2011, 01:33 AM.
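
    The decoding asked about above could look roughly like the sketch below. It assumes numpy, uses imageio only as a convenient PNG reader, and the file names, the zdepth min/max values, and the choice of 128 as the zero level for normals are all assumptions, not confirmed V-Ray conventions. The R/G packing at the end is the 16-bit idea from the question; V-Ray itself would not write such a file.

import numpy as np
import imageio.v3 as iio  # any 8-bit PNG reader would do

N_CLIP, F_CLIP = 10.0, 500.0  # the zdepth min / zdepth max entered in the render element (assumed)

# VRayNormals: map 8-bit [0, 255] back to [-1, 1], assuming 128 encodes 0
normals_8bit = iio.imread("VRayNormals.png").astype(np.float32)   # H x W x 3
normals = (normals_8bit - 128.0) / 127.0

# VRayZDepth: map 8-bit [0, 255] back to [N, F], assuming 0 = N and 255 = F
# (invert zdepth and clamp zdepth assumed off, as in the question)
depth_img = iio.imread("VRayZDepth.png").astype(np.float32)
depth_8bit = depth_img[..., 0] if depth_img.ndim == 3 else depth_img  # all channels are equal
depth = N_CLIP + (depth_8bit / 255.0) * (F_CLIP - N_CLIP)

# The 16-bit idea from the question: pack a higher-precision depth into R and G
depth16 = np.round((depth - N_CLIP) / (F_CLIP - N_CLIP) * 65535.0).astype(np.uint16)
r = (depth16 >> 8).astype(np.uint8)    # high byte
g = (depth16 & 0xFF).astype(np.uint8)  # low byte
depth_back = (r.astype(np.float32) * 256.0 + g) / 65535.0 * (F_CLIP - N_CLIP) + N_CLIP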

  • #2
    Render to EXR to get correct math.



    • #3
      Originally posted by IVOR_IP View Post
      Render to EXR to get correct math.
      Thank you for your reply. If you read my edit, this only works for me if I don't use color mapping. Furthermore, I need the images to be in 8-bit PNG format, so there should be some way of color mapping the values so that they end up in the 0-1 range and then converting them back from the rendered image. I need to understand how these things relate and the mathematics involved, but I can't find any documentation. (A sketch of undoing the gamma mapping before recombining the elements follows this post.)

      I also have another question that relates to the previous ones. I rendered the element VRayNormals and, looking at it, it seemed that the normals are in view space (or camera space). However, if the normals are in view space, then all normals of a flat surface should be equal, but they are not. What space are the normals in? How can I transform them to view space? Again, there is probably some document detailing all of this, and if someone could direct me to it that would be great.

      Note: Just in case I am asking the wrong questions, what I want to do is use the VRayZDepth pass to find the view-space position of each pixel and the VRayNormals to find the normals. I have succeeded in neither; I use the scanline ZDepth instead, as it renders the depth rather than the distance, and I'm stuck when it comes to the normals.

      Thanks in advance
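
      A rough sketch of the check referred to above, under the assumption that each saved element has the same pure gamma 2.2 baked in by the color mapping and that nothing was clipped by the 8-bit range (RawReflection can exceed 1.0, so clipping is the likely failure mode); the file names are placeholders:

import numpy as np
import imageio.v3 as iio  # any PNG loader would do

GAMMA = 2.2  # assumed to match the gamma used in the color mapping settings

def load_linear(path):
    """Load an 8-bit pass and undo an assumed pure gamma-2.2 color mapping."""
    img = iio.imread(path).astype(np.float32) / 255.0
    return img ** GAMMA

raw_refl   = load_linear("VRayRawReflection.png")
refl_filt  = load_linear("VRayReflectionFilter.png")
reflection = load_linear("VRayReflection.png")

# In linear space the documented relation is multiplicative:
#   VRayRawReflection * VRayReflectionFilter = VRayReflection
recombined = raw_refl * refl_filt
print("max abs error:", float(np.abs(recombined - reflection).max()))
# A large error here usually means values were clipped at 1.0 before the PNG was
# written, or the color mapping applied was not a pure per-channel gamma.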



      • #4
        Converting to 8-bit int 0-1 and then back to float will cause loss of data, and you will most likely not get anywhere with PNG if you plan on dissecting your image that much.

        VRayNormals are -1/+1 and in camera space. Hence the normal will be different for a flat parallel plane from the left edge to the right edge and from top to bottom. You can use SamplerInfo to generate world- or object-space normals. If you want to transform them, you will have to use the FOV of the camera.

        What do you mean by "depth vs. distance"? You can use the min/max and invert/clamp settings in the z-depth element to output pretty much any format needed.

        Regards,
        Thorsten



        • #5
          Originally posted by instinct View Post
          Converting to 8-bit int 0-1 and then back to float will cause loss of data, and you will most likely not get anywhere with PNG if you plan on dissecting your image that much.

          VRayNormals are -1/+1 and in camera space. Hence the normal will be different for a flat parallel plane from the left edge to the right edge and from top to bottom. You can use SamplerInfo to generate world- or object-space normals. If you want to transform them, you will have to use the FOV of the camera.

          What do you mean by "depth vs. distance"? You can use the min/max and invert/clamp settings in the z-depth element to output pretty much any format needed.

          Regards,
          Thorsten
          Thank you for your reply!

          I realize there will be loss of data but I hope the results will be sufficient using 8-bit format. If not, I will have to change the implementation later. This is not a pressing concern however so I will leave that for later.

          You say that the normals are in -1/+1. Does that mean that I could use the formula X_N = (R - 128)/127, and similarly for Y and Z? That is what I'm currently doing, and the length of each normal appears to be 1 (for 8-bit channels).

          I may have misunderstood what camera space is. I assumed that camera space was the same as view space, i.e. a translation and rotation from world space. I didn't think transforming to view space would make the normals relative to the field of view, since the field of view is not used until the scene is projected. "If you want to transform them you will have to use the fov of the camera." How? You don't have to give me a step-by-step approach or anything, but I don't understand what space the normals are in, so I don't know how to use the FOV to transform them.

          Regarding "depth vs. distance". According to this post by Vlado:
          http://www.chaosgroup.com/forums/vbu...depth+scanline
          "V-Ray calculates the Z-buffer as the distance from the camera position, whereas the scanline renderer calculates this as distance from the camera XY plane."

          It is a lot easier to find the view-space coordinates using the distance from the XY plane rather than the distance from the camera. (A reconstruction sketch covering both cases follows this post.)

          Regarding SamplerInfo: I had not heard about it before this post. A quick Google search later, and it seems that it is for Maya? Is there anything equivalent for 3ds Max? Could you direct me to any info/tutorials?
          Last edited by Brunstgnegg; 02-11-2011, 07:00 AM.
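
          A sketch of the position reconstruction discussed above. The camera conventions (looking down -Z, +X right, +Y up, square pixels) and the function name are assumptions; the point is the difference between a radial distance (V-Ray ZDepth) and a planar depth (scanline ZDepth) when rebuilding camera-space coordinates from the camera FOV:

import numpy as np

def camera_space_positions(depth, fov_x_deg, depth_is_radial):
    """Rebuild per-pixel camera-space positions from a depth pass.

    depth          : H x W array of real depth values (already un-mapped from 8-bit)
    fov_x_deg      : horizontal field of view of the render camera
    depth_is_radial: True for a distance-from-camera-position pass,
                     False for a distance-from-camera-plane (scanline-style) pass
    Convention assumed here: camera looks down -Z, +X right, +Y up, square pixels.
    """
    H, W = depth.shape
    tan_x = np.tan(np.radians(fov_x_deg) / 2.0)
    tan_y = tan_x * H / W

    xs = (2.0 * (np.arange(W) + 0.5) / W - 1.0) * tan_x  # per-column ray slope
    ys = (1.0 - 2.0 * (np.arange(H) + 0.5) / H) * tan_y  # per-row ray slope
    px, py = np.meshgrid(xs, ys)

    if depth_is_radial:
        # radial distance -> planar depth: divide by the length of the unnormalized ray
        z = depth / np.sqrt(px * px + py * py + 1.0)
    else:
        z = depth

    return np.dstack([px * z, py * z, -z])  # H x W x 3 camera-space positions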



          • #6
            The same as for "depth vs. distance" holds true for the normals. The normal is relative to the view vector (which is cast from the camera into the scene and not from the plane).

            Mind sharing what you plan to do with the normals and the coordinates, and in what application? That might help ease the way hehe.

            I was referring to VRaySamplerInfo, which is a V-Ray render element.

            To generate view-space normals, i.e. normals transformed by the inverse camera matrix, you can use the SamplerInfo element set to Normal (or Position), enable reference mode and select your camera.

            Regards,
            Thorsten
            Last edited by instinct; 02-11-2011, 09:15 AM.
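
            Outside of V-Ray, the transform described above amounts to rotating world-space normals by the inverse of the camera's rotation. A minimal sketch, with the camera matrix as an assumed input:

import numpy as np

def world_to_camera_normals(normals_world, cam_to_world_rot):
    """Rotate world-space normals into camera space.

    normals_world    : H x W x 3 array of unit normals in world space
    cam_to_world_rot : 3 x 3 rotation part of the camera's transform (assumed orthonormal)
    """
    world_to_cam = cam_to_world_rot.T   # inverse of a pure rotation is its transpose
    # apply the rotation to every pixel's normal (row vectors)
    return normals_world @ world_to_cam.T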



            • #7
              Originally posted by Brunstgnegg View Post
              Thank you for your reply. If you read my edit, this only works for me if I don't use color mapping. Furthermore, I need the images to be in 8-bit PNG format, so there should be some way of color mapping the values so that they end up in the 0-1 range and then converting them back from the rendered image. I need to understand how these things relate and the mathematics involved, but I can't find any documentation.

              ZDepth is an artificially clamped representation of scene depth; when you enter the min/max, you are remapping and clamping that distance from the camera to the pixel in world space.

              A world/position pass etc. is a completely different animal: it is the direct encoding of each pixel's world coordinates into a float3, so if you are saving it as an 8-bit PNG you are literally doing a sawtooth encoding of that pixel; it would not be useful unless you want to produce some kind of voxel-looking image.

              As for VRayNormals, a flat surface would not generate equal values; that would only happen if the surface lies normal to the picture plane.
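
              To make the sawtooth remark concrete: squeezing unbounded world coordinates into an 8-bit channel amounts to keeping only a wrapped fraction of each coordinate, which cannot be inverted uniquely. A tiny illustration (the tile size is an arbitrary choice):

import numpy as np

TILE = 10.0  # world units that map onto one 0-255 ramp (arbitrary choice)

def sawtooth_encode(position):
    """Wrap world coordinates into repeating 0-255 ramps (lossy, not invertible)."""
    return np.floor((position % TILE) / TILE * 255.0).astype(np.uint8)

p1, p2 = np.array([3.0, 0.0, 0.0]), np.array([13.0, 0.0, 0.0])
print(sawtooth_encode(p1), sawtooth_encode(p2))  # identical output for points 10 units apart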



              • #8
                Originally posted by instinct View Post
                The same as for "depth vs. distance" holds true for the normals. The normal is relative to the view vector (which is cast from the camera into the scene and not from the plane).

                Mind sharing what you plan to do with the normals and the coordinates, and in what application? That might help ease the way hehe.

                I was referring to VRaySamplerInfo, which is a V-Ray render element.

                To generate view-space normals, i.e. normals transformed by the inverse camera matrix, you can use the SamplerInfo element set to Normal (or Position), enable reference mode and select your camera.

                Regards,
                Thorsten
                I still don't quite understand what space the normals are in. Out of curiosity, I would love it if you could direct me to some reading regarding this! However, the VRaySamplerInfo normals in camera space are exactly what I need! Thank you for your help!

                As for what I'm doing, I am attempting to add reflections as a post processing effect, kind of like a deferred shading screen-space technique. I realize VRay can render reflections but because of the way the software I'm working with functions those can't be used.

                ZDepth is an artificially clamped representation of scene depth; when you enter the min/max, you are remapping and clamping that distance from the camera to the pixel in world space.

                A world/position pass etc. is a completely different animal: it is the direct encoding of each pixel's world coordinates into a float3, so if you are saving it as an 8-bit PNG you are literally doing a sawtooth encoding of that pixel; it would not be useful unless you want to produce some kind of voxel-looking image.

                As for VRayNormals, a flat surface would not generate equal values; that would only happen if the surface lies normal to the picture plane.
                I am attempting to approximate the view position using the depth data. To start I am using 8-bit accuracy but I may have to either use better accuracy or start using a position pass. Time will tell. I think I have the normals figured out now at least.

                Thank you both for your help!



                • #9
                  I guess it boils down to screen space vs. camera space. V-Ray outputs in camera space for both normals and depth. Depth is measured along a vector from the camera position through the virtual "screen" plane to the actual object. Hence the normals "lean towards the center", so to say. In screen space the vector would be cast perpendicular to the screen plane.

                  Now, as for doing deferred shading operations, there are a lot of ways to calculate, and screen, camera, world and object space can all be used for all kinds of shading. With the VRaySamplerInfo you can take quite a bit of a shortcut though. You can output both reflected and refracted normals using the different modes. This way you don't have to reflect your eye vector with the surface normal in post, but can just use the reflected vector to do a lookup in the environment map, or cast it into a scene if there is one. (A lookup sketch follows this post.)

                  I found it a lot easier to use world space position and normals to do post relighting and reflections actually.

                  See http://vimeo.com/19078334 for an example. Besides the obvious sampling artifacts, it also matches the original Max rendering pretty damn closely for both lighting and reflections (given you implement a proper BRDF distribution, of course).

                  In regards to 8-bit, I would definitely say it is not sufficient for the kind of things you are trying to do. In real-world scenes not even half float was enough to yield a good result here. I had to go to 32-bit full float for the auxiliary passes.

                  Regards,
                  Thorsten
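
                  As a sketch of the shortcut described above, assuming the reflected directions from VRaySamplerInfo are already available per pixel and the environment is a simple lat-long image; the spherical mapping used here is a common convention, not anything V-Ray specific:

import numpy as np

def latlong_lookup(env, directions):
    """Sample a lat-long environment map with per-pixel reflected directions.

    env        : Henv x Wenv x 3 image (linear values)
    directions : H x W x 3 reflected vectors (need not be normalized)
    Nearest-neighbour sampling only, to keep the sketch short.
    """
    d = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
    u = 0.5 + np.arctan2(d[..., 0], -d[..., 2]) / (2.0 * np.pi)  # azimuth -> U
    v = 0.5 - np.arcsin(np.clip(d[..., 1], -1.0, 1.0)) / np.pi   # elevation -> V
    He, We = env.shape[:2]
    xi = np.clip((u * (We - 1)).astype(int), 0, We - 1)
    yi = np.clip((v * (He - 1)).astype(int), 0, He - 1)
    return env[yi, xi]

# e.g. reflection_rgb = latlong_lookup(env_map, reflected_dirs) * reflection_filter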


