animated camera, zDepth, Camera-Parameter and Ray-Tracing

  • animated camera, zDepth, Camera-Parameter and Ray-Tracing

    Hello,

    I think it's best to explain what I have done so far and where I'm heading, so that my following questions become clear.

    Starting off with the example "vray dome camera" from the SDK, I wrote my own V-Ray camera. I have already tested it and the render output looks quite good.

    My final goal is to render zDepth images of a static scene with an animated camera.
    To that end I wrote a MaxScript that moves the camera through the scene, generating a new animation frame for each movement. The result is an animation with frames 0 to X.

    The whole animation is then rendered with vray.
    Settings:
    lights off
    almost everything off
    no frameview
    just write the zDepth images to the disk


    Basically this works but I have some questions:

    zDepth:
    1) Is the zDepth the length of the ray (3d-point to camera center) or the z-component of the 3d-point in camera coordinates?
    2) Is the zDepth taken from the camera center or from the image plane? I remember reading that V-Ray computes the zDepth from the image plane, but I can't find the source anymore to verify it.
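    To make question 1 concrete, the two candidate definitions can differ substantially for points away from the optical axis. This is a minimal sketch of the pure geometry, not V-Ray code; the point and the camera setup are made up:

    ```python
    import math

    # Hypothetical setup: camera at the origin, looking down +z.
    # p is a visible 3D point given in camera coordinates.
    p = (3.0, 4.0, 12.0)

    # Definition A: length of the ray from the camera center to the point.
    ray_length = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)

    # Definition B: the z-component of the point in camera coordinates.
    z_component = p[2]

    print(ray_length)   # 13.0
    print(z_component)  # 12.0
    ```

    Under definition A, points of equal depth lie on a sphere around the camera center; under definition B, they lie on a plane parallel to the image plane.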

    Camera-Parameter:
    3) I would like to get the camera parameters of my camera (its "projection matrix"). Is there a way to fetch them?
    If I understand the dome camera correctly, it's a sphere:
    1. The image plane lies at a distance of 1.0 from the center (focal length = 1.0).
    2. Pixel aspect ratio is 1.0
    3. The aspect ratio is calculated from the image.
    4. The fov is (like in OpenGL?) along the y-axis from [-fov/2, +fov/2] around the optical axis.
    There is one thing that I'm not sure how to handle:
    The sphere is arched but the image plane is flat. Do the rays become thinner with increasing distance from the optical axis/image center? Or do the rays get "bent" after passing through the image plane (e.g. one ray per pixel, with the direction defined afterwards)?
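    For comparison, this is how a plain pinhole camera with a flat image plane at distance 1.0, a vertical fov, and the aspect ratio taken from the image would generate its rays. The function name and conventions are mine, not from the V-Ray SDK:

    ```python
    import math

    def pinhole_ray(px, py, width, height, fov_y):
        """Normalized ray direction through pixel (px, py) for a flat image
        plane at distance 1.0 in front of the camera center (camera looks
        down +z). fov_y is the full vertical field of view in radians."""
        half_h = math.tan(fov_y / 2.0)       # half image-plane height at z = 1
        half_w = half_h * (width / height)   # aspect ratio taken from the image
        # Map pixel centers to [-half_w, half_w] x [half_h, -half_h].
        x = (2.0 * (px + 0.5) / width - 1.0) * half_w
        y = (1.0 - 2.0 * (py + 0.5) / height) * half_h
        # The ray is a straight line from the camera center through
        # (x, y, 1); it is never bent, only normalized.
        n = math.sqrt(x * x + y * y + 1.0)
        return (x / n, y / n, 1.0 / n)
    ```

    Because the plane is flat while the pixels are evenly spaced on it, pixels near the border subtend a smaller solid angle than pixels at the center, which is the "rays become thinner" effect rather than any bending.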

    Ray-Tracing:
    4) Region rendering takes about 0.2 s, which is acceptable.
    Sadly, the whole frame takes about 2.2 s, which is way too long.
    For every frame the info window shows "Building statical raycast accelerator...", which takes about 1.0 s,
    and "Building geometry", which is faster but still slow compared to the region rendering time (maybe ~0.5 s).
    I don't know what "Building statical raycast accelerator..." does. Would it be sufficient to compute it only once for the whole animation sequence (analogous to the light cache)? Again, the scene is static!

    Sorry for the rather long post, but I need these questions answered to continue my work.

    Best regards,
    bob

  • #2
    zDepth

    Since no one replied I will post my results for anyone who might be interested. If I state something incorrect please feel free to correct me.

    The zDepth is linearly distributed between zDepth min. (0.0) and zDepth max. (1.0), so the midpoint (zDepth max + zDepth min) / 2 maps to 0.5.
    The depth value is the length along the ray, i.e. rays with equal zDepth values form a sphere around the camera center (the point all rays pass through).
    Most image formats don't support floating point (OpenEXR being an exception), so the zDepth floats are mapped to the appropriate representation, e.g. for unsigned char: 0.0 -> 0 and 1.0 -> 255.
    Rays are not bent. The direction is determined by the image plane and the camera center; as a result, the angle between neighbouring rays becomes smaller the further they are from the image center.
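    The mapping described above can be sketched like this. The function is hypothetical and only illustrates the post's description (linear in ray length between the min./max. settings, then quantized for 8-bit formats); the clamping behavior for out-of-range depths is my assumption:

    ```python
    def zdepth_to_unsigned_char(ray_length, zdepth_min, zdepth_max):
        """Linearly map a ray length in [zdepth_min, zdepth_max] to 0..255."""
        t = (ray_length - zdepth_min) / (zdepth_max - zdepth_min)
        t = min(max(t, 0.0), 1.0)   # assumed: clamp outside the min/max range
        return round(t * 255)

    print(zdepth_to_unsigned_char(0.0, 0.0, 100.0))    # 0
    print(zdepth_to_unsigned_char(50.0, 0.0, 100.0))   # 128 (midpoint ~ 0.5)
    print(zdepth_to_unsigned_char(100.0, 0.0, 100.0))  # 255
    ```

    With a floating-point format such as OpenEXR this quantization step is unnecessary and the 0.0..1.0 values can be stored directly.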
    Last edited by bob44; 16-01-2009, 06:37 AM.



    • #3
      You are correct for all points.

      Best regards,
      Vlado
      I only act like I know everything, Rogers.
