Determining image coordinates from view-space position when using vert/hor shift


  • #1

    Hello everyone,

    I am currently trying to implement a post-processing effect using (among other things) V-Ray's VRaySamplerInfo "Point in camera space" output, so I have a simple way of going from image coordinates to view-space positions. For the other direction, however, I've run into some problems. My attempt so far is to simply use a modified projection matrix such as the one given here:
    http://www.opengl.org/sdk/docs/man/x...erspective.xml
    but with the following changes to accommodate the fact that V-Ray reports FOVX rather than FOVY:

    f = cotangent(FOVX/2)
    M[0,0] = f
    M[1,1] = aspect * f

    For z_near and z_far I use two dummy values that I know will cover my whole scene, like 0.1 and 100000, since I'm not interested in the projected z value anyway.
    I then simply map from [-1..1] to [0..1] and then to [0..width] or [0..height].

    This works perfectly as long as the vertical shift and horizontal shift parameters are 0.

    How do the vertical and horizontal shifts relate to the FOV of the image? Where can I find documentation on this? Any tips on how to accommodate shift?

    Thanks in advance

  • #2
    Not for the same thing, but I'm interested in this too. I often import 3D meshes with a 3ds Max camera into Toxik 2012, and when we use vertical shift it indeed doesn't render the same. So if I could find a way to skew the 3ds Max camera in Toxik to match the renders, it would rock!

    • #3
      OK, after several long and incredibly frustrating hours I finally managed to figure it out.

      First of all, vertical and horizontal shift in V-Ray do two things. First, they distort the image, making objects on one side of the given axis appear smaller and objects on the other side appear larger; this is handled by the bottom row of the projection matrix, m_41 and m_42 for horizontal and vertical shift respectively. Second, they compress the field of view along the given axis depending on the absolute value of the shift; this is handled by m_11 and m_22 for horizontal and vertical shift respectively.

      You need the following parameters from V-Ray to create the matrix detailed below:
      - fov_x - the horizontal field of view as it is WITHOUT any shift applied
      - h_s - horizontal shift
      - v_s - vertical shift
      - a - aspect ratio (w/h)

      Note: Since I am only interested in the 2D coordinates of a given view-space position, I don't care about the near and far planes; they only affect the projected z depth.
      Note 2: The matrix projects to a cube with corners (-1,-1,-1) and (1,1,1), like OpenGL.

      f = 1/tan(fov_x/2)

      m_11 = f*sqrt(h_s^2+1)
      m_12 = 0
      m_13 = 0
      m_14 = 0
      m_21 = 0
      m_22 = f*a*sqrt(v_s^2+1)
      m_23 = 0
      m_24 = 0
      m_31 = 0
      m_32 = 0
      m_33 = -(z_far+z_near)/(z_far-z_near)
      m_34 = -(2*z_near*z_far)/(z_far-z_near)
      m_41 = -h_s
      m_42 = -v_s
      m_43 = -1
      m_44 = 0

      (The near and far planes are written z_near and z_far here so they don't collide with the f defined above.)

      Hopefully someone will be able to make use of this.

      Note: These results come from testing, not theory. I can't guarantee that they will always work.

      • #4
        Did you get any further with this? I'm still trying to match a Max or V-Ray camera with vertical shift to the one exported into Nuke, so that the geometry in Nuke and the renders line up. I'm not familiar with matrices at all but always open to learning. Could we write a .chan file from the Max camera's vertical shift values and connect that to the Nuke camera matrix so they match?

        • #5
          Ten years later: is there a way to bring the V-Ray vertical shift information from Max into Nuke?
