Hi,
I'm trying to convert the VRayNormals result of a scene into a meaningful representation for a computer-vision application, but I can't make sense of the rendering result.
My results are saved as 16-bit/channel PNGs. As far as I understand (and also following Wikipedia), the normals are encoded as follows:
sampled image values: R,G,B
range: number of possible image values (65536 for 16-bit, 256 for 8-bit)
normal: X,Y,Z
X = ((R/(range-1)) - 0.5) * 2;
Y = ((G/(range-1)) - 0.5) * 2;
Z = ((B/(range-1)) - 0.5) * (-2);
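For reference, here is a minimal decoding sketch of that mapping in Python. The file name normals.png is just a placeholder, and I'm assuming the PNG is written without gamma/sRGB correction and is read with OpenCV (which loads channels in B,G,R order):

import cv2
import numpy as np

# Read the 16-bit/channel PNG without any conversion (uint16, channels in B,G,R order).
img = cv2.imread("normals.png", cv2.IMREAD_UNCHANGED).astype(np.float64)

rng = 65536.0  # 65536 for 16-bit, 256 for 8-bit
b, g, r = img[..., 0], img[..., 1], img[..., 2]

# Apply the encoding above: map [0, range-1] back to [-1, 1], with Z flipped.
x = (r / (rng - 1.0) - 0.5) * 2.0
y = (g / (rng - 1.0) - 0.5) * 2.0
z = (b / (rng - 1.0) - 0.5) * -2.0

normals = np.dstack([x, y, z])

# Re-normalize to compensate for quantization; background pixels may decode to
# near-zero vectors, so guard against division by zero.
length = np.linalg.norm(normals, axis=2, keepdims=True)
normals = np.divide(normals, length, out=np.zeros_like(normals), where=length > 0)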
Assume the rendering of a plane lying in the x/y-plane at z = 0, viewed by a perspective camera. For any point on the plane the normal is the same, [0,0,1]. I would therefore assume that the renderer assigns every rendered point on the plane the same normal with respect to the camera (which basically means the same image color). Here's the scene:
However, looking at the renderer's results, this is not the case, which gives me a headache:
It's clearly visible that the color changes. Doesn't the VRayNormals render element create normals with respect to the camera's look-at direction? I would assume that all normals reside in a common reference coordinate system, namely the camera coordinate system. However, it seems that the location of a pixel in the image influences the normal, which contradicts this assumption.
Can someone give me a hint as to what is happening here?
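To quantify that, here is a small self-contained check (same assumptions as the sketch above: placeholder file name normals.png, OpenCV, no gamma correction) that prints the per-channel spread of the decoded normals; if all normals on the plane really shared one direction, the spread should be close to zero up to quantization:

import cv2
import numpy as np

img = cv2.imread("normals.png", cv2.IMREAD_UNCHANGED).astype(np.float64)  # uint16, B,G,R order
n = (img / 65535.0 - 0.5) * 2.0
n[..., 0] *= -1.0  # the blue channel carries Z with a flipped sign in the encoding above

for name, idx in (("X", 2), ("Y", 1), ("Z", 0)):  # B,G,R order: R=X, G=Y, B=Z
    c = n[..., idx]
    print(f"{name}: min={c.min():+.4f} max={c.max():+.4f} std={c.std():.4f}")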
Cheers