I'm trying out a camera projection workflow using the 'Camera Map Per Pixel' node. Basically, I've rendered an HD still of an interior, projected that image back onto my scene using the above node, and rendered a sequence from an animated camera.
My question is what render settings I should use to render out the sequence. The source is a normally rendered image with GI, direct lighting, etc. What I've done so far is plug the 'Camera Map Per Pixel' node into the diffuse slot of a VRayMtl, set that as the override material, and turn off all lights, GI, glossy effects and filtering. My sequence comes from a RawDiffuseFilter RE, and I've turned off all map filtering.
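For context, here's my mental model of what the node does per shaded point: project the point through the static source camera and use the resulting screen position as a UV into the HD still. Just a rough Python sketch of standard pinhole projection, with made-up names; obviously not V-Ray's actual code:

```python
import numpy as np

def project_to_source_uv(world_pt, view_matrix, fov_deg, aspect):
    # world_pt: 3-vector; view_matrix: 4x4 world->camera transform of the
    # STATIC source camera (camera looks down -Z). All names hypothetical.
    cam_pt = view_matrix @ np.append(world_pt, 1.0)
    x, y, z = cam_pt[:3]
    if z >= 0.0:
        return None  # point is behind the source camera
    half_w = np.tan(np.radians(fov_deg) / 2.0)  # horizontal half-width at z=-1
    half_h = half_w / aspect
    u = (x / (-z * half_w)) * 0.5 + 0.5
    v = (y / (-z * half_h)) * 0.5 + 0.5
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None  # falls outside the projected frame
    return u, v
```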
The problem I'm running into is that the resulting (projected) image is much grainier/noisier than the source.
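My best guess so far is that it's a sampling problem: with all filtering off, the map lookup is effectively a nearest-texel point sample, so any sub-texel jitter in the projected UVs from the moving camera picks a different texel each frame and reads as grain. Rough Python sketch of the two lookup modes, for illustration only:

```python
import numpy as np

def sample_bilinear(img, u, v):
    # img: source still as an (H, W, 3) float array; u, v in [0, 1].
    # Filtered lookup: blends the four surrounding texels.
    h, w = img.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def sample_nearest(img, u, v):
    # What the lookup degenerates to with filtering off: sub-texel
    # changes in UV snap between neighbouring texels frame to frame.
    h, w = img.shape[:2]
    return img[int(round(v * (h - 1))), int(round(u * (w - 1)))]
```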
There's a setting in the CMPP node for a 'z-buffer'; maybe it has something to do with that? I added a z-depth RE of the scene, but it didn't seem to do anything. Would I be better off just using the scanline renderer or something?
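For what it's worth, my understanding is that the z-buffer input is meant for occlusion masking, so the still doesn't project through onto surfaces the source camera never saw, rather than anything to do with filtering. Something like this illustrative pseudologic (names and tolerance are made up):

```python
def visible_to_source_camera(point_depth, zbuffer_depth, tol=0.01):
    # point_depth: distance of the shaded point from the source camera.
    # zbuffer_depth: depth the source camera recorded at the same UV.
    # If the point is farther than what the camera saw, something else
    # was in front of it, so it should NOT receive the projection.
    return abs(point_depth - zbuffer_depth) <= tol * zbuffer_depth
```

If that's right, the z-depth RE wouldn't affect the grain at all, which would match what I'm seeing.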
Original image:

Projected image: