In the thread about the DeNoiser submission script (by Lele), he talked about using VraySamplerInfo for an unfiltered z-depth pass.
He also wrote "And once you have points working, what's stopping you from bringing camera transforms in, and do, hem, things...?" I didn't really understand that part...
So, Lele (or anyone else), do you mean camera transforms can be stored in a render element? Like having XYZ as an RGB value (presumably uniform across the frame)?
I just want to know whether it's possible to store the camera's and the camera target's positions within render elements, with VraySamplerInfo or however else...
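To show what I'm imagining, here's a minimal sketch in Python (assuming the classic OpenEXR/Imath bindings; the file name, resolution, and camera values are just made up) of what I mean by baking a camera position into a constant RGB pass and reading it back later:

```python
import array
import OpenEXR
import Imath

WIDTH, HEIGHT = 8, 8                     # tiny dummy resolution for the example
cam_pos = (12.5, -3.0, 180.0)            # hypothetical camera world position (X, Y, Z)

# Write an EXR where every pixel's RGB = the camera's XYZ (uniform across the frame)
pt = Imath.PixelType(Imath.PixelType.FLOAT)
header = OpenEXR.Header(WIDTH, HEIGHT)
header['channels'] = {c: Imath.Channel(pt) for c in 'RGB'}

pixels = {
    'R': array.array('f', [cam_pos[0]] * (WIDTH * HEIGHT)).tobytes(),
    'G': array.array('f', [cam_pos[1]] * (WIDTH * HEIGHT)).tobytes(),
    'B': array.array('f', [cam_pos[2]] * (WIDTH * HEIGHT)).tobytes(),
}

out = OpenEXR.OutputFile('camera_pos.exr', header)
out.writePixels(pixels)
out.close()

# Later, in comp or a script, recover the position from any pixel of that pass
inp = OpenEXR.InputFile('camera_pos.exr')
x = array.array('f', inp.channel('R', pt))[0]
y = array.array('f', inp.channel('G', pt))[0]
z = array.array('f', inp.channel('B', pt))[0]
print('camera position:', (x, y, z))
```

That's the kind of thing I'm after, just done by the renderer itself per frame instead of by a separate script.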