I wasn't sure how to name the thread, so let me explain what I want to achieve:
Google has open-sourced a technology called Seurat, which allows baking a "volume" of pre-rendered 360-degree panoramas (plus their depth buffers) into a low-poly mesh. This allows viewing high-quality rendered 3D scenes even on mobile devices (especially VR headsets), and even permits a limited amount of moving around in the scene (hence rendering a "volume") with 6-degree-of-freedom VR headsets.
Now, the Seurat technology requires the images used for the calculation to be rendered without anti-aliasing, so basically setting V-Ray's sampler subdivs to 1. The problem with this is, of course, that rendering this way creates an extremely noisy image.
Is there a good way to render an image that gets the proper amount of subdivisions to render noise-free, while still having no anti-aliasing (so that each pixel of the rendering clearly has a corresponding pixel in the unfiltered Z-pass)?
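To make the constraint concrete, here is a toy NumPy sketch (not V-Ray code, and the numbers are made up) of why anti-aliasing breaks the one-to-one color/depth correspondence at silhouette pixels:

```python
import numpy as np

# Toy edge pixel: 4 sub-samples straddle a silhouette between a
# near red surface (depth 2.0) and a far blue background (depth 100.0).
sub_colors = np.array([[1.0, 0.0, 0.0],   # near surface
                       [1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0],   # background
                       [0.0, 0.0, 1.0]])
sub_depths = np.array([2.0, 2.0, 100.0, 100.0])

# With anti-aliasing, the pixel is a blend of the sub-samples:
aa_color = sub_colors.mean(axis=0)   # purple - belongs to neither surface
aa_depth = sub_depths.mean()         # 51.0 - a depth no surface actually has

# With one sample per pixel (no AA), color and depth stay consistent:
point_color = sub_colors[0]
point_depth = sub_depths[0]          # 2.0 - an actual surface depth

print(aa_color, aa_depth)
print(point_color, point_depth)
```

This is why Seurat wants unfiltered single-sample renders: every pixel's color must come from the same surface point that wrote its Z value, which averaging across sub-samples destroys.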