This is an official communication from the Chaos RnD team: we need your help!
- The work we are doing:
MSR, or Minimum Shading Rate, expresses the number of secondary rays (cast to compute e.g. lighting, shaders and so on) that are traced for each primary ray (also called camera rays: those shot from the camera to find the geometry to work on in the scene).
This number is currently set per render, and is identical across all pixels, which is a sub-optimal approach: some areas would benefit from a lower number, while others would work better with a higher one.
For example, parts which are heavily motion-blurred, or defocused, or have a lot of sub-pixel detail, would benefit from a very low MSR value, while areas which are fairly uniform (flatly lit flat walls, for example) would benefit from a higher one.
The idea is to gather as much relevant information as possible from the scene to be rendered (e.g. geometric complexity, shading complexity, defocus amount, velocity, lighting complexity, etc.) via a prepass (analogous to the one V-Ray for Maya already has), and then, via some set of heuristics, determine the best value for each pixel during the rendering phase (a rough sketch of the idea follows below).
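To make this a little more concrete, here is a minimal, purely illustrative sketch of what such a per-pixel heuristic could look like. The feature names (defocus, velocity, sub-pixel detail) and the mapping are hypothetical placeholders, not the actual prepass outputs nor the heuristics we'll end up using; the real prepass would also feed in things like shading and lighting complexity:

```python
import numpy as np

def per_pixel_msr(defocus, velocity, subpixel_detail, msr_min=1.0, msr_max=16.0):
    """Illustrative only: map per-pixel prepass features (all normalized to
    [0, 1]) to an MSR value."""
    # How strongly the pixel is dominated by blur or sub-pixel detail.
    blur_or_detail = np.maximum.reduce([defocus, velocity, subpixel_detail])
    # Uniform areas (flatly lit flat walls, etc.) get more secondary rays per
    # primary; blurred or highly detailed areas get fewer.
    return msr_min + (msr_max - msr_min) * (1.0 - blur_or_detail)

# Toy usage on a 3-pixel strip: a fast-moving pixel, a pixel full of
# sub-pixel detail, and a pixel on a flat, uniform wall.
defocus  = np.array([0.0, 0.1, 0.0])
velocity = np.array([0.9, 0.0, 0.0])
detail   = np.array([0.2, 0.8, 0.0])
print(per_pixel_msr(defocus, velocity, detail))
```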
- The reason we need help:
This is because, to make sense of the data collected, we seem to require machine learning: we've found that, due to the number of variables in play for any given pixel, coming up with an algorithm that works reliably every time is practically impossible (an intricately lit flat wall, slightly defocused and motion-blurred, for example).
Training a computational model for the exclusive task of finding the right MSR value for a given pixel, instead, shows promise: a promise that will be all the better fulfilled the more numerous, and the more diverse, the training material is.
In other words, if we only used our own 3D material, the resulting "AI" would only be familiar with the conditions found in the pixels of those scenes.
This is why we need all kinds of 3d content you may have: product or architectural visualization, VFX or commercial work, technical lighting analysis and simulation.
We'll then invest our resources to render those scenes with all the settings variations needed (a lot of rendering, by any metric), and then to collect the per-pixel data the Machine will use for Learning; a rough sketch of that step follows below.
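For the curious, here is a minimal sketch of what the learning step could look like once the per-pixel data is flattened into a table. The synthetic data, the feature layout and the model choice (a small random-forest regressor from scikit-learn) are assumptions made purely for illustration, not a description of our actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in for the real dataset: one row per pixel, columns are the prepass
# features collected during the render sweep (names and values are made up).
rng = np.random.default_rng(0)
n_pixels = 10_000
features = rng.random((n_pixels, 4))  # e.g. defocus, velocity, detail, lighting
# Stand-in target: the MSR value that reached the target noise level most
# cheaply for that pixel, as measured across the rendered settings variations.
best_msr = 1.0 + 15.0 * (1.0 - features[:, :3].max(axis=1))

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(features, best_msr)

# At render time, the trained model would be queried per pixel (or per bucket)
# to pick the MSR value, instead of using one global setting.
print(model.predict(features[:5]))
```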
- A few disclaimers:
*) The renders, as whole images, will be used only to create the data above; they won't be ingested directly anywhere. No "generative AI" funkiness is going on at any point in the process: we'll be looking with human eyes at renders only to make sure they are actually rendering as they should, to measure the actual noise levels in the output, and to verify that the data was not spurious (e.g. a pixel that rendered too quickly, and wrongly), should we have doubts when looking at the graphs.
*) No data cross-pollution, sharing, or handling outside of the tasks outlined above will happen: we'll load the scenes one by one, feed them to the process, and delete them once the task is done.
*) You will have Vlado and me as points of contact at all times: should you wish to stop the usage of your data, an email will suffice.
*) We are bound, and covered, by the EULA shared with you. We feel, however, that it's right to be completely transparent about this, hence this long-winded post.
*) We are very grateful for any material you'll send our way: we understand the value it has for you, and very much appreciate the non-trivial effort.
- How to go about it:
Get in touch with either of us (Vlado or me) to arrange sending the material over; it'll then be our diligence to use it, and delete it, as soon as this specific task is done.
The scenes can be in the source Max format, or directly in the .vrscene format, but they should be set up so as to render as they did in their production tasks, so as to be faithful representations of the specific workloads.
Thanks again for your time and help!
Lele, Vlado and the RnD team.