A call for help from Chaos' Research and Development


  • A call for help from Chaos' Research and Development

    This is an official communication from the Chaos' RnD team: we need your help!
    • The work we are doing:
    We are working on a way to automate the choice of MSR value per pixel, to maximise performance without user intervention, and regardless of the scene being rendered.
    MSR, or Minimum Shading Rate, expresses the number of secondary rays (used to compute e.g. lighting, shaders, and so on) that are cast for each primary one (also called camera rays: those shot to find the geometry to work on in the scene).
    This number is currently set per render, and is identical across all pixels, which is a sub-optimal approach: some areas would benefit from a lower number, while others would work better with a higher one.
    For example, parts which are heavily motion-blurred, or defocused, or have a lot of sub-pixel detail, would benefit from a very low MSR, while other areas which are fairly uniform (flatly lit flat walls, e.g.) would benefit from a higher one.

    The idea is to gather as much relevant information as possible from the scene to be rendered (e.g. geometric complexity, shading complexity, defocus amount, velocity, lighting complexity, etc.) via a prepass (analogous to the one V-Ray for Maya already has), and then, via some set of heuristics, determine the best value for each pixel during the rendering phase.
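    To make the idea concrete, here is a minimal sketch of what a per-pixel heuristic of this kind could look like. All names, thresholds, and the scaling rule are purely illustrative assumptions, not Chaos' actual logic: the post explicitly says such a hand-written heuristic proved insufficient in practice.

```python
# Hypothetical per-pixel MSR heuristic driven by prepass features.
# Feature names and the scaling rule are illustrative assumptions only.

def choose_msr(defocus, velocity, subpixel_detail,
               base_msr=6, min_msr=1, max_msr=32):
    """Pick a per-pixel Minimum Shading Rate from prepass features.

    Inputs are assumed normalised to [0, 1]. Blur-like effects
    (defocus, motion, sub-pixel detail) push the MSR down, since
    primary rays already average heavily there; flat, uniform
    pixels keep the base rate.
    """
    # The strongest blur-like effect dominates the decision.
    blur = max(defocus, velocity, subpixel_detail)
    # Scale the base rate down as blur increases, then clamp.
    msr = round(base_msr * (1.0 - blur))
    return max(min_msr, min(max_msr, msr))
```

    A crisp, uniform pixel keeps the base rate; a fully motion-blurred one drops to the minimum. The post's point is precisely that a single formula like this cannot cover every combination of conditions, which is where the machine learning comes in.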
    • The reason we need help:
    We usually achieve our goals with our own 3d scenes (both produced in-house and bought from public collections, along with a few user-donated ones), but we find ourselves in dire need of yours.

    This is because, to make sense of the data collected, we seem to require machine learning: we've found that, due to the number of variables in play for any given pixel, coming up with an algorithm that would work all the time is practically impossible (an intricately lit flat wall, slightly defocused and motion-blurred, e.g.).
    Training a model for the exclusive task of finding the right MSR value for a given pixel instead shows promise: a promise that will be better fulfilled the more, and the more diverse, the training material is.
    In other words, if we only used our own 3d material, the resulting "AI" would only be familiar with the conditions found in the pixels of those scenes.
    This is why we need all kinds of 3d content you may have: product or architectural visualization, VFX or commercial work, technical lighting analysis and simulation.

    We'll then invest our resources into rendering those scenes with all the settings variations needed (a lot of rendering, by any metric), and into collecting the per-pixel data the Machine will use for Learning.
    • A few disclaimers:
    *) Nothing of the input content will be learnt (e.g. 3d setup methods and approaches). Only the time each pixel took to render and the per-pixel information mentioned above will be collected. The ML will happen on 5x5 pixel kernels, each entirely unaware of anything else in the image.
    *) The renders as whole images will be used only to create the data above (i.e. they will not be directly ingested). No "generative AI" funkiness is going on anywhere in the process: we'll look with human eyes at renders only to make sure they actually render as they should, to measure actual noise levels in the output, and, should we have doubts looking at graphs, to verify that the data was not spurious (e.g. a pixel that rendered too quickly, and wrongly).
    *) No data cross-pollution, sharing, or handling outside of the tasks outlined above will happen: we'll load the scenes one by one, feed them to the process, and delete them as the task is done.
    *) You will have Vlado and me as points of contact at all times: should you wish to stop the usage of your data, an email will suffice.
    *) We are bound, and covered, by the EULA shared with you. We however feel it's right to be completely transparent in this, hence this long-winded post.
    *) We are very grateful for any material you'll send our way: we understand the value it has for you, and very much appreciate the non-trivial effort.
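    The 5x5-kernel locality mentioned in the disclaimers can be sketched as follows. This is an illustrative assumption about what "kernel extraction" could look like; the function name and the border handling are made up for the example, and each extracted patch is, by construction, independent of the rest of the image.

```python
# Illustrative sketch of 5x5-kernel locality: each training sample
# sees only a pixel's immediate neighbourhood, never the whole image.
# The function name and border handling are assumptions for this example.

def extract_kernels(image, k=5):
    """Yield ((y, x), patch) pairs from a 2D grid of per-pixel values.

    `image` is a list of equal-length rows. Border pixels (those
    without a full k-by-k neighbourhood) are skipped for brevity.
    """
    r = k // 2
    h, w = len(image), len(image[0])
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Slice out the k-by-k neighbourhood around (y, x).
            patch = [row[x - r:x + r + 1] for row in image[y - r:y + r + 1]]
            yield (y, x), patch
```

    On a 6x6 image this yields just the four interior pixels with full neighbourhoods, each carrying only its own 25 values, which is the sense in which no kernel is "aware of anything else in the image".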
    • How to go about it:
    Ensure you are logged into your Chaos account on our website first, then head to the dedicated form set up for the task, and upload the archive of your scene (100GB is the per-submission limit; feel free to split the scene into multiple submissions should it be needed).
    It will then be our duty to use it, and to delete it as soon as this specific task is done.
    The scenes can be in the source DCC format (Max, Maya, SketchUp, Houdini, etc.), or directly in vrscene format, but they should be set up so as to render as they did in their production tasks, so as to be faithful representations of the specific workloads.

    Thanks again for your time and help!
    Lele, Vlado and the RnD team.
    Last edited by ^Lele^; 24-04-2024, 04:08 AM.
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

  • #2
    Just thinking outside of the box, but as you guys have chaos cloud maybe you can try and get some scenes that way? With permission of course. The scenes are uploaded there anyway. Just ask permission to use the scenes for analysis, maybe before uploading?
    A.

    ---------------------
    www.digitaltwins.be



    • #3
      Originally posted by Vizioen View Post
      Just thinking outside of the box, but as you guys have chaos cloud maybe you can try and get some scenes that way? With permission of course. The scenes are uploaded there anyway. Just ask permission to use the scenes for analysis, maybe before uploading?
      We can, in theory and by EULA, use that material, yes.
      But there are a number of caveats to doing that (material selection, permission requests, and so on), and given there is machine learning in the mix, we thought it best to be transparent about our intentions, and to have an explicit submission process.
      Lele
      Trouble Stirrer in RnD @ Chaos
      ----------------------
      emanuele.lecchi@chaos.com

      Disclaimer:
      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



      • #4
        Hi Lele,
        are Vray for Cinema4D scenes ok too? If that’s the case I’ll be happy to share a couple works.
        3D Scenes, Shaders and Courses for V-ray and Corona
        NEW V-Ray 5 Metal Shader Bundle (C4D/Max): https://www.3dtutorialandbeyond.com/...ders-cinema4d/
        www.3dtutorialandbeyond.com
        @3drenderandbeyond on social media @3DRnB Twitter



        • #5
          Originally posted by sirio76 View Post
          Hi Lele,
          are Vray for Cinema4D scenes ok too? If that’s the case I’ll be happy to share a couple works.
          Absolutely, many thanks!
          We thought we'd start posting in general and Max first, and to extend to the rest of the forum sub-sections later, but we will gladly take any format you have right away!
          Lele
          Trouble Stirrer in RnD @ Chaos
          ----------------------
          emanuele.lecchi@chaos.com

          Disclaimer:
          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

