  • A call for help from Chaos' Research and Development


    This is an official communication from the Chaos RnD team: we need your help!
    • The work we are doing:
    We are working on a way to automate the choice of MSR value per pixel, to maximise performance without user intervention and regardless of the scene being rendered.
    MSR, or Minimum Shading Rate, expresses the number of secondary rays (used to compute e.g. lighting, shaders and so on) that are cast for each primary one (also called camera rays, which are shot to find the geometry to work on in the scene).
    This number is currently set per render and is identical across all pixels, which is a sub-optimal approach: some areas would benefit from a lower number, while others would work better with a higher one.
    For example, parts which are very motion-blurred, or defocused, or have a lot of sub-pixel detail would benefit from a very low MSR number, while areas which are fairly uniform (e.g. flatly lit flat walls) would benefit from a higher one.

    The idea is to gather as much relevant information as possible from the scene to be rendered (e.g. geometric complexity, shading complexity, defocus amount, velocity, lighting complexity, etc.) via a prepass (analogous to the one V-Ray for Maya already has), and then, via a set of heuristics, determine the best value for each pixel during the rendering phase.
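    For illustration only, here is a rough sketch of what such a per-pixel decision could look like if it were hand-written. The feature names and thresholds below are made up for the example; the point of the work described above is to replace this kind of guesswork with something learnt from data.

```python
# Illustrative only: a hand-written heuristic mapping hypothetical prepass
# features (all normalised to [0, 1]) to a per-pixel MSR value. The actual
# prepass outputs and heuristics under development are not shown here.

def pick_msr(defocus: float, velocity: float, subpixel_detail: float,
             shading_uniformity: float) -> int:
    """Return an MSR value for one pixel from made-up prepass features."""
    # Heavily blurred or detail-rich pixels: spend the budget on anti-aliasing,
    # so keep the number of secondary rays per camera ray low.
    if max(defocus, velocity, subpixel_detail) > 0.7:
        return 1
    # Flat, uniformly shaded pixels: anti-aliasing converges quickly,
    # so more secondary rays per camera ray pay off.
    if shading_uniformity > 0.8:
        return 16
    # Everything in between gets a middle-of-the-road value.
    return 6

# A defocused pixel vs. a flatly lit wall pixel.
print(pick_msr(defocus=0.9, velocity=0.1, subpixel_detail=0.2, shading_uniformity=0.3))  # 1
print(pick_msr(defocus=0.0, velocity=0.0, subpixel_detail=0.1, shading_uniformity=0.9))  # 16
```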
    • The reason we need help:
    We usually achieve our goals with our own 3D scenes (both produced in-house and bought from public collections, along with the few user-donated ones), but we are finding ourselves in dire need of yours.

    This is because, to make sense of the data collected, we seem to require machine learning: we've found that, due to the number of variables at play for any given pixel, coming up with an algorithm that would work all the time is practically impossible (e.g. an intricately lit flat wall, slightly defocused and motion blurred).
    Training an AI for the exclusive task of finding the right MSR value for a given pixel instead shows promise: a promise that will be better fulfilled the more, and the more diverse, the training material is.
    In other words, if we only used our own 3D material, the resulting "AI" would only be familiar with the conditions found in the pixels of those scenes.
    This is why we need all kinds of 3d content you may have: product or architectural visualization, VFX or commercial work, technical lighting analysis and simulation.

    We'll then invest our resources to render those scenes with all the settings variations needed (a lot of rendering, by any metric), and then to collect the per-pixel data which the Machine will use for Learning.
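    Purely as an illustration of what that per-pixel data could look like once collected, here is a sketch of one training record. The field names and values are assumptions made for the example, not the actual data layout; only the kinds of quantities mentioned above (prepass features, the MSR setting used, and the per-pixel render time) appear.

```python
from dataclasses import dataclass

# Hypothetical shape of one per-pixel training sample: prepass features,
# the MSR value that render variation used, and the time the pixel took.
# Field names are illustrative, not an actual data layout.
@dataclass
class PixelSample:
    geometric_complexity: float
    shading_complexity: float
    defocus: float
    velocity: float
    lighting_complexity: float
    msr: int               # the MSR setting used for this render variation
    render_time_ms: float  # how long this pixel took with that setting

# One scene rendered with several MSR settings yields one sample per pixel
# per setting; for a pixel, the preferable MSR is the one that renders
# fastest at a comparable noise level.
samples = [
    PixelSample(0.2, 0.1, 0.0, 0.0, 0.3, msr=16, render_time_ms=0.8),
    PixelSample(0.2, 0.1, 0.0, 0.0, 0.3, msr=1,  render_time_ms=2.1),
]
best = min(samples, key=lambda s: s.render_time_ms)
print(best.msr)  # 16 for this made-up flat, uniform pixel
```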
    • A few disclaimers:
    *) Nothing of the input content will be learnt (e.g. 3D setup methods and approaches). Only the time each pixel took to render and the per-pixel information mentioned above are collected. The ML will happen on 5x5 pixel kernels, each entirely unaware of anything else in the image.
    *) The renders as whole images will be used only to create the data above (i.e. they will not be directly ingested). No "generative AI" funkiness going on anywhere in the process: we'll be looking with human eyes at renders only to make sure they are actually rendering as they should, to measure actual noise levels in the output, and to verify that data was not spurious (e.g. a pixel that rendered too quickly, and wrongly) should we have doubts looking at the graphs.
    *) No data cross-contamination, sharing, or handling outside of the tasks outlined above will happen: we'll load the scenes one by one, feed them to the process, and delete them once the task is done.
    *) You will have Vlado and me as points of contact at all times: should you wish to stop the usage of your data, an email will suffice.
    *) We are bound, and covered, by the EULA shared with you. We however feel it's right to be completely transparent in this, hence this long-winded post.
    *) We are very grateful for any material you'll send our way: we understand the value it has for you, and very much appreciate the non-trivial effort.
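    To illustrate the 5x5-kernel point in the first disclaimer, the sketch below shows how independent 5x5 patches could be cut out of a per-pixel data buffer, each copied out on its own with no access to the rest of the image. The (height, width, channels) array layout is assumed purely for the example.

```python
import numpy as np

def extract_kernels(per_pixel_data: np.ndarray, size: int = 5):
    """Yield non-overlapping size x size patches of per-pixel data.

    Each patch is copied out on its own, so whatever consumes it has no
    access to the surrounding image, mirroring the disclaimer above.
    """
    h, w = per_pixel_data.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield per_pixel_data[y:y + size, x:x + size].copy()

# Example: a fake 20x20 buffer with 6 per-pixel data channels.
buffer = np.random.rand(20, 20, 6)
patches = list(extract_kernels(buffer))
print(len(patches), patches[0].shape)  # 16 (5, 5, 6)
```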
    • How to go about it:
    Ensure you are logged into your Chaos account on our website first, then head to the dedicated form set up for the task, and upload the archive of your scene (100GB is the per-submission limit; feel free to split the scene into multiple submissions should it be needed).
    It will then be our responsibility to use it and delete it as soon as this specific task is done.
    The scenes can be in the source Max format, or directly in the vrscene format, but should be set up so as to render as they did in their production tasks, so as to be faithful representations of the specific workloads.
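    If a scene's assets add up to more than the 100GB per-submission limit, one way to split it is to pack the files into several archives, each kept under the cap, and send each archive as its own submission. The sketch below is only a convenience example: the folder layout, the archive naming, and the choice to split by file are assumptions, and any split that respects the per-submission limits will do.

```python
import os
import zipfile

LIMIT = 100 * 1024**3  # 100GB per-submission limit, in bytes

def write_archive(archive_name, paths, base_dir):
    """Write the given files into one zip, with paths relative to the scene folder."""
    with zipfile.ZipFile(archive_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in paths:
            zf.write(path, os.path.relpath(path, base_dir))

def split_into_archives(scene_dir, out_prefix="submission"):
    """Pack everything under scene_dir into zip archives, each kept under the
    100GB limit. Sizes are checked uncompressed, which is a conservative estimate."""
    files = []
    for root, _, names in os.walk(scene_dir):
        for name in names:
            path = os.path.join(root, name)
            files.append((path, os.path.getsize(path)))

    part, current, current_size = 1, [], 0
    for path, size in files:
        if current and current_size + size > LIMIT:
            write_archive(f"{out_prefix}_{part:02d}.zip", current, scene_dir)
            part, current, current_size = part + 1, [], 0
        current.append(path)
        current_size += size
    if current:
        write_archive(f"{out_prefix}_{part:02d}.zip", current, scene_dir)

# split_into_archives("/path/to/my_scene")  # hypothetical scene folder
```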

    Thanks again for your time and help!
    Lele, Vlado and the RnD team.

    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

  • #2
    Sounds cool!
    Can we upload several scenes per submission?
    James Burrell www.objektiv-j.com
    Visit my Patreon patreon.com/JamesBurrell



    • #3
      Originally posted by Pixelcon
      Sounds cool!
      Can we upload several scenes per submission?
      You could; there is a 100GB limit per submission, shared across a maximum of 100 files.
      Or you could submit multiple ones, for easier later tracking.
      Lele
      Trouble Stirrer in RnD @ Chaos
      ----------------------
      emanuele.lecchi@chaos.com

      Disclaimer:
      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



      • #4
        I sent around a dozen scene files.
        Please get in touch with me if something goes wrong.
        Have fun with those!

        https://www.behance.net/Oliver_Kossatz



        • #5
          I would like to provide some of my scenes.
          What kind of projects would be better to send?
          Big or small scale? Would any be enough?
          Carlos Rodriguez
          RTstudio
          Tutorials



          • #6
            Carlos, sorry for the delayed reply!
            You can send us what you think represents the work you do best.
            Size or complexity won't matter, we should be able to handle it all. (/me breaks a sweat.)
            Lele
            Trouble Stirrer in RnD @ Chaos
            ----------------------
            emanuele.lecchi@chaos.com

            Disclaimer:
            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



            • #7
              Originally posted by ^Lele^
              Size or complexity won't matter, we should be able to handle it all. (/me breaks a sweat.)
              "One Vray to render them all"



              • #8
                Hi,
                I have a scene that takes a lot of time "loading corona assets" even though I'm not using any Corona objects or assets. I do have some maps (Corona Color Correct, Corona Select) that didn't trouble me before.
                This message has only happened to me when using Corona.
                I have submitted this scene for research and development; the ticket is 252693.
                Carlos Rodriguez
                RTstudio
                Tutorials

