  • What about rendering SH coefficients into an image for later re-lighting in Nuke

    Hi Vlado, hi dev team,

    I am new to the whole SH stuff, but I am wondering if V-Ray would be able to render SH coefficients (or whatever they are called) into a bunch of EXR channels... I don't know how much information this would be per pixel, but can you imagine implementing a feature like this, or is this a totally stupid idea? Is there maybe some information that cannot be stored per pixel?

    It would be really awesome to finally be able to relight inside of Nuke...

    cheerio
    Oli
    Last edited by ultrasonic; 30-10-2011, 12:03 PM.
    OLIVER MARKOWSKI - Head of 3D at RISE | Visual Effects Studios

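    For reference, the relighting this would enable in the comp boils down to a per-pixel dot product between the stored coefficients and the SH projection of the new lighting environment. A minimal sketch of that math, assuming a 3-band / 9-coefficient diffuse encoding per colour component (the array shapes and names are purely illustrative):

```python
# Illustrative sketch: relighting from per-pixel SH coefficients.
# Assumes an order-2 (3-band) encoding, i.e. 9 coefficients per pixel
# and per colour component, stored as separate image planes.
import numpy as np

def relight(sh_planes, light_coeffs):
    """Relight an SH coefficient image with a new lighting environment.

    sh_planes    : float array (9, H, W, 3) - per-pixel transfer coefficients.
    light_coeffs : float array (9, 3)       - SH projection of the new light.
    Returns the relit (H, W, 3) RGB image.
    """
    # Outgoing radiance ~ sum_i transfer_i * light_i, per pixel and channel.
    return np.einsum('ihwc,ic->hwc', sh_planes, light_coeffs)
```
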
  • #2
    It's just a bunch of floating-point numbers, so they can certainly be stored into OpenEXR channels. However, I'm not sure how well this will work with AA - you'll probably need deep compositing to get it right.

    Best regards,
    Vlado
    I only act like I know everything, Rogers.

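    To illustrate the "bunch of floating-point numbers" point, here is a minimal sketch of writing such coefficients into arbitrarily named EXR channels, assuming the classic PyOpenEXR/Imath bindings (the sh*.R/G/B channel naming is made up):

```python
# Illustrative sketch: dump per-pixel SH coefficients into named EXR channels.
# Assumes the classic PyOpenEXR/Imath Python bindings; channel names are made up.
import OpenEXR
import Imath
import numpy as np

def write_sh_exr(path, sh_planes):
    """sh_planes: float array of shape (9, H, W, 3), as in the sketch above."""
    n, height, width, _ = sh_planes.shape
    header = OpenEXR.Header(width, height)
    float_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
    names = ['sh%d.%s' % (i, c) for i in range(n) for c in 'RGB']
    header['channels'] = {name: float_chan for name in names}
    out = OpenEXR.OutputFile(path, header)
    # One full-float channel per coefficient and colour component (27 in total).
    out.writePixels({'sh%d.%s' % (i, c): sh_planes[i, :, :, k].astype(np.float32).tobytes()
                     for i in range(n) for k, c in enumerate('RGB')})
    out.close()
```

    Note that without deep samples each channel only holds the filtered per-pixel value, which is exactly the AA caveat above.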


    • #3
      Originally posted by vlado View Post
      you'll probably need deep compositing to get it right.
      You're absolutely right... but since deep compositing is about to rule the planet, I think there will be huge possibilities as soon as OpenEXR 2.0 or your own vrimg2nuke plugin comes to light. Maybe you could just add the possibility to write SH out to an image once you have finished implementing a deep-image output plugin. I think the community would greatly appreciate this, and it would be one more major reason to render with V-Ray! It's all about marketing.

      cheers & thx in advance!
      Oliver
      OLIVER MARKOWSKI - Head of 3D at RISE | Visual Effects Studios



      • #4
        Do you know what format of the SH is necessary for Nuke? Are there some docs on this?

        Best regards,
        Vlado
        I only act like I know everything, Rogers.



        • #5
          Has anyone here actually done any deep data productions?
          Everyone I talk to who seems to have done one indicated that they reduced the layers to the absolute bare minimum (down to RGBAZ) due to file sizes and performance limitations. While I find the possibilities intriguing, it seems somewhat limiting in other ways.
          I took a look at some of the sample data for 6.3. It isn't even a complex example (imagine atmospherics or other participating media, yikes) and it already adds up to 520 MB (!) for a single 2K frame.
          That's three layers: buildings (blue boxes), balls (red spheres moving between them) and ground. Note that this is only RGBAZ plus the corresponding deep data. How on earth should the network and even local discs handle that?? Let alone the performance drag of doing anything deep-based. I must be missing something big, as I don't really see that working in a production - and that's without even adding in the data explosion due to additional render elements.

          Regards,
          Thorsten

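          For a rough sense of where numbers like 520 MB come from, a back-of-the-envelope estimate of uncompressed deep data (the average sample count per pixel is a pure guess for illustration):

```python
# Back-of-the-envelope for uncompressed deep data size (illustrative numbers;
# the average deep sample count per pixel is a guess).
width, height = 2048, 1080           # a "2K" frame
avg_samples_per_pixel = 12           # guessed average deep sample count
channels = 5                         # R, G, B, A, Z per sample
bytes_per_value = 4                  # 32-bit float

size_bytes = width * height * avg_samples_per_pixel * channels * bytes_per_value
print(size_bytes / 2.0**20, "MB")    # ~506 MB - same order of magnitude
```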


          • #6
            From what I understand, the examples in Nuke store every single camera ray, which is quite wasteful. A more compressed version, like the G-Buffer in 3ds Max, is way more forgiving to file sizes.

            Best regards,
            Vlado
            I only act like I know everything, Rogers.

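            One common way to get such data down (not necessarily what V-Ray or Nuke actually do) is to merge deep samples that lie within a depth tolerance along each pixel's ray; a toy sketch, assuming premultiplied colours sorted front to back:

```python
# Toy sketch of deep sample compression (not V-Ray's or Nuke's actual scheme):
# collapse samples within a depth tolerance by compositing them front to back.
def merge_samples(samples, z_tolerance):
    """samples: list of dicts {'z': depth, 'rgb': (r, g, b), 'a': alpha},
    premultiplied and sorted front to back. Returns a shorter merged list."""
    merged = []
    for s in samples:
        if merged and s['z'] - merged[-1]['z'] < z_tolerance:
            m = merged[-1]
            w = 1.0 - m['a']  # remaining transmittance of the merged sample
            m['rgb'] = tuple(mc + w * sc for mc, sc in zip(m['rgb'], s['rgb']))
            m['a'] += w * s['a']
        else:
            merged.append(dict(s))  # too far from the previous one - keep it
    return merged
```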


            • #7
              Originally posted by vlado View Post
              Do you know what format of the SH is necessary for Nuke? Are there some docs on this?
              I am currently investigating this whole SH stuff... I need to talk to my friend Johannes Saam about it, since he seems to be the right guy to ask on that matter - SH and deep images as well. I'll keep you posted...
              OT: Can you give us some insight into when we will be able to play with deep image data from V-Ray?

              Originally posted by instinct View Post
              Has anyone here actually done any deep data productions ?
              Nope... maybe we should ask some guys from Animal Logic or Dr. D...
              Originally posted by instinct View Post
              I must be missing something big, as i don't really see that working in a production. Without even adding in the data explosion due to additional renderelements.
              If I understood it right, the best approach to storing deep data is in the form of a function per pixel... I haven't seen any footage that uses this, but I am hoping that it will decrease file size (dramatically).
              Last edited by ultrasonic; 31-10-2011, 09:04 AM.
              OLIVER MARKOWSKI - Head of 3D at RISE | Visual Effects Studios



              • #8
                A function per pixel sounds interesting. That would mean you'd have to do an arbitrary curve fit along the ray... hm. That sounds pretty crazy, actually. Or, as per usual, I am understanding it wrong, hehe.

                @Vlado: Thanks for the heads-up - what order of magnitude are we talking about, though? Are we talking about making RGBAZ usable, or about adding another 50 channels to the equation?

                Kind Regards and thanks for the input,
                Thorsten

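                For context, the curve fit described here is essentially how Pixar's deep shadow maps work: each pixel stores the control points of a piecewise-linear curve (e.g. visibility versus depth) fitted to the raw samples within an error tolerance, and the reader just interpolates. A toy sketch of the evaluation side only (the fitting is the hard part and is omitted):

```python
# Toy sketch of the "function per pixel" idea: a pixel's deep data stored as
# control points of a piecewise-linear curve (e.g. visibility vs. depth) and
# evaluated by interpolation. The fitting/compression step is omitted.
import bisect

def eval_pixel_function(control_points, z):
    """control_points: list of (depth, value) pairs, sorted by depth."""
    depths = [d for d, _ in control_points]
    i = bisect.bisect_right(depths, z)
    if i == 0:
        return control_points[0][1]    # in front of the first control point
    if i == len(control_points):
        return control_points[-1][1]   # behind the last control point
    (z0, v0), (z1, v1) = control_points[i - 1], control_points[i]
    t = (z - z0) / (z1 - z0)
    return v0 + t * (v1 - v0)          # linear interpolation between points
```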


                • #9
                  Originally posted by instinct View Post
                  A function per pixel sounds interesting. That would mean you'd have to do an arbitrary curve fit along the ray... hm. That sounds pretty crazy, actually. Or, as per usual, I am understanding it wrong, hehe.
                  http://www.deepimg.com/in-depth/integrated/
                  I really need to get deeper into the matter... but hey, it's all still evolving - there will be lots of improvements in the future... hopefully.
                  OLIVER MARKOWSKI - Head of 3D at RISE | Visual Effects Studios



                  • #10
                    Originally posted by ultrasonic View Post
                    If I understood it right, the best approach to storing deep data is in the form of a function per pixel... I haven't seen any footage that uses this, but I am hoping that it will decrease file size (dramatically).
                    You can't store all data this way (e.g. object IDs, material IDs, matte masks and so on). Furthermore, Pixar has a patent on the particular format for storing the said per-pixel function.

                    Originally posted by instinct View Post
                    @Vlado: Thanks for the heads-up - what order of magnitude are we talking about, though? Are we talking about making RGBAZ usable, or about adding another 50 channels to the equation?
                    I mean an arbitrary number of channels. Take a look at the .vrst shademap files written by the V-Ray stereoscopic helper - they are pretty much a deep raster format (and they are what we are reading in Nuke too).

                    Best regards,
                    Vlado
                    I only act like I know everything, Rogers.



                    • #11
                      Originally posted by vlado View Post
                      Take a look at the .vrst shademap files written by the V-Ray stereoscopic helper - they are pretty much a deep raster format (and they are what we are reading in Nuke too).
                      By "we" do you mean the developers at Chaos Group, or did I miss a release of the Nuke plugin? If not, when do you plan to release it? This year or next year?
                      OLIVER MARKOWSKI - Head of 3D at RISE | Visual Effects Studios
