New camera allows post depth of field processing

  • #1

    Some new technology recently unveiled claims to use raytracing techniques to capture all of the light coming towards a lens; one benefit is that it lets you manipulate the depth of field of the photo after the fact.

    Z:\jobs\fright_night\MV\shots\010\renders\cg\Look_ Dev\lgt\MV_010_Look_Dev_lgt_0003

    Since Vray is essentially a virtual camera, could this ever be implemented as a way to get perfect post DOF? (My guess is yes, but it would take forever to render).

    B
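
    A minimal sketch of the refocusing idea described above, assuming the light field has already been captured as a U x V grid of sub-aperture images; the shift-and-add (synthetic aperture) approach below and every name in it are illustrative assumptions, not the camera's or V-Ray's actual method.

    Code:
    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add refocus: translate each sub-aperture image in
        proportion to its (u, v) offset from the centre, then average.
        alpha picks the virtual focal plane (0 = no shift)."""
        U, V, H, W, C = light_field.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W, C), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                dy = int(round(alpha * (u - cu)))
                dx = int(round(alpha * (v - cv)))
                # np.roll is a crude integer shift; a real implementation
                # would interpolate for sub-pixel accuracy.
                out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    # Sweeping alpha moves the plane of focus after the "exposure".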

  • #2
    I'd imagine the number of samples it would have to store per pixel would lead to massive files, and all the optimizations would go out the window - discarding irrelevant data is one of the first major speedups!
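
    A toy illustration of that trade-off, with purely hypothetical numbers and names: a conventional renderer folds its per-pixel samples into a running average and forgets them, so memory per pixel stays constant, whereas keeping every sample for later re-weighting (as post DOF would need) grows with the sample count.

    Code:
    import random

    def render_pixel_streaming(n_samples):
        """Accumulate a running mean; no per-sample data is retained (O(1) memory)."""
        mean = 0.0
        for i in range(1, n_samples + 1):
            sample = random.random()      # stand-in for one traced sample
            mean += (sample - mean) / i
        return mean

    def render_pixel_keep_all(n_samples):
        """Keep every sample so it can be re-weighted later (O(n) memory)."""
        return [random.random() for _ in range(n_samples)]

    print(render_pixel_streaming(256))
    print(len(render_pixel_keep_all(256)), "stored samples for a single pixel")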

    • #3
      Fascinated, I did a quick read-up on plenoptics again, and then gave up at this point:

      A team at Stanford University used a 16 megapixel camera with a 90,000-microlens array (meaning that each microlens covers about 175 pixels, and the final resolution is 90 kilopixels) to demonstrate that pictures can be refocused after they are taken.

      90 kilopixels (!)

      Oh.. I only really reply to mention that the link in the original post points to a local drive; I'm assuming it was about plenoptics.
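
      The quoted figures work out as a straight resolution trade, roughly as below (a quick check using only the numbers already cited):

      Code:
      sensor_pixels = 16_000_000   # 16 megapixel sensor
      microlenses = 90_000         # microlens array

      pixels_per_microlens = sensor_pixels / microlenses  # angular samples per lenslet
      spatial_resolution = microlenses                    # one output pixel per microlens

      print(round(pixels_per_microlens))   # ~178, close to the "about 175" quoted
      print(spatial_resolution)            # 90,000 pixels = 90 kilopixels (~0.09 MP)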

      • #4
        Originally posted by shadevfx
        could this ever be implemented as a way to get perfect post DOF? (My guess is yes, but it would take forever to render)
        Yeah, this is already possible and is known as deep compositing. I don't know if it's possible to do with V-Ray (probably, but at a very, very technical level), but at the moment there is no industry-standard format for it. This will change, though, as Weta announced that they will make it possible with the release of OpenEXR 2.0, and that they will also be licensing the deep compositing tools they use in house to The Foundry.

        More about deep compositing can be found here

        http://www.fxguide.com/fxguidetv/fxguidetv_095/

        Best,

        Rich
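
        To give a feel for what the deep data holds, here is a minimal sketch, assuming each pixel stores depth-sorted samples (premultiplied colour, alpha, depth) that are flattened with a front-to-back over; the structure is illustrative only and is not the OpenEXR 2.0 or Nuke API.

        Code:
        from dataclasses import dataclass

        @dataclass
        class DeepSample:
            r: float   # premultiplied colour
            g: float
            b: float
            a: float
            z: float   # depth

        def flatten(samples):
            """Composite one pixel's deep samples front-to-back with 'over'."""
            r = g = b = a = 0.0
            for s in sorted(samples, key=lambda s: s.z):   # nearest first
                w = 1.0 - a                                # remaining transparency
                r += s.r * w
                g += s.g * w
                b += s.b * w
                a += s.a * w
            return r, g, b, a

        # Merging separately rendered elements means interleaving their sample
        # lists by depth before flattening, which is what avoids the edge
        # artefacts of composites done with a single Z value per pixel.
        pixel = [DeepSample(0.0, 0.4, 0.0, 0.5, 12.0),   # far, semi-transparent green
                 DeepSample(0.8, 0.0, 0.0, 1.0, 5.0)]    # near, opaque red
        print(flatten(pixel))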

        • #5
          You can try Lenscare to simulate DOF as a post process.
          Ivan Tiunov
          Red Screw + Production

          • #6
            Be aware that the amount of data is massive. In some recent discussions on the topic, it seemed that most shops already using it basically do not render a lot of layers (as in render elements) or anything for relighting or anything fancy...basically just layer and comp. If you take PTex as an example (which is the only currently existing quasi-standard for that kind of data, beyond wacky stuff in rla/rpf), you might easily end up with 200 MB for a very simple frame (i.e. few fragments per pixel on average) with RGBA + Z only. That's quite a drag on both performance and storage needs. Keep in mind that you need to multiply (!) that by the number of render channels, as you need ALL fragments in ALL elements if you want to use them properly.

            To put it short: it is not something fancy that will get rid of all your problems...it will solve some, but it will come at a price hehe.

            That aside, DeepComp will be part of Nuke 6.3 (http://www.youtube.com/watch?v=FbkW295yQJ0), and you can find more info and samples on www.deepimg.com, for example.

            Regards,
            Thorsten
            Last edited by instinct; 29-06-2011, 03:33 PM.
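
            A back-of-envelope version of that multiplication, with purely illustrative figures (frame size, fragment count and element count below are assumptions, not measurements):

            Code:
            width, height = 1920, 1080
            fragments_per_pixel = 4     # "very simple" average, as above
            channels = 5                # RGBA + Z
            bytes_per_channel = 4       # 32-bit float
            render_elements = 10        # diffuse, spec, GI, ...

            per_element = width * height * fragments_per_pixel * channels * bytes_per_channel
            total = per_element * render_elements

            print(f"one element:  {per_element / 2**20:.0f} MiB")          # ~158 MiB, in the ballpark of the 200 MB above
            print(f"{render_elements} elements:  {total / 2**20:.0f} MiB")  # ~1.5 GiB per frame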
