Depth of field with "non-realistic" scene units...


  • #1

    Hi there,

    Since I'm doing a lot of medical animations with tiny objects looking big on screen, I've developed a habit of making my scene units a lot bigger (usually 1 unit = 1 meter).
    This gives me more freedom with the camera, but I can't get proper DoF from my camera.
    That's usually fine, since I've always used a z-depth element and comped the DoF in post.
    Recently I've had to do more and more refractive materials, and of course the z-buffer in comp doesn't work very well there.
    I could try to render in passes, but apart from being a hassle to set up, it still wouldn't give me correct results most of the time (for example when the backside of an object is visible through the front side).
    Since I'm comping in AE and it isn't capable of deep compositing yet, I figured I might go back and try some real DoF once in a while.
    But I can't get any good DoF because of my scene scale (my objects are usually several meters away from each other and from the camera).

    Can I somehow override this effect or work around it?
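
    (My rough understanding of why the scale kills it - a minimal thin-lens sketch, assuming the physical camera keeps its focal length and aperture in real millimetres while the scene distances follow the scene units; the lens and distances below are made up, just to show the trend:)

    Code:
    # minimal thin-lens sketch (assumption: ideal thin lens, focal length and
    # aperture in real mm, scene distances in mm following the scene units)
    def coc_mm(focal_mm, f_number, focus_mm, subject_mm):
        """Blur-circle (circle of confusion) diameter on the sensor, in mm."""
        aperture = focal_mm / f_number                    # aperture diameter
        magnification = focal_mm / (focus_mm - focal_mm)  # sensor-side magnification
        return aperture * magnification * abs(subject_mm - focus_mm) / subject_mm

    focal = 40.0  # mm, hypothetical lens

    # macro-style framing at a small scale: focus at 50 cm, subject 20 cm behind it
    print(coc_mm(focal, 1.0, 500.0, 700.0))         # ~1.0 mm blur circle

    # the same framing with every distance scaled up 100x (1 unit = 1 m)
    print(coc_mm(focal, 1.0, 50_000.0, 70_000.0))   # ~0.009 mm - practically sharp
    print(coc_mm(focal, 0.01, 50_000.0, 70_000.0))  # ~0.9 mm - the blur is back

    # scaling all distances by k shrinks the blur circle by roughly k, so the big
    # scene needs a roughly k-times lower f-number to look like the macro shot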

  • #2
    Are you trying to get the DOF effect from the V-Ray Physical Camera or from the Render Setup/Camera rollout?
    A quick solution would be to rescale the world units to match the real scale of the objects:
    [Attached screenshot: d14VWCw.jpg]
    Another solution would be to attach the original scene as an XRef to another scene and again scale it up or down to match the real scale.

    Can you show us how the DOF effect looks now, and an example of how you would like it to look?
    Svetlozar Draganov | Senior Manager 3D Support | contact us
    Chaos & Enscape & Cylindo are now one!

    • #3
      Well, I haven't ever really used real DoF, so I'm not even sure I'm doing it right. But yes, I'm trying to use the DoF from the V-Ray physical camera; if there is a workaround where I can just use the DoF from the camera rollout, I'd be fine with that too.
      I don't have a scene at hand right now, but what I want is some very strong DoF like in a macro shot, and what I get is basically nothing.
      I'm afraid rescaling the scene won't work, since that would mess up my camera and shaders (mostly SSS). Imagine a shot with blood cells, each of them around 3-4 meters big, so the camera can move around freely.
      If I did them at "real scale", or even just in centimeters (let's say 3-4 cm per cell), I wouldn't be able to get close enough to fly around them with the camera.
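
      (A quick toy example of what I mean about the SSS - this is not V-Ray code, just an assumption that the scatter radius is a value in scene units like the geometry; rescale one without retuning the other and the material reads completely differently:)

      Code:
      # toy illustration (assumption: the subsurface scatter radius is expressed
      # in scene units, just like the geometry)
      cell_diameter  = 3.0    # blood cell modelled 3 units (= 3 m) across
      scatter_radius = 0.5    # tuned so light bleeds ~1/6 of the way into it

      scale = 0.01            # rescaling the scene towards "real" sizes
      print(scatter_radius / cell_diameter)            # ~0.17 before rescaling
      print(scatter_radius / (cell_diameter * scale))  # ~16.7 after - far too much bleed
      # so every SSS material would need retuning on top of fixing the cameras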

      • #4
        Ok, here are some quick test renders:

        1 Unit=1 cm, f-number 1

        [Attached render: DoF_1cm_f-number_1.jpg]

        1 Unit=1 m, f-number 1

        [Attached render: DoF_1m_f-number_1.jpg]

        1 Unit=1 m, f-number 0.01

        [Attached render: DoF_1m_f-number_0.01.jpg]

        The last one (what I want to achieve) actually looks pretty close, but it's also the limit of how low I can go with the f-number, so that's the maximum I can get, whereas I can get even more with fake DoF in post...
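
        (That seems to fit the rough rule of thumb that the blur depends on the physical aperture size relative to the subject distance - assuming that holds, scaling every distance by k means dividing the f-number by k to keep the same look:)

        Code:
        scale_factor  = 100     # 1 unit = 1 cm  ->  1 unit = 1 m
        base_f_number = 1.0     # what gave the blur I wanted at cm scale
        print(base_f_number / scale_factor)  # 0.01 - exactly the clamp I'm hitting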

        • #5
          Yes, 0.01 is the absolute minimum for the f-number value. You may consider using the DOF effect from the Render Setup/Camera rollout instead; it should give a wider range.
          Svetlozar Draganov | Senior Manager 3D Support | contact us
          Chaos & Enscape & Cylindo are now one!

          • #6
            Yeah, should have read the manual...:

            These parameters control the depth of field effect when rendering with a standard 3ds Max camera or with a perspective viewport. The parameters are ignored if you render from a VRayPhysicalCamera view.
            I've always used the V-Ray physical camera, so that's why I didn't get any DoF when trying to override the camera DoF settings...

            • #7
              Just another quick question... I know I mostly avoided using real DoF because I was afraid of the render times.
              I just did some more quick tests and I'm having trouble getting the DoF noise-free.
              Usually I just deal with the min and max values in the Adaptive DMC sampler, but no matter how high I set the maximum samples, I can't get rid of the noise in the DoF.
              The only way to get it clean seems to be to unlock the noise threshold from the global DMC noise threshold and use the image sampler's own color threshold instead.
              I get a pretty noise-free image at about 0.002.

              But that seems like a big waste of time, since I don't need such a low color threshold for anything but the DoF to get a noise-free image... usually the default of "use DMC sampler thresh." at 0.01 works perfectly for me...

              • #8
                The maximum number of samples in the image sampler depends on the Color Threshold parameter as well.
                You can set the maximum samples to a very high value, but if the Color Threshold is too high, V-Ray will never fire those samples anyway.
                You have to tweak both parameters together.

                The default value of 0.01 is a good starting point, but if you want high-quality DOF you have to decrease it to at least 0.005.
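
                As a rough back-of-the-envelope sketch of the cost (assuming plain Monte-Carlo behaviour, where halving the noise takes roughly four times the samples - the adaptive sampler is smarter than that, so treat these numbers as an upper bound):

                Code:
                def relative_cost(old_threshold, new_threshold):
                    """Roughly how many times more samples a noisy (DOF) region may need."""
                    return (old_threshold / new_threshold) ** 2

                print(relative_cost(0.01, 0.005))  # ~4x more samples
                print(relative_cost(0.01, 0.002))  # ~25x more samples
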
                VRayStereoscopic/ShadeMap could speed up rendering of DOF/MB effects a lot - have you tried it?

                http://docs.chaosgroup.com/display/V...eMapParameters
                Svetlozar Draganov | Senior Manager 3D Support | contact us
                Chaos & Enscape & Cylindo are now one!

                • #9
                  Ok, I thought as much - nothing is for free in this world...

                  Does the shade map work with mono output as well? So I would create a stereoscopic helper and set it up to only output one view?
                  And how am I supposed to render only the shade map? There doesn't seem to be a button for that...

                  I also just read that there is an issue with opacity-mapped/refractive elements... does that apply only to stereoscopic rendering, or also if I use this as a hack for faster depth of field?

                  Another issue: I often do some heavy editing in post, but I figure passes like wirecolor are pretty useless when they are blurred by DoF...

                  Any advice on a proper workflow here?
                  Or is this just something I have to consider before rendering and adjust my workflow accordingly?

                  I also frequently add "god rays" via another pass with standard lights and standard volume fog (I found V-Ray fog doesn't produce this effect as predictably and easily as standard volume fog), but I guess I'd have to render those with DoF as well in this workflow.
                  Last edited by ben_hamburg; 13-11-2014, 09:06 AM.

                  • #10
                    Yes, ShadeMap doesn't support refractive objects - this is a limitation so far, and if you have such objects in the scene it would be better to render without a shade map.
                    The shade map is calculated during the actual rendering - you have to switch to Render Shade Map mode, select where to save the file, and then hit the Render button:
                    [Attached screenshot: X9nv3gA.jpg]
                    ShadeMap is view-dependent, yes; if the camera moves you have to calculate a separate ShadeMap for each frame.

                    DOF will also affect render elements like wirecolor - I think it would be better to extract separate masks via the MultiMatte render element. It would be very difficult to isolate objects in a wirecolor render element, since all the objects are in one layer - with the MultiMatte element you can extract each object into its own channel.
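
                    A minimal sketch of the comp side (hypothetical data - assuming the MultiMatte pass has been loaded as an ordinary H x W x 3 float image, one object per R/G/B channel):

                    Code:
                    import numpy as np

                    # placeholder array standing in for a loaded MultiMatte-style pass
                    # (use any EXR reader you like to get to this point)
                    mm = np.zeros((1080, 1920, 3), dtype=np.float32)

                    red_mask   = mm[..., 0]  # soft mask of the object on the red matte ID
                    green_mask = mm[..., 1]  # another object, in its own channel

                    # because each object owns a channel, a DOF-blurred edge is simply a
                    # soft alpha you can use directly - unlike wirecolor, where blurred
                    # colours from different objects mix and can't be separated again
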
                    Svetlozar Draganov | Senior Manager 3D Support | contact us
                    Chaos & Enscape & Cylindo are now one!

                    • #11
                      Well, in that case it's a good idea, but it doesn't help me, since I actually only need DoF in scenes with refractive objects...
                      I guess I'll just have to spend more time optimizing those scenes, and in return I'll at least save render time in post.
                      I do a lot of medical animations with scenes consisting almost exclusively of VRayFastSSS materials, and in those scenes I usually don't have to deal with optimization at all - they render blazingly fast with the adaptive subdivision sampler, and if the scale is high enough and the glossies aren't too low, they even come out completely noise-free... So again: please don't get rid of the adaptive subdivision sampler...

                      I've actually never used the MultiMatte element; I'll check it out the next time I have to use real DoF.
