OK, this might be a crazy idea, but since I've been thinking about it, I might as well share it.
Instead of having to use a ton of AA samples to clean up the blurred parts of DOF, why not fake it convincingly with a GPU-accelerated filter in the frame buffer? You render a depth pass, of course, and then use that to drive a feature like NUKE's ZDefocus:
NUKE filter_nodes/zdefocus
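To give a rough idea of what I mean (this is just a sketch, not how ZDefocus actually works internally, and all the names in it are made up): you take the beauty and depth passes, turn depth into a per-pixel circle of confusion with a thin-lens formula, and then blur each pixel by its own CoC. Something along these lines:

```python
# Hypothetical sketch of a depth-driven defocus filter, not any renderer's real API.
# depth, focus_dist and focal_len are assumed to be in the same world units (e.g. metres).
import numpy as np

def coc_radius(depth, focus_dist, focal_len, f_stop, max_radius=16.0):
    """Per-pixel circle-of-confusion radius (in pixels) from a thin-lens model."""
    aperture = focal_len / f_stop
    depth = np.maximum(depth, 1e-6)  # guard against zero depth
    coc = aperture * focal_len * np.abs(depth - focus_dist) / (depth * np.abs(focus_dist - focal_len))
    # The world-to-pixel scale really depends on the film back and resolution;
    # here it is folded into an arbitrary constant just for the sketch.
    return np.clip(coc * 100.0, 0.0, max_radius)

def defocus(rgb, depth, focus_dist, focal_len=0.05, f_stop=2.8):
    """Naive gather blur: a neighbour contributes if its own CoC reaches this pixel."""
    h, w, _ = rgb.shape
    radius = coc_radius(depth, focus_dist, focal_len, f_stop)
    r_max = int(np.ceil(radius.max()))
    out = np.zeros_like(rgb)
    weight = np.zeros((h, w, 1))
    for dy in range(-r_max, r_max + 1):
        for dx in range(-r_max, r_max + 1):
            dist = np.hypot(dx, dy)
            # np.roll wraps at the edges; a real filter would clamp or pad instead.
            shifted_rgb = np.roll(rgb, (dy, dx), axis=(0, 1))
            shifted_rad = np.roll(radius, (dy, dx), axis=(0, 1))
            mask = (shifted_rad >= dist).astype(rgb.dtype)[..., None]
            out += shifted_rgb * mask
            weight += mask
    return out / np.maximum(weight, 1.0)
```

A GPU version of the same gather loop is an obvious shader, which is why I think it would fit nicely in the frame buffer.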
If we get really fancy, the feature might make it possible to change the focus point on the fly in the frame buffer, and adjust the aperture/DOF too. It might require a different sort of depth pass, but I'm not clever enough to know the details.
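Refocusing after the fact would then just mean re-running the same filter on the stored beauty and depth passes with a new focus distance or f-stop, no re-render needed. Continuing the made-up sketch above:

```python
# Hypothetical usage: the same passes, two different focus settings.
rgb = np.random.rand(120, 160, 3)                   # stand-in for the beauty pass
depth = np.random.uniform(1.0, 20.0, (120, 160))    # stand-in for the depth pass

near_focus = defocus(rgb, depth, focus_dist=2.0, f_stop=1.4)   # shallow DOF, close focus
far_focus = defocus(rgb, depth, focus_dist=15.0, f_stop=5.6)   # deeper DOF, far focus
```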
I understand that properly rendered DOF is more realistic, but it might not always be needed, and we all love speed. Right?