Originally posted by Macker
In NONE of the shows where I have been a TD is the rendering filtered by default.
In the case of a noisy image, it splatters the noise across pixels, making it a comper's hell to try to remove.
In the case of a clean enough image, it still creates endless issues with alpha masks, and (I'm talking about blurry, entirely-positive-component filters) lowers image detail frequency, which took loads of work to put in in the first place.
There are, however, cases where GEOMETRY is very thin and tends to shimmer no matter the AA sampling, very much like it would in real life (everything has a set resolution: a camera, the eye, you name it). Then, and only then, after talking to the Comp Sup and the relevant comper for the shot, filtering is introduced, of the blurry/reconstructive type, and purely because at rendertime we can do it sub-pixel, and with greater accuracy than post could.
Further, giving post a clean, sharp image to work with (not that they use the beauty directly. ever. they rebuild the beauty pass from REs.) means all the blurring they will necessarily add (e.g. film grain) will be done on the exact rendered pixels, and not on a mushy mess.
And all this is for when blurry, positive-component filters are used.
Just a couple of days ago I had a good laugh with someone who was bit**ing about artists feeding him NEGATIVE-RGB beauty passes to fix: in come Lanczos/Catmull-Rom around bright areas (Canon's DSLRs from around 2009 had it by default on their captured raw).
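For anyone wondering how an all-positive render ends up with negative RGB: reconstruction filters like Catmull-Rom have negative lobes, so a dark pixel sitting next to a hot highlight can be pulled below zero. A minimal sketch (the 2x-supersampling layout and the 100-unit highlight are my own toy assumptions, not anyone's production setup):

```python
def catmull_rom(x):
    # standard 1D Catmull-Rom (Keys cubic, a = -0.5) filter weight
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

# Pixel centred at 0, four AA samples at +-0.5 and +-1.5 (2x supersampling).
offsets = [-1.5, -0.5, 0.5, 1.5]
samples = [0.0, 0.0, 0.0, 100.0]   # hot highlight just outside the pixel
weights = [catmull_rom(o) for o in offsets]
value = sum(w * s for w, s in zip(weights, samples)) / sum(weights)
print(value)  # -6.25: a negative value out of an all-positive input
```

The sample at 1.5 pixels falls on the kernel's negative lobe (weight -0.0625), so the 100-unit highlight alone drags the filtered pixel to -6.25, exactly the kind of value a comper then has to chase down.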
Of course, this is for a full 3D -> post pipeline, and I have the impression "post" as I mean it here is not done at all by most of the readers of this thread.
I guess it's always, ultimately, a case of what floats one's boat.
If it's a transatlantic liner one has to steer, however, small mistakes pile up very, very quickly, and in no time at all one will see the writing "Titanic" on each and every wall.
As for filtering's impact on render times, it depends on the filter kernel type and size, the image resolution and, because of how filtering at rendertime works, the actual shading complexity (it has to shade all the samples within the kernel before it can do the filter maths on them and finally return the single pixel's actual value).
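The cost point above can be sketched in a few lines: every sample inside the kernel footprint has to be shaded before the weighted average exists, so shading work grows with kernel area. This is a toy (the box kernel, 4 samples per pixel, and the call counter are all my own illustration; real renderers share shaded samples between overlapping kernels, but the footprint still scales the same way):

```python
import random

calls = {"shade": 0}

def shade(x, y):
    # stand-in for the expensive part: lighting, BRDFs, texture lookups...
    calls["shade"] += 1
    return random.random()

def box_weight(dx, dy, radius):
    # trivial box kernel; a renderer would plug in soften, Gauss, Catmull-Rom...
    return 1.0 if abs(dx) <= radius and abs(dy) <= radius else 0.0

def filtered_pixel(px, py, radius, spp=4):
    # every sample in the kernel footprint must be shaded first
    total = wsum = 0.0
    r = int(radius)
    for ix in range(px - r, px + r + 1):
        for iy in range(py - r, py + r + 1):
            for _ in range(spp):
                sx, sy = ix + random.random(), iy + random.random()
                w = box_weight(sx - (px + 0.5), sy - (py + 0.5), radius)
                total += w * shade(sx, sy)
                wsum += w
    return total / wsum if wsum else 0.0

calls["shade"] = 0; filtered_pixel(10, 10, radius=1); small = calls["shade"]
calls["shade"] = 0; filtered_pixel(10, 10, radius=3); big = calls["shade"]
print(small, big)  # 36 196: a 3-pixel kernel shades ~5.4x more than a 1-pixel one
```

Going from a 1-pixel to a 3-pixel radius takes the shaded footprint from 3x3 to 7x7 pixels, which is where the render-time hit comes from.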
And the impact IS sizeable, as I'm showing in the next few images:
A word of note: broad, blurry filters (soften at 6 pixels, e.g.) DO lower the noise level, due to the massive kernel size. Much as you'd obtain by Gaussian-blurring your render in post with a size of 6 (which would be a heck of a lot quicker, too).
MUCH better to sample properly, and reduce noise that way, as the rendertime headroom is MASSIVE (30 secs versus 79 secs: over 2.5x slower).
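And the reason the post blur is "a heck of a lot quicker": it's plain pixel arithmetic on an already-rendered image, with zero shading involved, so its cost is flat no matter the scene. A minimal 1D sketch (the sigma = size/3 mapping and the spike image are my own assumptions, purely to show the mechanics):

```python
import math

def gauss_kernel(size):
    # hypothetical 'size 6' post blur: sigma = size/3, radius = size//2
    sigma = size / 3.0
    r = size // 2
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]   # normalised so energy is preserved

def blur_row(row, kernel):
    # post-side blur: pure pixel maths on already-rendered values,
    # edge-clamped; no shade() calls anywhere in sight
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            acc += w * row[min(max(i + j - r, 0), len(row) - 1)]
        out.append(acc)
    return out

row = [0.0] * 8 + [100.0] + [0.0] * 8   # a single hot pixel
blurred = blur_row(row, gauss_kernel(6))
print(round(sum(blurred), 6))  # 100.0: the normalised kernel preserves energy
```

The whole thing is a fixed-size convolution per pixel, which is why doing the softening here, on the exact rendered pixels, beats paying for it at rendertime.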