Eh, again we're comparing a real-world scenario, where only your brain says "surface", with a perfectly smooth, idealised surface lit by a perfect spherical light.
The scratched teapot in fact has mapped glossiness, so you're comparing apples to oranges when showing the Disney GGX reference.
This said, as I mentioned before, it's often simpler to have these effects driven with two layers than it is to accurately remap their gloss.
A USER necessity, not particularly nature's request.
Would it give you perspective if I told you the VRScans scanner, from idea to working prototype, took 8 years to develop for someone whose shoes I couldn't hope to shine when it comes to knowledge? :P
That's someone skilled dealing with all the above issues and neutralising them one by one, until a proper description of the whole set of material responses to light can be captured per point.
From such a capture, going back to textures plus a mapped, principled BRDF would lose the very reason for such scans, as far as BTF to BRDF is concerned, and in a very measurable way too.
Some complexities haven't been mathematically modelled yet, and may not be for a long while (if nothing else, some effects are too specific and would require a huge development effort for a very small return), regardless of the layers used and the manual Fresnel curves drawn (see the Fresnel graphs in the Disney paper, to name one example...).
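To make the Fresnel-curve point concrete, here is a minimal sketch (not from the Disney paper or any scanner, just textbook formulas) comparing the exact unpolarised dielectric Fresnel reflectance with Schlick's approximation, which is the analytic curve most principled shaders expose. The IOR of 1.5 is an assumed glass-like value; measured materials in the Disney graphs deviate from both curves, which is the gap manual curve-drawing tries to close.

```python
import math

def fresnel_dielectric(cos_i, n1=1.0, n2=1.5):
    """Exact unpolarised Fresnel reflectance for a dielectric interface."""
    sin_t = (n1 / n2) * math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    if sin_t >= 1.0:  # total internal reflection
        return 1.0
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)
    return 0.5 * (r_s * r_s + r_p * r_p)

def fresnel_schlick(cos_i, f0):
    """Schlick's approximation: F0 + (1 - F0) * (1 - cos)^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_i) ** 5

# Normal-incidence reflectance for an n=1.5 dielectric: ((n-1)/(n+1))^2 = 0.04
f0 = ((1.5 - 1.0) / (1.5 + 1.0)) ** 2
for deg in (0, 30, 60, 80):
    c = math.cos(math.radians(deg))
    print(f"{deg:2d} deg  exact={fresnel_dielectric(c):.4f}"
          f"  schlick={fresnel_schlick(c, f0):.4f}")
```

For a smooth dielectric the two curves track closely, which is why Schlick is the default; the trouble starts with real, layered, or conductive surfaces, where the measured curve fits neither shape.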