Cross-polarization and dielectric specular


  • Cross-polarization and dielectric specular

    In theory, the specular reflection of a dielectric material has no color (i.e. it is white). However, if one takes a cross-polarized photo and subtracts it from a parallel-polarized photo, the resulting specular reflectance is not white but has pretty intense colors.

    For example, here is an image of amber. You can see that the derived spec is blue, not white. It appears that the spec color in these photos is typically the opposite of the diffuse (the complementary color):
    [Attached image: amberD.jpg]

    I'd love to understand better what is going on here from a science perspective. Is this colored spec due to an error/limitation in the photography process, or do dielectric materials actually have colored spec?

  • #2
    The results are exactly what is expected...
    Roughly, what you are doing translates to RGB 255:255:255 (white) - RGB 255:255:0 (yellow) = RGB 0:0:255, which is blue.
    If your object were green instead, you would get RGB 255:255:255 - RGB 0:255:0 = RGB 255:0:255, which is purple.

    I suggest you do this with desaturated images if you just want to look at the specular.
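
    The channel math above is easy to sanity-check. A minimal NumPy sketch of the subtraction on a hypothetical single pixel, using the example values above rather than a real capture:

    Code:
    import numpy as np

    # parallel-polarized pixel (diffuse + specular) and cross-polarized
    # pixel (diffuse only); int16 so the subtraction can't wrap like uint8
    parallel = np.array([255, 255, 255], dtype=np.int16)  # white highlight
    cross    = np.array([255, 255,   0], dtype=np.int16)  # yellow diffuse

    specular = np.clip(parallel - cross, 0, 255)
    print(specular)  # [  0   0 255] -> blue, the complement of the diffuse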



    • #3
      I understand what you are saying regarding the math of RGB values. However, when the same is done with a render, subtracting the diffuse lighting pass from the beauty yields a white specular. Perhaps the issue is that the photos were 8-bit JPEGs and not floating-point EXRs? If that's the case, then maybe the photos need to be shot raw?


      Further, when taking cross-polarized photos of human skin I also got blue specular. However, in scientific applications of this same technique they appear to get different results: for example, the ICT lab's Digital Emily project got (almost) white specular. So there appears to be a difference in approach here.

      Here's an example from the ICT lab which shows (almost) colorless spec:
      http://gl.ict.usc.edu/Research/Digit...ges/Slide6.PNG

      As opposed to this one from IR which shows blue spec:
      http://ir-ltd.net/wp-content/uploads...w-1024x602.jpg

      I'm wondering why the Debevec/ICT folks are getting different results.
      Last edited by sharktacos; 12-06-2016, 06:49 PM.



      • #4
        I realize that it is certainly practical to make the specular color in a dielectric material white. I'm wondering, however, whether on a physics level this is a simplification, and the specular reflectance is in fact not white but rather the inverse of the diffuse (or, more accurately, the subsurface) color. To state the question differently: does the fact that these cross-polarized photos show colored specular reflection indicate a limitation of the photographic technique, or does it reveal a simplification in the model rendering uses to simulate dielectric materials? Does anyone know the actual physics for dielectrics here? Is it color + white, or color + inverse color?



        • #5
          I'd say that the specular is white (or, the light color, to be more precise). Especially in the amber example, the yellow color comes from what's below the surface, while the specular is just a surface reflection.
          You are seeing white in the highlight because the specular intensity is much stronger than the yellow hue underneath, so the camera sensor only picks up white there.
          If you shoot with HDR, you may be able to derive the correct specular by subtracting the photos as you did.
          V-Ray for Maya dev team lead
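
          The surface-reflection point can be quantified with Fresnel: at normal incidence a dielectric reflects F0 = ((n - 1) / (n + 1))^2 of the incoming light, and the index of refraction barely changes across the visible band, so that reflection is essentially colorless. A small sketch, with an illustrative (not measured) IOR spread for an amber-like dielectric:

          Code:
          # F0 barely changes across the visible band because the IOR barely
          # does, so the surface reflection itself carries almost no color.
          def f0(n):
              return ((n - 1.0) / (n + 1.0)) ** 2

          # illustrative dispersion values, NOT measured amber data
          for label, n in [("blue ~450nm", 1.55), ("green ~550nm", 1.545), ("red ~650nm", 1.54)]:
              print(f"{label}: n = {n:.3f}, F0 = {f0(n):.4f}")  # all ~0.045 to 0.047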



          • #6
            As Mihail pointed out, the specular color is white. Your original experiment seems to be flawed (if anything, you need to use HDR linearized images for the subtraction math to work properly).

            Best regards,
            Vlado
            I only act like I know everything, Rogers.
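
            To make the "HDR linearized" advice concrete, here is a minimal sketch contrasting subtraction of gamma-encoded 8-bit values with subtraction in linear light (the pixel values are hypothetical):

            Code:
            import numpy as np

            def srgb_to_linear(c8):
                # standard sRGB decode for 8-bit values
                c = c8 / 255.0
                return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

            parallel = np.array([255, 255, 255], dtype=np.float64)
            cross    = np.array([255, 255,  64], dtype=np.float64)

            naive  = (parallel - cross) / 255.0                        # gamma-space subtraction
            linear = srgb_to_linear(parallel) - srgb_to_linear(cross)  # linear-light subtraction

            print(naive)   # blue channel ~0.749
            print(linear)  # blue channel ~0.949, a noticeably different specular

            # Linearizing fixes the math, but it cannot recover highlights the
            # JPEG already clipped at 255; that is what actual HDR capture
            # (bracketed exposures merged to EXR) is for.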



            • #7
              Originally posted by vlado View Post
              Your original experiment seems to be flawed (if anything, you need to use HDR linearized images for the subtraction math to work properly).
              No GanzFeld, no party.
              Lele
              Trouble Stirrer in RnD @ Chaos
              ----------------------
              emanuele.lecchi@chaos.com

              Disclaimer:
              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



              • #8
                Originally posted by sharktacos View Post
                I realize that it is certainly practical to make the specular color in a dielectric material white. I'm wondering, however, whether on a physics level this is a simplification, and the specular reflectance is in fact not white but rather the inverse of the diffuse (or, more accurately, the subsurface) color.
                VRScans measurements seem to indicate that the desaturation at grazing angles is quite common indeed.
                We are talking about perfectly opaque surfaces here, however, and the sample you use has many TR(RRRRR...)T paths which skew the final color: even where the specularity overpowers the SSS below, you can assume some of those SSS paths will be emitted into that specular area and hence colorize it, if only a little.

                The fact that speculars desaturate at grazing angles is exquisitely a consequence of statistics (I believe the original maths comes from particle physics, neutron optics in particular, but I may be wrong on the specifics). It's more pronounced for rougher materials and less so for shinier ones: scattering at the surface (or slightly inside it) tends to produce all outbound wavelengths, because of the very chaotic paths photons take and the many scattering, wavelength-changing events they go through before being able to hit the sensor at that shallow angle, while it is statistically easier for a photon to come back unscattered to the sensor at a 0-degree angle, preserving its incoming wavelength.
                The rougher the microscopic structure, the more "bouncy" the photon path becomes inside the surface interface (i.e. the few molecular layers light travels through inside an "opaque" material) and the more likely it is to be altered by the random walk; conversely, for very shiny, and hence orderly, material structures, there is a better chance that even at grazing angles the photon will rebound nigh untouched.
                Notice that, by nature, this concept extends with little difficulty to most light-matter interaction, provided the feature scale and the light wavelengths match somehow: microfacet theory currently assumes a 2D distribution (although some papers go all the way to a dense medium of microflakes; utter love for the approach, whether or not it proves practical).
                E.g. Rayleigh scattering in the atmosphere isn't dissimilar at all; light just travels in a much rarer medium, and for much longer, than our few molecular layers of a "solid" surface.
                Or Mie scattering.
                The former needs particles a lot smaller than the photons' wavelengths, while the latter requires particles of a size similar to the photon wavelength. One produces the red-to-blue hues of a typical sunset, where the blue has been scattered out of the long path through the air; the other produces the constant whiteness of clouds (largely) regardless of lighting conditions, as photons of all wavelengths scatter about the same amount, and the result is light scattered along the whole visible spectrum on exit, its entry characteristics all but a (distant) memory.
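
                As a back-of-envelope check on that wavelength dependence: Rayleigh scattering intensity goes as 1/wavelength^4 (the wavelengths below are just nominal visible-band values):

                Code:
                # Rayleigh scattering: intensity ~ 1 / wavelength^4, so blue
                # scatters much more strongly than red over the same path.
                blue_nm, red_nm = 450.0, 650.0
                ratio = (red_nm / blue_nm) ** 4
                print(f"blue scatters ~{ratio:.1f}x more than red")  # ~4.4x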

                Now I can brace for corrections. ^^
                Lele
                Trouble Stirrer in RnD @ Chaos



                • #9
                  Thanks for the tip, I'll do some tests with HDR images to see if that works.

                  On a related note, something I thought was fascinating is how commonly the double-lobed spec "tail" can be observed on... just about everything. For example, here you can see it on both the grapes and the apples:

                  [Attached image: polarizeFruit.png]

                  In fact, if anything, it seems to me that the gamma = 2.0 tail of GGX is too small, and real objects often have a much larger tail. Even if one lowers the GTR tail falloff (thus making the tail larger), it's hard to get this kind of look. Perhaps that's why the approach of combining two spec lobes is still popular (for example, alSurface uses two GGX spec lobes).
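
                  For reference, the GTR (generalized Trowbridge-Reitz) distribution from the Disney paper is D(theta_h) proportional to 1 / (alpha^2 cos^2(theta_h) + sin^2(theta_h))^gamma, with gamma = 2 reducing to GGX. A small sketch comparing tail falloff (the roughness value is made up, and the normalization constant is omitted since only the tail shape matters here):

                  Code:
                  import math

                  def gtr_unnormalized(theta_h, alpha, gamma):
                      # GTR microfacet distribution, minus the normalization constant
                      c, s = math.cos(theta_h), math.sin(theta_h)
                      return 1.0 / (alpha * alpha * c * c + s * s) ** gamma

                  alpha = 0.1  # fairly glossy
                  for deg in (0, 10, 30, 60):
                      th = math.radians(deg)
                      ggx = gtr_unnormalized(th, alpha, gamma=2.0)
                      gtr = gtr_unnormalized(th, alpha, gamma=1.5)
                      print(f"{deg:>2} deg  gamma=2 (GGX): {ggx:10.1f}   gamma=1.5: {gtr:8.1f}")

                  # Relative to its own peak (the 0-degree value), the gamma=1.5
                  # curve decays far more slowly, i.e. a fatter tail.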
                  Last edited by sharktacos; 03-09-2016, 06:56 PM.



                  • #10
                    No, we don't.
                    Without knowledge of the light source, you're just showing us pretty pictures.
                    This said, again you pick objects with a strong SSS component: your "spec" is heavily bled into by SSS paths.
                    The standard VRayMtl doesn't sport just GGX; it offers the full GTR domain.
                    Use its Gamma to alter the tail-to-lobe ratio and distribution shape.

                    And with this we're back to the cat chasing its tail (pun intended), as the double lobe is used just as much to compensate for poor CG light fixtures (which do not mimic real ones: in real life nothing is a point light, and nothing we see every day emits as a perfect sphere) as it is for artistic purposes and for the shortcomings of single models in specific scenarios. In the grapes, one gets two distinct surfaces layered with a mask, the powdery and the shiny, which are easier to control separately than to principle into a single surface with gloss maps.
                    The 2012 Disney paper, for example, from which their infamous shader comes, mentions Ngan's double-lobe approach (from MERL observations and measurements) only once, in passing, in a single line, and indeed there is no trace of it anywhere in their shader (one specular term, plus an optional clearcoat and an optional sheen), while ample attention is paid to the shortcomings of all Fresnel implementations at grazing angles (where the Fresnel curve rises gracefully, real measurements do anything but).
                    Which seems to be well confirmed by the VRScans measurements, along with the desaturation of back-scattering/retro-reflections once view angles reach 80 degrees and over for rough materials.

                    Notice that these ubershaders are pre-packaged hacks which eyeball a few specific cases well; they are not, inherently, the solution to all possible light interactions between a material's varied mediums.
                    The ability to principle such a shader (layers, with the interactions between sub-groups and groups well defined and distinct between and among each) is currently missing from any renderer I can throw a stick at, short of a custom write.
                    The tendency toward a layer explosion is strong, so it's often all right to simplify and make believe, package and open-source. ^^
                    Last edited by ^Lele^; 03-09-2016, 11:15 PM.
                    Lele
                    Trouble Stirrer in RnD @ Chaos



                    • #11
                      FWIW, the sole light source is an LED light panel. Not sure I agree that an apple has a particularly strong SSS component; I'd say it's stronger on an orange. But that's an interesting theory, that the SSS is bleeding into the spec. How exactly would that happen with cross-polarization?

                      As for the GTR tail, I really can't speak to the physics of it, but I can say that from an artistic perspective (that is, from making observations of how things look and trying to match them) I don't find that the GTR model works very well. The tail on many things is a lot longer than one can get with GTR, even when adjusting the tail parameter.
                      Last edited by sharktacos; 03-09-2016, 11:12 PM.



                      • #12
                        Sorry, I edited the previous one. And edited it, and edited it. XD
                        Lele
                        Trouble Stirrer in RnD @ Chaos



                        • #13
                          Originally posted by sharktacos View Post
                          FWIW, the sole light source is an LED light panel. Not sure I agree that an apple has a particularly strong SSS component; I'd say it's stronger on an orange. But that's an interesting theory, that the SSS is bleeding into the spec. How exactly would that happen with cross-polarization?
                          I'm talking of nature, which makes absolutely no qualitative difference between components: they are just photons with a specific four-vector description (wavelength and polarization).
                          Your "specular" term isn't 1.0 (i.e. it doesn't mask the whole of the underlying surface and volume interactions), and as such it's blended with those.
                          Principling is simplifying, by definition.
                          There is no such thing as a surface in nature, nor 1.0s and 0.0s.
                          We only get the huge benefit of being limited in needs by the contents of a pixel, and so we can skip and hop until we find ways to represent nature more simply at this scale.

                          I mention VRScans again because the way those capture a shader is quite enlightening: zooming in to the 1:1 capture resolution from afar shows VERY different lighting reactions as the visible scale changes, something a principled shader is ill-suited for.
                          Lele
                          Trouble Stirrer in RnD @ Chaos



                          • #14
                            Originally posted by sharktacos View Post
                            Thanks for the tip, I'll do some tests with HDR images to see if that works.

                            On a related note, something I thought was fascinating is how commonly the double-lobed spec "tail" can be observed on... just about everything. For example, here you can see it on both the grapes and the apples:
                            Another annoying thing is that if one of your sensor's pixels fills up with photons, they bleed into the surrounding pixels and you get bloom.



                            • #15
                              Originally posted by joconnell View Post
                              Another annoying thing is that if one of your sensor's pixels fills up with photons, they bleed into the surrounding pixels and you get bloom.
                              Right, and that makes it hard to do the quasi-scientific thing of isolating out phenomena (like with x-pol photos).

                              I should clarify that when I say lots of things have an observably larger tail than you can get with a GTR BRDF, I don't mean specifically what can be observed in an x-pol photo, but rather what can be observed in any photo, and even what can be observed just by looking at stuff. For example, look at this photo of scratched red plastic next to a teapot. The main highlight is from a fluorescent light:


                              [Attached image: IMG_5220.png]

                              On the red plastic we have a pretty bright and sharp spec with a rather large "tail", presumably caused by the micro-scratches on the plastic, in contrast to the ceramic teapot, which has a bit of bloom but otherwise virtually no tail. That kind of large tail on the plastic is easy to duplicate with a double-lobed spec (sketched below), and quite hard to do with GTR/GGX. That's all I'm saying.
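
                              The dual-lobe idea is simple to sketch: a tight primary GGX lobe plus a broader, dimmer secondary lobe, blended with a weight. The roughness and mix values below are made up for illustration; this is the general idea, not alSurface's actual implementation:

                              Code:
                              import math

                              def ggx_d(theta_h, alpha):
                                  # isotropic GGX normal distribution function
                                  c = math.cos(theta_h)
                                  denom = c * c * (alpha * alpha - 1.0) + 1.0
                                  return (alpha * alpha) / (math.pi * denom * denom)

                              def dual_lobe(theta_h, a_sharp=0.05, a_wide=0.3, mix=0.15):
                                  # blend a tight lobe (sharp highlight) with a broad one (the "tail")
                                  return (1.0 - mix) * ggx_d(theta_h, a_sharp) + mix * ggx_d(theta_h, a_wide)

                              for deg in (0, 5, 15, 45):
                                  th = math.radians(deg)
                                  print(f"{deg:>2} deg  single: {ggx_d(th, 0.05):8.2f}   dual: {dual_lobe(th):8.2f}")

                              # Away from the peak the broad lobe dominates, giving the long
                              # "glowy" tail a single lobe at the same sharpness cannot produce.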

                              Compare that with this image from the Disney principled BRDF paper:
                              [Attached image: disney.jpg]

                              On the left we have actual chrome with a long tail, compared with GGX, which has a tail, but a much shorter one. Far right is Beckmann. They write: "GGX has a much longer tail than other distributions but still fails to capture the glowy highlight of the chrome sample."
                              Last edited by sharktacos; 04-09-2016, 09:07 PM.

