Cross-polarization and dielectric specular


  • #16
    Eh, again we're comparing a real-world scenario, where only your brain says "surface", with a perfectly smooth, idealised surface lit by a perfect spherical light.
    The scratched teapot in fact has mapped glossiness, so you're comparing apples to oranges when showing the Disney GGX reference.
    That said, as I mentioned before, it's often simpler to drive these effects with two layers than it is to accurately remap their gloss.
    A USER necessity, not particularly nature's request.

    Would it give you perspective if I told you the VRScans scanner took eight years to go from idea to working prototype, in the hands of someone whose shoes I can't hope to shine when it comes to knowledge? :P
    That's someone good dealing with all the above issues and neutralising them one by one, until a proper description of a material's whole set of reactions to light can be captured per point.

    From such a capture, going back to textures plus a mapped, principled BRDF would lose the very reason for such scans, as far as BTF-to-BRDF conversion is concerned, and in a very measurable way too.
    Some complexities haven't been mathematically modelled yet, and may not be for a long while (if nothing else, some effects are too specific, and may require a huge development effort for a very small return), regardless of the layers used and the manual Fresnel curves drawn (see the Fresnel graphs in the Disney paper, to name one...).
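    For reference, the baseline those hand-drawn curves deviate from is the plain Schlick approximation; here's a quick Python sketch (the function name and sample angles are just for illustration):

    Code:
    import math

    def schlick_fresnel(cos_theta, f0):
        """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos(theta))^5."""
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    # A dielectric of IoR 1.5 has F0 = ((1.5 - 1) / (1.5 + 1))^2 = 0.04;
    # reflectance sits near F0 head-on and climbs steeply at grazing angles.
    for deg in (0, 45, 70, 85):
        print(deg, round(schlick_fresnel(math.cos(math.radians(deg)), 0.04), 4))

    Measured materials often depart from this simple curve, which is exactly why hand-drawn Fresnel data comes up at all.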
    Last edited by ^Lele^; 05-09-2016, 12:53 AM.
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #17
      Originally posted by sharktacos View Post

      On the red plastic we have a very bright, sharp spec with a rather large "tail", presumably caused by the micro-scratches on the plastic, in contrast to the ceramic teapot, which has a bit of bloom but otherwise virtually no tail. That kind of large tail on the plastic is easy to duplicate with a double-lobed spec, and quite hard to do with GTR/GGX. That's all I'm saying.
      Yep, I getcha. I'm looking into the photographic measuring side of it at the minute; one of Vlado's points, that you need to use linear HDRs to really compare, is something I'm chasing. I bought a copy of RawDigger and various colour and luminance charts in an effort to strip the photographic look from my camera's raw files and remove any weird colour-bias issues, to make the comparison more accurate in luma and chroma. It'll no doubt be very annoying and lengthy, but hopefully it'll improve things overall!



      • #18
        Lele, the point was that the red plastic photo has a large tail, similar to the large tail of their chrome sample. As they say, GGX does not match either of these. Moral: stuff in nature often exhibits a much longer tail than you get with GTR/GGX.

        The issue with GGX is that as the tail falloff is lowered (resulting in a larger tail) we get a sort of sheen/diffuse/glow thing happening. You can see that in the image below.

        [Attached image: wedge.jpg]

        That is nice for doing things like cloth, but it also means you can't get a long tail independently of this. I wonder if these two phenomena could be separated in the BRDF, so that one could increase the tail size without adding sheen? Looking here, it seems the sheen is the result of a happy accident: the shadowing-masking function is the same for all tail-falloff values. So could a future BRDF have an option to either use the same shadowing-masking function (resulting in sheen) or use varying shadowing-masking functions (resulting in a variable tail size without sheen)?
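        To make the gamma/tail relationship concrete, here's a small numeric sketch of the GTR normal distribution (normalization terms follow the Burley course notes; the function name and sample values are arbitrary):

        Code:
        import numpy as np

        def gtr_ndf(cos_th, alpha, gamma):
            """Generalized Trowbridge-Reitz NDF; gamma = 2 is GGX.
            Lowering gamma fattens the tail of the highlight."""
            a2 = alpha * alpha
            if abs(gamma - 1.0) < 1e-6:
                c = (a2 - 1.0) / (np.pi * np.log(a2))  # gamma = 1 normalization
            else:
                c = (gamma - 1.0) * (a2 - 1.0) / (np.pi * (1.0 - a2 ** (1.0 - gamma)))
            return c / (a2 * cos_th ** 2 + (1.0 - cos_th ** 2)) ** gamma

        # Same roughness, two gammas: 2.0 (GGX) vs 1.3 (visibly longer tail)
        cos_th = np.cos(np.radians(np.array([0.0, 5.0, 15.0, 30.0, 60.0])))
        print(gtr_ndf(cos_th, 0.05, 2.0))
        print(gtr_ndf(cos_th, 0.05, 1.3))

        Note this varies only the distribution term; the shadowing-masking function is a separate ingredient, which is exactly the knob being asked about above.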
        Last edited by sharktacos; 05-09-2016, 09:56 AM.



        • #19
          Originally posted by joconnell View Post
          Yep, I getcha. I'm looking into the photographic measuring side of it at the minute; one of Vlado's points, that you need to use linear HDRs to really compare, is something I'm chasing.
          Yeah I was planning on doing an HDR shoot over the weekend until I realized that the equipment department forgot to put the head in the tripod. Doh!


          I bought a copy of RawDigger and various colour and luminance charts in an effort to strip the photographic look from my camera's raw files and remove any weird colour-bias issues, to make the comparison more accurate in luma and chroma. It'll no doubt be very annoying and lengthy, but hopefully it'll improve things overall!
          That's fascinating. Never used those programs. Please share what you find.

          I'm thinking that if I shoot bracketed exposures to get HDR I may not need to shoot raw. But I'll do both and compare, just to be safe.
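          In case it helps, the bracketed-exposure merge can be sanity-checked with OpenCV's Debevec implementation (the file names and shutter times below are placeholders):

          Code:
          import cv2
          import numpy as np

          files = ['bracket_minus2ev.jpg', 'bracket_0ev.jpg', 'bracket_plus2ev.jpg']
          times = np.array([1.0 / 320, 1.0 / 80, 1.0 / 20], dtype=np.float32)
          imgs = [cv2.imread(f) for f in files]

          response = cv2.createCalibrateDebevec().process(imgs, times)   # recover the camera response curve
          hdr = cv2.createMergeDebevec().process(imgs, times, response)  # merge into a linear radiance map
          cv2.imwrite('merged.hdr', hdr)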



          • #20
            Yep, indeed. Your camera's job isn't to give you accurate RGB data; it's to make a nice picture, something very similar to what our eye sees. So there's a bit of a filmic toe and shoulder on it that needs to be stripped, and there are also a few imbalances in R, G, B efficiency to clear out (I think this is an individual-camera thing) before you get close to the hue that was actually in front of the camera. There's an annoyingly large amount of stuff in it.

            I think the guys who write RawDigger are the people who originally wrote libraw, which a lot of people use to get the pre-debayer linear data out of raw files. They offer a lot of diagnostic tools to get the data as accurate as possible through various profiling steps, and through those you can make good estimates of what the likes of Adobe Camera Raw or Lightroom is doing to your photos, and build yourself some nice defaults without all the bits we don't want added on.
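            For anyone wanting to poke at that pre-debayer data directly, rawpy (a Python wrapper around libraw) exposes it; the file name below is a placeholder:

            Code:
            import rawpy

            with rawpy.imread('IMG_0001.CR2') as raw:
                bayer = raw.raw_image_visible       # the raw Bayer mosaic, no demosaic applied
                print(raw.black_level_per_channel)  # per-channel sensor black levels
                print(raw.camera_whitebalance)      # as-shot white balance multipliers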



            • #21
              Got it. Are you using a Canon, perchance?

              My method is to read the RAW image in Photoshop, keeping everything linear in its conversion tool, set the mode to 32-bit, and save the image as an EXR. I had assumed that with these settings there was no hidden voodoo done to the RAW image to "help" it. Is that not the case?
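              As a cross-check on the Photoshop path, the same linear-raw-to-EXR conversion can be sketched with rawpy plus imageio (assuming imageio's EXR plugin is available; the file name is a placeholder):

              Code:
              import numpy as np
              import rawpy
              import imageio

              with rawpy.imread('teapot.CR2') as raw:
                  rgb = raw.postprocess(gamma=(1, 1),         # identity curve: stay linear
                                        no_auto_bright=True,  # no hidden exposure "help"
                                        output_bps=16,
                                        use_camera_wb=True)
              imageio.imwrite('teapot.exr', (rgb / 65535.0).astype(np.float32))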
              Last edited by sharktacos; 05-09-2016, 10:54 AM.



              • #22
                Originally posted by sharktacos View Post
                Lele, the point was that the red plastic photo has a large tail, similar to the large tail of their chrome sample. As they say, GGX does not match either of these. Moral: stuff in nature often exhibits a much longer tail than you get with GTR/GGX.

                The issue with GGX is that as the tail falloff is lowered (resulting in a larger tail) we get a sort of sheen/diffuse/glow thing happening. You can see that in the image below.
                That's also largely the measured behaviour: at high enough surface roughness, front specular becomes retro-reflectivity.
                I disagree with your assumption, and I fail to see, so far, how you deduced it, for the reasons explained in the previous posts.
                Repetition isn't argumentation.
                Lele
                Trouble Stirrer in RnD @ Chaos



                • #23
                  This is ONE GGX layer with mapped Gloss AND IoR (as would be its wont for dirty/scratched materials).
                  I didn't even have to bother with the rest of the GTR domain, leaving its variable out of the equation.
                  Yet you get a frontal, near-0.0-gloss tail on the near-1.0-gloss spec.

                  [Attached image: ggx_mapped.png]
                  Lele
                  Trouble Stirrer in RnD @ Chaos



                  • #24
                    Yes, I can get that result using maps too. However, what I am after is something where:
                    1. The shape of the spec is sharp and clearly identifiable (regardless of whether one uses a typical spec shape, such as a disc or rectangle, or maps the light with an HDR image of a light).
                    2. The tail of the spec is much longer (at least 2x longer) than one gets with GGX.

                    Like this:
                    [Attached image: spec.png]
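                    For reference, the double-lobed spec mentioned in #17 is easy to mock up numerically; the lobe widths and blend weight below are arbitrary:

                    Code:
                    import numpy as np

                    def ggx_ndf(cos_th, alpha):
                        """Single GGX lobe: D = a^2 / (pi * ((n.h)^2 (a^2 - 1) + 1)^2)."""
                        a2 = alpha * alpha
                        return a2 / (np.pi * (cos_th ** 2 * (a2 - 1.0) + 1.0) ** 2)

                    def two_lobe(cos_th, a_sharp=0.02, a_wide=0.3, w=0.15):
                        """Tight lobe for the sharp highlight, wide lobe for the long tail."""
                        return (1.0 - w) * ggx_ndf(cos_th, a_sharp) + w * ggx_ndf(cos_th, a_wide)

                    cos_th = np.cos(np.radians(np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])))
                    print(two_lobe(cos_th))  # falls off fast near 0 deg, then rides the wide lobe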



                    • #25
                      *I* made GGX look as *I* wanted.
                      Whatever your concept of a tail is, and your expectation of behaviour, I haven't understood it.
                      What you show is the behaviour of the Gamma parameter in the GTR model.
                      Instead of taking arbitrary pictures, which you then arbitrarily post-produce to fit whichever idea you're having, give us measured scenes, controlled renders, and a modicum of direction: I still do not understand what the QUESTION is, and what the actionable PROPOSAL would be from it.
                      GGX's tail is too short for your needs?
                      That is why I begged Vlado to implement the whole GTR domain, not just the specific GGX case.
                      That's another DIMENSION to explore, a new variable added to an otherwise 2D BRDF space (if we consider just Gloss and IoR), and I can't find a table of gloss/IoR/gtrGamma which shows it all, anywhere.
                      That's because if you needed 100 renders, at 0.01 gloss increments, to cover the 0-1 gloss range for a given IoR, you'd need maybe another hundred (or a few thousand, covering 0.0 to at least 10.0 by the hundredth, closer to Beckmann in distribution) for Gamma (which is in fact quite sensitive to some combinations at the third decimal place), and that for each IoR you are exploring. Because of shadowing, each combination would be quite unique in its reaction to light.
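                      To put a number on that blow-up, enumerating such a wedge table is a one-liner (step sizes and IoR picks below are arbitrary):

                      Code:
                      from itertools import product

                      gloss = [i / 100 for i in range(101)]     # 0.00 .. 1.00 by hundredths
                      gamma = [i / 100 for i in range(1001)]    # 0.00 .. 10.00 by hundredths
                      iors  = [1.33, 1.45, 1.52, 1.6, 2.0]      # a handful of dielectric IoRs

                      renders = len(list(product(gloss, gamma, iors)))
                      print(renders)  # 101 * 1001 * 5 = 505,505 renders to tabulate the space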

                      I can create the look in that example no problem within the GTR domain, if the GGX lobe-to-tail transition is too tight for me.
                      Again, below is ONE GTR layer.
                      No diffuse.
                      All you see is spec, at 0.999 gloss.
                      Are you so sure that between a gamma of 10 and a gamma of 0.0 there is just no combination of gloss and IoR which would satisfy your needs?

                      [Attached image: tail_2.jpg]
                      Last edited by ^Lele^; 05-09-2016, 11:32 PM. Reason: bad gamma on the first picture.
                      Lele
                      Trouble Stirrer in RnD @ Chaos



                      • #26
                        Originally posted by sharktacos View Post
                        Got it. Are you using a Canon, perchance?
                        I am indeed. As regards Lightroom, it'll apply metadata settings from the camera even if you're keeping all the values in Lightroom flat, so you have to do some measuring with the pre-debayered images to see how they differ from what Lightroom presents you. Your standard image will have a logarithmic toe and shoulder on it (which might be adding those extra-saturated blues on your specs) to mimic the human eye, since we see logarithmically, not linearly. Here's one page showing a handy approach using a Munsell grey chart - http://www.mit.edu/~kimo/blog/linear.html - though the cheaper option is the Kodak Q-13 chart - http://www.imatest.com/images/Canon_...SO400_384W.jpg - which probably has enough swatches for good accuracy. They use that in their tests on the RawDigger site.
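                        A toy version of that chart-based linearization (the patch readings below are made up; a real pass would use the measured chart values):

                        Code:
                        import numpy as np

                        # Known linear reflectances of the grey patches vs. what the camera's
                        # default pipeline actually recorded for them (both normalised 0..1).
                        patch_linear = np.array([0.02, 0.05, 0.09, 0.18, 0.36, 0.59, 0.90])
                        camera_value = np.array([0.07, 0.15, 0.24, 0.40, 0.60, 0.78, 0.96])

                        def strip_tone_curve(pixels):
                            """Invert the toe/shoulder by interpolating through the measured pairs."""
                            return np.interp(pixels, camera_value, patch_linear)

                        print(strip_tone_curve(np.array([0.40, 0.78])))  # ~[0.18, 0.59]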



                        • #27
                          Originally posted by joconnell View Post
                          I am indeed. As regards Lightroom, it'll apply metadata settings from the camera even if you're keeping all the values in Lightroom flat, so you have to do some measuring with the pre-debayered images to see how they differ from what Lightroom presents you. Your standard image will have a logarithmic toe and shoulder on it (which might be adding those extra-saturated blues on your specs) to mimic the human eye, since we see logarithmically, not linearly. Here's one page showing a handy approach using a Munsell grey chart - http://www.mit.edu/~kimo/blog/linear.html - though the cheaper option is the Kodak Q-13 chart - http://www.imatest.com/images/Canon_...SO400_384W.jpg - which probably has enough swatches for good accuracy. They use that in their tests on the RawDigger site.
                          Thanks for the link, and great find, John. I guess the calibration is still in progress and evolving... Curious to know the difference between dcraw and RawDigger. This may sound stupid, but does RawDigger give you raw-er images than dcraw?
                          always curious...



                          • #28
                            Yeah, indeed! I dropped out of the scene for a little while to work at Dneg, so I didn't have my equipment with me; it's very much still a thing. I'll be popping up a few more posts when I get a chance to test things. The selfshadow folks have done a lot of the dirty work for me too: their artist-friendly HDRI guide from SIGGRAPH has a lot of material based around a chart and a lux meter, plus a few sneaky Photoshop scripts to set accurate luma data on HDRs. Hopefully it all gets finished over the next month or two.



                            • #29
                              That's great. Yeah, I couldn't wait for the course notes and went ahead and got them from Sebastian. I haven't had the time to go through them yet, as they're pretty comprehensive. On the other hand, Thomas updated a Nuke script based on the course notes regarding HDR calibration. I really want to update my workflow to make the calibration more of an absolute result. Looking forward to hearing what you find.
                              always curious...



                              • #30
                                Ooh! Thanks for the pointer, much prefer that to Photoshop!

