Biased vs Unbiased


  • #16
    Oh yeah, the scarf volumetric rendering is quite impressive!
    [Attached image: scarf - Copy.jpg]

    So is the half gig of volume data ^^
    Then again, scanning surface properties isn't trivial, nor is the generated data small...
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #17
      I'm surprised it's only half a gig. Anyway, give it a couple of years and half a gig will be nothing.



      • #18
        Originally posted by vlado View Post
        <shrug> I'm sure that a solution will present itself sooner or later...

        Best regards,
        Vlado
        Moar Power!

        [Attached image: morepower.png]

        GPUs are getting us there in the Titan X class of cards. I had an IMG R2500 demo back at SIGGRAPH 2014 that was "fun" in that you could resolve GI caustics in real time (with simple glass shaders and minimal polys). I suspect brute force is going to be the winner in the end, whether that's x86, CUDA or OpenRL. As much as I would love a huge breakthrough, I'm not optimistic about beam tracing and other more exotic approaches myself; I think they would require too great a fundamental rethink of existing tools.
        Gavin Greenwalt
        im.thatoneguy[at]gmail.com || Gavin[at]SFStudios.com
        Straightface Studios



        • #19
          Originally posted by ^Lele^ View Post
          Oh yeah, the scarf volumetric rendering is quite impressive!
          [Attached image]

          So is the half gig of volume data ^^
          Then again, scanning surface properties isn't trivial, nor is the generated data small...
          Have a look at these:

          https://shuangz.com/projects/procyar...cyarn-sg16.pdf


          https://shuangz.com/projects/procclo...zoom=100,0,888




          Edit:
          Wait, I just realized this post is from 2016... I'm a bit late to the party; sorry about that.
          Last edited by CCOVIZ; 04-02-2019, 05:11 PM.



          • #20
            Well, it's not like you came late to the party and insulted the host; the stuff you posted is quite interesting indeed. ^^

            A 1280-wide image, at 128 SPP, takes 4 to 8 hours, and that's after all the pre-processing needed (cf. the paper).
            The number of individual nodes (strands) is enormous (in the tens of millions), and then proper light transport needs to happen across them to arrive at a good-looking solution.
            Not to mention it says nothing at all about how patterns and clothing would be created (I understand there are numerical patterns for machine control; it just looks to me like an impossibly long process).
            It does, however, look stunning, volumetric and proper, I'll admit.
            My impression is that this is great academia material, but no nearer to production usability than before the papers came out (and I think both papers admit as much by their end).
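            For scale, a quick back-of-the-envelope on those numbers (the 720-pixel frame height below is an assumption; only the width is quoted):

```python
# Cost of the paper's quoted render: a 1280-wide frame, 128 samples per
# pixel, 4-8 hours. The 720-pixel height is an assumption.
width, height, spp = 1280, 720, 128
total = width * height * spp                  # ~118 million camera samples
for hours in (4, 8):
    print(f"{hours} h -> {total / (hours * 3600):,.0f} samples/sec")
# 4 h -> 8,192 samples/sec; 8 h -> 4,096 samples/sec: orders of magnitude
# short of animation-friendly throughput.
```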

            In the meantime, the material scanner got a broader scanning area, doubled its resolution (now down to the thickness of a single wool fiber), made the volumetric area taller, and cut material file sizes to a quarter (or less, for some), while the creation process has gotten quicker.
            The far- and medium-close-field results look very good, even if individual strands do not stick out of the surface (not on their own; one could, however, add proper hairs).
            Check out the new samples here (notice some come directly from companies which were able to use this in production).
            I suppose the bias in VRScans (the 2.5D approach instead of a wholly 3D one), especially when manually augmented with fuzz/hair where needed, is what makes the difference in usability.
            Lele


            • #21
              I have to agree: the material scanner results are great, but there's still no fuzziness (flyaway fibers), and it still relies on data acquisition (who owns a scanner at home?). As for the render time, the hardware used here is pretty old (a Xeon from 2010), and there is still room for optimization. These guys managed to make it run in real time on a single GTX 1080, with some real-time adaptations of course:

              http://www.cs.utah.edu/~kwu/RTFR/rtfr.pdf



              The main strength of this approach is that it can be fully procedurally generated. Those papers focus on the fiber model and not on yarn structure generation, indeed. But they also say it can be applied to any spline-based yarn pattern. There are a bunch of techniques out there, for both woven and knitted patterns (OK, I have to admit that knitting-pattern generation models are still not so user-friendly). For example, that's exactly what the ThunderLoom shader already does for woven patterns: you just have to create a yarn map in a pattern generator and you're done:





              see the shark tacos experiment page here: https://docs.sharktacos.com/texture/fabricShader.html
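              To make the "spline-based yarn pattern" idea concrete, here is a minimal sketch of a parameter-driven fiber model (illustrative only, not ThunderLoom's actual code): fibers wound as helices around a yarn center curve, with fiber count, yarn radius and twist as the user-facing knobs.

```python
import math

def yarn_fibers(center_pts, n_fibers=32, radius=0.1, twist=8.0):
    """Sketch of a procedural fiber model: each fiber is a helix wound
    around a polyline yarn center. The parameters (fiber count, yarn
    radius, twist rate) are the knobs an artist would tweak."""
    fibers = []
    for f in range(n_fibers):
        phase = 2.0 * math.pi * f / n_fibers  # start angle on the cross-section
        pts = []
        for i, (x, y, z) in enumerate(center_pts):
            t = i / (len(center_pts) - 1)     # parameter along the yarn
            a = phase + twist * 2.0 * math.pi * t
            # Offset in a fixed cross-section plane; a real implementation
            # would use a moving frame that follows the curve's tangent.
            pts.append((x + radius * math.cos(a),
                        y + radius * math.sin(a),
                        z))
        fibers.append(pts)
    return fibers

# A straight test yarn along the z axis.
center = [(0.0, 0.0, 0.05 * k) for k in range(100)]
fibers = yarn_fibers(center)
print(len(fibers), "fibers,", len(fibers[0]), "points each")
```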

              Now just imagine a procedural fiber model on top of that. That would be insane!

              Max Tarpini started building a fiber model for Arnold (still an early WIP at the time of publication) and the first results speak for themselves:

              https://twitter.com/max_tarpini/stat...66764598546432

              The appearance of fabric is still a real bummer in the CG industry; there is definitely something to be done in that area. I've collected a bunch of papers on that particular subject recently and was quite impressed by the recent advances.
              Last edited by CCOVIZ; 05-02-2019, 07:40 AM. Reason: added youtube video



              • #22
                I agree that parts of those results look great, in particular their structure.
                The shading, however, is lacking either in quality (when it's simplified enough to render in a decent amount of time, it then requires manually placed falloffs, throwing us back into arbitrary principling. That's Irawan's main issue, imho. Then again, they're super good at IKEA, but that's no news.) or in cost, as with the very pretty Arnold stuff.
                It's always very noisy, and bound to sizzle like crazy if the camera moves.
                It looks like a very, very expensive model judging from the images, but then it's the 1993 Hanrahan-Krueger diffuse model, so who knows.

                I would also have doubts about the "fully procedural generation": one works to a target, and both methods need source material accurate enough (read: scanning-level accuracy, where the threading is clearly visible for analysis) plus a serious amount of principling from the user to get anywhere near good-looking.
                Not that they cannot get there, but the work to throw at each new piece of cloth is huge, it seems to me, and not easily automatable.

                My contention would still be the same: give us a sample, and we'll give you lifelike shading with little to no time or user effort.
                Scan size is very much a moot point: even with loom-based approaches, if one wanted a loom big enough to accommodate non-tiled patterns across wide areas, one would currently run into unsolvable memory and management problems.

                Have you tried the VRScans (especially the new ones) in any depth?
                For while it's true that no 3D hairs stick out, it's also true that a lot of what happens in the volume directly above the surface gets captured, and then reacts to light properly.

                BTF scanning is to principling what photography was to painting: while both will keep their place for a long time to come, it's photography which advanced by leaps and bounds, at an accelerating pace, over the past century, and all the more so in the past few decades.
                It's not unthinkable, if very, very difficult to achieve currently, to imagine a true volumetric representation of the scanned sample, nor is it impossible to think of making the scanner much bigger.
                I'm not sure creating trillions of individual strands will ever find the memory and compute power needed to be accomplished effectively at scales beyond that of a demo.
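                For reference, the capture being discussed behaves like a Bidirectional Texture Function (BTF): appearance tabulated per texel over sampled light and view directions, so shading becomes a table lookup rather than strand-level simulation. A toy sketch with random stand-in data (real scans use dense angular grids plus heavy compression):

```python
import numpy as np

# Toy BTF: appearance stored per texel for a grid of sampled light and
# view directions. All shapes here are illustrative stand-ins.
n_uv, n_light, n_view = 64, 16, 16
rng = np.random.default_rng(0)
btf = rng.random((n_uv, n_uv, n_light, n_view, 3)).astype(np.float32)
light_dirs = rng.normal(size=(n_light, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
view_dirs = rng.normal(size=(n_view, 3))
view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)

def shade(u, v, wi, wo):
    """Nearest-neighbour BTF lookup: pick the closest sampled light and
    view directions for this texel; a renderer would interpolate."""
    x, y = int(u * n_uv) % n_uv, int(v * n_uv) % n_uv
    li = int(np.argmax(light_dirs @ wi))  # closest sampled light direction
    vi = int(np.argmax(view_dirs @ wo))   # closest sampled view direction
    return btf[x, y, li, vi]

print(shade(0.3, 0.7, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))
```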
                Last edited by ^Lele^; 05-02-2019, 07:46 PM.
                Lele


                • #23
                  No, I never tried VRScans, TBH. But I'm more of a DIY guy when I can be. I'm also a freelancer and, well, it's not that cheap...

                  I totally get your point, Lele, and I can't disagree with any of your arguments. The only thing that bothers me is that you stick strictly to the papers' methodology. I'd totally skip the measurement-fitting part: who needs that much accuracy? I can understand it from a scientific point of view, but honestly we do not need it.

                  The algorithm is parameter-driven, and they measure those parameters to feed the program and match the real fibers' appearance almost 1:1. But we still have control over those parameters. Don't you think that could fit perfectly into an artist's workflow?
                  [Attached image: 2019-02-06_15h59_22.png]

                  And I'm pretty sure this could be simplified while still getting "good enough" results. What I want to highlight here is that I'm more interested in the approach than in the implementation. Basically: input a pattern -> generate the yarn structure -> build a parameter-based fiber model on top of that (even a simplified one), and let the user play with those parameters to customize the appearance of the cloth.
                  [Attached image: 2019-01-23_18h29_03.png]
                  [Attached image: 2019-01-23_18h30_31.png]
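                  As a sketch of the first two steps of that chain (pattern in, yarn structure out), here is a toy version; all names and parameters are illustrative, not any shipping shader's code:

```python
import math

# A binary weave matrix: 1 = warp over weft, 0 = warp under. A plain
# weave is the simplest possible pattern.
plain_weave = [[(i + j) % 2 for j in range(8)] for i in range(8)]

def warp_path(col, pattern, spacing=1.0, lift=0.3, steps=4):
    """Center points for one warp yarn: it rises where the pattern says
    'over' and dips where it says 'under'. A fiber model (see the helix
    sketch earlier in the thread) would then be built on these curves."""
    pts = []
    for row, cells in enumerate(pattern):
        h = lift if cells[col] else -lift
        for s in range(steps):
            y = (row + s / steps) * spacing
            pts.append((col * spacing, y, h * math.sin(math.pi * s / steps)))
    return pts

paths = [warp_path(c, plain_weave) for c in range(8)]
print(len(paths), "warp yarns,", len(paths[0]), "points each")
```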


                  The ThunderLoom shader is open source, and I remember reading on this forum that you were considering making it part of a future release. That would be a good base to start from.

                  As for the RAM footprint, considering the numbers given in the paper and the amount of detail you get in the end, it's pretty lightweight, TBH. And well, that's the price of quality. Displacement is a memory hog, but it's there and we use it. Also, don't tell me I could use Forest to scatter fibers on a surface; I've already tried that, and to get good enough density the RAM usage goes through the roof.

                  Finally, I can't figure out why this would be more prone to flicker with camera motion. I mean, apart from the fact that it's procedural, we're not that far from the results the FStorm guys manage to obtain with Geopattern (ultra-detailed geometry scattered over the base mesh while keeping continuity between patches), and it performs really well in animations:
                  [Attached image]



                  I know you're way more rational than I am, and also way more aware of the technical intricacies involved. I'm a big dreamer, to be honest. But I'm still convinced that a load of improvements can be made in that area with today's computational power and research.
                  Last edited by CCOVIZ; 06-02-2019, 09:44 AM. Reason: formatting



                  • #24
                    Yes, it would be extremely flexible.
                    But it isn't necessarily what companies look for.
                    From the standpoint of a single artist, I understand perfectly the joy in experimentation, discovery, and eventual success, with a one-of-a-kind art piece as the result.
                    But companies do not want the variance associated with different artists, and would much rather have any of their people get the job done, on time and to the expected standard, wherever and whenever that may be.

                    It's in this respect that I was making the analogy between painting and photography: it takes a considerably lower skill set to use a camera than to paint with oils, for example, and it's also *much* cheaper to do, repeatably and consistently.
                    There is more than ample room for both approaches, and I imagine there always will be (or it'd be a sad day indeed), so we needn't get hung up on one or the other.

                    Re: flickering: geometry which is much smaller than a pixel will simply need a huge amount of sampling to be animation-consistent, without the ability to be easily pre-filtered the way a bitmap (rich as its pixels may be) can.
                    Alessandro's approach, which you linked above, would suffer from the same issue, invisible in a still image: pull the camera out and hello sizzling, or (while it can still be of some use) insane sampling.
                    Which is why one would rather develop multi-scale BRDFs.
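                    A toy illustration of that point: estimate one pixel over a pattern far finer than the pixel with a handful of jittered samples, versus the pre-filtered (averaged) value. The low-sample estimates jump around between evaluations, which on screen reads as sizzle; the pre-filtered value is stable by construction. (A sketch, not any renderer's actual sampler.)

```python
import numpy as np

rng = np.random.default_rng(1)
freq = 500.0  # the pattern oscillates ~500 times inside a single pixel

def signal(x):
    """Stand-in for sub-pixel geometry detail across one pixel's footprint."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * freq * x)

# "Pre-filtered" value: the true mean over the pixel, computed densely once.
prefiltered = signal(np.linspace(0.0, 1.0, 200_000)).mean()

# 16 jittered samples per frame: each frame's estimate wanders around the
# true mean, which is exactly the frame-to-frame sizzle being described.
estimates = [signal(rng.random(16)).mean() for _ in range(8)]
print("prefiltered:", round(float(prefiltered), 3))
print("per-frame 16-spp estimates:", [round(float(e), 3) for e in estimates])
```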

                    You should really, really, really give VRScans a go.
                    There is a free timed demo, I believe.
                    Lele


                    • #25
                      I finally found an article from a while back:
                      https://3dtotal.com/tutorials/t/tuto...-related-links

                      Further, there's no way a principled shader will be able to accommodate these kinds of responses without manual user input, I'd wager (and notice the curve is measured *without* the textured pattern, for the sake of getting a readable curve out of it):

                      [Attached image: Fabric_Blue_(S)_albedoCurves.0000.jpg]
                      or

                      [Attached image: Fabric_Grey_Volumetric_(S)_albedoCurves.0000.jpg]
                      Lele


                      • #26
                        Also, as we were looking at wool knits, here's a checkerboard-tinted set of (OLD!) fabric tests. You'll notice the BTF response quality even with my horrible texturing and lighting skills on display, and the horrible watermark everywhere, I'm sure.

                        Lele


                        • #27
                          I'll give VRScans a try. I have to agree that it looks better than a standard textured object with the current BRDF model, but, to be honest, it doesn't come close to a procedural fiber model. Yes, it's probably way cheaper, and it deserves to exist as an in-between quality trade-off, but put those FStorm renders next to this. For cloth with tiny flyaway fibers, VRScans is probably the best solution (if not viewed too close or at too high a resolution), but for the rest I'm still not convinced. Of course, this is my personal opinion, and I suppose the lighting used in your example doesn't really show off the full capability of VRScans. And I'll repeat: you're still dependent on a third party, even if it is Chaos Group.

                          Anyway, that's a never-ending topic; I'll try those scans and maybe I'll be surprised.

                          Edit: just saw your last post! That's already way better, but...
                          Last edited by CCOVIZ; 08-02-2019, 12:49 PM.



                          • #28
                            Last post about VRScans in here (sample code: vlr68).
                            While this has edge displacement active, that's not what gives it the fuzzy look; that comes from the capture of the BTF itself (which includes a volume above the sample, too).
                            Renders took 3 minutes (!) to a 0.04 noise threshold on a Ryzen 1900X at stock speeds; the sample is 70 MB on disk for a 24 x 24 cm area.
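                            For anyone wondering about the 0.04 figure: a noise-threshold-driven sampler keeps adding samples to a pixel until its noise estimate drops below the target. A toy version of that stopping rule (illustrative only, not V-Ray's actual sampler):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_pixel(noise_threshold=0.04, batch=16, max_samples=4096):
    """Toy adaptive sampler: average batches of noisy radiance samples
    until the standard error of the mean falls below the threshold."""
    samples = np.empty(0)
    while samples.size < max_samples:
        # Stand-in for tracing `batch` more paths through this pixel.
        samples = np.append(samples, 0.5 + rng.normal(0.0, 0.3, batch))
        stderr = samples.std(ddof=1) / np.sqrt(samples.size)
        if stderr < noise_threshold:
            break
    return samples.mean(), samples.size

value, n = sample_pixel()
print(f"converged to {value:.3f} with {n} samples")
```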
                            Lele


                            • #29
                              And of course I lied (sample code: jvl60).
                              The first trio shows multi-scale behaviour, and why it's nigh impossible to principle correctly.
                              It's followed by close-ups of some other samples, with insanely intricate threading structures, again *very* hard to replicate manually (particularly with mixed thread materials).
                              Render times and sample sizes, both in area and on disk, are all very similar to the previous post's sample.
                              Lele


                              • #30
                                More on the threading (I adjusted the dim light a bit).
                                Lele