
  • Video of a game engine, but some good points on observation of materials for CG

    Heya Folks,

    This is a tech demo for Metal Gear Solid V, which uses a pretty impressive engine with a lot of real-time approximations of things we use every day in V-Ray. The video goes through a lot of their production process - observation, lighting, specularity and so on - all of which applies to what we do every day in trying to make things look real. Some interesting stuff on data gathering, too.

    http://www.youtube.com/watch?v=0qhPoT4coOI

  • #2
    At least there is a good description of a proper linear workflow in there.

    Best regards,
    Vlado
    I only act like I know everything, Rogers.



    • #3
      Yeah, it's good alright - but why does linear workflow have to be so confusing? It's literally only there to correct for a monitor curve, and that's it!

      On a side note, I liked seeing what they were doing when shooting textures. There were a few good examples with the leaf materials about judging what colour something should be and how specular it was, to get a good, semi-calibrated texture and a sensible guess at reflectivity. Are there any good papers or written breakdowns of these things? I'm always looking for more ways to get accurate data to start with - even if something looks a bit boring when you use real-world values, it's always great to be able to start from real measurements for VFX work!



      • #4
        Amen to that.
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



        • #5
          Originally posted by joconnell View Post
          Yeah it's good alright - why does it have to be so confusing? It's literally only to correct for a monitor curve and that's it!
          That's what I thought too, until I realized that it's a bit more complicated. The gamma encoding is also necessary for 8-bit textures and output images, as it allows more "visually different" colors to be represented with the available 256 values (since the human eye does not have linear response to intensity). It would have been nice if everyone was using floating point textures and outputs, but it's not always practical.
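Vlado's point can be made concrete with a quick sketch. This is my own minimal numpy illustration (not from the video), using the standard sRGB transfer-curve constants, of how gamma encoding spends far more of the 256 8-bit codes on the dark end, where the eye is most sensitive:

```python
import numpy as np

def srgb_encode(linear):
    """Apply the sRGB transfer curve before 8-bit quantisation."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

def srgb_decode(encoded):
    """Invert the curve: stored sRGB values back to linear light for rendering."""
    encoded = np.asarray(encoded, dtype=np.float64)
    return np.where(encoded <= 0.04045,
                    encoded / 12.92,
                    ((encoded + 0.055) / 1.055) ** 2.4)

# How many of the 256 8-bit codes land in the darkest 10% of linear light?
codes = np.arange(256) / 255.0
dark_linear = int(np.count_nonzero(codes < 0.1))              # stored linearly
dark_srgb = int(np.count_nonzero(srgb_decode(codes) < 0.1))   # stored gamma-encoded
print(dark_linear, dark_srgb)  # 26 90
```

Stored linearly, only 26 of the 256 codes cover the darkest 10% of linear light; stored gamma-encoded, 90 do - which is why 8-bit linear textures band so visibly in the shadows.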

          Best regards,
          Vlado
          I only act like I know everything, Rogers.



          • #6
            However, it's coming.
            At least in film a fully half-float color pipeline is more or less there, from reference capture onwards (well, 14 bits per channel for full-frame raws, but that's often good enough), through textures (often painted directly at 16 bit), and of course render outputs.
            The ACES workflow is also coming as a standardisation method for converging different input sources into comp-land (a modern, device-independent color gamut, much like sRGB was back in the day), and that also requires more than 8 bpc to work properly.
            Hopefully within the next ten years higher-bit-depth displays will be cheaper and more widely available, putting the last nail in the coffin of the 8-bit paradigm...
            Lele
            Trouble Stirrer in RnD @ Chaos
            ----------------------
            emanuele.lecchi@chaos.com

            Disclaimer:
            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



            • #7
              Well, things look different when you have gigabytes of Mari textures, and making them 8-bit instead of half-float halves the disk space and network bandwidth.
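The back-of-the-envelope numbers (my own arithmetic, not figures from the thread) show why this matters at Mari scale:

```python
# Rough uncompressed sizes for a single 4096x4096 RGB texture.
# Real pipelines compress and tile, so treat these as upper bounds.
width = height = 4096
channels = 3

bytes_8bit = width * height * channels * 1   # 1 byte per channel
bytes_half = width * height * channels * 2   # 2 bytes per channel (16-bit half float)

print(bytes_8bit // 2**20, "MiB at 8 bit")   # 48 MiB at 8 bit
print(bytes_half // 2**20, "MiB at half")    # 96 MiB at half
```

Multiply by dozens of UDIM tiles and hundreds of assets, and the factor of two is real money in disk and network terms.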

              Best regards,
              Vlado
              I only act like I know everything, Rogers.



              • #8
                Ah!
                In any event I always end up with two or three versions of any texture in any given production:
                a working file set, a flattened, arbitrary output format (call it any flavour of TIFF/EXR), and the final, render-prepped tiled EXR.
                The only place where the space saving would really matter is if one worked the base file at 8 bit (when possible we start from a camera raw file, so bit-depth reduction is not quite what we look for), or if for some reason one output the flattened version at 8 bit.
                That would make no sense for an EXR, and little sense for a TIFF either (again, due to wasted information and the later re-expansion when converting to tiled).
                Besides, it's always the FX/mesh caches and hyper-high-res proxies eating up the most space...
                Even on the projects where I was bugging you via email, begging for help on how to fit just that amount of texture data in RAM, the texture space was dwarfed by that used by XMeshes.
                But that is in film.

                I appreciate archviz/productviz comes at the same problem from a slightly different angle, chiefly due to the strict binding to 8-bit displays for viewing the final result (or RGB to CMYK for printing, whichever the approach).
                Still, I'd love to see in my lifetime a full 16-bit half-float, or even better 32-bit per channel, system from capture to display.
                A man can dream... :P
                I know someone from a Vancouver university presented a prototype of such an HDR display a few years back, but I can't quite remember where to find the paper/presentation...
                Today's 10-bit IPS monitors are pretty, at 1024 levels of gray, but it still feels like a half-step technology for the amount of money such a monitor costs (around 1k USD plus tax for the HP 30", anything else going quickly upwards of that).
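For what it's worth, the level counts are easy to verify by brute force - a small numpy sketch (my own, not from the thread) counting how many distinct values each representation can store between 0.0 and 1.0:

```python
import numpy as np

# Distinct code values in [0, 1] for the integer encodings:
levels_8bit = 2 ** 8     # 256
levels_10bit = 2 ** 10   # 1024

# For half float, enumerate every non-negative bit pattern and keep
# the finite ones that land in [0, 1].
bits = np.arange(0, 2 ** 15, dtype=np.uint16)  # sign bit clear
halves = bits.view(np.float16)
levels_half = int(np.count_nonzero(np.isfinite(halves) & (halves <= 1.0)))

print(levels_8bit, levels_10bit, levels_half)  # 256 1024 15361
```

So even a 10-bit panel resolves only a fraction of what a half-float pipeline carries, which is why the display really is the last bottleneck.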
                Lele
                Trouble Stirrer in RnD @ Chaos
                ----------------------
                emanuele.lecchi@chaos.com

                Disclaimer:
                The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                • #9
                  Still discussing LWF, huh? Not much changed since I was here last.



                  • #10
                    Originally posted by ^Lele^ View Post
                    Still, I'd love to see in my lifetime a full 16-bit half-float, or even better 32-bit per channel, system from capture to display. [...]
                    Photoshop is a major stumbling block here. It would need to be completely rewritten, which is unlikely - Mari is really the only alternative I know of at the moment, and knowing Jack, it will only get better.
                    And don't forget that a major part of the film remit is matching to the plates, so LUTs and file consistency are key. You need the bit depth so that they can pull everything to shit in DI. :P
                    AEC is more about stills and pure CG animation, so most of this is irrelevant - luckily for us.

                    Sorry for the hijack, Jo - that's a great video of the Fox Engine tech!
                    Last edited by deflix; 09-04-2013, 09:28 AM.
                    Immersive media - design and production
                    http://www.felixdodd.com/
                    https://www.linkedin.com/in/felixdodd/

