Nvidia 30 series cards


  • #31
    If GPU is good enough for a Blizzard Overwatch cinematic, it's good enough for me, because that's the style we're aiming for. If needed, we'll comp in some volumes rendered on the CPU.
    https://linktr.ee/cg_oglu
    Ryzen 5950, Geforce 3060, 128GB ram

    • #32
      Why not create a benchmark that renders a couple of scenes on both CPU and GPU? We need something to compare CPU to GPU speed. This would be very useful, I think.
      Max 2023.2.2 + Vray 6 Update 2.1 ( 6.20.06 )
      AMD Ryzen 7950X 16-core | 64GB DDR5 RAM 6400 Mbps | MSI GeForce RTX 3090 Suprim X 24GB (rendering) | GeForce GTX 1080 Ti FE 11GB (display) | GPU Driver 546.01 | NVMe SSD Samsung 980 Pro 1TB | Win 10 Pro x64 22H2

      • #33
        Originally posted by ^Lele^
        The algorithms aren't identical between CPU and GPU, and as such it's hard to compare apples with apples.
        Very divergent tasks (e.g. very long-path SSS, or volumes with multiple scattering events) are historically quicker on CPUs.
        Most other tasks fall between the comparable (i.e. speed is scene-dependent) and the GPU-favoured, provided the load fits in the card's RAM.
        So, no benchmark is actually possible between the two.
        It's very, very much task-dependent, and so scene- and even frame-dependent.
        To add to this, the different sets of supported features, and the different ways of supporting them, throw even more variables into the mix.

        Ultimately, one checks the basic available features, and then sticks to the tool until the job is done.
        There is no silver bullet.
        Lele
        Trouble Stirrer in RnD @ Chaos
        ----------------------
        emanuele.lecchi@chaos.com

        Disclaimer:
        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

        • #34
          Originally posted by oglu
          If GPU is good enough for a Blizzard Overwatch cinematic, it's good enough for me, because that's the style we're aiming for. If needed, we'll comp in some volumes rendered on the CPU.
          Create a production scene and do your own benchmark. That's how we do it for each project, and we decide depending on the scene and the requirements.
          https://linktr.ee/cg_oglu
          Ryzen 5950, Geforce 3060, 128GB ram

          • #35
            Originally posted by oglu
            Create a production scene and do your own benchmark. That's how we do it for each project, and we decide depending on the scene and the requirements.
            I wonder why that isn't possible in a V-Ray benchmark. Isn't that kind of what it already is? I'd say a scene using the same textures and the same lights, but tweaked so the noise is visibly the same, is all that's needed. If ultimately we're just looking for clean renders, isn't that apples to apples? Sure, you might say CPU renders SSS or volumes better, but then isn't there a test out there that's generic enough to give a hint at comparable times?

            Like you said above, we can make our own... but the idea of a benchmark is to be able to see what developers have tested and scored, and also what others who bought hardware before you found. The results also mean more when we're all running the same tests. I'd rather not spend upwards of 5k on hardware only to then run my GPU vs CPU tests on a custom scene.

            I could build a scene that's shaded and lit in Redshift, and one that's shaded and lit in V-Ray CPU, get them as clean and as close to each other as possible, and compare the render times. I'm curious as to why that's not possible with V-Ray GPU and CPU. Sure, it might have margins of error, but at least it's some kind of data before we commit to a large investment.

            I guess the question for me is whether CPU is going to get left behind in terms of speed. Once you get one or two 3090s, can any CPU keep up? When the 40 series comes out next, are we looking at near real time on V-Ray GPU? Right now, with a first-gen Threadripper and two 1080 Tis, I have no way of knowing if a 3990X CPU will be as fast as a 3090 GPU, and we're told there is no way of comparing the two.
            Last edited by seandunderdale; 09-09-2020, 01:52 PM.
            Website
            https://mangobeard.com/
            Behance
            https://www.behance.net/seandunderdale

            • #36
              For the features which have a presence in both engines, you could surely photograph the current state of affairs. ("Current", because a new algorithm comes along for one of the engines and the balance shifts once again.)

              Even if you could measure the performance of a given feature accurately *), all you'd have would be a rough estimate, outside of a production environment, of the cost of that feature to the final render time, and only provided you knew, for each frame, how much of it was taken up by this or that specific facet of rendering (e.g. the proportion of SSS versus diffuse).
              You would need to render the specific sequence entirely to know which of the two was ultimately quicker.
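              To illustrate why per-feature timings alone can't pick a winner, here is a minimal sketch; the feature names, costs, and coverage figures are entirely made up for the example, and the point is only that the coverage-weighted total flips between engines from frame to frame.

              ```python
              # Hypothetical per-feature frame costs (milliseconds at full screen coverage).
              # All numbers are invented purely to illustrate the argument above.
              CPU_COST = {"diffuse": 40.0, "sss": 90.0, "volume": 120.0}
              GPU_COST = {"diffuse": 15.0, "sss": 160.0, "volume": 210.0}

              def frame_cost(costs, coverage):
                  """Coverage-weighted sum: estimated frame time from per-feature costs."""
                  return sum(costs[feature] * share for feature, share in coverage.items())

              # Two frames of the same sequence, with different screen coverage per feature.
              frames = [
                  {"diffuse": 0.9, "sss": 0.1, "volume": 0.0},  # mostly diffuse
                  {"diffuse": 0.3, "sss": 0.3, "volume": 0.4},  # SSS/volume heavy
              ]

              for i, coverage in enumerate(frames, 1):
                  cpu = frame_cost(CPU_COST, coverage)
                  gpu = frame_cost(GPU_COST, coverage)
                  winner = "GPU" if gpu < cpu else "CPU"
                  print(f"frame {i}: cpu={cpu:.1f}ms gpu={gpu:.1f}ms -> {winner} quicker")
              ```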

              That is why I wrote what I wrote above: you get a gist of which set of features you use, and test a specific scenario as close to final production as possible.
              Once you pick, though, you'd better stick with it: it's much cheaper to work around potential limitations than to change tool mid-course (we're of course talking about equally capable engines here, even if they differ in feature sets).

              On the RS comparison, I don't think you can compare unless you cripple V-Ray's algorithms severely.
              Even then, you'd be comparing two *vastly* different solutions to the problems, and they would not match even in the simplest of cases (i.e. GI on diffuse with very few bounces), let alone a complex scene.
              At least, I was never able to find such a match, much as I wished to, and for the non-trivial amount of time I spent trying.
              "Close enough" doesn't exist here: a little bias can make a world of difference in the amount and quality of the computation to be performed, so if two volumes aren't identical under identical shading conditions, for example, the benchmark is invalid, minute as the differences may look.
              I could match V-Ray very closely with Arnold and RenderMan, however, without the same kind of trouble.
              If you can come up with a 1:1 scene between RS and V-Ray, across the features to be used in production, I'd be exceedingly interested in trying it out.

              *) We're talking multi-variable tables here. Take SSS: you will want to measure it for at least depth, phase, and some colors.
              You'd get the quality of the approach by measuring across increasing depths, the quality of the coding tricks by measuring render speed across phase changes, and the presence of wavelength-based optimisations by testing across a range of colors.
              That's three variables, with 1-100 (let's limit it) for depth, -1.0/1.0 for phase (say, in steps of 0.1, so 20), and say 5 colors (the pure R, G and B, white, and one pastel tone made up of some RGB combination).
              That's 100 x 20 x 5 renders.
              Then you'll want to test the same stuff, but with very different lighting combinations: is the shader still good without direct lighting?
              What happens when it's hit by 30 light sources at once?
              The 10,000 renders above are now three times as many.
              You'll see that testing features *accurately* can quickly get out of hand: will the shader work as well when behind glass? And in indirect lighting from thirty lights behind glass? Is it as quick shooting the first samples as it is converging to a low noise threshold?
              Even when you had your N-dimensional LUTs prepared with the results of those renders, you'd be back to the issue of screen coverage for each feature: that will vary per frame.
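              A quick sketch of that combinatorial blow-up, using the exact ranges listed above (the render call itself is left as a placeholder):

              ```python
              from itertools import product

              # Parameter sweep from the footnote above.
              depths = range(1, 101)                            # 1..100 -> 100 values
              phases = [x / 10 for x in range(-10, 10)]         # 0.1 steps -> 20 values
              colors = ["R", "G", "B", "white", "pastel"]       # 5 test colors
              lightings = ["direct", "no_direct", "30_lights"]  # 3 lighting setups

              per_lighting = len(depths) * len(phases) * len(colors)
              print(per_lighting, per_lighting * len(lightings))  # 10000 30000

              # The benchmark itself would be one timed render per combination:
              for depth, phase, color, lighting in product(depths, phases, colors, lightings):
                  pass  # render_and_time(depth, phase, color, lighting) -- placeholder
              ```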
              Last edited by ^Lele^; 09-09-2020, 04:10 PM.
              Lele
              Trouble Stirrer in RnD @ Chaos
              ----------------------
              emanuele.lecchi@chaos.com

              Disclaimer:
              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

              • #37
                I understand the reluctance to compare the two methods on a commercial level, as you don't want V-Ray misrepresented, but this seems overly complicated on a practical level.

                For me, a rough estimate would be fine. I just want to see results based on the same scene, so I can know roughly how comparable a GPU and a CPU are; then I'd have a good basis for making a purchase choice, because I found the current method wasn't that helpful when I was recently trying to decide between the GPU and CPU route.

                Could you not just add an option where you render the same scene and it tells you, in a simplistic way, how long it took? The Threadripper took 5 minutes, the 2080 Ti 10... that sort of thing?
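                (For what it's worth, a rough version of this is scriptable today: a minimal sketch that times the same scene through any command-line renderer. The two commands are placeholders, since the exact invocation depends on your setup.)

                ```python
                import subprocess
                import time

                # Placeholder commands: substitute whatever launches your CPU and GPU
                # renders of the *same* scene on your machine.
                COMMANDS = {
                    "CPU": ["render_cpu.bat"],
                    "GPU": ["render_gpu.bat"],
                }

                for label, cmd in COMMANDS.items():
                    start = time.perf_counter()
                    subprocess.run(cmd, check=True)  # blocks until the render finishes
                    minutes = (time.perf_counter() - start) / 60
                    print(f"{label}: {minutes:.1f} min")
                ```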
                Last edited by francomanko; 09-09-2020, 07:27 PM.
                e: info@adriandenne.com
                w: www.adriandenne.com

                • #38
                  We'd rather not do this, for the reasons explained as well as I could muster above (none of which include fear of misrepresentation: it's a *scientific method* issue).
                  You, however, are free to do as you please: if it floats your boat, go places with it!
                  Just please do not make it into a sweeping statement: "A is faster than B" has been proven too simplistic a claim to hold any value, time and time again.
                  That's not going to change in the foreseeable future: context, and the math brought to bear within that context, carry unavoidable weight, and are proven to sway results greatly.
                  Lele
                  Trouble Stirrer in RnD @ Chaos
                  ----------------------
                  emanuele.lecchi@chaos.com

                  Disclaimer:
                  The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

                  • #39
                    Soooo, is it safe to assume that GPU rendering is still pretty niche outside of product renderings? Is there a value proposition beyond those use cases, given that GPU is still not at feature parity with CPU? Not to mention the much lower VRAM buffer (typically 10-24 GB, compared to 64-256 GB of system RAM with CPUs). Is it good for production-quality, high-detail exterior and interior renderings that feature a lot of geometry, displacement, etc.? By "good" I mean not constantly having to tend to and work around limitations and unsupported features. Ideally, I would like to hear from people who actually do high-quality interior and exterior renderings with GPUs for clients. I don't need scientific explanations of how CPUs and GPUs work under the hood; I know they work in very different ways, even though the end result is very similar.
                    Last edited by Alex_M; 10-09-2020, 04:16 AM.
                    Max 2023.2.2 + Vray 6 Update 2.1 ( 6.20.06 )
                    AMD Ryzen 7950X 16-core | 64GB DDR5 RAM 6400 Mbps | MSI GeForce RTX 3090 Suprim X 24GB (rendering) | GeForce GTX 1080 Ti FE 11GB (display) | GPU Driver 546.01 | NVMe SSD Samsung 980 Pro 1TB | Win 10 Pro x64 22H2

                    • #40
                      The explanation was optional.
                      The answer is yes, it's used in productions of any kind and size.
                      The user chooses the engine, trains on it, and becomes effective enough to deliver.
                      There is no more, nor less, "fighting" involved than with any other engine, be it CPU or GPU, once one knows what one's working with.
                      Lele
                      Trouble Stirrer in RnD @ Chaos
                      ----------------------
                      emanuele.lecchi@chaos.com

                      Disclaimer:
                      The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

                      • #41
                        Speaking as a layman: if I can use GPU, I will. When a feature is missing and I can't, I use CPU. For example, I'm working on a house with a RailClone roof, and GPU doesn't support the RailClone color, so I can't use GPU there. For interiors I use GPU most of the time, because it's faster. Nothing overly technical; I use whatever is faster, and it's scene-dependent.
                        Bobby Parker
                        www.bobby-parker.com
                        e-mail: info@bobby-parker.com
                        phone: 2188206812

                        My current hardware setup:
                        • Ryzen 9 5900x CPU
                        • 128gb Vengeance RGB Pro RAM
                        • NVIDIA GeForce RTX 4090
                        • Windows 11 Pro

                        • #42
                          Thanks for the summary, Bobby. Is this with the hardware in your signature? How much quicker are the interiors on average? Can you give me a range? Have you done any exteriors with GPU and are there any speed advantages there? Based on your observations, what are the strong and weak sides of GPU rendering and what kind of scenes are faster or slower?
                          Max 2023.2.2 + Vray 6 Update 2.1 ( 6.20.06 )
                          AMD Ryzen 7950X 16-core | 64GB DDR5 RAM 6400 Mbps | MSI GeForce RTX 3090 Suprim X 24GB (rendering) | GeForce GTX 1080 Ti FE 11GB (display) | GPU Driver 546.01 | NVMe SSD Samsung 980 Pro 1TB | Win 10 Pro x64 22H2

                          • #43
                            Originally posted by Alex_M
                            Soooo, is it safe to assume that GPU rendering is still pretty niche outside of product renderings? Is there a value proposition beyond those use cases, given that GPU is still not at feature parity with CPU? Not to mention the much lower VRAM buffer (typically 10-24 GB, compared to 64-256 GB of system RAM with CPUs). Is it good for production-quality, high-detail exterior and interior renderings that feature a lot of geometry, displacement, etc.? By "good" I mean not constantly having to tend to and work around limitations and unsupported features. Ideally, I would like to hear from people who actually do high-quality interior and exterior renderings with GPUs for clients. I don't need scientific explanations of how CPUs and GPUs work under the hood; I know they work in very different ways, even though the end result is very similar.
                            BBB3 uses GPU exclusively and hits that quality bar. However, his scenes are designed from scratch for GPU, and they're very self-contained projects. Lots of displacement and detail, so it's possible, but it's still not a great 1:1 comparison of what the workflow is like.
                            https://www.flickr.com/photos/bbb3viz/with/48973807192/

                            Most of the challenge with GPU is just remaking everything from scratch and working around its quirks. You can't just take a CPU scene and turn GPU on, because it will almost certainly crash and/or look shit. It's a massive time investment, and then you end up with a scene built from the ground up for GPU that needs to be converted to work on CPU again. Nobody has time for that.

                            • #44
                              Yes, I use GPU on my exteriors when I can. I've gotten around all my limitations except for my RailClone colors. For post-production I had to work around my RAW elements, but that was pretty easy to do. I would say that GPU is at least twice as fast on my current setup. If I got a second Titan RTX, I could cut my times in half, but I'm having issues with power in my computer case. Progressive with the denoiser is the key.
                              Bobby Parker
                              www.bobby-parker.com
                              e-mail: info@bobby-parker.com
                              phone: 2188206812

                              My current hardware setup:
                              • Ryzen 9 5900x CPU
                              • 128gb Vengeance RGB Pro RAM
                              • NVIDIA GeForce RTX 4090
                              • Windows 11 Pro

                              • #45
                                Originally posted by glorybound
                                I would say that GPU is at least twice as fast on my current setup.
                                Not an exact science, but that would put it slightly over a single 3950X. A 3960X or 3970X CPU should be able to outperform it.
