  • Where can I find tests comparing GPU vs CPU performance in the same scenes?

    Since the V-Ray Benchmark renders different scenes for the CPU and GPU tests, is there somewhere I can find tests where people compare the performance of CPU vs GPU rendering in the same scenes, at the same noise threshold, on the new RTX cards? I'm about to upgrade my hardware, and it's very hard to decide whether to go CPU or GPU without knowing how they compare. Maybe some of you have made such tests yourselves? If you have, I'd be really happy to see your results and findings. Thanks in advance for the help.
    Last edited by Alex_M; 04-12-2019, 01:14 PM.
    Aleksandar Mitov
    www.renarvisuals.com
    office@renarvisuals.com

    3ds Max 2023.2.2 + Vray 7 Hotfix 1
    AMD Ryzen 9 9950X 16-core
    96GB DDR5
    GeForce RTX 3090 24GB + GPU Driver 566.14

  • #2
    That's really hard to compare, because different scenes will give you different results.
    If there is a lot of geometry in your scene, RTX will be much faster than if there is more shading work to do.

    There are so many different GPUs and CPUs out there that it's hard to find a matching system.
    And you have to eyeball the noise in the compared images; the same render settings won't give you the same image.
    GPU needs more samples to get a clean image, especially if you use the alSurface SSS shader.


    Core i7 5960X, 16 threads @ 3 GHz
    vs
    GeForce RTX 2080 Ti, driver 441.28

    V-Ray 4.3 for Maya



    I can't show the scene since it's from a running project, but here are the numbers.
    It's a character with the alSurface shader.

    CPU 3min 13sec
    CUDA 2min 51sec
    RTX 2min 06sec
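
    In other words (just arithmetic on the timings above; a quick Python sketch):

    ```python
    # Speedups implied by the timings above (plain arithmetic).
    cpu  = 3 * 60 + 13   # 193 s
    cuda = 2 * 60 + 51   # 171 s
    rtx  = 2 * 60 + 6    # 126 s
    print(f"CUDA {cpu / cuda:.2f}x, RTX {cpu / rtx:.2f}x")  # ~1.13x, ~1.53x
    ```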


    Keep in mind it's much easier to add more GPUs to your system; that's the selling point for me.
    I would go with a GeForce 2070 for the monitors and two headless, NVLinked 2080 Tis.
    Last edited by oglu; 04-12-2019, 11:20 PM.
    https://linktr.ee/cg_oglu
    Ryzen 5950, Geforce 3060, 128GB ram



    • #3
      I think for me, I'm not bothered about it being completely accurate as a comparison; I just want to know which route to go down when buying hardware, cost being the main differential. Compare an equally expensive CPU and GPU on the same scene; I don't mind if the results look slightly different. Bearing in mind they shouldn't look THAT different, I really don't understand why at least this wouldn't be an option.
      e: info@adriandenne.com
      w: www.adriandenne.com



      • #4
        I would be curious to see a comparison too. The goal could be to show three scenes, rendered per GPU and per CPU, where nearly the same final result is reached. Like a real project where the client gets renderings: it doesn't matter how the setup was done. Two images for each scene, to show what can be reached per CPU and per GPU.
        www.simulacrum.de ... visualization for designer and architects



        • #5
          Originally posted by Micha View Post
          I would be curious to see a comparison too. The goal could be to show three scenes, rendered per GPU and per CPU, where nearly the same final result is reached.
          There will forever be disagreement on the meaning of "Nearly".

          In the current state of affairs, a comparison isn't really possible.

          There is no single engine out there which can claim parity between CPU and GPU across the board (although some do so in the big titles, all become tamer in the small print), and as such, comparisons become difficult in the best of cases, and meaningless in *many* others.

          Further to this, the two compute types favour some types of task over others.
          So, if you were to benchmark a scene with diffuse+spec and some texture, you'd likely find the GPU to be vastly more efficient on the way to convergence, most of the time.
          However, if you were to benchmark other types of scene, where the task is made complicated (f.e. by complex shaders, or by very long paths which require many choices to be made at each vertex), then the CPU would be distinctly faster, and vastly more graceful in its scaling as the load grew, most of the time.

          Things get muddled, however, by all the genius poured into algorithms by both camps, and by the advancements in hardware and standards (APIs and such), so take the example above with a pinch of salt: some tech especially suited to GPU or CPU may give one or the other an advantage at some specific task.
          Ultimately, it's for these very reasons that Chaos Group decided to use two very different scenes for the benchmarks.
          Last edited by ^Lele^; 05-12-2019, 03:37 AM.
          Lele
          Trouble Stirrer in RnD @ Chaos
          ----------------------
          emanuele.lecchi@chaos.com

          Disclaimer:
          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



          • #6
            Thanks for the input, oglu. Is it fair to say that people mostly use GPU rendering for characters and product renders? I seem to see people using it mostly for this kind of work, and not much for interiors and exteriors. Am I right, or are my observations incorrect?

            Lele, thanks for the clarification. Why is this kind of information not on the V-Ray help website? It's the first time I've learned that GPUs are better at scenes with mostly diffuse shaders, with the opposite for CPU. Wouldn't it be useful for this information to be added to the help pages? I'm pretty sure I've not seen it explained there.

            And again, I'm wondering how users are supposed to decide whether to use CPU or GPU for rendering. Are we supposed to spend a few thousand dollars on a CPU and then another few thousand dollars on a GPU to test everything for ourselves? Seems pretty stupid to me. What about you? There has to be another way, no?

            I am aware that there are some small differences between the two engines, but they produce pretty close output nowadays, don't they? They do, at least to my eyes: when I tested V-Ray GPU again a few days ago, my conclusion was that today's V-Ray GPU is not like the V-Ray RT of old, where most features and shaders didn't work; many of the scenes I tested looked exactly the same to my eyes. So the benchmark scenes don't need to render 100% exactly the same, pixel for pixel. I am sure that, if necessary, you could adapt them so they look very close to each other, if not the same, without much tweaking.
            Last edited by Alex_M; 05-12-2019, 12:34 PM.
            Aleksandar Mitov
            www.renarvisuals.com
            office@renarvisuals.com

            3ds Max 2023.2.2 + Vray 7 Hotfix 1
            AMD Ryzen 9 9950X 16-core
            96GB DDR5
            GeForce RTX 3090 24GB + GPU Driver 566.14



            • #7
              I wouldn't say that. We also use it for all our environments.
              And in the case of Redshift, all the Blizzard Overwatch cinematics are rendered with it.
              https://www.youtube.com/watch?v=ALVqeH-J2OE


              As I said above, for us there is no reason to use CPUs for rendering.

              We put two 2080 Tis into our render workstations.
              And next year we'll go with two 3080 Tis, and those will outperform even the fastest Threadripper CPUs.
              GPUs are evolving faster and are easier to upgrade. You can stick up to 8 cards into one workstation using one V-Ray license.


              Here is a lot of additional info.
              https://www.chaosgroup.com/blog/what...endering-v-ray
              Last edited by oglu; 05-12-2019, 11:10 AM.
              https://linktr.ee/cg_oglu
              Ryzen 5950, Geforce 3060, 128GB ram



              • #8
                Originally posted by Alex_M View Post
                Are we supposed to spend a few thousand dollars on a CPU and then another few thousand dollars on a GPU to test everything for ourselves?
                I still think Chaos Group could take a few scenes, some good for GPU and some good for CPU, and show the differences. Since we all deliver images to our clients, it should be possible to create comparable images. For example, I often render train interiors. Per CPU (a 2950X) I need nearly 20-30 min to reach the needed quality at high res. If I use two 2080 Tis, I get it in 10-15 min. I'm curious what RTX will bring. A comparison could contain some typical scenes - architecture exterior/interior, product shots, movie shots with a lot of effects, and so on. Since every user needs to decide the question "CPU or GPU?", this could be supported by a few examples.

                www.simulacrum.de ... visualization for designer and architects



                • #9
                  I *strongly* disagree on the kind of comparison you're suggesting, Micha, particularly if you want it to come from us.

                  In light of what I posted above, the only possibly meaningful comparison would have to be on a feature-by-feature basis, under tightly controlled scenarios.
                  Even then, as the algorithms change, you would *not* always get an apples-to-apples result (f.e. SSS with long paths, or volumes; even the noise threshold and AA strategies differ).

                  It's wishful thinking to assume that throwing a random scene (or ten) at an unknown issue will give you much of an answer, much less any predictive power.

                  A 1k$ 1950x is, f.e., quicker than a 1k$ 2080Ti on selected tasks.
                  It's also slower in selected others.

                  What would represent an "average" workload for all our users is not only unknown, but largely unknowable, and will forever stay so.

                  This is why we can only have one official stance: try it for yourself, and if it works for you, use it.
                  Last edited by ^Lele^; 06-12-2019, 10:34 AM.
                  Lele
                  Trouble Stirrer in RnD @ Chaos
                  ----------------------
                  emanuele.lecchi@chaos.com

                  Disclaimer:
                  The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                  • #10
                    Originally posted by Alex_M View Post
                    Lele, thanks for the clarification. Why is this kind of information not on the V-Ray help website? It's the first time I've learned that GPUs are better at scenes with mostly diffuse shaders, with the opposite for CPU. Wouldn't it be useful for this information to be added to the help pages? I'm pretty sure I've not seen it explained there.
                    Do not make it a universal law, because it isn't: it's severely scene-dependent and, further to that, settings-dependent. Not to mention one should test with real hardware, rather than with a processor class.

                    And again, I'm wondering how users are supposed to decide whether to use CPU or GPU for rendering. Are we supposed to spend a few thousand dollars on a CPU and then another few thousand dollars on a GPU to test everything for ourselves? Seems pretty stupid to me. What about you? There has to be another way, no?
                    You can use whatever hardware you have, plus the benchmark site, to gauge the benefits on your specific scenes via simple interpolation.
                    As I wrote above, we have one official position on this, as no other would be responsible.
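
                    A minimal sketch of that interpolation, assuming render time scales roughly inversely with benchmark score (the numbers below are made-up placeholders, not real benchmark results):

                    ```python
                    def estimate_time(my_time_s: float, my_score: float,
                                      target_score: float) -> float:
                        # assume render time scales inversely with benchmark score
                        return my_time_s * my_score / target_score

                    # e.g. a scene takes 600 s on hardware scoring 12000;
                    # a candidate machine scores 30000 (hypothetical numbers):
                    print(estimate_time(600, 12_000, 30_000))  # -> 240.0 s
                    ```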

                    I am aware that there are some small differences between the two engines, but they produce pretty close output nowadays, don't they?
                    Not at all.
                    GI is still going to be GI, and so are diffuse, the GGX BRDF, and so on, but the visual differences in *many* a scenario are important enough to warrant calling them two different engines altogether (as opposed to an optional switch), and we strongly suggest starting afresh with one or the other.
                    We'd really, really, really wish we could just say "toggle this on for extra speed"; alas, we cannot in good faith.

                    They do, at least to my eyes: when I tested V-Ray GPU again a few days ago, my conclusion was that today's V-Ray GPU is not like the V-Ray RT of old, where most features and shaders didn't work; many of the scenes I tested looked exactly the same to my eyes. So the benchmark scenes don't need to render 100% exactly the same, pixel for pixel. I am sure that, if necessary, you could adapt them so they look very close to each other, if not the same, without much tweaking.
                    What may be a viable amount of difference for your scene could be a deal-breaker for someone else.

                    What is true is that V-Ray GPU is mature and production-ready: huge amounts of work from a very talented pool of coders went into making it so. It's also RTX-enabled, squeezing more out of one's current RTX card, and apparently out of the next-gen GPUs too.
                    A quicker version of the CPU engine it is not, though.

                    Lele
                    Trouble Stirrer in RnD @ Chaos
                    ----------------------
                    emanuele.lecchi@chaos.com

                    Disclaimer:
                    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                    • #11
                      What if I tested the scenes you tested here, but on my CPU instead of an RTX card? Would that be a valid comparison? Did you just load these scenes and hit RENDER or did you make changes to the materials and lighting before rendering?

                      BTW, slight correction. Threadripper 1950x costs $600-650, not $1000.

                      Originally posted by ^Lele^ View Post
                      A 1k$ 1950x is, f.e., quicker than a 1k$ 2080Ti on selected tasks.
                      Last edited by Alex_M; 06-12-2019, 09:01 AM.
                      Aleksandar Mitov
                      www.renarvisuals.com
                      office@renarvisuals.com

                      3ds Max 2023.2.2 + Vray 7 Hotfix 1
                      AMD Ryzen 9 9950X 16-core
                      96GB DDR5
                      GeForce RTX 3090 24GB + GPU Driver 566.14



                      • #12
                        Originally posted by Alex_M View Post
                        What if I tested the scenes you tested here, but on my CPU instead of an RTX card? Would that be a valid comparison?
                        Eheh, nope, it wouldn't.
                        CPU and GPU would look different even when working on compatible stuff (maps, shaders, and such).
                        They'd have done different work to achieve a similar result, but not quite the exact same one.
                        That's why Vlado's blog post only compared GPU to GPU.

                        Did you just load these scenes and hit RENDER or did you make changes to the materials and lighting before rendering?
                        For these benchmarks we used the following scenes (some of the Evermotion scenes are modified from their originals):
                        We made them compatible with GPU, making sure we got all the shaders rendering, and with decent settings.
                        We did not look much at the visual quality of the renders at all (as I'm sure you'll notice in one or three. XD).

                        BTW, slight correction. Threadripper 1950x costs $600-650, not $1000.
                        Ah, thanks!
                        See? An even more tempting value proposition, IF your workload falls among those which are going to be quicker (or if it just needs the system RAM).

                        This thread makes me think that perhaps it's wiser to aim a bit more at the best-buy target, rather than hitting the very top.
                        Even if with some limitations, one could nearly buy two of the best-value items for the price of one top one (be it a CPU, a GPU, or whathaveyou), and get a good 75% of the performance each, when not more.
                        I mean, there's always DR, and a small farm would work marvels.
                        To top it off, one would have very competent hardware for both CPU and GPU, and would be able to assign a job to one engine or the other depending on that job's specific requirements, or other contingent issues.

                        edit: a mid-range 20xx card would also play Crysis quite well, I'm sure.
                        Last edited by ^Lele^; 06-12-2019, 10:30 AM.
                        Lele
                        Trouble Stirrer in RnD @ Chaos
                        ----------------------
                        emanuele.lecchi@chaos.com

                        Disclaimer:
                        The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                        • #13
                          I believe this thread answered a question for me regarding a current experiment. I am measuring the electricity usage of each of my machines as they each render a different scene. I am comparing the same scene in CPU progressive and CPU bucket mode across my current workstation, my previous workstation, my son's computer, and my two dedicated render nodes. My current workstation is the only machine that can currently do GPU rendering, and on it I am comparing GPU bucket, GPU progressive, GPU+CPU bucket, and GPU+CPU progressive. This is part of a larger experiment to find the break-even point between the cost of buying/building local render nodes and using render farm services like V-Ray Cloud, RebusFarm, etc.
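
                          For what it's worth, the break-even arithmetic reduces to something like this (a rough sketch; every number is a placeholder):

                          ```python
                          def break_even_frames(node_cost: float,
                                                farm_per_frame: float,
                                                local_per_frame: float) -> float:
                              # frames after which a purchased node beats renting farm time
                              saving = farm_per_frame - local_per_frame
                              return float("inf") if saving <= 0 else node_cost / saving

                          # e.g. a $2500 node, $1.50/frame on a farm,
                          # $0.04/frame in electricity:
                          print(break_even_frames(2500, 1.50, 0.04))  # ~1712 frames
                          ```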

                          One of my questions was whether there is some specific formula to derive equivalent subdivision samples for V-Ray Next and V-Ray Next GPU, but it sounds like it's just eyeballing the noise levels in renders from either engine.



                          • #14
                            V-Ray CPU has an MSR of 6, whereas GPU (and hybrid) doesn't have MSR (i.e. it's 1).
                             However, yes: the noise level calculations differ a bit, so it's all a bit more difficult.
                            You can surely measure noise levels in the outputs, though, and so come to an exact match.

                             This, however, would still produce *different* results in the images, subtle as the differences may be.
                             I am not sure whether that would make your testing moot or not.
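
                             Measuring it is simple enough to script; a minimal sketch, assuming numpy and imageio are available, with placeholder file names (texture detail inflates the figure too, so only compare renders of the same scene):

                             ```python
                             import numpy as np
                             import imageio.v3 as iio

                             def noise_level(path: str, tile: int = 8) -> float:
                                 # mean local standard deviation over small tiles
                                 img = iio.imread(path).astype(np.float64)
                                 h = img.shape[0] // tile * tile
                                 w = img.shape[1] // tile * tile
                                 img = img[:h, :w]
                                 tiles = img.reshape(h // tile, tile, w // tile, tile, -1)
                                 return float(tiles.std(axis=(1, 3)).mean())

                             # render the same scene with both engines, then tweak
                             # samples until the two figures roughly match:
                             print(noise_level("cpu_render.png"),
                                   noise_level("gpu_render.png"))
                             ```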
                            Lele
                            Trouble Stirrer in RnD @ Chaos
                            ----------------------
                            emanuele.lecchi@chaos.com

                            Disclaimer:
                            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



                            • #15
                              One of the test scenes already has an MSR of 1. It's actually the "Mixer Animation" scene provided on the V-Ray Cloud page. The Max subdivs in the image sampler is set to 24 (based on a long back-and-forth email conversation I had with Vladimir Dragoev). Knowing that the actual number of samples is the square of this number, I tried entering 576 as the GPU samples limit. This produced a substantially noisier image.

                              While I would love a directly equivalent sample count for CPU and GPU, for the purposes of my test it's not necessary: as you stated, simply switching from CPU to GPU does not produce a visually identical render anyway. The idea is really to match the "quality" as closely as possible, to make roughly accurate comparisons of power consumption across different hardware configurations and render settings.
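
                              The power side then reduces to watt-hours per frame; a small sketch of the conversion I'm using (wattage and tariff are placeholders):

                              ```python
                              def energy_cost_per_frame(avg_watts: float,
                                                        render_s: float,
                                                        usd_per_kwh: float = 0.12) -> float:
                                  kwh = avg_watts * render_s / 3_600_000  # W*s -> kWh
                                  return kwh * usd_per_kwh

                              # e.g. a machine drawing 450 W on an 8-minute frame:
                              print(f"${energy_cost_per_frame(450, 8 * 60):.4f}/frame")
                              # -> $0.0072/frame
                              ```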

                              And thank you for replying so quickly to a thread that hasn't had new activity in almost a year.

