Compare GPU vs CPU

  • Compare GPU vs CPU

    Hi,

    I wanted to see if I can get a faster rendering with my CPU (Threadripper 3990x) in V-Ray CPU 6.2 or with my new GPU (RTX 4090) in V-Ray GPU 6.2 and have some questions:

    1: I have a scene that renders identically on both engines; however, it is hard to match noise levels, and the results in terms of noise amount seem very different.
    Is there any official/approved way to set up the scene for CPU or GPU rendering so that both render very similarly in terms of noise, so that I can get an accurate comparison of render time vs. quality?

    2: Also, so far the performance of the 4090 in V-Ray GPU seems not to be up to what I expected. For example, the image in IPR takes much longer to clear up compared to CPU IPR. Also, overall noise levels seem much better on CPU, with much less contrasty noise even during very early passes.
    I'm completely new to V-Ray GPU, so maybe I'm doing something wrong...

    Cheers
    Check out my FREE V-Ray Tutorials

  • #2
    Originally posted by JonasNöll View Post
    Hi,

    I wanted to see if I can get a faster rendering with my CPU (Threadripper 3990x) in V-Ray CPU 6.2 or with my new GPU (RTX 4090) in V-Ray GPU 6.2 and have some questions:

    1: I have a scene that renders identically on both engines; however, it is hard to match noise levels, and the results in terms of noise amount seem very different.
    Is there any official/approved way to set up the scene for CPU or GPU rendering so that both render very similarly in terms of noise, so that I can get an accurate comparison of render time vs. quality?
    The noise patterns will not match, as the quasi-random number sequences are slightly different (because... reasons. /me waves hands looking clueless).

    Further, not everything is a 1:1 match between engines, for a given intent.
    E.g. computing diffuse GI with an LC on GPU will produce different results than with an LC on CPU; if it's BF/BF, the different patterns will show, and also marginally influence distribution, and so on.
    The same applies to shaders and lights: even with our devs' best efforts, not everything can be matched exactly, pixel by pixel; the goal of complete parity was officially abandoned many years ago.

    As an added intricacy, the GPU and CPU engines go about tracing differently, with the GPU running at what is, essentially, a fixed min shading rate (MSR) of 1, while the CPU defaults to an MSR of 6.
    This produces a different distribution of noise across the image, with the GPU generally better at fine geometric detail/moblur/DoF by virtue of the higher number of camera rays, while the CPU has to be made to behave that way by reducing its MSR.
    That will, however, often hamper CPU performance, unless one is doing it for good reasons (i.e. fine geometric detail, moblur or DoF).
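
    As a purely illustrative back-of-the-envelope sketch (the real samplers are adaptive, so treat the numbers as a rough mental model rather than official figures), here is how a fixed ray budget would split between camera and secondary rays at MSR 1 vs. MSR 6:

    ```python
    # Purely illustrative: V-Ray's min shading rate (MSR) roughly controls how many
    # secondary (shading) rays are traced per camera ray. The real samplers are
    # adaptive and engine-specific; this only shows why MSR 1 favours camera rays.

    def ray_split(msr, total_rays=1_000_000):
        """Split a fixed ray budget into camera vs. secondary rays for a given MSR."""
        camera = total_rays / (1 + msr)   # one camera ray per (1 + MSR) rays traced
        return camera, total_rays - camera

    for engine, msr in [("GPU (fixed MSR ~1)", 1), ("CPU (default MSR 6)", 6)]:
        camera, secondary = ray_split(msr)
        print(f"{engine}: {camera / (camera + secondary):.0%} of the rays are camera rays")
    # -> GPU: ~50% camera rays (finer AA/DoF/moblur detail per pass);
    #    CPU: ~14%, i.e. more shading/GI work per camera ray.
    ```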

    Last but not least, performance for the two engines will be *strongly* correlated with the kind of task: divergent tasks (SSS, many-bounce volumes, etc.) will tank the GPU, while if one renders simpler diffuse+specular with few bounces, one can find a ton of performance even from a simple GPU.

    2: Also, so far the performance of the 4090 in V-Ray GPU seems not to be up to what I expected. For example, the image in IPR takes much longer to clear up compared to CPU IPR. Also, overall noise levels seem much better on CPU, with much less contrasty noise even during very early passes.
    I'm completely new to V-Ray GPU, so maybe I'm doing something wrong...

    Cheers
    The primary-to-secondary-rays argument above is also responsible for the very different look, pass by pass, of the two engines.
    The GPU will be noisier to begin with, but each pass will be quicker to complete (compared to the CPU's default MSR of 6), and it will also serve other purposes.

    TL;DR: two different engines, both very task dependent for performance.
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #3
      Awesome, thanks a lot for the detailed explanation! I understand both are completely separate engines and they also create different results. However, I want to get a rough estimate of whether my CPU or my GPU would be faster for rendering in general. So in my very "unscientific" test I took a scene and tried to match the noise until I roughly got a somewhat comparable result. Attached are 2 pictures, one with GPU and one with CPU. These are the results (minutes:seconds):

      -------------------------
      CPU
      Threadripper: 01:05
      -------------------------

      -------------------------
      GPU CUDA:
      4090: 01:01
      4090 + Threadripper: 00:54
      4090 + 3060: 00:52
      4090 + 3060 + Threadripper: 00:57
      3060: 03:08
      3060 + Threadripper: 02:24
      -------------------------

      -------------------------
      GPU RTX:
      4090: 01:02
      3060: Doesn't render
      -------------------------

      So these are my takeaways so far:
      - 4090 is roughly comparable with Threadripper in terms of speed (but at a much "cheaper" price)
      - Using CUDA with the CPU enabled actually only marginally increases the render speed and is probably not worth it
      - No difference in terms of speed between RTX and CUDA?
      - In IPR I definitely prefer the CPU, where you get a visually cleaner result much faster
      - CPU "feels" faster because you have a lot more buckets moving around

      Would that also roughly match your developers' view?
      Is there anything where the GPU would be much superior compared to the CPU? I heard that volumetric effects, DoF and blurry refractions should in general be faster on the GPU?
      Check out my FREE V-Ray Tutorials



      • #4
        I've tested something like this a few times in the past, and I always conclude that my CPU and GPU are somehow about the same speed. Maybe it's weird, but that's how it is, even though on paper the GPU should be much faster, and every time GPU rendering turns out slower than I expect. With the latest V-Ray, GPU rendering of animations is better because V-Ray doesn't reload bitmaps every time. But reducing noise is still really, really slow compared to, say, FStorm. And with the latest FStorm update, with its really fast loading of bitmaps (I don't know how he does it), it is on an absolutely different level from V-Ray. The main problem is memory: V-Ray takes much more memory than FStorm, and when you are low on VRAM or RAM, computation gets really slow. That is, in my opinion, the biggest lag in V-Ray GPU rendering. Scenes you can render without any problem in FStorm can't be rendered in V-Ray. And even when you have the RAM and the render continues without any problem, reducing noise is still really slow. I don't know why; FStorm is somewhere else entirely, and I still don't know why, when the RTX 4090 is on paper a beast, it looks like V-Ray can't handle it properly.

        https://www.youtube.com/watch?v=lV6aUqLBn6A
        Last edited by Jiri.Matys; 25-01-2024, 01:23 PM.
        AMD TR 7980X, 256GB DDR5, GeForce RTX 4090 24GB, Win 10 Pro
        ---------------------------
        2D | 3D | web | video
        jiri.matys@gmail.com
        ---------------------------
        https://gumroad.com/jirimatys
        https://www.artstation.com/jiri_matys
        https://www.youtube.com/channel/UCAv...Rq9X_wxwPX-0tg
        https://www.instagram.com/jiri.matys_cgi/
        https://www.behance.net/Jiri_Matys
        https://cz.linkedin.com/in/jiří-matys-195a41a0



        • #5
          Hi, yeah, I was testing out FStorm yesterday and it has blown my mind so far. Super fast feedback and render times, and it doesn't even support denoising. Especially with things that normally take a huge amount of time, like refractions, dispersion, scattering, volumes. Need to do more testing though, but so far it looks super promising!
          For V-Ray GPU I love the much more mature feature set/framebuffer, which is now very close to the CPU version. And you have very detailed control over all aspects of your scene through excludes/includes and so on. But I was probably expecting a little bit more in terms of speed.
          Need to do more testing, but so far it seems I prefer to use the CPU version, as the IPR looks much cleaner and seems to update faster, and the render time in my setup is very comparable.
          Also have to test out Redshift to see how that performs...
          Check out my FREE V-Ray Tutorials



          • #6
            Yes, my experience is that the GPU is great for rendering simple animations. I mean, you can use environment fog and so on, but when you have too many textures, objects and so on, the initialization phase takes very long and reducing the noise does too. With a small number of objects the GPU is great. When I have a bigger scene I use the CPU or try Vantage, which looks very promising now.
            FStorm is incredibly fast even without denoising. It looks like FStorm keeps reducing noise at the same speed the whole time, at the beginning and at the end. But V-Ray (CPU and GPU) reduces noise quickly in the first few passes, then it gets much slower, and in some cases (parts of the image) it takes forever. I don't see this behavior in FStorm (GPU) or in Corona (CPU).
            AMD TR 7980X, 256GB DDR5, GeForce RTX 4090 24GB, Win 10 Pro
            ---------------------------
            2D | 3D | web | video
            jiri.matys@gmail.com
            ---------------------------
            https://gumroad.com/jirimatys
            https://www.artstation.com/jiri_matys
            https://www.youtube.com/channel/UCAv...Rq9X_wxwPX-0tg
            https://www.instagram.com/jiri.matys_cgi/
            https://www.behance.net/Jiri_Matys
            https://cz.linkedin.com/in/jiří-matys-195a41a0



            • #7
              Originally posted by JonasNöll View Post
              Would that also roughly match your developers' view?
              I wrote you what the developer view of benchmarking the two is: it's nuanced, not binary.
              Is there anything where the GPU would be much superior compared to the CPU? I heard that volumetric effects, DoF and blurry refractions should in general be faster on the GPU?
              Volumetrics will depend on their inherent characteristics (e.g. density), tracing depth and lighting scenarios; DoF/moblur are quick on CPU as well, if one knows what one's doing (and the effects themselves mean nothing without context: what is being confused or blurred matters more); blurry refractions specifically converge quicker on GPU (which says nothing of how the image will converge: that will depend on what is being refracted).
              As I wrote: non-divergent tasks are best suited to GPU computing, currently.
              Your best bet for speed is to keep the scene in its least divergent state possible by carefully choosing features, tuning shaders, bounces and so on.


              In your posted test scene, noise from GI is distributed unevenly between the images, with the CPU one visually showing enormous clumps where the GPU is clean.
              Vice-versa, the GPU shows it's missing some reflections on the floor, and so on and so forth.

              I understand you really want a simple answer, but it's not there to be had.
              The scene you tested lacks the parts a GPU would suffer with, and a CPU would have fun with.
              So, if that's your kettle of fish, you're cool, the GPU is usable.
              But if you're doing a different kind of work, need tons of RAM, hundreds of GI bounces, deep SSS and so on, the results would flip on their heads.
              The same goes for the noise shape, convergence, and so on: it's all highly dependent on shaders and textures used for their setup, etc.etc.
              Mileage varies by the view, never mind the scene.

              This is why we do not support a binary view of things, but rather tell people to experiment specifically for their tasks.
              Lele
              Trouble Stirrer in RnD @ Chaos
              ----------------------
              emanuele.lecchi@chaos.com

              Disclaimer:
              The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



              • #8
                Originally posted by Jiri.Matys View Post
                on paper the GPU should be much faster
                There are things where a GPU always outperforms a CPU significantly, but rendering is not necessarily one of them. As Lele already explained, some scenes may benefit more than others, and others won't benefit at all.
                The perception problem comes from a decade of "bad" marketing, like "a GPU can render 10x/100x/1000x faster than a CPU". Well, it doesn't, or at least it can be much faster only on a subset of scenes.



                3D Scenes, Shaders and Courses for V-ray and Corona
                NEW V-Ray 5 Metal Shader Bundle (C4D/Max): https://www.3dtutorialandbeyond.com/...ders-cinema4d/
                www.3dtutorialandbeyond.com
                @3drenderandbeyond on social media @3DRnB Twitter



                • #9
                  Originally posted by sirio76 View Post

                  There are things where a GPU always outperforms a CPU significantly, but rendering is not necessarily one of them. As Lele already explained, some scenes may benefit more than others, and others won't benefit at all.
                  The perception problem comes from a decade of "bad" marketing, like "a GPU can render 10x/100x/1000x faster than a CPU". Well, it doesn't, or at least it can be much faster only on a subset of scenes.


                  This is not what I meant.
                  AMD TR 7980X, 256GB DDR5, GeForce RTX 4090 24GB, Win 10 Pro
                  ---------------------------
                  2D | 3D | web | video
                  jiri.matys@gmail.com
                  ---------------------------
                  https://gumroad.com/jirimatys
                  https://www.artstation.com/jiri_matys
                  https://www.youtube.com/channel/UCAv...Rq9X_wxwPX-0tg
                  https://www.instagram.com/jiri.matys_cgi/
                  https://www.behance.net/Jiri_Matys
                  https://cz.linkedin.com/in/jiří-matys-195a41a0



                  • #10
                    Originally posted by ^Lele^ View Post
                    I understand you really want a simple answer, but it's not there to be had.
                    You are probably right about that, as it is heavily scene/use-case dependent. However, when trying to benchmark, it would be nice to have a simple "Hardware A is roughly 2x faster than Hardware B" type of answer.
                    So I was thinking: instead of my initial idea of eyeballing noise levels and then comparing render times, wouldn't it be better to flip that process around? Just open any given scene, do a 1-minute progressive render with CPU and with GPU, and afterwards compare the noise levels of the two pictures. The render time would be the same, but you can compare which result has less noise. That should probably give you a better answer as to whether that scene renders better on GPU or CPU...
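
                    Something along these lines, as a rough sketch (the file names are placeholders, and the high-pass standard deviation is just one possible noise proxy, not an official method):

                    ```python
                    # Equal-time idea: render the same frame for e.g. 60 seconds on CPU and on
                    # GPU, save both, then estimate residual noise as the standard deviation of
                    # a high-pass (image minus a blurred copy). File names are placeholders.
                    import numpy as np
                    from PIL import Image, ImageFilter

                    def noise_estimate(path):
                        img = Image.open(path).convert("L")                # luminance only
                        blurred = img.filter(ImageFilter.GaussianBlur(2))  # low frequencies
                        high_pass = np.asarray(img, float) - np.asarray(blurred, float)
                        return high_pass.std()                             # higher = noisier

                    for name in ("render_cpu_60s.png", "render_gpu_60s.png"):
                        print(name, noise_estimate(name))
                    ```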
                    Check out my FREE V-Ray Tutorials



                    • #11
                      Originally posted by JonasNöll View Post

                      You are probably right about that, as it is heavily scene/use-case dependent. However, when trying to benchmark, it would be nice to have a simple "Hardware A is roughly 2x faster than Hardware B" type of answer.
                      So I was thinking: instead of my initial idea of eyeballing noise levels and then comparing render times, wouldn't it be better to flip that process around? Just open any given scene, do a 1-minute progressive render with CPU and with GPU, and afterwards compare the noise levels of the two pictures. The render time would be the same, but you can compare which result has less noise. That should probably give you a better answer as to whether that scene renders better on GPU or CPU...
                      That's how we do it here.
                      https://linktr.ee/cg_oglu
                      Ryzen 5950, Geforce 3060, 128GB ram



                      • #12
                        Originally posted by oglu View Post

                        That's how we do it here.
                        Awesome, any hints on how you set "Rays per pixel" + "Rays bundle size"? Or do you just leave the default values?
                        Check out my FREE V-Ray Tutorials



                        • #13
                          We just use the defaults. It depends more on what's in the scene and how big (in terms of data) the scene is.
                          https://linktr.ee/cg_oglu
                          Ryzen 5950, Geforce 3060, 128GB ram



                          • #14
                            Originally posted by JonasNöll View Post

                            You are probably right about that, as it is heavily scene/use-case dependent. However, when trying to benchmark, it would be nice to have a simple "Hardware A is roughly 2x faster than Hardware B" type of answer.
                            So I was thinking: instead of my initial idea of eyeballing noise levels and then comparing render times, wouldn't it be better to flip that process around? Just open any given scene, do a 1-minute progressive render with CPU and with GPU, and afterwards compare the noise levels of the two pictures. The render time would be the same, but you can compare which result has less noise. That should probably give you a better answer as to whether that scene renders better on GPU or CPU...
                            Sadly, absolutely not: we explicitly advise against flipping the render engines, because reasons.
                            But don't take my word (or the devs'!) for it, use some image metric and try it out to your heart's content.
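
                            For example (just a sketch with hypothetical file names; any metric applied identically to both outputs will do), compare each equal-time render against a long, near-converged reference of the same frame:

                            ```python
                            # One possible metric, as a sketch: RMSE/PSNR of each equal-time render
                            # against a long, near-converged reference of the same frame. File names
                            # are placeholders; any consistent metric (RMSE, SSIM, ...) works as
                            # long as it is applied identically to both engines' output.
                            import numpy as np
                            from PIL import Image

                            def load(path):
                                return np.asarray(Image.open(path).convert("RGB"), np.float64) / 255.0

                            ref = load("reference_long_render.png")
                            for name in ("render_cpu_60s.png", "render_gpu_60s.png"):
                                mse = np.mean((load(name) - ref) ** 2)
                                psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
                                print(f"{name}: RMSE = {mse ** 0.5:.4f}, PSNR = {psnr:.1f} dB")
                            ```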
                            Lele
                            Trouble Stirrer in RnD @ Chaos
                            ----------------------
                            emanuele.lecchi@chaos.com

                            Disclaimer:
                            The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.

