  • CPU or GPU, I'm lost

    Hello everyone,

    I'm lost and I'd appreciate your help.
    I need to upgrade my old workstation; I'm currently working with a Threadripper 1950X and a 1070 Ti (yep, that's old).

    I mainly use V-Ray GPU and I like how fast it is compared to CPU rendering on my workstation. But is it still that much faster?
    Everybody here talks about the latest generation of AMD CPUs, and I can't find a comparison between the RTX 3090/3080 and the 3960X/3970X/3990X. V-Ray Benchmark doesn't even use the same metric for CPU and GPU, so the scores aren't directly comparable.

    So, which one is faster now? GPU rendering or CPU rendering?

    I was ready to put money on a future RTX 3080 20GB or 3090; I'm not sure anymore.

    Kind regards.

  • #2
    Well, if you were already working with V-Ray GPU and satisfied with it, there's no reason to switch to CPU.
    There are some reasons to stick with CPU, e.g. better support for plugins, but I guess you don't have that problem.

    mekene

  • #3
    Originally posted by FranklinTre:

    So, which one is faster now? GPU rendering or CPU rendering?

    I was ready to put money on a future RTX 3080 20GB or 3090; I'm not sure anymore.

    Many of us have the same issue: we can't gauge how to bias a new system, either GPU-heavy or CPU-heavy, since there are no official benchmarks comparing the two. It's very frustrating. As mentioned above, you'll just have to pick the one you're happy with and go with it. It sucks, but that's kind of how it is as far as I can tell. For this very reason I'm switching back to CPU-heavy for my next system, since I feel CPU is a more reliable and flexible renderer than GPU. And since I can't see any tests of a 3090 vs a 3990X, I just have to guess and play it safe.

    Website
    https://mangobeard.com/
    Behance
    https://www.behance.net/seandunderdale

  • #4
    seandunderdale theedge

    Thanks for both your replies.

    Indeed, there is a gap here... And yes, I would probably stick with GPU, since it seems way quicker to me, even though there are some bugs here and there (I've managed to deal with them over time).

    From what I can see, for about €1,500 you can buy either an RTX 3090 or a 3960X (without RAM and motherboard).
    Considering that the V-Ray Benchmark score of the 3960X (28,000 pts) is only double that of my 1950X (14,000 pts), that the RTX 3090 is about five times faster than the 1070 Ti (without even comparing technologies), and finally that on my machine a GPU render is 10 times faster than a CPU render: theoretically, the RTX 3090 should be about 25 times faster than the 3960X (10 x 5 = 50 times my 1950X, versus 2 times for the 3960X).
    That would also put the RTX 3090 10 times ahead of the 3990X, for a third of the price.
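    To make that back-of-envelope math explicit, here is a tiny sketch (the multipliers are the rough figures from this thread, not measured numbers, and linear scaling is my own simplifying assumption):

        # Relative render speed, with my 1950X as the 1.0 baseline.
        cpu_1950x = 1.0
        cpu_3960x = 2.0              # ~28,000 vs ~14,000 V-Ray Benchmark points
        gpu_1070ti = 10.0            # my scenes render ~10x faster on GPU than on the 1950X
        gpu_3090 = gpu_1070ti * 5    # assuming the 3090 is ~5x a 1070 Ti

        print(gpu_3090 / cpu_3960x)  # -> 25.0, i.e. ~25x a 3960X
        cpu_3990x = 5.0              # implied by the "10 times ahead" guess above
        print(gpu_3090 / cpu_3990x)  # -> 10.0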

    My only concern is VRAM. I'm not sure 24 GB of VRAM is enough for what I do; I struggle so much with my poor 8 GB ^^



  • #5
    If the 1070 is the only GPU in that workstation, you probably have only about 6-7 GB left for V-Ray GPU. So you should be more than fine with 24 GB, especially if you keep using the 1070 to run your display(s).
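    If you'd rather measure than guess how much of the card the displays are eating, NVIDIA's NVML can report per-GPU memory use. A minimal sketch using the pynvml Python bindings (device index 0 is an assumption; adjust it on multi-GPU boxes):

        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # values are in bytes
        gib = 1024 ** 3
        print(f"total {mem.total / gib:.1f} GiB, "
              f"used {mem.used / gib:.1f} GiB, "
              f"free for rendering {mem.free / gib:.1f} GiB")
        pynvml.nvmlShutdown()
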
    Marcin Piotrowski
    youtube

  • #6
    You're right.
    I mean, if you're careful and clean up your scene, it's okay.

    If you use large surface displacement, even at 1K, it won't even start rendering; that's the only issue. I can render a billion polygons more or less in real time, even with my good old GPU.

            • #7
              if you can fit your work in a current GPU vram overhead then go for it....cant believe you can use GPU with a 1050 Lol
              if not dont bother and get a 3990x or 3970x

              hopefully the OOC will work better in future like the Redshift one and we dont be so reliant on vram levels or havign to NVLink 2 x 3090s together to get a complex scene working. its also lacking a lot of the vray CPU features

  • #8
    I splashed out on a 2080 Ti and it's basically just a gaming card now (which, btw, is just great), as the fact that V-Ray GPU is lacking in so many areas ends up being a time-waster and more than a bit of a wind-up.
    The documentation on what works/doesn't work is sadly very lacking, so one is constantly taking a risk when starting any project intending to use it, in my opinion.
    I vote CPU, and wait another 10 years until GPU catches up, by which time it won't matter and robots will be our masters.
    https://www.behance.net/bartgelin

  • #9
    The thing is, you have to consider V-Ray GPU for what it is: a separate render engine with its own strengths and weaknesses. It's not just a faster version of V-Ray CPU; if it were, everyone would have ditched V-Ray CPU by now.

    mekene

  • #10
    Originally posted by fixeighted:

    I vote CPU, and wait another 10 years until GPU catches up

    I think there is an important takeaway in this statement. GPU is likely never going to catch up; it hasn't yet, at least not at this price point. You can get killer GPU rigs with tons of processing power that cost an insane amount of money, but the severe limitations of GPU don't justify the spending, and CPU progress doesn't sit idle either. I think the perfect combo is still GPU plus CPU; there is no single one-size-fits-all scenario. GPU is just too limited on VRAM to start with. Yes, CPU is slower in some cases, but it is a lot more versatile. Get both, the best you can buy; I think that's the way to go. I've done this many times over the last 10 years and it has worked well.

    Dmitry Vinnik
    Silhouette Images Inc.
    ShowReel:
    https://www.youtube.com/watch?v=qxSJlvSwAhA
    https://www.linkedin.com/in/dmitry-v...-identity-name

  • #11
    If you can hold off a few months, it's possible they will release a new Threadripper in Feb 2021, and you might find the 3990X becomes a bit more affordable. Pair that with a 3080 Ti (hopefully coming soon with 20 GB of VRAM) and you'll have a very robust system. This is my plan anyway, assuming any new Zen 3 Threadripper isn't a vast improvement, in which case I'd maybe need to get that instead. All speculation, however, since neither of these is official.

    I think right now is a very difficult time to know how long to wait for the latest and greatest gear. Obviously nobody wants to spend top dollar on a rig only to find there would have been a 30% improvement if they had waited a few months, but that's where we are!

    Website
    https://mangobeard.com/
    Behance
    https://www.behance.net/seandunderdale

  • #12
    FranklinTre I'm using 2x 2080 Ti all day and I'm quite happy (product design; large train interiors/exteriors). Today I jumped to CPU for a test: a scene that the GPUs render in 13 min took 1 h 20 min on my 2950X CPU, roughly 6 times slower. I was shocked.

    Then I looked at this CPU benchmark site:
    https://www.cpubenchmark.net/high_end_cpus.html

    My weak CPU, the 2950X ... ~30,000 points
    A good CPU (but not at the highest price) ... ~60,000 points

    So if I used a much better CPU, the CPU rendering of this scene would be about twice as fast, but still roughly 3 times slower than my 2x 2080 Ti.

    Two 2080 Tis are about as fast as one 3090.
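    As a quick sanity check of that scaling (the times and scores are the ones above; assuming render time scales inversely with the benchmark score, which is only a rough approximation):

        # Estimate render time on a faster CPU from benchmark scores.
        gpu_minutes = 13                 # 2x 2080 Ti, measured
        cpu_minutes = 80                 # 2950X, measured (1 h 20 min)

        score_2950x = 30_000
        score_better_cpu = 60_000        # per cpubenchmark.net, roughly double

        est_minutes = cpu_minutes * score_2950x / score_better_cpu
        print(est_minutes)               # -> 40.0 minutes
        print(est_minutes / gpu_minutes) # -> ~3.1, still ~3x slower than the GPUs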

    I also like that I can easily upgrade my GPU hardware; no need to buy a whole computer with an OS, no need for DR.

    If you feel comfortable with GPU, I would stay with it.
    www.simulacrum.de ... visualization for designers and architects

  • #13
    Good point Micha, but as you say, you are doing the same kind of work repeatedly, so you have a workflow that works for it, whose expectations and limitations you know.

    The difficulty comes when one needs to do something else that may or may not work with GPU; this is not always something that can be known beforehand, so it is a real risk to productivity/time/money.
    Benchmarks are wonderful, but they are mostly made with the intention of selling something (excuse my cynicism).

    It all comes down to what specific capabilities you need and what is most cost-effective and problem-free for that particular project.

    The issue with that methodology is that it is genuinely hard (for me at least) to remember what works and what doesn't from one iteration of the software to the next, if I am switching between methods.
    Again, documentation on this is lacking imo, and that only leads to more confusion. I see the same questions asked again and again, so there is clearly an issue.

    I really think we need a clear, side-by-side table of V-Ray CPU/V-Ray GPU capabilities, which would undoubtedly enable users to make better decisions before beginning any project. This should not be a difficult thing to do.
    https://www.behance.net/bartgelin

  • #14
    fixeighted Right, being able to choose between both ways is an advantage when a wide range of projects has to be rendered. To me it looks like the future is getting brighter and brighter for GPU rendering: the compute power already seems to be stronger, and the available VRAM grows with every new graphics card generation. I wish Chaos Group would put all its energy into making GPU rendering feature-complete as fast as possible; CPU could then become more and more of a fallback option, only for when VRAM or unsupported features are a problem. VRAM is Nvidia's job. I ask myself: when will GPU rendering be feature-complete? My impression is that the gap isn't closing as fast anymore; maybe the last differences are too difficult to solve. A dream would be if Vlado could say that in one year both engines will provide the same features.

    It's interesting to compare the CUDA benchmark scores:

    CPU 2950X ... 115
    GPU 1080 Ti ... 257
    GPU 2080 Ti ... 360
    GPU 3090 ... 750 (not on my PC)

    OK, CPU in CUDA mode is only an emulation, but my little test of switching from GPU to CPU points in a clear direction, no?
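    For what it's worth, the ratios of those scores line up with the render test above (a quick check using only the numbers just quoted):

        # Ratios of the V-Ray CUDA benchmark scores relative to the 2950X.
        scores = {"2950X (CPU)": 115, "1080 Ti": 257, "2080 Ti": 360, "3090": 750}
        baseline = scores["2950X (CPU)"]
        for name, s in scores.items():
            print(f"{name}: {s / baseline:.1f}x")
        # Two 2080 Tis: 2 * 360 / 115 ~= 6.3x, close to the measured ~6x gap.
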
    www.simulacrum.de ... visualization for designers and architects

  • #15
    A feature-parity overview would be great. With 24 GB on the 3090, memory is becoming less and less of an issue in my case, but differences in features, how Forest Pack works, etc., still matter. On the other hand, GPU power gives access to options such as Lavina and Unreal.

    My main concern with GPU is actually the initial startup, when the scene is loaded into memory, since I do frequent look tests.
