XEON vs. i7 again - TESTING INTERIOR SCENE


  • I did quite a lot of experimenting with CPU vs GPU render times myself as well. I'd expect this scene to render fastest on the CPU with normal V-Ray settings too. V-Ray RT on GPUs seems to gain a lot of strength in brute force calculations, especially with complex geometry. A clean, simple scene such as this benchmark doesn't show that, but large scenes with a lot of trees, for example, would.



    • Originally posted by jstrob View Post
      Ok, but 5 GTX Titan X is a setup around $7970 CAD here, and a 4930k CPU is only $948, so that's not really a fair comparison. You would need to compare your GPU setup to a dual-Xeon setup with the E5-2696 v3 or something in the same price range as the 5 Titan X. My guess is that a CPU at the same price would be faster than the GPUs.

      By the way, did you do both renders with V-Ray RT (GPU and CPU)?
      I agree the comparison is a bit off, but in the same way that some people have old CPUs and others have dual Xeons in this same thread, my point was to eventually start a second benchmark using BF rather than IR and compare CPU against GPU.
      That's exactly why I posted my scene above in BF for CPU and GPU, so that people with different rigs (including those who have more powerful CPU rigs like dual Xeons) could bring their info to the mix.

      My guess is that even a 2696 v3 (or 2699 v3, as it's the same CPU) will not outperform my 5x Titan X setup. From my initial calculations, a 2696 v3 should be just above 1 min (1 min 5 s), so my rig should still be twice as fast, but I'd like to see a render to be sure of this.

      For CPU, I used V-Ray Adv as the production renderer as usual, as this is the best-optimized way to render on the CPU.
      For GPU, I used RT GPU as the production renderer, as this is the only way to render through the GPU, and rendering as production ensures the stats aren't biased.
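
      One rough way to make that kind of estimate is to scale a measured render time by the ratio of multi-core benchmark scores (e.g. Cinebench R15), which assumes the renderer scales the same way the benchmark does. A minimal sketch, with placeholder numbers rather than figures from this thread:

      ```python
      # Back-of-the-envelope estimate: assume render time scales inversely with
      # multi-core throughput (e.g. a Cinebench R15 score). All numbers below
      # are illustrative placeholders, not measurements from this thread.

      def estimate_render_time(measured_seconds, measured_score, target_score):
          """Scale a measured render time by the ratio of benchmark scores."""
          return measured_seconds * measured_score / target_score

      # Example: a CPU that scores ~1500 and renders the scene in 180 s
      # suggests a ~4500-scoring CPU would land around 60 s.
      print(estimate_render_time(180.0, 1500, 4500))  # 60.0
      ```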

      Stan
      3LP Team



      • Originally posted by 3LP View Post
        My guess is that even a 2696 v3 (or 2699 v3, as it's the same CPU) will not outperform my 5x Titan X setup. From my initial calculations, a 2696 v3 should be just above 1 min (1 min 5 s), so my rig should still be twice as fast, but I'd like to see a render to be sure of this.
        Stan
        The time for 2x Xeon E5-2696 v3 on the "CPU_BF" setup is 1 min 30 s.

        Mateusz



        • Great, thanks for this feedback, really instructive.
          My estimate for CPU BF was actually better than I thought.

          Based on this, the GPU setup is nearly 3 times faster than one of the most powerful CPU rigs available right now.
          That's with only 5 cards; we will try to hook another 2 onto the rig early next year.

          Stan
          3LP Team



          • Originally posted by 3LP View Post

            5x GTX Titan X OC :

            Note that this is BF, not IR! Imagine what the speed will be when IR is ported to the GPU...

            CPU : 4930k OC 3.9 GHz :


            Stan

            Is there a reason why the shadows on GPU are very hard and stencil-like compared to the CPU version?
            https://www.behance.net/Oliver_Kossatz



            • The scene is way too simple for a comparison; on a complex scene the GPU will take a big hit compared to the CPU. Loading the scene and textures alone can take 5 minutes or more, and cleaning up fine noise will take forever on the GPU.

              I would test something more complex, a real-world interior scene at 4K at least.
              "I have not failed. I've just found 10,000 ways that won't work."
              Thomas A. Edison



              • Originally posted by eyepiz View Post
                The scene is way too simple for a comparison; on a complex scene the GPU will take a big hit compared to the CPU. Loading the scene and textures alone can take 5 minutes or more, and cleaning up fine noise will take forever on the GPU.

                I would test something more complex, a real-world interior scene at 4K at least.
                ^Agreed that loading the scene takes more time for the GPU process. But the more complex a scene gets, the better the GPU does compared to the CPU in my experience. True, for large monochrome surfaces it can take a while before the noise reaches acceptable levels, but when dealing with scenes that contain trees, grass or a lot of other small objects, the GPU is way, way faster.



                • Well I'll be happy to try out any scene to sort this out.
                  ATM, whatever production scene I've tried, the GPU has always been faster when using BF, obviously, as comparing BF on GPU against IR on CPU isn't fair.
                  Stan
                  3LP Team



                  • Originally posted by 3LP View Post
                    Well I'll be happy to try out any scene to sort this out.
                    ATM, whatever production scene I've tried, the GPU has always been faster when using BF, obviously, as comparing BF on GPU against IR on CPU isn't fair.
                    Stan
                    I've been running quad SC Titans and dual Xeons in the same system. In my tests with a complex production interior scene, the Xeons come out on top with BF/LC. The GPUs with BF/LC just take an enormous amount of time to clean up the noise to an acceptable level compared to the Xeons. It's like the last 20 percent of noise takes 10 times longer than the initial 80 percent.
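
                    That last point matches the usual Monte Carlo behaviour: noise falls roughly with the square root of the sample count, so the samples needed grow with the square of how clean you want the frame. A minimal illustration of that scaling (the numbers are made up):

                    ```python
                    # Monte Carlo noise scales roughly as 1/sqrt(samples), so hitting a noise
                    # target costs samples proportional to 1/target^2. Numbers are illustrative.

                    def samples_needed(reference_samples, reference_noise, target_noise):
                        """Estimate samples required to reach target_noise from a measured pair."""
                        return reference_samples * (reference_noise / target_noise) ** 2

                    # Example: an image at noise level 0.10 after 500 samples per pixel needs
                    # roughly 4x as many samples to reach 0.05, and 16x to reach 0.025.
                    print(samples_needed(500, 0.10, 0.05))   # 2000.0
                    print(samples_needed(500, 0.10, 0.025))  # 8000.0
                    ```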

                    Maybe I'm doing something wrong, but here's my top list of cons on the GPU.

                    1. The limited set of shaders and features on the GPU; having to convert or build a scene to be "RT friendly" can be time-consuming and frustrating.
                    2. The load times on the GPUs are way too long. I get almost instant feedback for previews on the CPU (progressive); on the GPU I have to wait about 5 minutes for the scene to load before I can even see anything.
                    3. The whole system becomes unbearably unresponsive and practically unusable unless I disable the Titan driving the display.
                    4. Reliability: I can't depend on a batch render not crashing when I'm away from the office. Xeons just don't crash, although RT GPU has been getting better about that.
                    5. Lack of 3rd-party plug-in support, like ColorCorrect and BerconMaps; these have been around forever.
                    6. The fact that I can't use LC in ActiveShade, and switching to production render resets all settings.
                    I know I can use LC from file, but switching render engines to recalc the LC if I make a tweak or switch cameras is a pain.

                    And I know this one isn't fair... but when I'm in a pinch for render speed, a well-optimized IM/LC setup will just annihilate the GPUs.

                    RT GPU is getting better and better with every SP update. I hope Chaos Group keeps those GPU enhancements coming!

                    -E
                    Last edited by eyepiz; 29-12-2015, 11:00 PM.
                    "I have not failed. I've just found 10,000 ways that won't work."
                    Thomas A. Edison



                    • 2m 35.6s
                      i7 5930K @ 3.5 GHz

                      2m 12.5s
                      i7 5930K @ 4.5 GHz

                      32 GB
                      V-Ray 3.20.03
                      Max 2015
                      Windows 10
                      Embree on



                      ------

                      For consumer CPUs costing a few hundred dollars, the i7 x930 series is definitely the best bang for the buck.
                      Last edited by Richard7666; 30-12-2015, 04:49 AM.



                      • I have not read the whole thread, so I hope I am not just repeating someone here :P

                        I hope people change the bucket size to optimize for their exact CPU when they compare render times. I have two workstations that are identical except for the CPUs. One has 2x Xeon 2687W v2, i.e. 2x8 cores @ 3.4 GHz. The other has 2x 2697 v2, i.e. 2x12 cores @ 2.7 GHz. The 2687W setup is considerably faster with the default bucket size, even though the 2697 setup has higher theoretical performance (cores * core speed). But by reducing the bucket size (I often end up at around size 10) I get something like 99% of the theoretical performance increase out of the 2697. The 2687W also benefits from a smaller bucket size, although to a lesser degree than the 2697.
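
                        For reference, the "cores * core speed" figure mentioned above works out as follows; a rough sketch that ignores IPC, turbo and memory effects:

                        ```python
                        # Rough theoretical throughput (sockets x cores x base clock), i.e. the
                        # "cores * core speed" figure above. Ignores IPC, turbo and memory effects.

                        def throughput(sockets, cores_per_socket, ghz):
                            return sockets * cores_per_socket * ghz

                        xeon_2687w_v2 = throughput(2, 8, 3.4)   # 54.4 core-GHz
                        xeon_2697_v2 = throughput(2, 12, 2.7)   # 64.8 core-GHz

                        # The 2697 v2 setup has ~19% more theoretical throughput, which is what
                        # the smaller bucket size helps to actually realise at the end of a frame.
                        print(xeon_2697_v2 / xeon_2687w_v2)  # ~1.19
                        ```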

                        When it comes to buying many off-the-shelf, low-end computers instead of just one or a few "big" machines, I always come to the conclusion that buying a few big ones is better. I have done the math on this a few times; a rough tally is sketched after the list of factors below.
                        First, you need licenses for everything, including the OS, 3D software, render engine, render-farm software, etc. This adds up for every machine. 3ds Max, for example, is not cheap. Maybe you render in other software packages as well, which will of course add to the cost.
                        Another big factor is RAM. 128 GB of RAM can be needed if you work on large scenes with many textures and want some headroom. 128 GB of RAM is a big part of the total system cost, and you need it in every computer, big or small. Granted, it's a small cost if you only need 16 or 32 GB.
                        Administration costs are substantial. For every machine you add, you multiply the chance of failure. And that is not even considering the HUGE gap in failure rates between low-end consumer parts and professional parts. The difference gets even bigger when you take into account that a system built and tested by professionals will be more stable, by that fact alone, than one built by a semi-amateur (with some small bolts of static shock added for good measure).
                        Of course you can argue that the impact of a single computer failure will be much smaller if you have many cheap nodes instead of a few big ones, but you still need to invest the same time to actually fix the problem, regardless of the initial impact it had on your ability to keep working.
                        Also, everything you need to "do" on the computers has to be done across all of them. This can of course be streamlined with scripts and/or management software ...
                        The electricity bill is a factor too, although it depends on where you live.
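
                        To make that concrete, here is a rough per-node tally along those lines; every figure in it is a hypothetical placeholder, so substitute your own quotes:

                        ```python
                        # Rough total-cost comparison: many cheap nodes vs. a few big ones.
                        # Every price below is a hypothetical placeholder -- plug in real quotes.

                        def farm_cost(nodes, hardware_per_node, ram_per_node,
                                      licenses_per_node, admin_hours_per_node, hourly_rate):
                            per_node = (hardware_per_node + ram_per_node + licenses_per_node
                                        + admin_hours_per_node * hourly_rate)
                            return nodes * per_node

                        # Placeholder scenario: 8 consumer nodes vs. 2 dual-Xeon nodes assumed to
                        # give similar combined throughput (an assumption, not a measurement).
                        many_small = farm_cost(nodes=8, hardware_per_node=1500, ram_per_node=300,
                                               licenses_per_node=1200, admin_hours_per_node=10,
                                               hourly_rate=50)
                        few_big = farm_cost(nodes=2, hardware_per_node=8000, ram_per_node=1200,
                                            licenses_per_node=1200, admin_hours_per_node=10,
                                            hourly_rate=50)

                        # Per-node licenses and admin time dominate the many-small-nodes bill.
                        print(many_small, few_big)  # 28000 21800 (placeholder figures)
                        ```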
                        Last edited by hardrock_ram; 06-02-2016, 06:40 AM.



                        • Just curious if anyone has two AMD Opteron "Abu Dhabi" 16-core CPUs to try and compare. On paper these CPUs look a lot more powerful for what they cost, but I haven't seen anyone put out a test with them. Are they just not that popular, or what's the story?
                          Dmitry Vinnik
                          Silhouette Images Inc.
                          ShowReel:
                          https://www.youtube.com/watch?v=qxSJlvSwAhA
                          https://www.linkedin.com/in/dmitry-v...-identity-name



                          • http://hwbot.org/submission/2482634_...86_se_1724_cb/
                            Only ~1700 in Cinebench R15; the 2699 (or 2696) does ~4500.

                            Stan
                            3LP Team



                            • 0m 50.4s without Embree

                              0m 45.9s with Embree

                              Dual Intel E5-2680 v3
                              64 GB RAM
                              Titan 6GB
                              Max 2016, V-Ray 3.30.04, Win 10 Pro
                              www.bpositive.dk



                              • Originally posted by Bpositive View Post
                                0m 50.4s without Embree

                                0m 45.9s with Embree

                                Dual Intel E5-2680 v3
                                64 GB RAM
                                Titan 6GB
                                Max 2016, V-Ray 3.30.04, Win 10 Pro
                                Good test there!
                                I have the same CPUs, but I haven't tried this benchmark scene with 3.3.
                                Chris Jackson
                                Shiftmedia
                                www.shiftmedia.sydney

