GPU benchmarks

  • I've updated my drivers to 14.4, V-Ray to 3.00.05, and the APP SDK to 2.9, on 3ds Max 2013 Design 64-bit and Win7. Still no go :/

    By "latest hardware" do you mean the Hawaii architecture?

    Comment


    • W9100 + W8100 @ V-Ray 2.40.04

      1 min 19.4 s
      If it was that easy, it would have already been done

      Peter Matanov
      Chaos

      Comment


      • W9100 - 2 minutes 55 seconds using 3ds Max 2014 and V-Ray 3.00.07

        W9100 + i74930K using APP - 1 minute 53 seconds using 3ds Max 2014 and V-Ray 3.00.07

        Comment


        • I have a time of 2 minutes 1 second for the GTX TITAN, which is slower than my previous times: in 3.00.06 I had 1 minute 45 seconds, and in 2.40.04 a time of 1 minute 37 seconds. With the various driver and V-Ray updates it's impossible to draw a conclusion, so I wondered if anyone else who has a TITAN could run a test on the latest 337.88 driver and V-Ray version 3.00.07.

          Thanks,

          [Attached image: V-Ray RT 3.00.07 - TITAN 337.88 - CUDA.jpg]

          Comment


          • Interesting how throwing in your CPU, albeit a powerful one, shaves a whole minute off your W9100 times. I wouldn't have expected any CPU to help that much.

            Comment


            • Originally posted by Valtiel View Post
              Interesting how throwing in your CPU, albeit a powerful one, shaves a whole minute off your W9100 times. I wouldn't have expected any CPU to help that much.
              CPUs are not too shabby; in fact a K6000 card (which has similar performance to a Titan) is only slightly faster for pure raytracing than a high-end dual-Xeon machine. The reason GPU renderers are so much faster is that they don't carry the baggage of years of existing code, and their code is naturally simpler because of the restrictions of the platform. This is demonstrated by the fact that some of the newer CPU renderers are quite fast too. Of course, a big advantage of GPUs is that you can simply put more cards in the system and it gets proportionally faster. Multi-CPU machines, on the other hand, are very expensive and the performance increase is not proportional, although one can always buy more machines. It might also be cheaper to add one or two graphics cards to an older machine than to buy one or more high-end CPU render machines.

              Best regards,
              Vlado
              I only act like I know everything, Rogers.

              Comment


              • Originally posted by JamesCutler View Post
                I have a time of 2 minutes 1 second for the GTX TITAN which is slower than previous times. In 3.00.06 I had 1 minute 45 seconds. In 2.40.04 a time of 1 minute 37 seconds. Due to various driver and V-Ray updates Its impossible to come to a conclusion. Just to clarify I wondered if anyone else who has a TITAN could run a test on the latest 337.88 driver and V-Ray version 3.00.07.

                Thanks,

                [ATTACH=CONFIG]19765[/ATTACH]
                Hi, I just tried a GTX Titan with the 337.88 drivers and V-Ray version 3.00.07, and it took 1 minute 36 seconds. With the 332.21 drivers it was a bit faster (1 m 34 s).

                Comment


                • Originally posted by morean View Post
                  Hi, I just tried a GTX Titan with 337.88 drivers and V-Ray version 3.00.07 and it took 1 minute 36 seconds. With the 332.21 drivers was a bit faster (1 m 34s)
                  Thanks, I thought my times were off. Assuming yours is not overclocked at all?

                  Comment


                  • 1 m 39.0 s on a Titan Black, CUDA, Max 2015, V-Ray 3.00.07, driver 337.88.
                    Seems slow compared to what I see here.
                    With both the 760 and the Titan Black, 1 m 15 s - that's 4032 cores. And 600 paths in 1.5 min.
                    Just out of curiosity I rendered with production at the same resolution with 1/4 adaptive, GI at defaults, noise 0.02 & 0.01 - i7 4930 @ 4 GHz: 2 m 43 s... similar quality to the GPU, maybe a little noisier.
                    Last edited by bennyboy; 21-06-2014, 02:34 AM.

                    Comment


                    • I found the cause of my slow time: it seems that if you have the render server enabled for distributed rendering, it slows down the GPU. It must reserve part of the GPU in readiness for DR, even though DR isn't being used. I turned off the render server and my render time went down to 1 m 39 s.

                      Comment


                      • ZOTAC GeForce GTX 780 Ti 3GB
                        1 m 43 s for 511 ppp

                        [Attached image: gpu.jpg]

                        I'm tempted to go with 4 of those...

                        Stan
                        3LP Team

                        Comment


                        • Originally posted by 3LP View Post
                          ZOTAC I'm tempted to go with 4 of those... Stan
                          You know, when we were first looking at GPUs I was looking really hard at the 780 Ti, and I'm kind of wishing we had gone with two of them instead of one Titan Black (or four of them)... we thought the extra RAM would be important, but with the limits of RT we're not using the GPU for big scenes like we thought we would - more for product demos and cars, where the scene is usually pretty small (or can be). But having double the cores would really change the game for our render frame rate - for animation demos you could get decent-quality renders at about 10 seconds a frame if you had 10,000 cores.
                          Just a word of advice to anyone in the market - worry more about core count than RAM, at least with the currently supported features, and plan on using RT for object demos/animations, not arch viz or anything with a complex environment (the shader support is just too poor right now).
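                          The core-count arithmetic above (render time roughly inversely proportional to CUDA core count) can be sketched as a quick estimate. The function name and the perfect-linear-scaling assumption are mine, not anything from V-Ray; real scaling is usually sublinear:

```python
def estimate_render_time(baseline_seconds, baseline_cores, target_cores):
    """Estimate render time assuming perfectly linear scaling with core count.

    Real-world scaling is sublinear (driver overhead, PCIe transfers,
    scene upload), so treat the result as an optimistic lower bound.
    """
    return baseline_seconds * baseline_cores / target_cores

# 1 m 15 s on a GTX 760 + Titan Black pair (~4032 CUDA cores),
# scaled to a hypothetical 10,000-core rig:
print(round(estimate_render_time(75, 4032, 10_000), 1))  # -> 30.2
```

                          Under this idealized model the benchmark scene would still take about 30 s at 10,000 cores; the 10-seconds-a-frame figure above presumably also assumes a lower quality target than the benchmark.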

                          Comment


                          • You're absolutely right, and that's exactly why I made that move.

                            I've been following RT GPU for years, and I keep adapting and building my workflow around the enhancements that are available at the time.
                            If you want to use RT GPU, the best way is to start the project from scratch with that in mind, bending your habits to suit the GPU's limitations.
                            That way you can usually see early on where RAM could become an issue and try to fix it accordingly.
                            The benefits are huge: even if you don't use the GPUs for final rendering, the 60-90% of the pipeline where they can help you is a massive time saver.
                            And for the same price, always try to get the most horsepower you can, even if you need to spend a bit of time tweaking; you will usually end up winning. The same theory works for i7 CPUs versus Xeons, but there it seems to be more common knowledge.

                            Cheers

                            Stan
                            3LP Team

                            Comment


                            • I did some GPU performance testing on viewport performance:

                              http://forums.chaosgroup.com/showthr...576#post625576

                              I also tested a Quadro K5000 in RT and got 14mins or so for the frame :-\
                              WerT
                              www.dvstudios.com.au

                              Comment


                              • Originally posted by bennyboy View Post
                                You know, when we were first looking at GPUs I was looking really hard at the 780 Ti, and I'm kind of wishing we had gone with two of them instead of one Titan Black (or four of them)... we thought the extra RAM would be important, but with the limits of RT we're not using the GPU for big scenes like we thought we would - more for product demos and cars, where the scene is usually pretty small (or can be). But having double the cores would really change the game for our render frame rate - for animation demos you could get decent-quality renders at about 10 seconds a frame if you had 10,000 cores.
                                Just a word of advice to anyone in the market - worry more about core count than RAM, at least with the currently supported features, and plan on using RT for object demos/animations, not arch viz or anything with a complex environment (the shader support is just too poor right now).
                                Interesting... I've never found RT to be useful for complex stuff yet either. Maybe if I built from the ground up for it, but it just seems unstable most of the time.
                                WerT
                                www.dvstudios.com.au

                                Comment
