Titan X Pascal benchmark


  • Titan X Pascal benchmark

    We got the new NVIDIA GTX Titan X Pascal in the office. It runs without any problems with V-Ray 3.40 (and even with the half-year-old V-Ray 3.30). After running some performance tests, it seems to be the fastest GPU money can buy. The results below are without any overclocking whatsoever.
    On top of the 12GB of GDDR5X, it is also the most energy-efficient GPU ever: for V-Ray RT GPU ray tracing, Pascal is almost twice as efficient as Polaris.

    [Attached image: titan_x_pascal.png]
    V-Ray fan.
    Looking busy around GPUs ...
    RTX ON

  • #2
    I want to see a denoiser benchmark with the new Titan XP.
    "I have not failed. I've just found 10,000 ways that won't work."
    Thomas A. Edison



    • #3
      The 12GB is very nice, but for me the interesting comparison is against 2 x 1080s, as that represents about the same investment as the Titan X Pascal.



      • #4
        Originally posted by eyepiz
        I want to see a denoiser benchmark with the new Titan XP.
        If you're doing a lot of denoising, the AMD cards are much faster.
        WerT
        www.dvstudios.com.au



        • #5
          Originally posted by Nicinus
          The 12GB is very nice, but for me the interesting comparison is against 2 x 1080s, as that represents about the same investment as the Titan X Pascal.
          I suppose you can expect the usual scaling RT has with multiple GPUs to apply.
          Lele
          Trouble Stirrer in RnD @ Chaos
          ----------------------
          emanuele.lecchi@chaos.com

          Disclaimer:
          The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



          • #6
            Originally posted by ^Lele^
            I suppose you can expect the usual scaling RT has with multiple GPUs to apply.
            I'm not familiar with that; I thought additional CUDA cores gave an almost linear effect?



            • #7
              Are there any benchmarks of the 1070 around? I'm wondering if two 1070s would be better than one 1080, for a 40% price premium.

              And what about RX 480s? We already know they are a heck of a lot faster for denoising... so perhaps four of those would be better bang for the buck.
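
              For a rough way to frame that bang-for-buck question, here is a sketch using published core counts as a crude throughput proxy. The prices are placeholders (plug in your local ones), raw core counts only loosely track V-Ray RT speed, and NVIDIA CUDA cores are not directly comparable to AMD stream processors.

              Code:
              # Crude perf-per-dollar sketch. Core counts are published specs;
              # prices are placeholder values for illustration, so substitute
              # your own. Cores only roughly track RT throughput, and CUDA
              # cores vs. AMD stream processors is not apples-to-apples.

              def cores_per_dollar(cores: int, price: float) -> float:
                  """Throughput-per-dollar proxy: total cores / total price."""
                  return cores / price

              configs = {
                  "1x GTX 1080": (1 * 2560,  600.0),   # placeholder price
                  "2x GTX 1070": (2 * 1920,  840.0),   # the ~40% premium above
                  "4x RX 480":   (4 * 2304, 1000.0),   # stream processors
              }

              for name, (cores, price) in configs.items():
                  print(f"{name}: {cores} cores, "
                        f"{cores_per_dollar(cores, price):.2f} cores per dollar")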

              In other news, I just bought an old 12GB Titan X Maxwell for $200 less than a 1080 costs here.
              Last edited by werticus; 23-08-2016, 09:36 PM.
              WerT
              www.dvstudios.com.au



              • #8
                Originally posted by werticus
                Are there any benchmarks of the 1070 around? I'm wondering if two 1070s would be better than one 1080, for a 40% price premium.

                And what about RX 480s? We already know they are a heck of a lot faster for denoising... so perhaps four of those would be better bang for the buck.

                In other news, I just bought an old 12GB Titan X Maxwell for $200 less than a 1080 costs here.
                We have a 1070 but still haven't run any benchmarks. We have the RX 480 benchmarks here. Keep in mind that some things that work in CUDA do not work in OpenCL.

                Best,
                Blago.
                V-Ray fan.
                Looking busy around GPUs ...
                RTX ON



                • #9
                  Blago, Lele alluded in another thread to a scaling loss when using two cards. Do you have any feeling for how much that affects a second or even third card? I've always been under the impression that the gains were fairly linear.



                  • #10
                    Originally posted by Nicinus
                    Blago, Lele alluded in another thread to a scaling loss when using two cards. Do you have any feeling for how much that affects a second or even third card? I've always been under the impression that the gains were fairly linear.
                    As far as the GPU kernel itself is concerned, it scales linearly. The light cache does not scale across multiple GPUs (we use the CPU for it).
                    The other thing is the adaptivity of the image sampler: when it lowers the noise threshold and/or when there are few noisy pixels left, some GPUs might not be well utilized. In the dev nightlies we automatically increase the rays per pixel when very few noisy pixels remain; it helps to some extent, but not as much as I wished.

                    With these off it should scale linearly.
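
                    As a back-of-the-envelope illustration of that ceiling (a sketch, not V-Ray code): treat the CPU-side light cache plus the under-utilized adaptive tail as a serial fraction of frame time, and Amdahl's law bounds the multi-GPU speedup. The serial fractions below are assumed values for illustration only.

                    Code:
                    # Amdahl-style sketch of multi-GPU scaling (illustration only,
                    # not V-Ray code). serial_fraction models work that does not
                    # scale across GPUs, e.g. the CPU-side light cache plus the
                    # poorly utilized adaptive-sampling tail.

                    def speedup(n_gpus: int, serial_fraction: float) -> float:
                        """Amdahl's law: only the parallel part scales with GPUs."""
                        parallel = 1.0 - serial_fraction
                        return 1.0 / (serial_fraction + parallel / n_gpus)

                    for frac in (0.00, 0.05, 0.15):   # assumed serial fractions
                        scaling = [speedup(n, frac) for n in (1, 2, 3, 4)]
                        print(f"serial={frac:.0%}: "
                              + ", ".join(f"{s:.2f}x" for s in scaling))

                    # serial=0%:  1.00x, 2.00x, 3.00x, 4.00x  ("with these off")
                    # serial=5%:  1.00x, 1.90x, 2.73x, 3.48x
                    # serial=15%: 1.00x, 1.74x, 2.31x, 2.76x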

                    Best,
                    Blago.
                    V-Ray fan.
                    Looking busy around GPUs ...
                    RTX ON



                    • #11
                      30% over the 1080, huh? Nice!

                      Next I'd love to see a 2x 1080 vs. Titan X Pascal comparison.
                      Windows 10 x64 | AMD FX-8350 8-core (OC'd to 4.6GHz) on custom loop | AMD FirePro W8100 8GB | 16GB RAM (2x8GB) Corsair Dom Plat 2400MHz | Asus Crosshair V Formula-Z | OCZ Vector 256GB SSD | 1200W NZXT Hale 90 v2 (Gold certified)



                      • #12
                        Originally posted by Libertyman86
                        30% over the 1080, huh? Nice!

                        Next I'd love to see a 2x 1080 vs. Titan X Pascal comparison.
                        ++1. Would love to see that too.



                        • #13
                          Originally posted by Nicinus
                          ++1. Would love to see that too.
                          ++1 on a 10 gig scene.
                          "I have not failed. I've just found 10,000 ways that won't work."
                          Thomas A. Edison



                          • #14
                            How much RAM is needed for GPU rendering is a tricky one, since I do renders that seem to use a hell of a lot more RAM in CPU mode... But I decided to play it safe with a Titan and its 12GB. However, I haven't had a computer with less than 32GB for five years now. So will 12 be enough? Has anyone done a Forest Pack job with 12GB?
                            WerT
                            www.dvstudios.com.au



                            • #15
                              I do what I would call a lot of GPU rendering and have the Zotac 1080, and it's nice and fast! The Titan X is about 10 percent faster, but it's only for big scenes that use more than 8 gigs of VRAM that you will really need a Titan X. I have been doing some big scenes and using at most about 5 gigs of VRAM, and that is with around 50 million polys (with optimized textures), and it still renders fast. The 1070 is still worth getting, but you have to consider that it has a lot fewer CUDA cores, and that affects the speed, so I would not recommend it if you can afford the 1080. And if you can buy the Titan X, then heck, go for it!
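
                              As a rough sanity check on those numbers (a back-of-the-envelope sketch with assumed per-triangle and framebuffer costs, not V-Ray's actual memory layout):

                              Code:
                              # Rough VRAM estimate for a GPU render. The per-triangle
                              # cost and framebuffer allowance are assumptions for
                              # illustration; real usage depends on BVH layout,
                              # instancing, and texture resizing/compression.

                              GB = 1024 ** 3
                              BYTES_PER_TRI = 60   # assumed: vertex data + BVH overhead

                              def estimate_vram_gb(n_triangles: int, texture_gb: float,
                                                   framebuffer_gb: float = 0.5) -> float:
                                  """Sum of assumed geometry, texture, and framebuffer footprints."""
                                  geometry_gb = n_triangles * BYTES_PER_TRI / GB
                                  return geometry_gb + texture_gb + framebuffer_gb

                              # ~50 million polys with ~1.5GB of optimized textures:
                              print(f"{estimate_vram_gb(50_000_000, 1.5):.1f} GB estimated")
                              # prints about 4.8 GB, the same ballpark as the ~5 gigs above

                              Note that Forest Pack scatters are instanced, so the unique geometry that actually has to fit in VRAM is usually far smaller than the on-screen poly count.
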
                              Architectural and Product Visualization at MITVIZ
                              http://www.mitviz.com/
                              http://mitviz.blogspot.com/
                              http://www.flickr.com/photos/shawnmitford/

                              i7 5960 @ 4 GHz, 64GB RAM, GeForce GTX 970, GeForce RTX 2080 Ti x2

