multiple GPUs - 2 for rendering, 1 for viewport


  • multiple GPUs - 2 for rendering, 1 for viewport

    From the V-Ray RT FAQ page:

    Why does my UI become sluggish when I use V-Ray RT GPU?


    – If you have multiple GPUs, you can speed up screen refresh time by removing the GPU used for monitor/viewport redraw from the devices that RT GPU uses for rendering.
    – If you have only one GPU on your system, you could try reducing the value for Rays per pixel and/or Ray bundle size in the Performance section under the V-Ray RT tab in the Render Setup dialog. This will break up the data passed to the GPU into smaller chunks so that user interface requests can be processed faster. However, this might reduce rendering speed. Enable Show statistics to check the difference in render speed and to help find the optimal settings for your system.
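    A rough sketch of how you could list the GPUs your drivers expose (this assumes Python with the pyopencl package installed; it is not part of V-Ray, just a quick way to see which devices could go to RT GPU rendering and which one is left for the viewport):

Code:
# Rough sketch, not V-Ray code: enumerate the OpenCL GPUs the installed drivers expose.
# Assumes Python with pyopencl installed (pip install pyopencl).
# Both AMD and NVIDIA cards show up here if their OpenCL drivers are present.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        if device.type & cl.device_type.GPU:
            vram_gb = device.global_mem_size / (1024.0 ** 3)
            print("  %s (%.1f GB VRAM)" % (device.name, vram_gb))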


    Does this info suggest that I can potentially use two GPUs for rendering and one for the viewport/ActiveShade?

    In theory, can I use two AMD cards of the same architecture for rendering and an NVIDIA card for the viewport? Is there a way to install drivers for both vendors on a single computer without them interfering with each other or causing issues?

    What is the best course of action here?
    Windows 10 x64 l AMD 8350 8core (oc to 4.6Ghz) on customLoop l AMD Firepro W8100 8GB l 16GB RAM (8x2) Corsair Dom Plat 2400mhz l Asus Crosshair v Formula-Z l SSD OCZ Vector 256GB l 1200w NZXT Hale 90 v2 (gold certified)

  • #2
    It looks as though, in your benchmark table for NVIDIA (CUDA), someone has rendered the scene using a GTX TITAN X together with two Quadro M6000s. Don't these have separate drivers? It obviously worked, but how?

    Also, it looks like having a large amount of VRAM doesn't mean a card will render slower; see the 3rd listing in the table, for example.


    Configuration / driver version - render time (seconds):

    GTX Titan X + 2x Quadro M6000 - 0m 39.4s (39)
    Quad 980 / 347.52 - 0m 40.0s (40)
    Dual ASUS OC 3GB GTX 780 + ASUS OC GTX 670 - 1m 01s (61) <--- only 1 minute??? wow
    Dual GTX 780 Ti / 347.88 - 1m 08.4s (68)
    Tri GTX 970 / 347.25 - 1m 10.5s (70)
    Tri GTX 970 / 344.16 - 1m 18.2s (78)
    Dual GTX 970 / 347.25 - 1m 33.2s (93)
    Dual GTX 580 / 344.11 - 1m 41.3s (101.3)
    Dual Tesla K20X / 341.44 - 1m 41.9s (101.9)
    GTX TITAN X / 347.88 - 1m 54.6s (114)
    GTX TITAN X / 350.12 - 2m 01.3s (121)
    GTX 690 / 340.52 - 2m 11.0s (131)
    Quadro K6000 / 341.12 - 2m 13.0s (133)
    GTX 780 Ti / 340.52 - 2m 14.9s (134)
    GTX TITAN / 347.25 - 2m 18.2s (138)
    GTX TITAN / 347.52 - 2m 22.1s (142)
    GTX 980 / 347.09 - 2m 30.9s (150)
    GTX 970 / 347.25 - 2m 54.5s (174.5)
    GTX 970 / 347.25 - 3m 02.7s (183)
    Quadro K5200 / 341.05 - 3m 09.7s (190)
    Tesla K40 / 332.76 - 3m 10.6s (190)
    GTX 580 Classified / 344.11 - 3m 22.3s (202.3)
    GTX 580 / 347.25 - 3m 22.8s (203)
    GTX 970 / 344.16 - 3m 32.3s (212)
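    The trailing number on each row is just the render time converted to seconds. A tiny, purely illustrative sketch of that conversion, in case anyone wants to re-sort the list:

Code:
# Purely illustrative helper: convert the "Xm Y.Zs" times above to plain seconds.
# The last number on each benchmark row is already this value, rounded.
import re

def to_seconds(t):
    m = re.match(r"(\d+)m\s+([\d.,]+)s", t)
    minutes = int(m.group(1))
    seconds = float(m.group(2).replace(",", "."))
    return minutes * 60 + seconds

print(to_seconds("0m 39.4s"))   # 39.4
print(to_seconds("1m 08.4s"))   # 68.4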


    I'd love to see what the Nvidia Tesla K80 2x12GB (24GB) could do. http://www.amazon.com/Nvidia-Acceler...ds=GTX+titan+X
    Windows 10 x64 l AMD 8350 8core (oc to 4.6Ghz) on customLoop l AMD Firepro W8100 8GB l 16GB RAM (8x2) Corsair Dom Plat 2400mhz l Asus Crosshair v Formula-Z l SSD OCZ Vector 256GB l 1200w NZXT Hale 90 v2 (gold certified)



    • #3
      Does this info suggest that I can potentially use two GPUs for rendering and one for the viewport/ActiveShade?
      Yes.

      In theory, can I use two AMD cards of the same architecture for rendering and an NVIDIA card for the viewport? Is there a way to install drivers for both vendors on a single computer without them interfering with each other or causing issues?
      I have a Fury X for monitor / viewport / OpenCL rendering and a Titan X for RT GPU CUDA rendering, and it works fine for me. However, whether your particular configuration will work is something you will have to test (or ask the GPU vendors). Mixed setups have mostly worked fine for me: I also had a 750 Ti and an M6000 together and they worked fine as well (I installed the drivers for the M6000 and the 750 Ti kept working too).

      Some 3D host apps (like Max) happen to disable devices that are not going to be used. For example, if your monitor is plugged into an AMD GPU, the host app may disable all CUDA GPUs. But this is not a major problem for V-Ray RT GPU, since we have an option to start it "out-of-process", and when V-Ray starts out-of-process it does not have the host app's limitations. It is just a checkbox you have to tick to make RT GPU start out of process (it has a few drawbacks, of course, but nothing major).

      The 3rd listing is a dual-GPU configuration. The amount of memory does not affect the rendering speed, as long as the scene fits on the GPU. RT GPU CUDA has a few more features (hair, SSS, VRayDirt, TriPlanarTexture and a few more procedurals) compared to RT OpenCL, and it runs much faster on NVIDIA GPUs than the OpenCL path does.
      V-Ray fan.
      Looking busy around GPUs ...
      RTX ON



      • #4
        Great info, thanks for such a detailed reply.

        Considering that I have an AMD FirePro W8100, and the fact that V-Ray (and many other programs) take advantage of CUDA, should I sell my W8100 and invest in something coming in the future... or perhaps a couple of GTX Titan X cards?

        Quite a few of the Adobe programs I use take advantage of OpenCL features... and more are to come in the future, I hear. Maybe get another W8100?


        AMD and NVIDIA should be announcing new GPUs in the 1st or 2nd quarter of 2016, I hear, too... so it's hard to decide.
        Last edited by Libertyman86; 12-02-2016, 02:37 AM.
        Windows 10 x64 l AMD 8350 8core (oc to 4.6Ghz) on customLoop l AMD Firepro W8100 8GB l 16GB RAM (8x2) Corsair Dom Plat 2400mhz l Asus Crosshair v Formula-Z l SSD OCZ Vector 256GB l 1200w NZXT Hale 90 v2 (gold certified)



        • #5
          Does the 3ds Max viewport work faster on a Fury than on a Titan?
          http://gamma22.com/
          https://www.facebook.com/gamma22com/
          https://gumroad.com/gamma22



          • #6
            I'm personally not sure about this one. I do know that 3ds Max is a huge memory hog, so one thing that would clear any bottlenecks is 32GB of RAM, to keep viewport performance smooth.
            Windows 10 x64 l AMD 8350 8core (oc to 4.6Ghz) on customLoop l AMD Firepro W8100 8GB l 16GB RAM (8x2) Corsair Dom Plat 2400mhz l Asus Crosshair v Formula-Z l SSD OCZ Vector 256GB l 1200w NZXT Hale 90 v2 (gold certified)



            • #7
              Thank you for the reply. I was hoping that maybe savage309 would know something about this, since he has both cards.
              http://gamma22.com/
              https://www.facebook.com/gamma22com/
              https://gumroad.com/gamma22



              • #8
                I can't tell, since I don't use the viewport that much, nor do I tend to switch the main GPU (that requires opening the case and swapping their slot positions on the motherboard). I am pretty sure there are benchmarks for that somewhere on the Internet.
                V-Ray fan.
                Looking busy around GPUs ...
                RTX ON



                • #9
                  Maybe somewhere, who knows; I haven't found anything on that topic yet. I'm starting to think you might be the only human being on planet Earth with a Radeon Fury X and 3ds Max on the same computer, haha.

                  But seriously... there is very little on that matter, and I don't understand why; people just don't seem to care about viewport performance that much.
                  http://gamma22.com/
                  https://www.facebook.com/gamma22com/
                  https://gumroad.com/gamma22



                  • #10
                    Originally posted by Libertyman86 View Post
                    I'd love to see what the Nvidia Tesla K80 2x12GB (24GB) could do. http://www.amazon.com/Nvidia-Acceler...ds=GTX+titan+X
                    For $4k: the K80 has two GPUs, each with 2496 CUDA cores running at a maximum of 875 MHz in full boost mode. That delivers a total of 4,368,000 MHz (cores x clock).
                    For $1k: the Titan X Superclocked from EVGA has 3072 CUDA cores running at a maximum of 1216 MHz in full boost mode. That delivers a total of 3,735,552 MHz (cores x clock).

                    Basically, the Titan X is 1/4 of the price for 85.5% of the power. So for the price of one K80 you can have four Titan Xs, meaning you will be 3.42 times faster with the Titan Xs than with the K80 for the exact same price.
                    I also get my 5x Titan X overclocked up to 1450 MHz per core, so if you OC those Titan Xs to what I've got (and it's rock solid), one Titan X is actually faster than one K80, for 1/4 of the price. For me, the maths are quickly done.
                    This is a "fair" comparison, as the K80 has 2 GPUs but each with 12GB, so you can really only use scenes up to 12GB, just like on the Titan X.

                    If you can make it work with only 6GB, then the GTX 980 Ti is exactly the same speed as the Titan X, just with 6GB of RAM instead of 12GB, BUT at half the price.
                    You would have eight cards for the price of one K80, meaning eight times faster for the same price.

                    Knowing that, who would still want to render on GPU with any Tesla/Quadro?

                    Stan
                    3LP Team



                    • #11
                      Awesome info, thank you so much for this breakdown. I am seriously thinking about getting 2-3 Titan Xs, but I am also looking forward to what NVIDIA announces next month with its Pascal GPUs.

                      One card I don't hear about enough is the TITAN Z. I wonder how it compares to the TITAN X.
                      Windows 10 x64 l AMD 8350 8core (oc to 4.6Ghz) on customLoop l AMD Firepro W8100 8GB l 16GB RAM (8x2) Corsair Dom Plat 2400mhz l Asus Crosshair v Formula-Z l SSD OCZ Vector 256GB l 1200w NZXT Hale 90 v2 (gold certified)



                      • #12
                        The Titan Z is actually a dual-GPU card, which might make its specs a bit confusing. First of all, it is listed as having 12GB of VRAM, but this is divided over both GPUs, so you only have 6GB of effective VRAM. In addition, the dual-GPU setup consumes a LOT of power: it has a listed TDP of 375 watts, but I have seen benchmark results of them using up to 450-500 watts. All this heat needs to be dissipated, which results in the card using three slots instead of the Titan X's dual-slot config. Combine all this with its higher price and I'd say the Titan X wins hands down.

                        Also keep in mind that the Tesla K80 cards need extra fans for cooling, as they only have passive heatsinks. They are normally built into server setups in which (very loud) high-speed server fans blow air straight through the cards. Placing them in a normal case without additional airflow will surely cause them to thermal throttle, or worse.

                        That being said, if you do not need the GPU power ASAP, I would wait and see what Pascal brings.



                        • #13
                          Originally posted by eligiusz View Post
                          Does the 3ds Max viewport work faster on a Fury than on a Titan?
                          I tested AMD (290X) vs NVIDIA (780 Ti) vs Quadro (K5000) last generation and there was no real difference between the cards in the viewport. In RT the high-end gamer cards were about the same speed, NVIDIA vs AMD, but the Quadro was slow, slow, slow.

                          My conclusion was that a high-end gamer card is the way to go, and it does not matter which team you buy it from.

                          No idea how the Fury X vs the 980 Ti goes, or how the 1080 vs whatever AMD has next will go, though.
                          WerT
                          www.dvstudios.com.au



                          • #14
                            Originally posted by werticus View Post
                            I tested AMD (290X) vs NVIDIA (780 Ti) vs Quadro (K5000) last generation and there was no real difference between the cards in the viewport. In RT the high-end gamer cards were about the same speed, NVIDIA vs AMD, but the Quadro was slow, slow, slow.

                            My conclusion was that a high-end gamer card is the way to go, and it does not matter which team you buy it from.

                            No idea how the Fury X vs the 980 Ti goes, or how the 1080 vs whatever AMD has next will go, though.
                            What was your test scene?
                            I had a Quadro 4000 and I remember it was on par with my GTX 780 most of the time, but on really large scenes (around 15M polys) the Quadro just couldn't keep up. It's not like the GTX 780 was nailing it, but it was still usable.
                            Any thoughts about the RX 480 8GB?
                            This is a signature.



                            • #15
                              I tested on a proper benchmark flythrough scene I found somewhere that flew you through a city. All of the cards ran it at 30+ fps.

                              The Quadro was slightly faster in the viewport overall, but not worth the money (and it was much slower in RT). I did test on a heavy-duty scene and all of them fell below 10 fps.

                              I think the RX 480 could be a good choice for a budget workstation. I have no idea how well it will do in RT, but it would be a better choice than the 970 now, as it is cheaper, uses half the power, has twice the RAM, and is about the same speed.
                              WerT
                              www.dvstudios.com.au

