4 GPUs on 1 system or 4 GPUs on 2 systems?


  • 4 GPUs on 1 system or 4 GPUs on 2 systems?

    Which is better for CUDA rendering?

    Rendering two frames at once on one motherboard with 4 GPUs, or splitting the two frames between 2 systems with 2 GPUs each?

    Are there bottlenecks on a motherboard with 4 GPUs that prevent all the available CUDA cores from being fully utilized?

    Are the available CUDA cores better utilized when the frames are split across multiple motherboards?

    Last edited by biochemical_animations; 23-01-2017, 10:16 AM. Reason: typo

  • #2
    The first option is better. There are lots of single-socket motherboards with 4-5 PCIe slots, which is enough! Having everything in one computer is a much better solution, because space and cost are greatly reduced compared to 2 computers, and it's faster, as long as you are using it just for rendering and the GPUs are not sitting idle.

    If you plan to use it as a workstation (WS), you can end up idling that power most of the time, since you will be busy working on other stuff; in that case it is better to go with a WS (1 GPU) + node (3 GPUs) configuration.

    My personal WS has 1x 1060 just for the viewport and 2x 1080 for ActiveShade and rendering. The nodes, however, have 4-5x 1080 that can work 24/7. There is no limitation on utilizing CUDA: you can combine cards from different manufacturers, and there's no need for SLI cables; everything just works.

    Take care



    • #3
      Thanks Ivan. Aren't the PCIe slots on a single-socket MB reduced to x4 speed when multiple GPUs are installed, or is x4 enough? Can you give me an example of an MB with 4-5 slots that you would use? I'm planning to expand my render farm by 2 nodes and was wondering about this... thanks.



      • #4
        No, it depends on the CPU you have. If you've got a CPU with 40 PCIe lanes, it will be capable of running 5 GPUs at x8 PCIe (at least the Asus X99 Deluxe can do it like that). If you have 2 Xeons (40-lane CPUs) on an Asus Z10PE-D16 WS, you can have all slots at 16x, I believe, or a 16x/16x/8x configuration (per CPU); I'm not quite sure, as I never tested 6 GPUs.
        It doesn't make any difference performance-wise if they are running at x8 PCIe either.
        From experience, you will be better off sticking to single-socket motherboards, as having more than 5 GPUs (6 or even 7 per node) puts you in another price bracket; it's better to invest that money in another single-socket node.
        Going multi-socket with more than 5 GPUs adds considerable cost: an additional CPU, a more expensive board, ECC DRAM, a better PSU... And you won't gain anything except waiting 10 times longer for booting, which takes forever with server boards.

        There is a whole array of reasonably priced X99 motherboards with 4-5 or even more full-length PCIe slots, plus cheap memory; everything is much cheaper for desktops than for servers.
        Either way, the nodes will only be pre-calculating the LC, so apart from the PCIe lane count on the CPU (which needs 40 lanes to feed the slots), I don't see a reason why you wouldn't buy the cheapest CPU available for your build, as it makes no difference.
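
        To make the lane arithmetic concrete, here's a minimal Python sketch (the x8-per-GPU width is the configuration described above; 28 and 40 are the usual desktop CPU lane counts):

        # Rough PCIe lane budget check for a GPU render node.
        # Assumes every GPU slot runs at the same width (x8 here).
        def max_gpus(cpu_lanes, lanes_per_gpu=8):
            return cpu_lanes // lanes_per_gpu

        for cpu_lanes in (28, 40):
            print(f"{cpu_lanes}-lane CPU: up to {max_gpus(cpu_lanes)} GPUs @ x8, "
                  f"{cpu_lanes % 8} lanes spare")

        # Output:
        # 28-lane CPU: up to 3 GPUs @ x8, 4 lanes spare
        # 40-lane CPU: up to 5 GPUs @ x8, 0 lanes spare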


        https://www.newegg.com/Product/Produ...82E16819117647 is an overkill 40-PCIe-lane CPU for a GPU node (as it wouldn't do anything except load the scene...).

        You can also get away with a single Xeon 2650 v3 from eBay (they go for pocket change); those do have 40 PCIe lanes. They are much slower than a 6850K, but you won't see any difference for GPU rendering (as the CPU is barely used anyway).

        https://www.asus.com/us/Motherboards...pecifications/ Under "Expansion Slots" you can see how the resources are managed when you plug in cards.

        Any other MB will be more or less the same, so you can save on that side as well.

        https://www.newegg.com/Product/Produ...82E16813157596

        Most of them support Xeon CPUs. I hope this gives you a general idea of how you can build your node cheaply and efficiently.

        Take care
        Last edited by Ivan1982; 24-01-2017, 08:41 AM.



        • #5
          Thanks Ivan. I forgot the CPU was the limiting factor for the lanes.



          • #6
            The nodes' LC prefiltering is done per frame, right? Since I recently upgraded from V-Ray 1.5 to 3.4 and things are faster nowadays, I've moved my workflow from creating an IR file to BF+LC in single-frame mode for animations, even though my camera and objects are moving in the scene... I basically left everything at the defaults, since it seems to render fine. I do have this right, correct? I did notice the LC prefiltering is done by the CPU.



            • #7
              Thanks for the info! I just bought a 1080, and I left my 970 in the machine to see if it could participate. It does on small scenes, but on larger scenes it eventually stops rendering with a memory error. I plan on replacing the 970 with a 1080 Ti (when and if it comes out). But I shouldn't have any configuration problems with the 1080 having 8 GB of RAM and the 1080 Ti possibly having a different amount?
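
              In a mixed setup like this, each GPU generally has to hold its own full copy of the scene, so the card with the least VRAM is the one that fails first. A quick way to list each card's total memory, assuming NVIDIA drivers with nvidia-smi on your PATH, is a sketch like this:

              # List each card's total VRAM; the smallest card is the one
              # that limits scene size. Assumes nvidia-smi is on PATH.
              import subprocess

              out = subprocess.run(
                  ["nvidia-smi", "--query-gpu=index,name,memory.total",
                   "--format=csv"],
                  capture_output=True, text=True, check=True,
              )
              print(out.stdout)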



              • #8
                OK, here's a question: if you have a board that runs 4 slots at x8, that's 32 lanes, so with a 40-lane CPU, 8 lanes are not being used, right? Is it more cost-effective to go with a 28-lane CPU and a board with 3 slots running at x8, i.e. 24 lanes, so only 4 lanes are wasted?
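
                One way to frame it: the spare lanes themselves cost nothing, so what matters is the platform (CPU + board) cost per GPU slot. A back-of-the-envelope sketch, with made-up placeholder prices you'd swap for real quotes:

                # Cost per GPU slot for the two options. Prices below are
                # placeholder assumptions, not real quotes.
                builds = {
                    "40-lane CPU, 4 slots @ x8": {"platform_cost": 1000, "gpu_slots": 4},
                    "28-lane CPU, 3 slots @ x8": {"platform_cost": 700, "gpu_slots": 3},
                }
                for name, b in builds.items():
                    print(f"{name}: ~${b['platform_cost'] / b['gpu_slots']:.0f} per GPU slot")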



                • #9
                  What size power supply do you run on your node with 4 GPUs?



                  • #10
                    Originally posted by biochemical_animations View Post
                    What size power supply do you run on your node with 4 GPUs?
                    Dang! I wanted to know that too.
                    Jez

                    ------------------------------------
                    3DS Max 2023.3.4 | V-Ray 6.10.08 | Phoenix FD 4.40.00 | PD Player 64 1.0.7.32 | Forest Pack Pro 8.2.2 | RailClone 6.1.3
                    Windows 11 Pro 22H2 | NVidia Drivers 535.98 (Game Drivers)

                    Asus X299 Sage (Bios 4001), i9-7980xe, 128Gb, 1TB m.2 OS, 2 x NVidia RTX 3090 FE
                    ---- Updated 06/09/23 -------



                    • #11
                      We've got 4x Titan X and are using this PSU:
                      EVGA Supernova 1600 W G2 80+ Modular Gold Power Supply Unit
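
                      For anyone sizing a similar node, a rough back-of-the-envelope check, assuming the Titan X's nominal 250 W board power and a placeholder allowance for the rest of the box:

                      # Rough PSU sizing for a 4-GPU node. TDP and overhead figures
                      # are nominal assumptions; leave real headroom on top of them.
                      GPU_TDP_W = 250      # Titan X nominal board power
                      NUM_GPUS = 4
                      REST_OF_BOX_W = 200  # CPU, board, drives, fans (placeholder)

                      load_w = NUM_GPUS * GPU_TDP_W + REST_OF_BOX_W  # ~1200 W
                      psu_w = load_w / 0.8                           # target ~80% PSU load
                      print(f"estimated load ~{load_w} W, suggested PSU >= {psu_w:.0f} W")

                      That lands right around the 1600 W unit above.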



                      • #12
                        Originally posted by AlexP View Post
                        We've got 4x Titan X and are using this PSU:
                        EVGA Supernova 1600 W G2 80+ Modular Gold Power Supply Unit
                        Thanks Alex.

                        Do you know what sort of power in/out you're getting, especially when under load?
                        Jez




                        • #13
                          No idea, no. I'm not sure how to monitor that?



                          • #14
                            I just posted a thread in the hardware section about my new WS build that may help answer some of your questions... hope it helps.



                            • #15
                              Originally posted by AlexP View Post
                              No idea, no. I'm not sure how to monitor that?
                              A Kill A Watt meter, $20ish from Amazon.
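
                              If you only need the GPU side rather than wall power, the cards can also report their own draw. A minimal logging sketch, assuming NVIDIA cards with nvidia-smi on PATH (this misses CPU and board draw, so the wall meter is still the more honest number):

                              # Print per-GPU power draw once a second (Ctrl+C to stop).
                              # Reports GPU board power only, not wall power.
                              import subprocess, time

                              while True:
                                  out = subprocess.run(
                                      ["nvidia-smi", "--query-gpu=index,power.draw",
                                       "--format=csv,noheader"],
                                      capture_output=True, text=True, check=True,
                                  )
                                  print(out.stdout.strip())
                                  time.sleep(1)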

