Ghetto build


  • #46
    Originally posted by super gnu View Post
    Ahh, isn't it nice being able to design stuff in 3ds Max... every time I have to build something it goes straight onto my computer. I notice you textured the graphics cards... as I would have done.

    Edit: yeah, your design looks fine as long as the risers will reach... you should draw them with splines and check the spline length.

    I was imagining the cards resting on the back face where the exposed circuitry is... that would be a problem... your layout is fine though... like you say, it's just resting on the plastic shroud.


    Edit 2: couldn't you squeeze it down smaller? I'm sure you could reduce the space between the cards and have it 3 fans high instead of 4.

    Phew

    I think you're absolutely right to mention it took you a month - I think I was rushing things a bit (though I am trying to get this out of the way whilst I have a window of opportunity, work-wise).

    Once you mentioned my first riser cables were the wrong ones, I knew that the correct 30cm cable (as opposed to the 80cm one I chose mistakenly) meant this design had to be absolutely right - so, yes, I have already done the all-important critical spline test.

    Now it's down to those Riser cables.

    Bio recommends powered riser cards/cables (which Moddiy do at around £15 each)...

    Moddiy.com recommend the non-powered, gold-plated, shielded ones over those (at £32.99 a pop...).

    My head tells me to go for what Bio recommends - I'd like as much of the power requirement as possible taken away from the motherboard end of things (760W PSU) and placed on the GPU PSUs (2 x 1500W).

    This one

    https://www.moddiy.com/products/PCI%...2830cm%29.html


    PS - I think you're right about squeezing it down smaller, but I wanted to give the cards the same amount of space they would have if I were only installing 3 on my mobo, i.e. how they'd naturally be spaced out in that scenario. Plus it lets some air in between them all. I think keeping it this size will give me a bit of wiggle room if I do end up needing to squeeze the cards closer together, say if those riser cables need it... (or if I ever need to reuse it for 8 cards someday).


    Last edited by JezUK; 11-07-2017, 10:49 AM.
    Jez

    ------------------------------------
    3DS Max 2023.3.4 | V-Ray 6.10.08 | Phoenix FD 4.40.00 | PD Player 64 1.0.7.32 | Forest Pack Pro 8.2.2 | RailClone 6.1.3
    Windows 11 Pro 22H2 | NVidia Drivers 535.98 (Game Drivers)

    Asus X299 Sage (Bios 4001), i9-7980xe, 128Gb, 1TB m.2 OS, 2 x NVidia RTX 3090 FE
    ---- Updated 06/09/23 -------



    • #47
      Originally posted by JezUK View Post


      My head tells me to go for what Bio recommends - I'd like as much of the power requirement as possible taken away from the motherboard end of things (760W PSU) and placed on the GPU PSUs (2 x 1500W).

      This one

      https://www.moddiy.com/products/PCI%...2830cm%29.html


      Those look good to me...what those capacitors do is make sure a little power is always available to the ribbon to ensure data transfer does not get interrupted.

      What I read was that there were reports of data signal loss across non-powered risers... maybe as power fluctuates with varying PSU load, the ribbons lost a little power and data was interrupted. People went with powered risers and had much better results. The power is something like 15 watts per riser, which is what your PSU supplies to each PCIe slot, I think... so it's pretty low, but good insurance to keep the data transfer from dropping as it's sent across the ribbon.

      Regarding the extra shielding Moddiy recommended... I read about one guy who wrapped each ribbon with 3M electrical tape. I didn't do that with the risers I went with (which look the same as yours, except mine were x8 on one side). I figured if I got data transfer problems I might try the 3M tape, but I never had an issue, so I didn't worry about it. I simply made sure there was a little gap between the ribbons so they didn't touch each other... personally, I think the 3M guy went overboard just for fun... he's working on a 20-GPU workstation.

      I think you're on the right track with figuring out the power... once again, as you build: add one GPU, use the Kill A Watt meter, see what you pull on a production render, power down, add a second GPU, and so on. This will help you figure out how it all goes and what you will need. I had to do this too; as carefully as I planned, I still had to adjust things later based on real-world data. The planning gets us really close, but the actual numbers may differ.

      Remember this is prototype stuff...we are shaping technology as we speak.

      Oh yeah, it's better to have a slightly longer ribbon as you space the cards; the little extra length helps a lot. Plan for a little curvature in the ribbons with your splines... you will probably need it.
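
      Just to illustrate what that check boils down to, here's a rough back-of-the-envelope version - the positions and the slack factor below are made up for illustration, nothing from your actual layout:

      Code:
      import math

      # Hypothetical slot positions (cm) and a slack allowance for curvature.
      # None of these numbers come from the actual build -- illustration only.
      riser_length_cm = 30.0          # the 30 cm cables discussed above
      slack_factor = 1.15             # ~15% extra so the ribbon can curve gently

      mb_slot = (0.0, 0.0, 0.0)       # motherboard slot (made-up origin)
      gpu_slot = (10.0, 18.0, 12.0)   # remote GPU position (made-up offsets)

      straight = math.dist(mb_slot, gpu_slot)
      needed = straight * slack_factor
      verdict = "OK" if needed <= riser_length_cm else "too far"
      print(f"straight run {straight:.1f} cm, with curve ~{needed:.1f} cm -> {verdict}")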

      Oh yeah, you might want to test/plan for hybrid rendering... it is really nice, although I'm currently dealing with some bugs. Running your CPU at max along with your GPUs creates a higher peak power requirement than running the CPU for the light cache calculation first and the GPUs afterwards... everything runs at the same time and pulls "ALL POWER"!
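
      To make the bookkeeping concrete, here's a rough sketch of the running tally - the wall readings below are made-up numbers, not measurements from either of our builds:

      Code:
      # Hypothetical Kill A Watt readings taken during a production render,
      # logged after each extra GPU is installed. The numbers are made up.
      readings_watts = [180, 420, 650, 880, 1100]   # 0, 1, 2, 3, 4 GPUs

      budget_watts = 1500    # e.g. one dedicated GPU PSU, or a wall-circuit limit
      headroom = 0.80        # keep sustained draw at or below ~80% of the rating

      for n_gpus, watts in enumerate(readings_watts):
          per_gpu = (watts - readings_watts[0]) / n_gpus if n_gpus else 0.0
          within = watts <= budget_watts * headroom
          print(f"{n_gpus} GPU(s): {watts} W at the wall, "
                f"~{per_gpu:.0f} W per GPU, within budget: {within}")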
      Last edited by biochemical_animations; 11-07-2017, 11:38 AM.



      • #48
        Thanks Bio - I really appreciate you reminding me that this is prototype stuff. The main objective all along was to give my 4 cards a bit of space apart from each other, and to add more cards to further improve rendering in RT.

        But I'm now wondering about PCIe lanes - can you (or Super) help?

        My mobo, an Asus X99-E WS, states that the expansion slots can run in a variety of configurations, including single x16, dual x16/x16, triple x16/x16/x16 or quad x16/x16/x16/x16.

        Or seven slots at x16/x8/x8/x8/x16/x8/x8...

        Now, I'm definitely no expert in this area (especially), but adding up all seven of those slots gives 72 PCIe lanes?

        I believe there is also a restriction on the CPU side of things, in that my CPU, a 5960X, supports up to 48 PCIe lanes.

        So... my questions are:

        a) Will 7 GPUs even be possible on my mobo (at full x16)? I'm guessing no, but perhaps at x16/x8/x8/x8/x16/x8/x8 I wouldn't know the difference.

        b) It seems like my CPU wouldn't be too happy with this amount of hardware, given it is stated to support up to 48 PCIe lanes? (What would happen?)

        c) What's confusing is that I already have a quad x16 setup, which is already 64 lanes, and my CPU didn't blow up... (so what's this about 48 lanes for a 5960X?)

        Please can you shed some light on this side of things?

        Many thanks.
        Last edited by JezUK; 13-07-2017, 12:55 AM.
        Jez

        ------------------------------------
        3DS Max 2023.3.4 | V-Ray 6.10.08 | Phoenix FD 4.40.00 | PD Player 64 1.0.7.32 | Forest Pack Pro 8.2.2 | RailClone 6.1.3
        Windows 11 Pro 22H2 | NVidia Drivers 535.98 (Game Drivers)

        Asus X299 Sage (Bios 4001), i9-7980xe, 128Gb, 1TB m.2 OS, 2 x NVidia RTX 3090 FE
        ---- Updated 06/09/23 -------



        • #49
          Originally posted by JezUK View Post
          But I'm now wondering about PCIe lanes - can you (or Super) help?

          a) Will 7 GPUs even be possible on my mobo (at full x16)?

          b) It seems like my CPU wouldn't be too happy with this amount of hardware, given it is stated to support up to 48 PCIe lanes? (What would happen?)

          c) What's confusing is that I already have a quad x16 setup, which is already 64 lanes, and my CPU didn't blow up... (so what's this about 48 lanes for a 5960X?)

          I think I found my answer...

          It will work as per the mobo documentation, i.e. seven slots at x16/x8/x8/x8/x16/x8/x8, using the onboard PCIe switch (PLX chip).

          So I believe I should be good to go.
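
          For anyone else following the arithmetic, here's a tiny sketch - the slot widths are the ones listed above from the manual, and the CPU lane count is simply the figure quoted above:

          Code:
          # Seven-slot configuration quoted from the X99-E WS documentation.
          slot_widths = [16, 8, 8, 8, 16, 8, 8]
          cpu_lanes = 48   # the figure quoted above for the 5960X

          total_slot_lanes = sum(slot_widths)
          print(f"lanes at the slots: {total_slot_lanes}, lanes from the CPU: {cpu_lanes}")

          # The slots can present more electrical lanes than the CPU provides
          # because the onboard PLX switch shares the CPU's lanes between them:
          # each card sees its full slot width, but upstream bandwidth is shared.
          if total_slot_lanes > cpu_lanes:
              print("oversubscribed -- the PLX switch multiplexes the CPU lanes")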
          Last edited by JezUK; 13-07-2017, 01:20 AM.
          Jez

          ------------------------------------
          3DS Max 2023.3.4 | V-Ray 6.10.08 | Phoenix FD 4.40.00 | PD Player 64 1.0.7.32 | Forest Pack Pro 8.2.2 | RailClone 6.1.3
          Windows 11 Pro 22H2 | NVidia Drivers 535.98 (Game Drivers)

          Asus X299 Sage (Bios 4001), i9-7980xe, 128Gb, 1TB m.2 OS, 2 x NVidia RTX 3090 FE
          ---- Updated 06/09/23 -------



          • #50
            Originally posted by JezUK View Post


            I think I found my answer...

            It will work as per the mobo documentation, i.e. seven slots at x16/x8/x8/x8/x16/x8/x8, using the onboard PCIe switch (PLX chip).

            So I believe I should be good to go.
            The lanes took me a while to wrap my head around.

            The way I understand it (others can chime in if I'm off a bit) is like this... the lanes are for data transfer. If you're playing a game, driving through a city with explosions and bullets firing, that is a lot of graphical data moving across the lanes. So going back to PCIe speeds, let's start with the lowest:
            PCIe Gen 1 at x1 (1 lane) can transfer 250 MB per second... in each direction, on the same lane... 250 MB/s to the card and 250 MB/s from the card.

            If our Max scene file were 250 MB in size, it would (theoretically) take 1 second to transfer the scene to the GPU for rendering. Once on the GPU, the calculations run and we get a rendered image that needs to be transferred from the GPU back to the HDD. If that image is 250 MB in size, it would take 1 second to transfer to the HDD. If the HDD cannot write at that speed, then we have a slowdown due to the HDD. Slowdowns can also come from other components - the motherboard, RAM, bus speed, etc. - but let's not worry about those for now.

            So if you ran a GPU at x1 (1 lane) on Gen 1, that is what you would get.

            Now let's go to x1 Gen 2. Gen 2 doubles the throughput to 500 MB/s. Gen 3 roughly doubles it again, to about 985 MB/s.

            If all your GPUs ran at x1 Gen 3, you would have (rounding up) about 1 GB/s of transfer speed per card.

            So let's say you had 4 cards connected at x1 Gen 3 - your CPU would only need 4 lanes for the GPUs. Your CPU also uses lanes for other things on your motherboard, but I'm not super savvy about how that works, so let's not worry about that for now either.

            That is for GPU rendering, which involves transferring the Max scene to the GPU, rendering, and transferring the image back... not much transfer required. Blago told me this.

            If you were running a game at x1 Gen 3, you would probably get stuttering or something, because the transfer speed could not keep up with the graphical data the game pushes across.

            Now take the Max viewport... not quite as intensive as a game, but it needs more data transfer than scene loading does. That is where you want to run your cards with more lanes (x2, x4, x8, or x16 - overkill) to keep up with the viewport. Same thing for games: more lanes. That is why people running higher-resolution games on multiple monitors at higher frame rates try to run at x16 in SLI (which, as I understand it, combines multiple cards to push very high graphical throughput to the displays).

            Max and V-Ray RT rendering don't need SLI as such... some people are venturing in that direction, and I'm interested to see whether it would benefit me... not yet though.

            I did a test once on a workstation with 2 GPUs plugged into the motherboard, running at x8 and Gen 3 I think, and got a certain RT render time. Then I tried plugging one of the cards in at x1 (via the x1-to-x16 adapters you were considering - I have some of those) and the render time was the same... which told me that, for RT rendering and my scene, the data transfer requirement was less than what the slot provided, so there was no slowdown. The card running at x1 was not driving one of my monitors, so I saw no monitor or viewport slowdown either.

            Hopefully, this helps.
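
            To put rough numbers on all of that, here's a small sketch using the theoretical per-lane figures above - real-world throughput is lower once protocol overhead is counted:

            Code:
            # Approximate one-way throughput per lane, in MB/s, for each PCIe
            # generation (theoretical peaks quoted above; real numbers are lower).
            PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

            def transfer_seconds(size_mb, gen, lanes):
                """Rough one-way transfer time for size_mb over a PCIe link."""
                return size_mb / (PER_LANE_MBPS[gen] * lanes)

            scene_mb = 250   # the example scene size used above
            for gen, lanes in [(1, 1), (2, 1), (3, 1), (3, 8), (3, 16)]:
                t = transfer_seconds(scene_mb, gen, lanes)
                print(f"Gen {gen} x{lanes}: ~{t:.3f} s to push {scene_mb} MB to the card")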



            • #51
              Basically, RT has relatively low requirements for transfer speed once the scene has loaded onto the card. I assume this is complicated by the ability to stream textures to the cards while rendering (a relatively new feature), but basically a GPU in a slower slot will not show a massive difference in rendering speed... better in faster slots, but it doesn't make a huge difference. I believe that having all your cards in x8 slots shows basically no slowdown over x16.



              • #52
                Thanks guys,

                Well, everything has been purchased - it's now a waiting game for the parts to arrive, then to assemble it all and hopefully turn the theory into practice.

                Thanks so much for your input - I'll certainly keep you all informed.
                Jez

                ------------------------------------
                3DS Max 2023.3.4 | V-Ray 6.10.08 | Phoenix FD 4.40.00 | PD Player 64 1.0.7.32 | Forest Pack Pro 8.2.2 | RailClone 6.1.3
                Windows 11 Pro 22H2 | NVidia Drivers 535.98 (Game Drivers)

                Asus X299 Sage (Bios 4001), i9-7980xe, 128Gb, 1TB m.2 OS, 2 x NVidia RTX 3090 FE
                ---- Updated 06/09/23 -------



                • #53
                  We did something similar about a year ago. Bit of a hassle to set up, as beyond seven cards things get a little awkward in Windows: registry hacking, above-4G decoding, eternal boot times... Fun to build though.
                  Best regards,
                  Michael
                  This signature is only a temporary solution



                  • #54
                    Nice,
                    Yep, it was a challenge, but the GPU speed payoff is well worth it for an immediate creative workflow. Plus I learned a lot about hardware I only vaguely knew about before the build.



                    • #55
                      Originally posted by Sushidelic View Post
                      We did something similar about a year ago. Bit of a hassle to set up, as beyond seven cards things get a little awkward in Windows: registry hacking, above-4G decoding, eternal boot times... Fun to build though.
                      Best regards,
                      Michael
                      How did you solve the heat problem? I only have one 1080 and one 1080ti with half a centimeter space between them and it's kind of difficult keeping them cool.
                      Regards
                      Olaf Bendfeldt
                      3D-Artist
                      dimension3+



                      • #56
                        Originally posted by dimension3plus View Post

                        How did you solve the heat problem? I only have one 1080 and one 1080ti with half a centimeter space between them and it's kind of difficult keeping them cool.
                        Regards
                        Spread them out...restrict the airflow and you will see temps rise.

                        I moved mine to about 5 inches apart.



                        • #57
                          In Sushidelic's picture the GPUs sit pretty tight; that's why I'm wondering.
                          Last edited by dimension3plus; 20-08-2017, 12:50 AM.
                          Olaf Bendfeldt
                          3D-Artist
                          dimension3+



                          • #58
                            New design... increased GPU spacing to help keep top temps lower... the goal is to allow as much airflow as possible.
                            Last edited by biochemical_animations; 20-08-2017, 12:19 PM. Reason: Typo



                            • #59
                              I'm adding my Super Ghetto GPU render farm to this thread. It's in an outbuilding on my property that we use for storage. I wasn't going to share this because I was embarrassed about the pink walls I never got around to repainting, but I'm sharing it because another thread is asking about GPU render farm builds.

                              i7-4790S on an ASRock Z87 Extreme4 motherboard, 16 GB 1600 DDR3, SSD - nothing special, just cost-effective to get animations rendered. EVGA 980 Ti Classifieds, 12 in all... added to the 7 I have on my WS (only 6 currently installed), that gives 19 (18 really) in all.

                              It's summer here and we've been getting 90-100°F heat, so I render 4K stuff overnight and in the morning hours before the afternoon heat, with test renders (1280x720 or 1900x1600) when needed to get review clips out.

                              The room will get up to 108°F; I can open the two windows and vent that heat, bringing it down to 100-104. At 108 the GPUs get past their max temperature (83°C) and throttle. Keeping the room at 100-104 keeps them at 78-82°C, and during cooler hours they run at 72. I installed a box fan in a window with a thermostat that turns it on to move outside air through the room, effectively venting the heat. I put the GPUs as low to the ground as possible, which is where the cool air sits... the hot air they generate rises and vents out the windows, which are in the upper half of the room. It's ghetto, but it works. It's not going to win any beauty contests, but all of this runs outside my office. I can even run an air conditioner to cool the lower air and leave the windows open to vent the hot air outside. It works well.

                              Power-wise, I have two circuits, one 15 amp and one 20 amp, to split the load. Each box pulls about 500-550 watts.
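
                              Roughly, assuming US 120 V circuits and the usual ~80% continuous-load rule of thumb (both are assumptions on my part, so check your own wiring), the per-circuit headroom works out like this:

                              Code:
                              # Rough per-circuit capacity check. 120 V service and the
                              # ~80% continuous-load rule of thumb are assumptions here.
                              VOLTS = 120
                              CONTINUOUS = 0.80
                              box_watts = 550   # upper end of the per-box draw above

                              for amps in (15, 20):
                                  usable = VOLTS * amps * CONTINUOUS
                                  boxes = int(usable // box_watts)
                                  print(f"{amps} A circuit: ~{usable:.0f} W usable"
                                        f" -> about {boxes} boxes")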

                              Only problem is the wax on my surfboards keeps melting in this room.



                              • #60
                                Wow, that's amazing! And thanks for the detailed information.
                                Best regards
                                Olaf Bendfeldt
                                3D-Artist
                                dimension3+

