My new custom render farm inside a Helmer!

  • #1

    Hello all - I thought folks might be interested in a project I just finished. I'm an arch. grad student (and V-Ray edu user) and am prepping for my thesis project this semester. I already had a nice computer I built a year ago (custom watercooling loop, 2600K OC'd to 4.5GHz, EVGA 580, 32GB RAM), but I wanted more!! So, long story short, I built a little render farm with some spare cash I earned from teaching a Revit class.

    Check it out if you like:

    http://carpitecture.blogspot.com/201...nder-farm.html

    Render times are amazingly short and that's with only 2 of the nodes done. I'm very excited for the third.

    Cheers!

    Dan

  • #2
    Grad student + extra cash... where's the punch line? It looks cool!
    Bobby Parker
    www.bobby-parker.com
    e-mail: info@bobby-parker.com
    phone: 2188206812

    My current hardware setup:
    • Ryzen 9 5900X CPU
    • 128GB Vengeance RGB Pro RAM
    • NVIDIA GeForce RTX 4090 x2
    • Windows 11 Pro



    • #3
      I think spending my spare money on a render farm probably confirms my status as a 'major nerd,' but what can you do? Now I just need to sort out my firewall issues.
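
      For anyone else hitting this, a quick way to narrow it down is to check whether each node's distributed-rendering port is reachable at all before digging through firewall rules. A minimal sketch in Python - the node IPs are made up and the port is an assumption (older V-Ray spawners defaulted to 20204; check the docs for your version):

      import socket

      NODES = ["192.168.1.101", "192.168.1.102"]  # hypothetical render node IPs
      DR_PORT = 20204  # assumed distributed-rendering port -- check your V-Ray docs

      for host in NODES:
          try:
              # if this fails, a firewall (or a spawner that isn't running) is in the way
              socket.create_connection((host, DR_PORT), timeout=2).close()
              print(host, "reachable")
          except OSError as err:
              print(host, "blocked or down:", err)

      If a node shows up as blocked, an inbound allow rule on that node for the port (or for the spawner executable) usually sorts it out.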



      • #4
        I did a similar "fit computers in a small box" thing here:

        http://www.chaosgroup.com/forums/vbu...k-workstation-)

        If I were doing your project, I would have:

        a) used Corsair H60 or H80 coolers - that would get rid of the extra height the air cooler adds, since you could mount the radiators behind the motherboards in the same drawer, and it would remove the need for case fans, as the cooler's fan would act as a vent fan too;

        and b) used one high-wattage PSU shared between all the nodes.


        With these two changes you could probably have fitted a node in each drawer rather than one node per two drawers - so 5 CPUs in the Helmer, plus a drawer for your PSU. If you don't have any GPUs in there and you're only using quad cores, a 1500W PSU like mine would probably do all 5.


        For future reference, this is very useful:

        http://www.extreme.outervision.com/p...ulatorlite.jsp
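
        (and if you just want a rough back-of-envelope number rather than the full calculator, here's a quick sketch in python - every wattage figure below is a guess, so plug in your own:)

        # very rough per-node draw for an overclocked quad-core render node, no gpu
        CPU_W   = 150   # overclocked quad core under full render load (guess)
        BOARD_W = 50    # motherboard + ram + ssd (guess)
        FANS_W  = 10    # fans (guess)

        NODES    = 5
        HEADROOM = 1.25  # keep ~25% spare so the psu isn't running flat out

        per_node = CPU_W + BOARD_W + FANS_W
        total    = per_node * NODES * HEADROOM
        print(f"{per_node} W per node, ~{total:.0f} W for {NODES} nodes with headroom")
        # comes out a bit over 1300 W total -> a single 1500 W psu looks plausible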



        • #5
          I saw your build when I was researching mine a few weeks ago, really nice! I'm always amazed at how much space is left in a big ATX case when the build is done. Great idea.

          I considered sealed watercoolers, but I was concerned about stories of leakage with Corsair - I don't know how valid those are, though. Also, they cost enough more than my cooler that I just thought 'what the heck.' I was trying to stay at the $500 mark per node and I found my coolers for $20. All my stuff was bought during Black Friday sales, so the cost per node came to about $530 in the end. Plus, since the Helmer is only $39, that swayed me toward a less dense build.

          We considered the single-PSU option too. I think that could have been a really good choice; I was just concerned about cable length and the availability of the 12-pin ATX connectors.

          If I were gonna do this again and had more money, I think I would probably go for higher density. But as it is, I can't even afford to finish the third node yet, and I'm not earning money with my renders. If I go into freelance rendering after graduating and am able to buy all the software necessary (around $8k, I'm thinking), then I'd love to build another farm.



          • #6
            Fair point.. yes, $530 per node is pretty damn good value.

            I'd say the Corsairs seem pretty solid. I've done custom loops before and these feel less likely to leak than anything I've built - never say never, though!

            Re: my project, I'd point out that mine is actually a pretty small ATX case - one of the smallest decent towers I could find. I toyed with all sorts of ideas for how to pack a lot of render power into a small space (modelling them in 3D, of course). I was all set to go with a totally custom case with -6-! mini-ITX boards, an overclocked 6-core in each, fitted into significantly less space than my current workstation...

            But in the end the limit of 2 RAM slots, hence 16GB per node, killed the idea for me. I'm not entirely happy with the 32GB-per-node limit in my current machine, but 32 should last me a fair while, and there's always a chance they'll bring out 16GB DIMMs and a BIOS update for my motherboards before it's too limiting. Then I'd be limited to 'only' 64GB.

            Ohh, and about the cable length - shouldn't be an issue at all. You can easily buy ATX extension cables for a couple of dollars and splice them all together; that's what I did. I'd wager that doing it for 5 machines might be a bit more involved than for 2, though: you have to consider the number of rails in the PSU and how much load you are placing on each, plus the (usually pitifully small) +5V current limit of the PSU (they spec them by the 12V current limit, but each board needs a 5V supply too).

            That, and the 5 soldered splices for each thick wire, which if done as a single splice will lead to some fairly impressive big blobby solder joints. (If you were doing 5 machines, something a bit more professional like junction boxes might be in order, rather than the electrical tape mess I have behind my mobo.)
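
            (to put a rough number on that +5V point - another little python sketch; both current figures are guesses, so read the real ones off your boards and the psu label:)

            BOARD_5V_A   = 3.0   # guessed +5 V draw per motherboard (chipset, usb, drives)
            PSU_5V_MAX_A = 25.0  # typical +5 V rail limit on a big psu - check the label

            for nodes in range(2, 6):
                load = nodes * BOARD_5V_A
                verdict = "ok" if load <= 0.8 * PSU_5V_MAX_A else "too close"
                print(f"{nodes} nodes -> {load:.0f} A on the +5 V rail ({verdict})")
            # the 12 V rails get all the attention, but this is the limit people forget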
            Last edited by super gnu; 11-01-2013, 02:37 PM.



            • #7
              Yeah, the limitations of small motherboards were a deal breaker for me too. That's part of why I didn't go Micro-ATX and 1 node per drawer. They either had too few RAM slots and/or weren't very overclockable. If you look at the manuals for some of them, even some of the Z77 boards didn't have the ability to manually adjust core voltage, things like that.

              Maybe a person could do 2 PSUs for 4-5 nodes rather than just one monster? You could even place them in the middle drawer to keep cable lengths shorter. That might be more doable and keep the cost down. When I was estimating my wattage it really wasn't that high, but I'd still want to allow a solid 350-400W per node, I think, so that puts you pretty high with a single.
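
              To put numbers on that, here's a quick sketch (Python; the 350-400W allowance is from above, but the even split and the 80% loading rule of thumb are just illustrative):

              PER_NODE_W = 375   # middle of the 350-400 W per-node allowance
              NODES      = 5
              USABLE     = 0.8   # run each PSU at no more than ~80% of its rating

              total   = PER_NODE_W * NODES   # 1875 W for the whole farm
              per_psu = total / 2            # roughly half that on each of two PSUs
              print(f"farm total ~{total} W, ~{per_psu:.0f} W per PSU if split in two")
              print(f"so each PSU should be rated around {per_psu / USABLE:.0f} W or more")
              # -> two ~1200 W units would cover it, versus one ~2350 W monster if you went single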

              I recently made the move to 32GB and would never go back. I've pushed the limits a couple of times, but I think it'll do me for a long while to come. Are you in Arch. Viz?
