GPU farm--will CPU boxes still render?


  • GPU farm--will CPU boxes still render?

    So I haven't used Next's GPU renderer yet, but I'm in the market to try. The situation is this: I'm still on 3.6 & have a 6-node CPU farm (dual Xeons). I'd like to get a GPU-driven workstation with 3 or 4 2080 Tis, since I figure I'd need at least 44 GB of VRAM to see if my work (arch interiors & animations) will work on GPU.
    All of my farm nodes only have 32 GB, & most of my typical work fits in that memory space.

    The question is this: if I submit a job through GPU, does that mean that none of my CPU farm would participate in rendering frames? That's what I'm assuming, & while I'd like to try & move to a GPU workflow, I can't afford to replace my whole farm. I'm sure I'm not the only one in this situation, so I'd love to hear from those of you who have crossed this bridge already. Thanks.

  • #2
    Originally posted by 100%indifferent View Post
    3 or 4 2080 Tis, since I figure I'd need at least 44 GB of VRAM to see if my work (arch interiors & animations) will work on GPU.

    A couple of things to clarify. First, yes, you can use your CPUs: if you put V-Ray GPU in CUDA mode, you can select both NVIDIA GPUs and x86 CPUs to participate in rendering... the GPUs are obviously tons faster.


    Second, you can't just add up GPU RAM like that.

    By default the scene has to be duplicated on each card, meaning with 4x 2080 Tis you would still only have 11 GB to work with.


    To confuse the issue, there's NVLink. This lets you connect GPUs and get (almost) double the RAM.

    However, unless you are using Quadro/Tesla cards, it has major limitations.

    Firstly, on consumer cards you can only link 2 cards, so a maximum of double your per-card RAM.

    Second, NVIDIA have very kindly removed the ability to use multiple NVLinked pairs in the latest drivers. So with 2 cards you'd have circa 22 GB to play with, but moving to 4 (on the latest drivers) you'd be back to 11 GB (there's a rough calculator sketch at the end of this post).

    it sucks.


    I'd suggest you either empty your bank account and get a pair of Titan RTX (24 GB each, so roughly 48 GB pooled), or do what I'm doing:

    waiting for NVIDIA Ampere this summer; it's supposed to be cheaper, have more RAM, and feature double the ray-tracing performance.
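
    For anyone wanting to sanity-check the numbers above, here's a rough sketch in Python. It assumes the nvidia-ml-py (pynvml) package for reading per-card VRAM, and the "usable budget" rule is simply the one described in this post: the scene is duplicated on every card, and at most one NVLinked consumer pair gets pooled on current drivers. Treat it as a back-of-envelope estimate, not a figure V-Ray itself reports.

        # Rough per-scene VRAM budget under the rules described above.
        # Assumes nvidia-ml-py (pynvml) is installed; estimate only.
        import pynvml

        def vram_per_card_gb():
            """Read each card's total VRAM in GB via NVML."""
            pynvml.nvmlInit()
            try:
                count = pynvml.nvmlDeviceGetCount()
                return [
                    pynvml.nvmlDeviceGetMemoryInfo(
                        pynvml.nvmlDeviceGetHandleByIndex(i)
                    ).total / 1024**3
                    for i in range(count)
                ]
            finally:
                pynvml.nvmlShutdown()

        def usable_scene_budget_gb(cards_gb, nvlink_pairs=0):
            """The scene must fit on every device, so the smallest budget wins.
            Consumer drivers currently pool at most one NVLinked pair."""
            budgets = sorted(cards_gb)
            pooled = []
            for _ in range(min(nvlink_pairs, 1)):
                if len(budgets) >= 2:
                    pooled.append(budgets.pop() + budgets.pop())
            return min(budgets + pooled)

        cards = [11.0] * 4  # four 2080 Tis; on a live box use vram_per_card_gb()
        print(usable_scene_budget_gb(cards))                  # 11.0 - no pooling
        print(usable_scene_budget_gb(cards, nvlink_pairs=1))  # 11.0 - the two un-linked cards still cap it
        print(usable_scene_budget_gb([11.0, 11.0], 1))        # 22.0 - the "2 cards, circa 22 GB" case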






    • #3
      Also, on a positive note, you'd be amazed what V-Ray GPU can do with limited video RAM. It has all sorts of tricks to offload stuff to main RAM, so if you've got lots of that, you can generally render much more than you'd expect. Just stay away from displacement (for now); it's a killer on GPU.
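
      If you want a feel for how much out-of-core headroom a box has, a quick sanity check is just comparing system RAM to total VRAM. A minimal sketch in Python, assuming the psutil and nvidia-ml-py (pynvml) packages are installed; it only reports the raw numbers and says nothing about what V-Ray will actually manage to offload:

          # Compare total VRAM to system RAM - a rough proxy for out-of-core headroom.
          # Assumes psutil and nvidia-ml-py (pynvml); an estimate only.
          import psutil
          import pynvml

          pynvml.nvmlInit()
          vram_gb = sum(
              pynvml.nvmlDeviceGetMemoryInfo(
                  pynvml.nvmlDeviceGetHandleByIndex(i)
              ).total
              for i in range(pynvml.nvmlDeviceGetCount())
          ) / 1024**3
          pynvml.nvmlShutdown()

          ram_gb = psutil.virtual_memory().total / 1024**3
          print(f"Total VRAM: {vram_gb:.1f} GB, system RAM: {ram_gb:.1f} GB")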



      • #4
        Thanks much for the info. I was unaware that you can only link 2 consumer cards, though I guess with 2 Titans or similar I could get closer to the range of what I typically use in regular RAM. The no-displacement thing is a non-starter for me though; displacement is everywhere in my workflow for both interiors & exteriors. That's the type of thing I wish we had a list for around here: the "this stuff still doesn't work on GPU" list vs. what we know works on CPU without even thinking about it.

        It's really the reason that people like me, & I suspect there's a bunch of us, are not making the jump from CPU to GPU for production purposes, at least for non-product-type shots. The other thing I'm not quite getting is what I would do with one of these systems that both Puget & Boxx are pushing these days: 4- or 8-card GPU systems. If I can't leverage the total amount of VRAM spread across the cards, are these for scenes that fit into, let's say, the 11 GB per card, with each card then rendering a frame of an animation?

        https://www.pugetsystems.com/nav/peak/1U/customize.php

        I hate to think that I'm going to invest in more dual Xeons for node purposes, but working with them & staying on CPU is both predictable & stable, not to mention there's no feature I can't use on CPU. Seems GPU just isn't there yet for my needs: heavier archviz scenes that need displacement & at least 32 GB of RAM, if not more.



        • #5
          Don't misunderstand me: you CAN use displacement. It's just pretty limited, because the displaced mesh has to be generated on the CPU and then loaded in its entirety onto the GPU, which tends to swallow CPU and GPU RAM massively.

          And why sell a machine with 4-8x consumer GPUs? Speed! Not to mention that until recently you could at least use NVLink on multiple pairs; it's a new driver limitation.

          P.S. Threadripper, not dual Xeon.

          P.P.S. I suggest you try GPU. The speed is very impressive, even on an older card (I have a pair of first-generation Titan Xes). Start the job from scratch on GPU (you'll more often than not have issues trying to open a scene set up for CPU), keep things simple (bitmaps rather than procedurals, limited displacement, smaller texture sizes where possible, etc.), and for less massive jobs it can be a winner. Admittedly I often have to swap to CPU halfway through because the job gets too heavy, but when it works, it's great.
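
          On the texture-size point, a throwaway script can flag the worst offenders in a maps folder before the scene ever hits the GPU. A rough sketch, assuming Pillow is installed; the folder path and the 4-bytes-per-pixel figure are illustrative assumptions, not anything V-Ray reports:

              # Flag large bitmaps in a texture folder before trying a GPU render.
              # Assumes Pillow; 4 bytes/pixel is a rough uncompressed-RGBA estimate,
              # not V-Ray's actual on-GPU footprint.
              import os
              from PIL import Image

              TEXTURE_DIR = r"D:\project\maps"   # hypothetical path - change to yours
              LIMIT_MB = 64                      # arbitrary "worth a look" threshold

              for root, _, files in os.walk(TEXTURE_DIR):
                  for name in files:
                      if not name.lower().endswith((".jpg", ".jpeg", ".png", ".tif", ".tiff")):
                          continue
                      path = os.path.join(root, name)
                      try:
                          with Image.open(path) as img:
                              width, height = img.size
                      except OSError:
                          continue  # skip files Pillow can't read
                      est_mb = width * height * 4 / 1024**2
                      if est_mb > LIMIT_MB:
                          print(f"{name}: {width}x{height}, ~{est_mb:.0f} MB uncompressed")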
          Last edited by super gnu; 31-01-2020, 08:03 AM.



          • #6
            Great info.

            Yeah, I get the speed thing on those multi-GPU boxes, but my scenes would never fit into 8 or 11 GB of VRAM. Typically my scenes are close to 30 GB or more on CPU. If I were working on less complicated scenes, perhaps.

            On the Threadrippers, Puget is offering those for workstations but not render nodes; Boxx as well. I've read a number of threads on these forums with people loving their speed, but it seems a lot of folks are having heat issues. For a box whose sole purpose would be to sit under full load most of the time, I'm hesitant on that front. The dual Xeons are not the fastest, but they're rock solid in the dependability/no-overheating category.

            I feel like a year from now all of these issues, from "can't really use displacement on GPU" to "can't add VRAM like that", may be remedied. But it makes switching a workflow & making a big-dollar hardware investment tough.



            • #7
              One thing I'm still unclear on: you mentioned that it used to be possible to add the VRAM across consumer-level cards (up to 2), but that the latest drivers eliminated that feature. Is that feature coming back? With 2 Titans I would have 24 GB x 2, right? That amount of VRAM would be enough to handle the type of scenes I typically work with. It seems odd that Nvidia would purposely delete a feature that I'd assume everyone would want.



              • #8
                You will struggle to do complex archviz on GPU; it fails badly on larger scenes.
                Maybe in the future there will be a way around that limitation, but for now it's really for cars, teapots and those funny scenes with a couch, a few cushions and those fig plants in a Danish warehouse.



                • #9
                  Originally posted by squintnic View Post
                  You will struggle to do complex archviz on GPU; it fails badly on larger scenes.
                  Maybe in the future there will be a way around that limitation, but for now it's really for cars, teapots and those funny scenes with a couch, a few cushions and those fig plants in a Danish warehouse.
                  GPU rendering also works very nicely for train and airplane interiors, where you can use a lot of instances (seats, tables, window panels, ...), so the scene can end up looking quite complex.
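
                  Rough numbers on why instancing helps so much: a seat mesh referenced a few hundred times costs roughly one copy of the geometry plus a small transform per instance, instead of a full copy per seat. A back-of-envelope sketch; the mesh size and counts are made-up example figures, not measured V-Ray numbers:

                      # Back-of-envelope: memory for N unique copies vs. N instances.
                      # All figures are made-up examples, not measured V-Ray numbers.
                      seat_mesh_mb = 40   # one seat's geometry (hypothetical)
                      seats = 300         # seats in a train/plane interior
                      transform_kb = 1    # per-instance transform + overhead (hypothetical)

                      copies_gb = seat_mesh_mb * seats / 1024
                      instanced_gb = (seat_mesh_mb + seats * transform_kb / 1024) / 1024
                      print(f"unique copies: ~{copies_gb:.1f} GB, instanced: ~{instanced_gb:.2f} GB")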
                  www.simulacrum.de ... visualization for designer and architects



                  • #10
                    Alright, thanks for the info, guys. Seems like GPU just isn't there yet for the heavy-lift type scenes.



                    • #11
                      If GPU doesn't suit your heavy-lift scenes, could you send such scenes over to us, so we have real-world samples of the type of scenes our engine struggles with? We're constantly looking for ways to improve it, and sometimes it's little things that take a day or two to fix but that nobody ever reported.
                      Alexander Soklev | Team Lead | V-Ray GPU

