GPU Rendering

  • GPU Rendering

    Hey guys,

    I'm currently in talks with the powers that be at our office about updating our rendering hardware. Obviously the top of my team's priorities is speed, and the top of the directors' priorities is cost/time.

    I've been reluctant to go down the route of GPU rendering up until now because of RAM limitations, especially when you can get such powerful (and relatively cheap) enthusiast CPUs. But now, with the Titan RTX, the RAM limitation has been lifted (provided the cards are run via NVLink).

    I suppose my question is this: if we were to go down the route of GPU rendering, would it be better to build a dedicated GPU render node or to upgrade our individual workstations to (dual) Titans?

    My current workstation is a dual Xeon E5-2630 v4. In terms of the speed increase, does anyone have any kind of GPU vs CPU benchmarks?

    I probably have a dozen other questions, but they can wait for now.

    Many thanks,
    Chris
    Last edited by Macker; 02-04-2019, 01:43 AM.
    Check out my (rarely updated) blog @ http://macviz.blogspot.co.uk/

    www.robertslimbrick.com

    Cache nothing. Brute force everything.

  • #2
    We shy away from publishing comparative benchmarks because results vary quite a lot depending on a number of factors.

    There are still some specific rendering tasks for which CPUs hold their ground while GPUs falter.
    These are very few, admittedly, but they plague not just us, but most other engines too.
    Part of it is the quality of the code, I hear from those who know best, but part of it is inherent in the way GPUs want work to be presented to them.
    In general, divergence is the enemy of GPUs: the stronger it becomes (e.g. more GI bounces, deeper conditional shaders), the harsher the penalty GPUs pay compared to CPUs.
    But as I said, this isn't set in stone, and it varies *a lot* from scene to scene, even when one would think either technology had the upper hand.
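
    To make the divergence point a bit more concrete, here is a minimal CUDA sketch (an illustration only, not V-Ray code; the shade kernel and its inputs are made up). Threads in a warp execute in lockstep, so when a branch splits them, the two paths run one after the other and the whole warp pays for both:

        __global__ void shade(const float* roughness, float* out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            if (roughness[i] < 0.5f) {
                // "cheap" path: threads here sit idle while their
                // warp-mates grind through the expensive path below
                out[i] = roughness[i] * 2.0f;
            } else {
                // "expensive" path: stands in for extra bounces or
                // deeper conditional shader logic
                float acc = 0.0f;
                for (int b = 0; b < 64; ++b)
                    acc += __sinf(roughness[i] * b);
                out[i] = acc;
            }
        }

    The more of this branching a scene pushes into the kernels (and the more uneven the work behind each branch), the further the hardware drifts from its happy path; CPU cores, being independent, shrug it off far more gracefully.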

    The picture (at least for us) is complicated a bit more by the fact that new technologies won't necessarily make it into both engines at the same time (not for lack of trying!), so you may find that for one specific task, where a new tech has freshly been developed, one of the two engines is a lot faster than the other.
    Things are likely to even out, or flip, by the time the next patch comes in.

    The best thing you could do before committing, imo, would be to test your average workloads exceedingly thoroughly with both techs.
    This will make you more aware of the differences between the engines, both in terms of supported techs and in terms of speed on specific tasks, as you will likely need to do some fiddling with your scenes to ensure the best possible results.
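
    If you want to poke at the raw hardware side while you test, wall-clock timing on the GPU is easy to do with CUDA events. Here is a hypothetical, self-contained harness (the busywork kernel is just a stand-in for a real workload; nothing V-Ray specific about it):

        #include <cstdio>
        #include <cuda_runtime.h>

        // Toy kernel standing in for "your workload".
        __global__ void busywork(float* out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                float acc = 0.0f;
                for (int b = 0; b < 256; ++b)
                    acc += __sinf(i * 0.001f + b);
                out[i] = acc;
            }
        }

        int main()
        {
            const int n = 1 << 20;
            float* out = nullptr;
            cudaMalloc(&out, n * sizeof(float));

            // CUDA events bracket the work and report elapsed GPU time.
            cudaEvent_t start, stop;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);

            cudaEventRecord(start);
            busywork<<<(n + 255) / 256, 256>>>(out, n);
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);

            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);
            printf("kernel time: %.3f ms\n", ms);

            cudaEventDestroy(start);
            cudaEventDestroy(stop);
            cudaFree(out);
            return 0;
        }

    But the numbers that matter, in the end, are your scenes, your shaders and your lights, rendered with both engines.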

    Should your workloads fit what the GPU engine (and hardware) allows, rest assured the speed will blow you away (even compared with those little monsters you have as CPUs), whichever upgrade route you choose for GPU rendering.
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #3
      Thanks Lele, very helpful.

      Given that we're an architectural firm, not a visualisation firm, I don't think we'll be going too crazy with shaders/GI bounces, and I'm certainly not averse to simplifying things if need be.

      We do have a 1080 Ti in house (appreciate this isn't as quick as a 2080/Titan RTX) that I could probably run a test on, just to benchmark it against my PC; can I just install V-Ray Standalone as a demo on it? Does that support GPU?



      • #4
        I held off on GPU, too, until a couple of months ago. I was trying to render an interior on CPU and my times were 3+ hours per still. I enabled hybrid rendering and was blown away: with my single 1080 Ti the render time went from 3+ hours to half an hour per still, roughly a 6x speedup. I have since gone to the Titan RTX and I am seeing a 35% increase in speed. I am contemplating a second Titan RTX with NVLink, but that's probably a couple of months away, if I can even do it with my current computer (still up for debate).
        Bobby Parker
        www.bobby-parker.com
        e-mail: info@bobby-parker.com
        phone: 2188206812

        My current hardware setup:
        • Ryzen 9 5900X CPU
        • 128GB Vengeance RGB Pro RAM
        • NVIDIA GeForce RTX 4090 x2
        • Windows 11 Pro



        • #5
          Thanks Bobby, that's very helpful.

          We're all in agreement then that GPU is the way to go for speed!
