Hi guys.
I'm planning a new custom-built dual Xeon 2690 v4 workstation, possibly with dual GPUs as well (Titan X Pascal).
Since our production pipeline still relies on DR CPU rendering, the GPU would be mostly used for preview purposes.
This type of hardware setup generates a lot of heat and consumes a lot of electricity, so the plan is to hook up two water-cooling loops (CPU + GPU).
My question is: would it be better to assemble a separate GPU render node? What would the initial load time of a render job be over a 1 Gb LAN, and how responsive would it be later on when making scene/shader adjustments? Would it be much slower than using the local GPU?
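For what it's worth, my rough math, assuming a scene plus assets of around 3 GB (just a guess for our typical jobs): 3 GB is about 24 Gbit, and a 1 Gb LAN realistically moves around 110 MB/s, so the initial transfer alone would be somewhere in the 25-30 second range before the remote GPU even starts, plus whatever re-sync time is needed on every asset change. Feel free to correct me if the real behaviour is different.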
Is it even possible to launch an RT render without using the local machine/GPU?
Last question: are two Titan Xs a waste of money, since the VRAM doesn't add up across cards? Would it perhaps be better to use e.g. a 1080 as the primary card and the Titan for RT? I'm talking preview renders/basic lighting setups/material and shader testing.
cheers!