I've just added a new machine to my setup and have been doing some testing using the latest project I worked on, and I'm getting surprising results that are leaving me a bit disappointed!
My first PC, a 5960x, has 4 x 1080 Tis, and before I built my new PC it was rendering this project locally on all 4 GPUs in 1m 56s.
My new machine (a 7980xe), which has become my main machine, has 2 x 1080 Tis, and it was RT rendering the same project locally on its 2 GPUs in 2m 11s.
Now, they are connected via a very simple Windows HomeGroup network, and the same scene yields surprising results:
When both machines work together (distributed rendering) in RT mode on GPUs alone (all 4 + 2), the render time is 1m 51s.
That's only marginally quicker than the 5960x on its own.
When both render with GPU *and* CPU, the time is fractionally better, by around 1 or 2 seconds.
But when I retried rendering via the network with all devices on the 7980xe disabled, so only the 5960x was working as a slave, the render took just 1m 46s.
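To put a number on how far off ideal scaling that is, here's a quick back-of-the-envelope calc (a sketch in Python, assuming the ideal case where the two machines' throughputs simply add up, which distributed rendering never quite achieves in practice):

```python
# Measured single-machine render times for the same scene (seconds)
t_5960x  = 1 * 60 + 56   # 4x 1080 Ti -> 116 s
t_7980xe = 2 * 60 + 11   # 2x 1080 Ti -> 131 s

# Under ideal scaling the throughputs (scenes/second) add, so the
# combined time is the harmonic combination of the two times.
t_ideal = 1 / (1 / t_5960x + 1 / t_7980xe)

t_measured = 1 * 60 + 51  # actual distributed render -> 111 s

print(f"ideal combined time: {t_ideal:.1f} s")             # ~61.5 s
print(f"measured time:       {t_measured} s")
print(f"scaling efficiency:  {t_ideal / t_measured:.0%}")  # ~55%
```

By that measure the distributed render is only reaching around 55% of what the combined hardware should manage, which matches the feeling that something is bottlenecking the job distribution.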
As I was watching the temperatures etc. via the Corsair Link software, I could see the following:
When all 6 GPUs were being used to create the scene, they weren't being stressed and temperatures were in the mid-50s to mid-60s.
But when only one machine was doing the rendering, the GPU temperatures were much higher (around 70+, which is what I'm used to seeing when rendering).
It looks to me as though the more GPUs you have, the less stress each GPU is put under, at the expense of render times.
If I use just one machine, it's as though *then* the GPUs are being pushed.
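Temperature is only a rough proxy for load, so to confirm the cards really are sitting partly idle during a distributed render, something like this (a sketch that polls nvidia-smi, which ships with the NVIDIA driver) can log actual GPU utilization on each node:

```python
import subprocess
import time

# Poll each GPU's index, utilization and temperature once per second.
# Run this on both machines during a distributed render to see whether
# the cards are actually busy or waiting on work from the network.
QUERY = [
    "nvidia-smi",
    "--query-gpu=index,utilization.gpu,temperature.gpu",
    "--format=csv,noheader",
]

while True:
    out = subprocess.run(QUERY, capture_output=True, text=True).stdout
    print(out.strip())  # one line per GPU, e.g. "0, 54 %, 61"
    time.sleep(1)
```

If utilization hovers well below 100% with all 6 GPUs active but pegs at 100% on a single machine, that would confirm the bottleneck is in distributing the work rather than in the GPUs themselves.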