Not sure if this is a current limitation of V-Ray GPU, but the other day I noticed that GPUs don't scale as linearly with V-Ray as I would expect. My workstation has 3x 980 Tis, soon 4 or more. So I set up a very basic scene and compared render times per GPU combo, and also compared against the Redshift renderer. While Redshift scales almost linearly, which is what I'd expect from a GPU renderer, V-Ray not so much.
Here are the results (render time on the left, relative scaling percentage on the right):

V-Ray:
3 GPUs - 00:31.5 - 100%
2 GPUs - 00:42.5 - ~34.9%
1 GPU  - 01:04.2 - ~33.6%

Redshift:
3 GPUs - 00:38 - 100%
2 GPUs - 00:57 - ~50%
1 GPU  - 01:46 - ~46%
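One way to make the comparison concrete is to compute scaling efficiency: speedup over a single GPU, divided by the GPU count (1.0 would be perfectly linear). A quick sketch using the times above, converted to seconds:

```python
def scaling_efficiency(t1, tn, n):
    """Speedup over 1 GPU, divided by GPU count (1.0 = perfect linear scaling)."""
    return (t1 / tn) / n

# Times from the results above, in seconds (1 GPU vs 3 GPUs).
vray_eff = scaling_efficiency(64.2, 31.5, 3)   # V-Ray: 01:04.2 -> 00:31.5
rs_eff = scaling_efficiency(106.0, 38.0, 3)    # Redshift: 01:46 -> 00:38
print(f"V-Ray 3-GPU efficiency:    {vray_eff:.0%}")
print(f"Redshift 3-GPU efficiency: {rs_eff:.0%}")
```

That works out to roughly 68% efficiency for V-Ray versus roughly 93% for Redshift on this scene, which matches the "almost linear vs not so much" impression.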
Is there any reason for this massive difference in scaling?
I want to get the most out of my GPUs, so the only thing I can think of with V-Ray is to somehow render one frame of the sequence per GPU, instead of assigning all 3 GPUs to a single frame (I don't own Deadline...). How would I go about that?
But even with that workaround, for still images there's still a big hit in performance.
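For the frame-per-GPU idea without Deadline, one approach is to split the frame range into chunks and launch one render process per GPU. This is only a sketch under assumptions: `CUDA_VISIBLE_DEVICES` is a standard NVIDIA environment variable for pinning a CUDA process to a device, but whether your particular V-Ray build honours it is not something I can confirm, and `render_cmd` below is a hypothetical placeholder for whatever actually launches your render from the command line (e.g. a 3ds Max batch render call).

```python
import os
import subprocess

def chunks(first, last, n):
    """Split the inclusive frame range [first, last] into n contiguous chunks."""
    total = last - first + 1
    step = -(-total // n)  # ceiling division
    out = []
    for i in range(n):
        lo = first + i * step
        hi = min(lo + step - 1, last)
        if lo <= hi:
            out.append((lo, hi))
    return out

def render_per_gpu(render_cmd, first, last, num_gpus):
    """Launch one render process per GPU, each on its own frame chunk.

    render_cmd is a placeholder argument list -- substitute whatever
    actually starts your V-Ray render; the -start/-end flags here are
    illustrative, not real V-Ray options.
    """
    procs = []
    for gpu, (lo, hi) in enumerate(chunks(first, last, num_gpus)):
        env = os.environ.copy()
        env["CUDA_VISIBLE_DEVICES"] = str(gpu)  # pin this process to one GPU
        cmd = render_cmd + ["-start", str(lo), "-end", str(hi)]
        procs.append(subprocess.Popen(cmd, env=env))
    for p in procs:
        p.wait()  # block until every chunk has finished

# Example: render_per_gpu(["my_render_cmd"], 0, 299, 3) would split
# frames 0-299 into 0-99, 100-199, 200-299, one GPU each.
print(chunks(0, 299, 3))
```

The idea is just that three single-GPU renders running side by side avoid whatever per-frame overhead is hurting V-Ray's multi-GPU scaling, at the cost of only helping on animations, not stills.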