How many Titan cards will be needed to match the rendering times of a dual Xeon system (E5-2640, 2.5 GHz) with a clean result?
Multiple GPU vs dual Xeon system?
-
Isn't GPU rendering like a zillion times faster than a CPU? (It's just that certain features aren't supported yet.)

Kind Regards,
Richard Birket
----------------------------------->
http://www.blinkimage.com
----------------------------------->
-
It's a valid question. The only thing currently stopping me from recommending that the practice I work for switch to GPU rendering is the amount of RAM on board the GPU. We'd need 12 GB minimum, really. Everything else that we use in everyday practice is supported by RT now, even Forest Pro.

Check out my (rarely updated) blog @ http://macviz.blogspot.co.uk/
www.robertslimbrick.com
Cache nothing. Brute force everything.
-
Originally posted by Macker:
We'd need 12 GB minimum, really. Everything else that we use in everyday practice is supported by RT now, even Forest Pro.
To go back to the original question, it very much depends on the particular scene - i.e. interior/exterior; image resolution; material complexity etc. For many things, especially exteriors, the Titan is insanely fast (compared to a brute force approach on the CPU of course).
Best regards,
Vlado

I only act like I know everything, Rogers.
-
Originally posted by Macker:
Everything else that we use in everyday practice is supported by RT now, even Forest Pro.

Kind Regards,
Richard Birket
-
Hi Vlado, thanks for your input. What I'd like to know is this: how many Titan cards would be needed to match the rendering times obtained with the biased combination of irradiance map/light cache on a dual Xeon E5-2640? I know it may be difficult to estimate because of the different approaches to solving the GI, but even a rough idea would be great. We do mostly office interior renderings @ 3500px by 2000px.
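Once real timings exist, a comparison like this comes down to a simple throughput ratio. Here is a hypothetical back-of-envelope sketch; the numbers and the near-linear multi-GPU scaling assumption are purely illustrative, not measured benchmarks:

```python
import math

# Hypothetical estimate: how many GPUs are needed to match a CPU render time,
# assuming near-linear scaling across cards (a big assumption in practice).
def cards_needed(cpu_minutes, one_card_minutes):
    """Smallest card count whose combined render time is <= the CPU time."""
    return math.ceil(one_card_minutes / cpu_minutes)

# Illustrative numbers only -- real timings have to come from an actual
# test scene rendered both ways.
print(cards_needed(cpu_minutes=60, one_card_minutes=150))  # -> 3
```

In practice multi-GPU scaling is rarely perfectly linear, so a measured figure from a representative scene is the only trustworthy answer.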
-
Well, send a scene set up with the production V-Ray to vlado@chaosgroup.com and I will run some tests. We can put a number of cards and CPUs on it and see how it goes.
Best regards,
Vlado
-
I shall be watching this thread with bated breath, eager to see some results.
-
Originally posted by tricky:
But not MultiTexture or BerconTile...

C'mon guys. Who do I need to pester? Mr Bercon? Mr MultiTexture? Are there 'similar' tools available that are supported by RT?
-
Originally posted by tricky:
Isn't GPU rendering like a zillion times faster than a CPU?
Note that in the kitchen curtain scene I posted, the number of paths traced per pixel was a whopping 12,040 - many, many more than most images I render with RT/GPU. In such cases, greatly reducing the number of paths to trace using biasing techniques can reduce noise in a very fast, efficient way, whereas our unbiased GPU renderer still traces every single ray in the scene to get a solution.
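The reason that path count matters so much is the standard Monte Carlo result that noise falls only as the square root of the number of paths traced. A rough sketch of that relationship (`relative_noise` is an illustrative helper, not a V-Ray function):

```python
import math

# Monte Carlo noise falls roughly as 1/sqrt(paths per pixel):
# quadrupling the path count only halves the noise.
def relative_noise(paths_per_pixel):
    """Estimated noise level relative to a single-path-per-pixel render."""
    return 1.0 / math.sqrt(paths_per_pixel)

print(round(relative_noise(4), 4))      # 4x the paths -> half the noise: 0.5
print(round(relative_noise(12040), 4))  # the figure quoted above: ~0.0091
```

This is why biasing techniques that sidestep brute-force path counts (like the irradiance map/light cache combination mentioned earlier) can be so much faster on interiors.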
At the current time, GPUs are just getting strong enough to begin to challenge the CPU/biased renderer for interior shots with lots of bounced GI to compute. In this thread's case, it would be good to get a typical shot from the OP and test it on the GPU and on the CPU/biased renderer with specific CPUs, to see exactly where he stands with the technology based on his personal needs. He is in the same basic place as you, Tricky. It would be interesting to see the results of such a test...
-Alan