So I'm trying out Amazon's EC2 machines as a remote render farm and trying to gauge their speed against the dual Xeon machines I use locally. I fired up an M3 Extra Large instance, which has 13 compute units and 15 GB of RAM. I've only done one initial test so far, on a simple scene of teapots. My workstation is an Intel i7 2600K (3.4 GHz) with 16 GB of RAM, and my two render slaves are dual Xeon 2.4 GHz boxes with 8 GB of RAM each. Here are the results:
Workstation = 44 secs
Slave1 = 1 min 11 secs
Slave2 = 1 min 9 secs
M3 XL = 2 min 9 secs
Amazon says 1 compute unit is roughly 1 GHz from a CPU circa 2006. So to me, this M3 XL instance should be the equivalent of 13 GHz of a six-year-old machine, or perhaps 4 x 3 GHz machines. However, the instance only exposes 4 cores (so 4 buckets). Regardless of how this translates, it seems to me like 13 CUs should be much faster! Instead it's almost three times slower than my workstation. Is there really THAT much difference between CPUs of 2011 and CPUs of 2006?
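For what it's worth, here's the quick back-of-envelope comparison I've been doing in Python, just normalizing my own test times above against the workstation (nothing official from Amazon in here):

# Rough relative-speed comparison using the teapot test times above.
# Times are in seconds; a ratio above 1.0 would mean faster than the
# workstation, below 1.0 means slower.

render_times = {
    "Workstation i7 2600K (3.4 GHz)": 44,
    "Slave1 dual Xeon 2.4 GHz": 71,
    "Slave2 dual Xeon 2.4 GHz": 69,
    "EC2 M3 XL (13 ECU, 4 cores)": 129,
}

baseline = render_times["Workstation i7 2600K (3.4 GHz)"]
for name, secs in render_times.items():
    print(f"{name}: {baseline / secs:.2f}x workstation speed")

That puts the M3 XL at roughly 0.34x my workstation, which is where the "almost three times slower" comes from.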
I'm really trying to come up with a semi-accurate way to predict render times and ultimately pricing for future animation projects. I'm curious what the rest of you have seen.
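Assuming straightforward per-instance-hour pricing, the kind of estimate I'm after would look something like the sketch below. The frame count, hourly rate, and instance count are made-up placeholder numbers, not real quotes, and it ignores per-hour billing rounding and data-transfer costs:

# Very rough cost/time estimate for a hypothetical shot rendered on EC2.
# Every input here is an assumption to be swapped for real numbers.

frames = 1800            # e.g. a 75-second shot at 24 fps (assumed)
secs_per_frame = 129     # measured M3 XL time from the teapot test
instance_hourly = 0.50   # assumed on-demand $/hour, not a real quote
instances = 10           # number of instances running in parallel (assumed)

total_cpu_hours = frames * secs_per_frame / 3600.0
wall_clock_hrs = total_cpu_hours / instances
cost = total_cpu_hours * instance_hourly

print(f"~{wall_clock_hrs:.1f} h wall clock on {instances} instances, ~${cost:.2f} total")

With those placeholder numbers it comes out to about 6.5 hours of wall-clock time and roughly $32, but the point is the method, not the figures.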