GeForce 580 announced
-
Saw that. $590 for the EVGA variant. For that price, I'm not thrilled with its specs for rendering purposes. Although it has 512 CUDA cores, the clock speed is 772 MHz: 512 x 772 = 395,264.
For comparison's sake, the GTX 470 is less than half the price and has 448 CUDA cores with a clock speed of 1.215 GHz: 448 x 1,215 = 544,320.
I realize it can be overclocked, but that is probably a good way to fry a gaming card being used for GPU computing.
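The cores-times-clock product used above is only a rough relative-throughput heuristic (it ignores architecture, memory bandwidth, and driver differences), but it can be sketched as a quick comparison script. The specs below are the figures quoted in this thread, not independently verified numbers:

```python
# Rough throughput heuristic: CUDA cores x clock (MHz), as used in the
# post above. Specs are the figures quoted in this thread; real rendering
# performance also depends on architecture, memory bandwidth, etc.
cards = {
    "GTX 580": {"cores": 512, "clock_mhz": 772},
    "GTX 470": {"cores": 448, "clock_mhz": 1215},
}

def throughput(card):
    """Return cores x clock, a crude relative-throughput score."""
    return card["cores"] * card["clock_mhz"]

for name, spec in cards.items():
    print(f"{name}: {throughput(spec):,}")
```

By this crude measure the cheaper 470 comes out ahead, which is exactly the point being argued in the post; the follow-up posts below question which clock figure is the right one to use.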
-
EVGA's superclocked 470 is listed at 625 MHz vs. the 580 superclocked at 797 MHz? Where did you get the 1.215 GHz?
www.dpict3d.com - "That's a very nice rendering, Dave. I think you've improved a great deal." - HAL9000... At least I have one fan.
-
Originally posted by vlado View Post
To be honest I too am somewhat confused between "graphics clock" and "processor clock"...
Best regards,
Vlado
But I was wrong about one thing. The processor clock on the 580 is 1544 MHz, so it is faster. Still, that's not enough to warrant a 100%+ markup over the 470. For the extra $600 it would cost to put two of those in compared to two GTX 470s, I could build another render node and have 2-3 more PCIe x16 slots.
Last edited by dpmitchell; 09-11-2010, 12:03 PM.
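Redoing the comparison with the processor-clock figures discussed above (1544 MHz for the 580, 1215 MHz for the 470) gives a price-per-throughput angle on the same argument. The $590 price comes from this thread; the GTX 470 price below is an assumed placeholder ("less than half the price"), not a verified figure:

```python
# Price-per-throughput comparison using the "processor clock" figures
# discussed above. The $590 price for the 580 is quoted in the thread;
# the GTX 470 price below is an assumed placeholder, not a quoted figure.
cards = [
    # (name, CUDA cores, processor clock in MHz, assumed price in USD)
    ("GTX 580", 512, 1544, 590.0),
    ("GTX 470", 448, 1215, 290.0),  # assumed ~half the 580's price
]

for name, cores, clock_mhz, price in cards:
    score = cores * clock_mhz  # crude throughput score, as above
    dollars_per_unit = price / (score / 1_000_000)
    print(f"{name}: score={score:,}, ${dollars_per_unit:.2f} per million core-MHz")
```

Even with the 580's higher score (512 x 1544 = 790,528 vs. 448 x 1215 = 544,320), the cost per unit of this crude score still favors the 470 under the assumed pricing, which matches the conclusion in the post.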
-
Originally posted by vlado View Post
True, however the 580 has 3 GB of RAM...
Best regards,
Vlado
-
I am so intrigued by GPU rendering... I have been chasing this technology for years, going back to nvidia's amaretto and gelato, the abandoned realtime rendering projects such as RT Squared and Click VR, and more recently the empty promise of "Renderboost" from Cebas partner Progeniq (who admitted to me that, despite the fluff and boasting on their site and on Cebas' site, every other card they produced experienced random failures in calculating GI; it was just not stable or close to ready).
I am like a kid in a candy store with iRay and V-Ray RT. Nvidia's promise of CUDA rendering and the hard work of Chaos Group and Mental Mill have finally accomplished what once seemed like a myth... the holy grail of 3D content creation!
So I think that within the next few months I will become so wrapped up in testing this out that I'll probably acquire and test Fermi cards on Nvidia's GF100 and GF104 architectures (the GTX 470, the new GTX 475 and 580), and a Tesla card.
There's such a vacuum of info on a lot of this... hopefully I can help out and shed some light on it.
-
BTW, check out this page on the Autodesk blog -- it has videos demonstrating iRay GPU rendering: one rendering a scene with 3 Fermi cards, and another rendering the same scene with Mental Images' RealityServer (cloud) running 32 Tesla M2050 GPUs.
Awesome to see where this technology is going. I can't wait for the Vray production renderer that incorporates GPU rendering.
http://area.autodesk.com/blogs/ken/m..._and_the_cloud
-
Originally posted by dpmitchell View Post
I'm not really sure how much that will affect rendering performance from the CUDA processor cores. I think you would need to max out the dedicated RAM on the GPU, and push system memory to its limits, before that extra dedicated GPU RAM would be a difference maker. If you're running 12-24 GB of system RAM and 1 GB per GTX 470, I have a hard time believing that an extra couple GB of dedicated GPU RAM would make much of a difference, if any, in GPU rendering.
-
Originally posted by dlparisi View Post
The entire dataset has to fit in the RAM of one GPU, i.e. your system RAM doesn't matter. If you have two 470s with 1.5 GB each, you're limited to 1.5 GB max; if you have a 3 GB card then you can max out at 3 GB.
I will just have to do it anecdotally. The intentional gaps and disparities in the spec data Nvidia provides from product line to product line make a theoretical discussion just that. Nothing gets answers better than plugging each one in and giving it a whirl.
Last edited by dpmitchell; 10-11-2010, 08:58 AM.
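The constraint dlparisi describes above (the whole scene must fit in a single GPU's memory; per-card memory is not pooled, and system RAM does not help) can be sketched as a quick feasibility check. The card sizes below are illustrative assumptions, not specs from the thread:

```python
# Feasibility check for GPU rendering memory, per the constraint quoted
# above: the scene must fit in the dedicated RAM of EACH GPU you want to
# use -- memory is not pooled across cards, and system RAM doesn't count.
def fits_on_gpus(scene_gb, gpu_mem_gb):
    """Return the indices of the GPUs whose memory can hold the scene."""
    return [i for i, mem in enumerate(gpu_mem_gb) if scene_gb <= mem]

# Illustrative (assumed) setup: two 1.28 GB GTX 470s plus one 3 GB card.
gpus = [1.28, 1.28, 3.0]
scene = 2.0  # scene size in GB

usable = fits_on_gpus(scene, gpus)
print(f"GPUs that can render this scene: {usable}")
```

With these assumed numbers, a 2 GB scene can only be rendered on the 3 GB card, no matter how many smaller cards or how much system RAM the machine has.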
-
Originally posted by Amplified View Post
Where did you guys see a GeForce 580 with 3 GB of memory..? Have only found 1.5 GB versions.
Which raises the question: what are they going to do with the Quadros they released this year based on GF100 Fermi? If you pop the heatsink off the Quadro 5000, it says GF100 right on the chip, meaning it is essentially a GTX 470 with disabled CUDA cores (probably by disabling one of the multiprocessors), a lower clock speed to reduce power draw, and special Quadro drivers.
What I wouldn't give to pick up one of the GTX 580s on the new GF110 architecture and tweak the drivers to "make" a Quadro on GF110. I have searched high and low, and although people claim this GTX-to-Quadro driver swap is possible, I have not seen anyone explain how to do it. Any ideas?
That being said, apparently Nvidia's GTX 460 chip is based on a hybrid architecture called GF104, which was intended to correct many of the problems of GF100 on the way to the release of GF110. Those chips boast higher CUDA clock speeds, lower temperatures, and lower power draw than the GF100 variants. Might be worth looking into, as the next iteration of that chip will be the GTX 5x0 and will carry a much heftier price tag. Those will start to roll out as the supply of GTX 460s dwindles. The prices on both the 470 and the 460 are dropping to bring that about.
Last edited by dpmitchell; 12-11-2010, 09:53 AM.