GeForce 580 announced

  • GeForce 580 announced

    http://www.geforce.com/#/Hardware/GP...x-580/overview

    Best regards,
    Vlado
    I only act like I know everything, Rogers.

  • #2
    Saw that. $590 for the EVGA variant. For that price, I'm not thrilled with its specs for rendering purposes. Although it has 512 CUDA cores, the clock speed is 772 MHz. 512 x 772 = 395,264.

    For comparison's sake, the GTX 470 is less than half the price, and has 448 CUDA cores with a clock speed of 1.215 GHz. 448 x 1215 = 544,320.

    I realize it can be overclocked, but that is probably a good way to fry a gaming card being used for GPU computing.
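
    For anyone who wants to plug in other cards, here is the same arithmetic as a rough sketch in Python (the specs are just the figures quoted above; it's a crude cores-times-clock proxy, not a benchmark):

    # Crude throughput proxy used above: CUDA cores x clock (MHz).
    # Specs are the ones quoted in this thread, nothing official.
    cards = {
        "GTX 580 (EVGA)": {"cores": 512, "clock_mhz": 772},   # $590
        "GTX 470":        {"cores": 448, "clock_mhz": 1215},
    }

    for name, spec in cards.items():
        proxy = spec["cores"] * spec["clock_mhz"]
        print(f"{name}: {spec['cores']} x {spec['clock_mhz']} MHz = {proxy:,}")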

    • #3
      EVGA's Superclocked 470 is listed at 625 MHz vs. the Superclocked 580 at 797 MHz? Where did you get the 1.215 GHz figure?
      www.dpict3d.com - "That's a very nice rendering, Dave. I think you've improved a great deal." - HAL9000... At least I have one fan.

      • #4
        To be honest, I too am somewhat confused about the difference between the "graphics clock" and the "processor clock"...

        Best regards,
        Vlado
        I only act like I know everything, Rogers.

        • #5
          Originally posted by vlado View Post
          To be honest, I too am somewhat confused about the difference between the "graphics clock" and the "processor clock"...

          Best regards,
          Vlado
          The "processor clock" on the Fermi cards is the frequency of the CUDA cores, which is what we care about for this discussion. The graphics clock bears on DirectX performance, which is irrelevant for GPU computing.

          But I was wrong about one thing: the processor clock on the 580 is 1544 MHz, so it is faster. Still not enough, though, to warrant a 100%+ markup over the 470. For the extra $600 it would cost to put two of those in compared to two GTX 470s, I could build another render node and have 2-3 more PCIe x16 slots.
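
          Re-running the same back-of-the-envelope with the processor clocks, plus a rough dollars-per-throughput figure (a sketch only; the $290 for the 470 is an assumed street price, since only "less than half the price" was mentioned earlier):

          # cores x processor clock, and price per million "units" of that proxy.
          # The $290 price for the GTX 470 is an assumption, not a quoted price.
          cards = {
              "GTX 580": {"cores": 512, "proc_mhz": 1544, "price_usd": 590},
              "GTX 470": {"cores": 448, "proc_mhz": 1215, "price_usd": 290},  # assumed
          }

          for name, spec in cards.items():
              proxy = spec["cores"] * spec["proc_mhz"]
              per_million = spec["price_usd"] / (proxy / 1_000_000)
              print(f"{name}: proxy {proxy:,}, ~${per_million:.0f} per million")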
          Last edited by dpmitchell; 09-11-2010, 12:03 PM.

          • #6
            True; however, the 580 has 3 GB of RAM...

            Best regards,
            Vlado
            I only act like I know everything, Rogers.

            • #7
              Originally posted by vlado View Post
              True; however, the 580 has 3 GB of RAM...

              Best regards,
              Vlado
              I'm not really sure how much that will affect rendering performance from the CUDA processor cores. I think you would need to max out the dedicated RAM on the GPU, and push the system memory to its limits, before that extra dedicated GPU RAM would make a difference. If you're running 12-24 GB of system RAM and 1 GB per GTX 470, I have a hard time believing that an extra couple of GB of dedicated GPU RAM would make much of a difference, if any, in GPU rendering.

              • #8
                Speed-wise, it will not help, of course, but you will be able to fit larger datasets on the GPU.

                Best regards,
                Vlado
                I only act like I know everything, Rogers.

                • #9
                  I am so intrigued by GPU rendering...I have been chasing this technology for years, going back to Nvidia's Amaretto and Gelato, the abandoned real-time rendering projects such as RT Squared and Click VR, and more recently the empty promise of "Renderboost" from Cebas partner Progeniq (who admitted to me that, despite the fluff and boasting on their site and on Cebas' site, every other card they produced experienced random failures in calculating GI, and it was just not stable or close to ready).

                  I am like a kid in a candy store with iRay and V-Ray RT. Nvidia's promise of CUDA rendering and Chaos Group/Mental Mill's hard work have finally accomplished what once seemed like a myth...the holy grail of 3D content creation!

                  So I think that within the next few months I will become so wrapped up in testing this out that I'll probably acquire and test Fermi cards on Nvidia's GF100 & GF104 architectures (the GTX 470, the new GTX 475 & 580), and a Tesla card.

                  There's such a vacuum of info on a lot of this...hopefully I can help out and shed some light on it.

                  • #10
                    BTW, check out this page on the Autodesk blog -- it has videos demonstrating iRay GPU rendering: one rendering a scene with 3 Fermi cards, and another rendering the same scene with mental images' RealityServer (cloud), which has 32 Tesla M2050 GPUs.

                    Awesome to see where this technology is going. I can't wait for the V-Ray production renderer that incorporates GPU rendering.

                    http://area.autodesk.com/blogs/ken/m..._and_the_cloud

                    • #11
                      Originally posted by dpmitchell View Post
                      I'm not really sure how much that will affect rendering performance from the CUDA processor cores. I think you would need to max out the dedicated RAM on the GPU, and push the system memory to its limits, before that extra dedicated GPU RAM would make a difference. If you're running 12-24 GB of system RAM and 1 GB per GTX 470, I have a hard time believing that an extra couple of GB of dedicated GPU RAM would make much of a difference, if any, in GPU rendering.
                      The entire dataset has to fit in the RAM on one GPU, i.e. your system RAM doesn't matter. If you have two 470s with 1.5 GB each, you're limited to 1.5 GB max; if you have a 3 GB card, then you can max out at 3 GB.
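
                      To make that concrete, here's a quick sketch of what "fitting on one GPU" means (Python; the bytes-per-triangle and bytes-per-texel numbers are rough assumptions made up for the illustration, not V-Ray RT figures):

                      # Will a scene fit in a single GPU's dedicated RAM?
                      # Per-triangle/per-texel costs below are assumed, illustrative numbers.
                      BYTES_PER_TRIANGLE = 100      # assumed: vertices, normals, acceleration structure
                      BYTES_PER_TEXEL = 4           # assumed: uncompressed 8-bit RGBA

                      def scene_fits(num_triangles, texture_megapixels, gpu_ram_gb):
                          geometry = num_triangles * BYTES_PER_TRIANGLE
                          textures = texture_megapixels * 1_000_000 * BYTES_PER_TEXEL
                          return geometry + textures <= gpu_ram_gb * 1024**3

                      # 10 million triangles plus 500 MP of textures:
                      print(scene_fits(10_000_000, 500, 1.5))   # False - too big for a 1.5 GB 470
                      print(scene_fits(10_000_000, 500, 3.0))   # True  - fits on a 3 GB card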
                      www.dpict3d.com - "That's a very nice rendering, Dave. I think you've improved a great deal." - HAL9000... At least I have one fan.

                      • #12
                        Originally posted by dlparisi View Post
                        The entire dataset has to fit in the RAM on one GPU, i.e. your system RAM doesn't matter. If you have two 470s with 1.5 GB each, you're limited to 1.5 GB max; if you have a 3 GB card, then you can max out at 3 GB.
                        My understanding of the CUDA processing cycle is that system RAM determines how much data can be staged for the GPU, which is why Nvidia recommends a minimum of 4 GB of system RAM for each Tesla card in GPU computing environments. The CPU then issues processing commands to the GPU; the actual processing at the CUDA cores is limited on each GPU by that GPU's dedicated RAM, and the results are sent back to system RAM. But since this is a fluid process, the bottleneck created by the dedicated RAM would not, in my mind, be a major detriment as far as 1.5 GB vs. 3 GB of GPU RAM goes...unless you are talking about MASSIVE data sets. I, for one, am not attempting to process the amount of data that SETI, for example, is trying to process.
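
                        If it helps, here is a minimal sketch of that cycle with PyCUDA (assuming PyCUDA and a CUDA-capable card are available; it just illustrates the system-RAM-to-GPU-RAM round trip, not anything V-Ray RT actually does internally):

                        import numpy as np
                        import pycuda.autoinit              # sets up a CUDA context on the first GPU
                        import pycuda.driver as cuda
                        import pycuda.gpuarray as gpuarray

                        # How much dedicated GPU RAM is actually available right now?
                        free_b, total_b = cuda.mem_get_info()
                        print(f"GPU RAM: {free_b / 1024**3:.2f} GiB free of {total_b / 1024**3:.2f} GiB")

                        host_data = np.random.rand(1_000_000).astype(np.float32)  # staged in system RAM
                        gpu_data = gpuarray.to_gpu(host_data)                     # copied into GPU RAM
                        result = (gpu_data * 2.0).get()                           # computed on the CUDA cores, copied back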

                        I will just have to test it empirically. The intentional gaps and disparities in the specs Nvidia provides from product line to product line make a theoretical discussion just that. Nothing gets answers better than plugging each one in and giving it a whirl.
                        Last edited by dpmitchell; 10-11-2010, 08:58 AM.

                        • #13
                          Where did you guys see a GeForce 580 with 3 GB of memory? I've only found 1.5 GB versions.

                          • #14
                            Originally posted by Amplified View Post
                             Where did you guys see a GeForce 580 with 3 GB of memory? I've only found 1.5 GB versions.
                             I don't know, you're right...the specs clearly say it's got 1.5 GB. But after doing quite a bit more reading, the 580 is based on a new Fermi revision called GF110. The old Fermi chips in the GTX 400 series were based on GF100, which was plagued with heat problems and high power draw. This new GF110 is apparently a refined version of Fermi, and the benchmarkers are raving over it.

                             Which raises the question...what are they going to do with the Quadros they just released this year that are based on GF100 Fermi? If you pop the heatsink off a Quadro 5000, it says GF100 right on the chip, meaning it is essentially a GTX 470 with some CUDA cores disabled (probably by disabling one of the multiprocessors), a lower clock speed to reduce power draw, and special Quadro drivers.

                             What I wouldn't give to pick up one of the GTX 580s on the new GF110 architecture and tweak the drivers to "make" a Quadro on GF110. I have searched high and low, and although people claim this GTX-to-Quadro driver swap is possible, I have not seen anyone explain how to do it. Any ideas?

                             That being said, Nvidia's GTX 460 chip is apparently based on a hybrid architecture called GF104, which was intended to correct many of the problems of GF100 on the way to the release of GF110. GF104 cards boast higher CUDA clock speeds, lower temperatures, and lower power draw than the GF100 variants. Might be worth looking into, as the next iteration of that chip will be a GTX 5x0 and it will carry a much heftier price tag. Those will start to roll out when the supply of GTX 460s dwindles. The prices on both the 470 and the 460 are dropping to bring that about.
                            Last edited by dpmitchell; 12-11-2010, 09:53 AM.

                            • #15
                              Heh, Nvidia is only better when you're using iRay, not V-Ray. When you switch to OpenCL, ATI is the better choice, in my opinion...
