Graphics card RAM


  • Graphics card RAM

    Is everything calculated via the video card's RAM? i.e. are textures loaded onto the card, and is that why at least 1GB is recommended?

    Is there anything stored on the CPU or in system memory?

    Thanks

  • #2
    The entire scene, including texture maps, has to fit in the RAM of each GPU for it to render. For this reason, I'm sticking with 2GB as a minimum.

    Presently, as far as RT/GPU rendering is concerned, the system RAM is not an issue.
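
    If you want a rough sanity check on NVIDIA cards, you can compare an estimate of your scene's footprint against the per-GPU memory that nvidia-smi reports. This is only a sketch of my own, not a Vray feature: it assumes nvidia-smi is on the PATH, and SCENE_ESTIMATE_MIB is a placeholder you would fill in with your own guess.

    # Compare an estimated scene footprint against each GPU's reported memory.
    import subprocess

    SCENE_ESTIMATE_MIB = 2048  # hypothetical estimate (geometry + textures), in MiB

    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,memory.total,memory.free",
         "--format=csv,noheader,nounits"],
        text=True,
    )

    for line in out.strip().splitlines():
        index, total, free = (int(v) for v in line.split(", "))
        verdict = "should fit" if SCENE_ESTIMATE_MIB <= free else "will NOT fit"
        print(f"GPU {index}: {free} MiB free of {total} MiB -> scene {verdict}")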

    -Alan

    • #3
      I thought so, just wanted to make sure. Thanks.

      • #4
        What happens if my scene is 3GB? Does 2GB go into the graphics card's memory and the remaining 1GB into system memory? I assume the graphics card would then need to talk to the CPU to fetch that remaining 1GB, and it would therefore render more slowly than if everything fit on the card.

        Or does it just not render at all? I have just opened one of my Max scenes and it takes up 4GB of system memory. I'm concerned that if I go out and buy a GTX 590 it won't fit on the card.

        • #5
          As far as I know, it just won't render.

          Go get the GTX 580 3GB instead.

          • #6
            The 590 is 3GB, but if my scene is 4GB, have I wasted my money? If a scene takes up 4GB of system RAM, would it require 4GB of video RAM?

            • #7
              If your scene requires more RAM than is available to each GPU, then it will not render and may actually crash Vray. In my tests, I think most of the time it just sat there, but sometimes I had to restart Vray stand-alone.

              Yes, if your scene requires 4GB of RAM to render, each GPU needs 4GB to work with to render the scene.

              If your entire scene does not fit in each GPU's RAM it will not render, but is not necessarily a waste! RT/GPU is currently mostly used as a quick way to set up lighting, materials, reflections, environments, etc., before final rendering. You can easily hide non-essential elements in your scene while doing that, so you can get your scene set up quickly before production rendering using the normal Vray bias controls like IR mapping and the Light Cache.

              Folks who are using RT/GPU as a production renderer plan their scenes so they will render within the available GPU RAM. Setting lower texture map sizes, reducing mesh counts, and minimizing lights are some of the ways to keep your scene's RAM usage as low as possible, and GPU monitoring software like the EVGA Precision tool helps here too. They also have to remember that RT/GPU does not yet support materials with considerable set-up times, like composites, blends, and physical displacement, so those need to be planned for as well.
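
              To get a feel for how quickly texture maps add up, a back-of-the-envelope estimate helps. This is just my own sketch, assuming uncompressed 8-bit RGBA maps with a full mip chain; actual GPU storage will differ.

              # Rough texture-memory estimate, assuming uncompressed 8-bit RGBA maps.
              def texture_mib(width, height, channels=4, bytes_per_channel=1, mipmaps=True):
                  base = width * height * channels * bytes_per_channel
                  # A full mip chain adds roughly one third on top of the base level.
                  total = base * 4 / 3 if mipmaps else base
                  return total / (1024 ** 2)

              # Example: dropping ten 4K maps to 2K cuts their memory to about a quarter.
              print(f"10 x 4096x4096 maps: {10 * texture_mib(4096, 4096):.0f} MiB")
              print(f"10 x 2048x2048 maps: {10 * texture_mib(2048, 2048):.0f} MiB")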

              Also, the 590 may advertise 3GB of RAM, but it is actually two 580 GPUs on one board in an SLI-type arrangement. If I'm not mistaken, each GPU only sees 1.5GB of RAM. If that is the case, your scene will have to fit in 1.5GB of RAM to use the 590 for RT/GPU. Hopefully, a 590 owner will chip in here to confirm. Perhaps the 3GB 580 (512 cores) is currently a better idea, as mentioned. I'm pretty sure the single-GPU 580s are clocked faster than the GPUs in the 590 as well.

              Finally, remember that GPU rendering itself is in its infancy, and there will be many, many developments in the next few years - hardware and software-wise. I'm predicting that many will eventually use a sort of hybrid GPU/CPU rendering scheme in the long run, with software that will organize the RAM and rendering elements to use the fastest and most efficient mode(s) available for the specific application. Stay tuned!

              -Alan

              • #8
                Thanks, Alan, that was really helpful. I found it hard to find a good explanation elsewhere, but you have just answered all my questions.

                • #9
                  Most welcome!

                  -Alan

                  • #10
                    Originally posted by Alan Iglesias:
                    Also, the 590 may advertise 3GB of RAM, but it is actually two 580 GPUs on one board in an SLI-type arrangement. If I'm not mistaken, each GPU only sees 1.5GB of RAM. If that is the case, your scene will have to fit in 1.5GB of RAM to use the 590 for RT/GPU. Hopefully, a 590 owner will chip in here to confirm. Perhaps the 3GB 580 (512 cores) is currently a better idea, as mentioned. I'm pretty sure the single-GPU 580s are clocked faster than the GPUs in the 590 as well.
                    You're correct. I have a GTX 590, and each GPU is in fact SLI'd as far as the system and drivers are concerned. Each GPU has access to "only" 1.5GB of RAM. That said, everyone has to determine their own needs: I rendered some of my typical work scenarios using RT/CPU and watched the RAM usage of the VrayRT process in Task Manager to make my buying decision. I'm not 100% sure this is a valid method, but I haven't hit the 1.5GB limit yet. As always, your mileage may vary.
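
                    If you would rather not watch Task Manager by hand, the same check can be scripted. This is only a sketch under my own assumptions: it needs the psutil package, and the process name below is a guess that you would adjust to whatever Task Manager actually shows on your machine.

                    # Track the peak RAM usage of a running Vray process (sketch, not official).
                    import time
                    import psutil

                    PROCESS_NAME = "vray.exe"  # hypothetical name; match your Task Manager entry
                    peak_mib = 0.0

                    while True:
                        procs = [p for p in psutil.process_iter(["name"])
                                 if (p.info["name"] or "").lower() == PROCESS_NAME]
                        if not procs:
                            break  # process not running (or it has finished)
                        for p in procs:
                            try:
                                peak_mib = max(peak_mib, p.memory_info().rss / (1024 ** 2))
                            except psutil.NoSuchProcess:
                                pass
                        time.sleep(1)

                    print(f"Peak RAM observed for {PROCESS_NAME}: {peak_mib:.0f} MiB")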

                    Personally, I find RT/GPU only suitable for previz and development work at its current stage, but it's certainly enough to get excited about.

                    • #11
                      Thanks for your input, Phil.

                      -Alan

                      • #12
                        The thing I don't get is this: when AGP (the precursor to PCI-e) came out, a big fuss was made about the fact that you were no longer limited to the amount of RAM your GPU had (for games, in that case), because anything that wouldn't fit in GPU RAM would be stored in system RAM and transferred over this fast new bus when needed. That was a very long time ago. PCI-e is many times faster, and I'd have thought it would also have this ability to fall back on system RAM when the GPU is full. Sure, GPU RAM is about 5-10x faster than system RAM, but still... a hard lock-up or crash? Surely something more graceful, like a slowdown, would be more reasonable.

                        I believe DirectX also offered elements of this functionality (as well as feature fallback: if the GPU didn't support a feature, it would be emulated on the CPU).

                        I'd say the RAM limit on GPUs is one of the biggest issues with RT/GPU.

                        • #13
                          Based on my admittedly limited knowledge of the subject, OpenCL doesn't work the way normal graphics acceleration does with DX or OGL. But a 16-lane PCIe 2.x slot has 8GB/s of bandwidth, compared to 2.1GB/s for AGP 8x. In comparison, a 384-bit GDDR5 memory interface has 192.4GB/s... Whether this bottleneck would slow things down to a crawl, or whether OpenCL simply can't cache data outside its little graphics-card universe, is beyond my understanding.
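
                          For anyone curious where those figures come from, the arithmetic works out roughly like this (a quick sketch using the commonly quoted bus widths and transfer rates for PCIe 2.0, AGP 8x and a GTX 580-class card):

                          # Rough bandwidth arithmetic behind the figures above (approximate).
                          # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 500 MB/s per lane, per direction.
                          pcie2_x16 = 16 * 5e9 * (8 / 10) / 8 / 1e9   # ~8.0 GB/s
                          # AGP 8x: 32-bit bus at an effective 533 MT/s.
                          agp_8x = 32 / 8 * 533e6 / 1e9               # ~2.1 GB/s
                          # 384-bit GDDR5 at an effective 4008 MT/s (GTX 580-class).
                          gddr5_384 = 384 / 8 * 4.008e9 / 1e9         # ~192.4 GB/s

                          print(f"PCIe 2.0 x16  : {pcie2_x16:.1f} GB/s")
                          print(f"AGP 8x        : {agp_8x:.1f} GB/s")
                          print(f"384-bit GDDR5 : {gddr5_384:.1f} GB/s")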

                          • #14
                            Yeah, to be honest, the bandwidth difference might well make it impractical to share system RAM. It would still be nice to see some solution to this problem... data compression? Or some nice company making a GPU with expandable RAM slots.

                            • #15
                              Gnu, the architecture dynamics you describe were referring to basic display graphics support. Hardware-wise, our current GPU rendering is actually based on parallel computing (lots of separate processors or "cores" working on the same project simultaneously). Each processor needs to have direct and immediate access to the entire scene file to do this, so for now at least, the RAM has to be on board for the fastest rendering.

                              That being said, I wouldn't be surprised to see plenty of tweakable sharing between GPU and CPU, and their respective RAM in the future.

                              -Alan
