Dual 4090s any point in Vantage?

  • Dual 4090s any point in Vantage?

    Hi! I am facing memory issues with my exterior scenes in Vantage. My 4090's 24 GB of VRAM doesn't seem to be enough once I add vegetation, props, furniture, etc., and I think I would also be at the limit with the 5090's 32 GB. So before I spend $2500 on a 5090 upgrade, I would like to know what I can gain from running two 4090s in my computer rather than one. Can I get 48 GB of VRAM that way, or is my only option the A6000 for $10,000? For me, rendering with Vantage is the future, and I am never going back to a CPU render farm.

    Since render farms were mentioned: a GPU render farm is not an option for me. My current workflow allows for more iterations and a smoother process, and I need to be able to both view and render locally.

    What are my options here? And how big a bottleneck would system RAM be for the efficiency of VRAM? Also, are there any hybrid alternatives available now or coming soon, such that I could get away with upgrading to 256 GB of RAM instead and offloading to RAM? Thanks for any insight or ideas.

  • #2
    You're right... Vantage is a great tool and I've totally incorporated it into my workflow. Regarding your question: do you hit the VRAM issue straight out of the gate? Have you cleared your temp folder to free the VRAM? Is your model optimized? Are you using the dynamic textures option in Vantage?

    • #3
      My bet is that if dynamic textures is off by default, then I'm not using it. That would be incredible. Need to check it out. Thanks for that suggestion. Well, it's not much of an issue before I start adding the props, but by then I bet a lot of my textures are eating VRAM already. I want to maintain only one version of a high-quality model; keeping separate optimized versions just adds costs in maintenance, storage, and time spent. But maybe there is more to the term "scene optimization" than just using lower-poly models and smaller texture maps.

      I hit the limit with just one larger apartment building now, so what if there were two more buildings in this main scene; how should I go about optimizing that? Xref-ing and rendering several passes could do it, but what about when animating? I do have a great deal of experience with CPU work, but I have to admit that I might still have some workflow adjustments to make to pull off bigger scenes on GPU. I won't say huge scenes, because from my point of view I am still pretty far from a "huge scene" when I hit the limit right now.

      • #4
        Also, about clearing the temp folder: no, I haven't, other than my brute-force option of rebooting. How do you go about doing that? (Which folder, and do you use any tools, etc.?)

        • #5
          I checked, and I am using dynamic textures already.

          • #6
            Go to C:\Users\<your name>\AppData\Local\Temp and delete everything. You'll get some files that won't delete; just skip those.
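
            For anyone who prefers a script over doing it by hand, here is a minimal Python sketch of the same idea; the path and the skip-locked-files behavior follow the post above, everything else is an assumption:

            import os
            import shutil
            from pathlib import Path

            # Windows per-user temp folder, i.e. C:\Users\<name>\AppData\Local\Temp
            temp_dir = Path(os.environ["LOCALAPPDATA"]) / "Temp"

            for entry in temp_dir.iterdir():
                try:
                    if entry.is_dir():
                        shutil.rmtree(entry)   # remove sub-folders recursively
                    else:
                        entry.unlink()         # remove individual files
                except OSError:
                    # Files still held open by running applications cannot be
                    # deleted - skip them, as the post above suggests.
                    pass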

            • #7
              Dynamic textures is ON by default.

              Deleting your TEMP folder only frees hard drive space, not VRAM.

              Things that could use a lot of VRAM include:
              - Textures, but with dynamic textures we only load the LOD (mip level) that is needed, so you probably don't need to optimize that. An exception is made for textures for light sources - they are loaded at full detail.
              - Geometry - use fewer polygons if you can. Displacement can use a lot of memory. If your scene uses it, reduce the tessellation level from Edit->Preferences->Render defaults.
              - Render resolution - we use multiple output buffers at high bit depth, so large resolutions can eat up gigabytes. Adding render elements for final rendering also adds to that.
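
              As a rough illustration of the render-resolution point, here is a back-of-the-envelope Python sketch; the bytes-per-pixel and buffer-count figures are assumptions for the example, not Vantage's actual internals:

              # Back-of-the-envelope estimate of output-buffer memory at a given
              # render resolution. 16 bytes/pixel assumes an RGBA buffer stored as
              # 32-bit floats; the buffer count stands in for the beauty pass plus
              # a few render elements.
              width, height = 3840, 2160   # 4K output
              bytes_per_pixel = 16         # assumed: RGBA at float32
              num_buffers = 6              # assumed: beauty + render elements

              total = width * height * bytes_per_pixel * num_buffers
              print(f"~{total / 1024**3:.2f} GiB just for output buffers")
              # -> roughly 0.74 GiB at 4K; an 8K output is about four times that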

              If possible, close other applications that use the GPU, because they eat up some of the GPU memory. This includes not just 3D applications but also browsers and even web-based chat apps (you can check GPU memory usage in Task Manager).
              Nikola Goranov
              Chaos Developer

              • #8
                Interesting... whenever I crashed before my upgrade to the 4090, the temp folder trick was a lifesaver.

                • #9
                  It's nice to optimize, but at some point the geometry is just too big. To switch back to the initial topic: two 4090s, any point in Vantage?

                  • #10
                    Can I get 48 GB of VRAM that way?
                    The answer to this question is NO, you are not getting 48 GB of VRAM with two 4090 GPUs.
                    The scene, textures, output buffers, etc. have a copy on each GPU.

                    Greetings,
                    Vladimir Nedev
                    Vantage developer, e-mail: vladimir.nedev@chaos.com , for licensing problems please contact : chaos.com/help

                    • #11
                      A second 4090 should make renders faster, but don't expect them to be 2x faster; it doesn't scale that well. If the FPS is already high, it won't improve, because we have to transfer a significant amount of data between the GPUs on each frame over the PCI-e bus, and this bus becomes a bottleneck. There is a maximum FPS you can reach when using two GPUs due to the PCI-e bottleneck, because at some point the data transfer becomes slower than the rendering itself. This maximum FPS depends on the resolution.
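
                      A minimal sketch of that bottleneck in Python, with made-up numbers (the bandwidth, per-frame transfer size, and single-GPU speed are all assumptions, not measured Vantage values):

                      # Toy model of dual-GPU scaling: each GPU renders half the frame, but the
                      # halves have to be merged over the PCI-e bus before the frame is done.
                      width, height = 3840, 2160
                      bytes_per_pixel = 16                                   # assumed buffer format
                      transfer_bytes = width * height * bytes_per_pixel / 2  # assumed: one GPU's half crosses the bus

                      pcie_bandwidth = 25e9      # assumed ~25 GB/s usable (PCIe 4.0 x16)
                      single_gpu_fps = 60.0      # assumed speed of one 4090 in this scene

                      render_time = 1.0 / single_gpu_fps
                      transfer_time = transfer_bytes / pcie_bandwidth
                      two_gpu_fps = 1.0 / (render_time / 2 + transfer_time)

                      print(f"1 GPU: {single_gpu_fps:.0f} fps, 2 GPUs: ~{two_gpu_fps:.0f} fps, "
                            f"transfer-only ceiling: ~{1 / transfer_time:.0f} fps")
                      # -> about 91 fps instead of 120, and a hard ceiling near 377 fps at these numbers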
                      Nikola Goranov
                      Chaos Developer

                      • #12
                        Originally posted by jon_berntsen
                        Hi! I am facing memory issues with my exterior scenes in Vantage. [...]
                        Is your 4090 only for rendering, or is it driving the displays as well?
                        Marcin Piotrowski
                        youtube
                        CGI OCIO config 04

                        • #13
                          Had to render a massive masterplan with only a 3060 (12 GB) card, so I had to optimize as far as I could:
                          Reduce polygon count with the ProOptimizer modifier, even on proxies like cars/trees/other high-poly elements; that helped me a lot (see the rough numbers sketched below).

                          And as mentioned, lower the output resolution and lower the samples, even as low as 30, but you will need noise reduction software in post.
                          I also enabled "ray termination" under render settings (advanced), which gave me better frame rates.
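
                          Rough numbers on why the polygon reduction helps; the per-vertex and per-triangle sizes below are assumed for illustration, not Vantage's actual internal format:

                          # Back-of-the-envelope geometry memory, assuming ~32 bytes per vertex
                          # (position + normal + UV at float32) and 12 bytes per triangle for
                          # 32-bit indices.
                          def mesh_bytes(vertices: int, triangles: int) -> int:
                              return vertices * 32 + triangles * 12

                          before = mesh_bytes(vertices=50_000_000, triangles=100_000_000)
                          after = mesh_bytes(vertices=15_000_000, triangles=30_000_000)   # ~70% reduction
                          print(f"before: {before / 1024**3:.1f} GiB, after: {after / 1024**3:.1f} GiB")
                          # -> before: 2.6 GiB, after: 0.8 GiB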

                          Would have loved a 4090 or 5090 card.

                          Was hoping the new Vantage update would help the guys with mid-size GPUs, but only the newer (and really expensive) GPUs benefit from the new optimizations.

                          • #14
                            SER (Shader Execution Reordering) is not something we came up with; it's an NVIDIA technology (which is actually going to become more universal in the future: https://devblogs.microsoft.com/direc...e-at-gdc-2025/ ), and of course NVIDIA is trying to sell their newer cards.
                            Nikola Goranov
                            Chaos Developer
