Nvidia 30 series cards


  • #46
    Originally posted by glorybound View Post
    Yes, I use GPU on my exteriors when I can. I got around all my limitations, except for my RailClone colors. For post-production, I had to get around my RAW elements, but that was pretty easy to do. I would say that GPU is at least twice as fast on my current setup. If I get a second Titan RTX, I could cut my times in half, but I am having issues with power in my computer case. Progressive with Denoiser is the key.
    How is it during look development? I find that the time it takes to upload the scene to the GPU makes it less attractive for test renders. Also, I only have an 8GB 1080, so there's always that nagging feeling of not knowing when I'm going to run out of memory.



    • #47
      Yes, uploading the scene does become an issue and sometimes sends me back to CPU. I think that is a Forest Pro issue.
      Bobby Parker
      www.bobby-parker.com
      e-mail: info@bobby-parker.com
      phone: 2188206812

      My current hardware setup:
      • Ryzen 9 5900x CPU
      • 128gb Vengeance RGB Pro RAM
      • NVIDIA GeForce RTX 4090
      • Windows 11 Pro



      • #48
        Originally posted by ^Lele^ View Post
        We'd rather not do this, the reasons are explained as well as I could muster above (none of which include fear of misrepresentation. It's a *scientific method* issue.)
        You, however, are free to do as you please: If it floats your boat, go places with it!
        Just please do not make it into a sweeping statement: "A is faster than B" has been proven too simplistic a claim to hold any value, time and time again.
        It's not going to change in the foreseeable future: context, and math brought to bear within that context, have unavoidable weight, and are proven to sway results greatly.
        What I meant by misrepresentation was really sweeping statements that haven't taken the 'scientific method' into account, so I get that. But I just think it would be useful to have comparisons of GPU and CPU from other users; there's no point in me testing stuff after I've bought it.
        e: info@adriandenne.com
        w: www.adriandenne.com



        • #49
          Originally posted by francomanko View Post

          But I just think it would be useful to have comparisons of GPU and CPU from other users; there's no point in me testing stuff after I've bought it.
          My point exactly.

          I've used GPU almost exclusively for the last 5 years. I added a second GTX 1080 Ti and run hybrid mode with my TR 1950X. It is for sure faster than my CPU alone, but it's not without its issues. Maya projections, 2D file texture stagger, camera auto exposure... a lot of these things you can find workarounds for, but you have to go digging and find the weird way it needs to work, like having to render in CPU mode first for auto exposure to work, then switch back to GPU. I think I spent a couple of hours trying to copy camera settings during production time on a shot before I worked that one out.

          With progressive, the denoiser, DOF and motion blur, it can spit out clean car-interior animation frames at 2500px in about 7 minutes. That's seriously fast for average hardware in CUDA mode, and apart from some very small niggles, it was production quality.

          One point I'd like clarified is the comparison between GPU RAM and system RAM. I read somewhere that 10GB on a GPU equates to about 64GB of system RAM, and that the numbers aren't interchangeable. For example, if a displacement eats up 40GB of system RAM, how would it ever fit on my 1080 Ti?
          Website
          https://mangobeard.com/
          Behance
          https://www.behance.net/seandunderdale



          • #50
            Originally posted by Neilg View Post
            BBB3 uses GPU exclusively and hits that quality bar - however his scenes are designed from scratch on gpu and they're very self contained projects. Lots of displacement and detail, so it's possible, but still not a great 1:1 comparison of what the workflow is like.
            https://www.flickr.com/photos/bbb3viz/with/48973807192/
            Thanks man. Yes, for me there is no going back. GPU for sure has drawbacks, the main one being that V-Ray remains primarily a CPU renderer and that the GPU version, while very capable, isn't the main development priority. There are still serious limitations (limited SSS, no volume material or real translucency, limited support for the Forest color map and RailClone color, and working with UVs in TyFlow are the main ones for me...), but problems have been ironed out over time. We can now mix environment fog (though still untextured) and VDBs, which is a big plus. Displacement is gradually evolving towards parity with the CPU version, including 2D displacement. Matte objects are working more predictably. TyFlow instancing is finally supported...
            Scene size hasn't been a huge problem for me. Some of my scenes are pretty big (this one, or this one), and they fit on my 20GB of VRAM with a ton of space to spare. You just need a little bit of discipline in prepping the scene and you can use on-demand texture mip-mapping, which can save you gigabytes of VRAM. Clearly it won't fit every workflow or work with gigantic scenes. But we also have to admit that Archviz artists have never been trained to optimize for RAM consumption. We're pretty wasteful in how we build our assets. If you start thinking more like a game artist, for instance using normal maps to take mid-res assets to the next level of detail or decimating heavy assets judiciously, you can go a long way.
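            As a crude illustration of that game-artist mindset, even a tiny batch-downsizing pass can shave gigabytes off a texture set before it ever reaches the GPU. This is only a rough Pillow sketch - the 4096px cap and the folder paths are placeholders, not my actual pipeline:
            Code:
            import os
            from PIL import Image   # Pillow

            SRC = r"D:\assets\textures"             # placeholder source folder
            DST = r"D:\assets\textures_optimized"   # placeholder output folder
            MAX_SIDE = 4096                         # placeholder cap; pick per project

            os.makedirs(DST, exist_ok=True)
            for name in os.listdir(SRC):
                if not name.lower().endswith((".jpg", ".jpeg", ".png", ".tif", ".tiff")):
                    continue
                with Image.open(os.path.join(SRC, name)) as img:
                    w, h = img.size
                    if max(w, h) > MAX_SIDE:
                        scale = MAX_SIDE / float(max(w, h))
                        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
                    img.save(os.path.join(DST, name))   # then repoint the scene's bitmap paths at DST
            Relink the scene's bitmaps to the optimized folder afterwards and the savings add up very quickly on 8K scans.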
            One thing I wouldn't recommend is switching back and forth between the CPU and GPU versions. Scenes made for one will tend not to work in the other. And of course, every time you switch, V-Ray wipes out all your render settings.
            Check my blog



            • #51
              I've had some weird issues with the on-demand mip-mapping. For the most part it seems fine, but certain textures were messed up; I will have to have a better look to see what was causing it. It made me wonder, though: I know you can tell V-Ray to resize the textures as well, but it would be pretty useful if you could specify texture sizes based on sets, object IDs, etc., similar to Unity and Unreal where you have control on a texture-by-texture basis. That way you could easily give foreground, midground and background geometry different texture sizes. Might be a useful extra tool in the belt.
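              Something like this is what I mean - a rough sketch only, assuming the pymxs bridge and Pillow, and with the object-ID-to-budget table completely made up - just reporting which textures blow past a per-ID size budget:
              Code:
              import os
              from pymxs import runtime as rt   # 3ds Max's Python bridge
              from PIL import Image             # Pillow, to read texture resolutions

              # Made-up budgets: object ID 1 = foreground, 2 = midground, 3 = background
              MAX_SIDE_BY_ID = {1: 4096, 2: 2048, 3: 1024}

              for node in rt.geometry:
                  budget = MAX_SIDE_BY_ID.get(node.gbufferChannel)   # the node's Object ID
                  if budget is None:
                      continue
                  for path in (rt.usedMaps(node) or []):             # bitmaps referenced by this node
                      if not os.path.isfile(path):
                          continue
                      try:
                          with Image.open(path) as img:
                              if max(img.size) > budget:
                                  print("%s: %s is %dx%d, budget %dpx" %
                                        (node.name, os.path.basename(path), img.size[0], img.size[1], budget))
                      except OSError:
                          pass   # format Pillow can't read (e.g. EXR), skip it
              From there it would only be a small step to actually resize the offenders, a bit like V-Ray's global texture resize option but per object ID.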
              e: info@adriandenne.com
              w: www.adriandenne.com



              • #52
                Originally posted by francomanko View Post
                I've had some weird issues with the on demand mip mapping,
                Our CG lead at the studio I'm working for forbids us from using on-demand mip-mapping. He says it causes all sorts of issues for him.

                Website
                https://mangobeard.com/
                Behance
                https://www.behance.net/seandunderdale



                • #53
                  Thanks for your input, BBB3. Can you quickly explain why "there's no going back" for you? What makes it a better solution than CPU for you?
                  Max 2023.2.2 + Vray 6 Update 2.1 ( 6.20.06 )
                  AMD Ryzen 7950X 16-core | 64GB DDR5 RAM 6400 Mbps | MSI GeForce RTX 3090 Suprim X 24GB (rendering) | GeForce GTX 1080 Ti FE 11GB (display) | GPU Driver 546.01 | NVMe SSD Samsung 980 Pro 1TB | Win 10 Pro x64 22H2



                  • #54
                    Originally posted by Alex_M View Post
                    Thanks for your input, BBB3. Can you quickly explain why "there's no going back" for you? What makes it a better solution than CPU for you?
                    Speed of iterations mainly. And the way my rig is set up, CPU rendering is orders of magnitude slower. I don’t have a farm so for me, it’s either using the GPU to render in minutes or letting renders cook overnight, which I haven’t done in years. And also, all my assets and scenes are now set up for the GPU so that’s kind of an investment.
                    Check my blog



                    • #55
                      Originally posted by seandunderdale View Post

                      My point exactly.

                      I've used GPU almost exclusively for the last 5 years. I added a second GTX 1080 Ti and run hybrid mode with my TR 1950X. It is for sure faster than my CPU alone, but it's not without its issues. Maya projections, 2D file texture stagger, camera auto exposure... a lot of these things you can find workarounds for, but you have to go digging and find the weird way it needs to work, like having to render in CPU mode first for auto exposure to work, then switch back to GPU. I think I spent a couple of hours trying to copy camera settings during production time on a shot before I worked that one out.

                      With progressive, the denoiser, DOF and motion blur, it can spit out clean car-interior animation frames at 2500px in about 7 minutes. That's seriously fast for average hardware in CUDA mode, and apart from some very small niggles, it was production quality.

                      One point I'd like clarified is the comparison between GPU RAM and system RAM. I read somewhere that 10GB on a GPU equates to about 64GB of system RAM, and that the numbers aren't interchangeable. For example, if a displacement eats up 40GB of system RAM, how would it ever fit on my 1080 Ti?
                      Interesting about the memory info. Would it be possible to make a script that runs through the scene and calculates how much GPU memory you would need to fit it? I understand that it might be tricky with other plugins like Forest or RailClone, but it would nevertheless give most of us an idea of whether this is something we should consider already or whether to wait for cards with more RAM, as I guess most are considering an NVLink duo.
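                      Something along these lines is what I have in mind - just a back-of-envelope sketch, assuming the pymxs Python bridge and Pillow are available, and with the bytes-per-texel and bytes-per-triangle figures being pure guesses rather than anything V-Ray publishes:
                      Code:
                      import os
                      from pymxs import runtime as rt   # 3ds Max's built-in Python bridge
                      from PIL import Image             # Pillow, to read texture resolutions

                      BYTES_PER_TEXEL = 4        # guess: 8-bit RGBA; HDR/EXR maps weigh far more
                      BYTES_PER_TRIANGLE = 100   # guess: vertices, normals, UVs and BVH overhead

                      def texture_bytes():
                          total = 0
                          for path in rt.usedMaps():          # every bitmap file the scene references
                              if os.path.isfile(path):
                                  try:
                                      with Image.open(path) as img:
                                          w, h = img.size
                                      total += int(w * h * BYTES_PER_TEXEL * 1.33)   # ~1.33x for the mip chain
                                  except OSError:
                                      pass                     # offline or unreadable map, skip it
                          return total

                      def geometry_bytes():
                          total = 0
                          for node in rt.geometry:
                              counts = rt.getPolygonCount(node)    # returns (faces, verts)
                              total += counts[0] * BYTES_PER_TRIANGLE
                          return total

                      tex, geo = texture_bytes(), geometry_bytes()
                      print("Textures: %.2f GB" % (tex / 2.0 ** 30))
                      print("Geometry: %.2f GB" % (geo / 2.0 ** 30))
                      print("Rough total: %.2f GB (frame buffers, displacement and Forest/RailClone instances not counted)" % ((tex + geo) / 2.0 ** 30))
                      It would never be exact, but even a rough number would tell you whether a scene has any chance of fitting on an 11GB card.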
                      Best Regards

                      Tomek

                      Portfolio: http://dtown.pl/



                      • #56
                        Morning all,

                        I'm fairly new to V-Ray, so I'm still getting my head around which parts of the computer get used for which parts of the rendering process. I have used Task Manager to monitor my current computer, but this machine is very basic and difficult to compare with a modern-day 3D machine / 3000-series build.

                        Primarily I'm interested in how much CPU you require if you utilise CUDA or RTX only.

                        I'm currently trying to get a spec together for a very basic 3D machine for my IT guy to build. The budget is perhaps in the £1-1.5k region... I guess the main question is, with one 3000-series GPU, how low can you go on the CPU?

                        At present I'm thinking:
                        • GPU = RTX 3080 (only 1 due to lack of multi gpu support and budget)
                        • Motherboard = MSI x570-A Pro
                        • CPU = 3600 or 3700x
                        • Ram = 16gig of x brand (semi decent)

                        With the above in mind, what I like about CUDA / RTX is that your computer can still be used while rendering. When utilising CPU only, I pretty much can't use my workstation, so I find CUDA / RTX more practical from that point of view.

                        In terms of usage, I will be exporting high-quality stills of relatively simple models, with the aim of creating animations down the line.
                        Once again, ideally I want CUDA or RTX to be doing the hard work so that I can render in the background whilst doing minor tasks, such as emails and the like... but the question is, how low can you go with the CPU?

                        Thoughts welcome.

                        Best

                        Theo



                        • #57
                          Originally posted by john_gulland View Post
                          At present I'm thinking:
                          • GPU = RTX 3080 (only 1 due to lack of multi gpu support and budget)
                          • Motherboard = MSI x570-A Pro
                          • CPU = 3600 or 3700x
                          • Ram = 16gig of x brand (semi decent)

                          With the above in mind, what I like about CUDA / RTX is that your computer can still be used while rendering. When utilising CPU only, I pretty much can't use my workstation, so I find CUDA / RTX more practical from that point of view.

                          The 3080 is $700 alone... it's a really tough build to put together for that price.

                          The price scales really well from the 3600 to the 3900X - it's double the price for almost double the performance, ish. That's a really incredible level of scaling, and passing up 1.7x the CPU power for $200 is a missed opportunity.
                          16GB of RAM is really not enough these days... you'll run out if you have Chrome open while rendering. 64GB would be a minimum.

                          A 3900X-based CPU build with 64GB of RAM and no graphics card will set you back $1,200. That plus a 3080 would be around $1,900 and get you an all-rounder that can do CPU and GPU rendering, with enough RAM to work in Photoshop in the background and not micromanage background processes. Your build is only $300 less, but it means you'll be running out of RAM constantly and can't fall back on CPU rendering in a pinch.
                          Last edited by Neilg; 14-09-2020, 04:57 PM.



                          • #58
                            If I could find a GPU box with enough room for two of these new cards (GeForce RTX 3090, with NVLink), all connected with a SCSI card, I would buy it today. I am not sure why these are not all over the place, but I can't find what I am looking for.
                            Bobby Parker
                            www.bobby-parker.com
                            e-mail: info@bobby-parker.com
                            phone: 2188206812

                            My current hardware setup:
                            • Ryzen 9 5900x CPU
                            • 128gb Vengeance RGB Pro RAM
                            • NVIDIA GeForce RTX 4090
                            • Windows 11 Pro



                            • #59
                              The same power as the Titan RTX, but cheaper?

                              https://www.nvidia.com/en-us/geforce...ries/rtx-3090/
                              Bobby Parker
                              www.bobby-parker.com
                              e-mail: info@bobby-parker.com
                              phone: 2188206812

                              My current hardware setup:
                              • Ryzen 9 5900x CPU
                              • 128gb Vengeance RGB Pro RAM
                              • NVIDIA GeForce RTX 4090
                              • Windows 11 Pro



                              • #60
                                Benchmarks are looking good!
                                https://techgage.com/article/nvidia-...g-performance/
                                https://linktr.ee/cg_oglu
                                Ryzen 5950, Geforce 3060, 128GB ram
