
Nvidia GTX 1080 :)


  • Nvidia GTX 1080 :)

    http://www.theverge.com/circuitbreak...e-release-date


    It's only got 8GB of RAM, so I'd still choose my Titan X at the moment.

    However, I'm sure they will release a 16/32GB version at some point. I have a sinking feeling it will be a Quadro/Tesla, though :P

  • #2
    Originally posted by super gnu View Post
    http://www.theverge.com/circuitbreak...e-release-date


    It's only got 8GB of RAM, so I'd still choose my Titan X at the moment.

    However, I'm sure they will release a 16/32GB version at some point. I have a sinking feeling it will be a Quadro/Tesla, though :P
    Yes and no; that's a 400-dollar difference, mate! If you use all 12GB of memory I guess it would be worth it, but I don't think I've even topped 8GB yet.

    Comment


    • #3
      Originally posted by padre.ayuso View Post
      Yes and no; that's a 400-dollar difference, mate! If you use all 12GB of memory I guess it would be worth it, but I don't think I've even topped 8GB yet.
      It's a whopping $620 difference if you compare it with the 1070, which seems to be about the same as the Titan X performance-wise. The Titan seems severely overpriced now.

      Can't wait to build a workstation with 3 of these.
      Last edited by Visual3D; 07-05-2016, 06:25 PM.

      Comment


      • #4
        Originally posted by Visual3D View Post
        It's a whopping $620 difference if you compare it with the 1070, which seems to be about the same as the Titan X performance-wise. The Titan seems severely overpriced now.

        Can't wait to build a workstation with 3 of these.
        True that; it seems some see that whopping 630 and some don't. I wish these had come out before I bought my stations. Anyhow, next year I'll get the ones that have 12GB of VRAM for 300 dollars; I'll buy three and put my Titans on a slave rack.

        Comment


        • #5
          8 gb memory... There is nothing that can be done in
          http://facebook.com/Avisgrafik

          Comment


          • #6
            Well, my current job is rendering now, using 87% of my 12GB: an interior scene with 10-15 very high-res textures and some detailed geometry. And even if I weren't currently utilising the full 12GB, it's called future-proofing; I'm barely comfortable with only 12GB. 80% of my jobs I could never consider rendering on GPU, since I easily max out the 32GB on my workstation. All it takes is a bit of displacement and a crapload of trees.

            Admittedly, a lot of my projects are very large masterplans etc. If you just do simple stuff, maybe 8GB is enough for you to be comfortable for the foreseeable future. In my case, definitely not.

            My next GPU will be purchased when I can get a minimum of 16GB, ideally 32, as long as it's Titan price and not Quadro/Tesla price.

            Comment


            • #7
              That's why having a GPU renderer that supports an out-of-core architecture and can use system memory is so important... The 1080 sounds like a good deal right now!

              Comment


              • #8
                But does that not cause a massive performance hit?

                Comment


                • #9
                  Usually it does, yes. There is no magic trick to reduce the latency to the CPU.
                   GPU apps tend to find tricks for reducing the memory needed for the task (the scene), and occasionally call this "out of core", because it seems like you can render more than you can fit on the GPU. For example, resizing textures in V-Ray (it can be done better if they are resized to a corresponding mip-map level, so their quality doesn't suffer; I am working on that).
                  V-Ray fan.
                  Looking busy around GPUs ...
                  RTX ON
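A back-of-the-envelope sketch of the mip-map trick described above: each mip level halves a texture's width and height, so its memory footprint drops by roughly 4x per level. The texture sizes and counts below are made-up illustration, not anything from V-Ray.

```python
def texture_bytes(width, height, bytes_per_texel=4, mip_level=0):
    """Approximate memory for a texture downsampled to a given mip level.

    Each mip level halves width and height, so memory shrinks ~4x per level.
    """
    w = max(1, width >> mip_level)
    h = max(1, height >> mip_level)
    return w * h * bytes_per_texel


# A hypothetical texture set: ten 8K maps and fifteen 4K maps, RGBA8.
scene_textures = [(8192, 8192)] * 10 + [(4096, 4096)] * 15

full = sum(texture_bytes(w, h) for w, h in scene_textures)
mip2 = sum(texture_bytes(w, h, mip_level=2) for w, h in scene_textures)

print(f"full res: {full / 2**30:.2f} GiB")   # full res: 3.44 GiB
print(f"mip 2:    {mip2 / 2**30:.2f} GiB")   # mip 2:    0.21 GiB
```

Two mip levels cut this hypothetical set from ~3.4 GiB to ~0.2 GiB, which is why picking a sensible mip level per texture can fit far more scene into 8GB of VRAM.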

                  Comment


                  • #10
                    Originally posted by savage309 View Post
                    Usually it does, yes. There is no magic trick to reduce the latency to the CPU.
                     GPU apps tend to find tricks for reducing the memory needed for the task (the scene), and occasionally call this "out of core", because it seems like you can render more than you can fit on the GPU. For example, resizing textures in V-Ray (it can be done better if they are resized to a corresponding mip-map level, so their quality doesn't suffer; I am working on that).
                    Good to hear! Are you guys also working on poly limitations for non-instanced geo on GPU?

                    Comment


                    • #11
                      Originally posted by super gnu View Post
                      80% of my jobs I could never consider rendering on GPU, since I easily max out the 32GB on my workstation. All it takes is a bit of displacement and a crapload of trees.
                      That's true; I won't be able to render any of my exteriors or most of my larger interiors with either 8GB or 12GB, not even close. I'll just get these because they're sooo cheap, to render those simpler interiors that come up from time to time super fast, and hopefully for way better viewport performance than what I have at the moment. Here's hoping the new Titan will have at least 24GB; then it might actually start to become feasible to render somewhat heavy scenes.
                      Last edited by Visual3D; 08-05-2016, 06:14 AM.

                      Comment


                      • #12
                        Well, as far as I know you can render even those scenes with newer CUDA. Slower, but it will work. So I think using 8GB of VRAM + some "pagefile" will do the job quite nicely.
                        I just can't seem to trust myself
                        So what chance does that leave, for anyone else?
                        ---------------------------------------------------------
                        CG Artist
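To put rough numbers on the "slower, but it will work" point: whatever doesn't fit in VRAM has to stream over PCIe on demand, and PCIe bandwidth is far below on-board GPU memory bandwidth. A hedged estimate below; the ~16 GB/s figure (PCIe 3.0 x16) and the scene sizes are illustrative assumptions, not benchmarks.

```python
def per_pass_transfer_seconds(scene_gb, vram_gb, pcie_gb_per_s=16.0):
    """Lower-bound extra time to stream the VRAM overflow once, assuming
    each out-of-core gigabyte crosses the PCIe bus once per access pass."""
    overflow_gb = max(0.0, scene_gb - vram_gb)
    return overflow_gb / pcie_gb_per_s


# Hypothetical 14 GB scene on an 8 GB card: 6 GB must page in over PCIe.
t = per_pass_transfer_seconds(scene_gb=14, vram_gb=8)
print(f"extra transfer per pass: {t:.3f} s")  # extra transfer per pass: 0.375 s
```

Real renderers touch the same data many times per frame, so the effective slowdown is usually much larger than this single-pass lower bound; it also suggests why several GPUs paging over the same bus could compound the problem.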

                        Comment


                        • #13
                          Oh wow, I had no idea; that's great news. Definitely making the jump to V-Ray 3 and trying to render everything on GPU when these are released, then.

                          Comment


                          • #14
                            I hope they will release a new Titan built on this new architecture that has similarly great performance but a bit more memory: basically at least a 16GB card that's not an overpriced Quadro or Tesla. With 16GB, it would finally be possible to render at least moderately complex, non-trivial scenes on GPU.

                            Comment


                            • #15
                              Originally posted by Paul Oblomov View Post
                              Well, as far as I know you can render even those scenes with newer CUDA. Slower, but it will work. So I think using 8GB of VRAM + some "pagefile" will do the job quite nicely.
                              Is that definitely the case? Does V-Ray support this newer CUDA? And will it be any faster than V-Ray CPU when using the "pagefile"?

                              I'd imagine in this case having more than one GPU in a system would be a big no-no.

                              Comment
