
a few more RT gpu questions..

  • a few more RT gpu questions..

    OK, so with OpenCL (using the AMD CPU driver) we can use both the GPU and the CPU on a single image (see the sketch below). This is not widely known or advertised, but to me it seems like a very important feature.

    For example, if I jump to using RT with CUDA, as is recommended, and purchase one or two Titan Xs to produce images, I'm wasting all the CPU power I've built up, which would still be a fair percentage of the total.

    I think this is the case for most of us, who will have invested in CPU power until recently; making the switch to RT CUDA wastes all of that power.

    So, a) are there any major reasons why using CUDA is still preferable over OpenCL, even with the ability to also use CPUs in OpenCL?

    b) In your opinion, is there any reason (apart from politics) why nVidia couldn't release a CUDA CPU driver? Assuming they won't, is there anything you can do from your end to get the CPUs into the game again somehow?

    Finally, and on a totally unrelated note: proxies in RT GPU. I assume we don't actually get any memory-saving benefits from proxies in RT, unlike the production renderer, since it would have to load all the proxy files completely into GPU RAM before rendering?

    That's about it for now. Still planning/dreaming a proper switch to RT and not having to constantly worry about render times on my modest setup.
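
    A minimal host-side sketch of that CPU+GPU setup, in C++ against the standard OpenCL C API (the single-platform assumption and device count are illustrative; a real renderer would create one command queue per device and feed buckets of the frame to each):

        #include <CL/cl.h>
        #include <cstdio>

        int main() {
            // Take the first platform; AMD's exposes its CPU driver and
            // any GPUs together (an assumption for this sketch).
            cl_platform_id platform;
            clGetPlatformIDs(1, &platform, nullptr);

            // Ask for CPU and GPU devices in one query.
            cl_device_id devices[8];
            cl_uint count = 0;
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU | CL_DEVICE_TYPE_GPU,
                           8, devices, &count);

            // One context spanning every device found; the host can then
            // hand buckets of the same image to each device's queue.
            cl_int err = CL_SUCCESS;
            cl_context ctx = clCreateContext(nullptr, count, devices,
                                             nullptr, nullptr, &err);
            printf("%u device(s) sharing one context (err=%d)\n", count, err);

            clReleaseContext(ctx);
            return 0;
        }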

  • #2
    Trying to answer my own question, I found this immediately:

    http://www.drdobbs.com/parallel/runn...oces/231500166

    It's from 2011, so hardly a new concept... any thoughts?
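
    The core idea behind "CUDA on a CPU" fits in a few lines: the grid of blocks and threads that a GPU runs in parallel is, from the CPU's point of view, just a pair of nested loops. A toy C++ illustration of the concept (mine, not code from the linked article):

        #include <cstdio>

        // The body a CUDA kernel would run once per thread; the global
        // index is passed in instead of derived from blockIdx/threadIdx.
        static void saxpy_body(unsigned gid, unsigned n,
                               float a, const float* x, float* y) {
            if (gid < n) y[gid] = a * x[gid] + y[gid];
        }

        // A "kernel launch" on the CPU: iterate the grid serially.
        // An OpenMP pragma on the outer loop would spread the blocks
        // across cores, which is roughly what such drivers do.
        static void launch_cpu(unsigned gridDim, unsigned blockDim,
                               unsigned n, float a, const float* x, float* y) {
            for (unsigned b = 0; b < gridDim; ++b)
                for (unsigned t = 0; t < blockDim; ++t)
                    saxpy_body(b * blockDim + t, n, a, x, y);
        }

        int main() {
            float x[4] = {1, 2, 3, 4}, y[4] = {1, 1, 1, 1};
            launch_cpu(2, 2, 4, 2.0f, x, y);                  // y = 2*x + y
            printf("%g %g %g %g\n", y[0], y[1], y[2], y[3]);  // 3 5 7 9
            return 0;
        }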



    • #3
      Originally posted by super gnu View Post
      For example, if I jump to using RT with CUDA, as is recommended, and purchase one or two Titan Xs to produce images, I'm wasting all the CPU power I've built up, which would still be a fair percentage of the total.
      The CPU is used to prepare tasks for the GPUs and to gather the results, so it is doing something.

      Originally posted by super gnu View Post
      So, a) are there any major reasons why using CUDA is still preferable over OpenCL, even with the ability to also use CPUs in OpenCL?
      b) In your opinion, is there any reason (apart from politics) why nVidia couldn't release a CUDA CPU driver? Assuming they won't, is there anything you can do from your end to get the CPUs into the game again somehow?
      a) Yes. CUDA is more flexible (I can tell you exactly what the restrictions in OpenCL are, but they are fairly technical) and it runs much faster on nVidia GPUs than OpenCL does. And there is no way to run a complex kernel on an AMD GPU, since their compiler can't handle it.
      b) Actually, we implemented that for ourselves some time ago. The goal was easier debugging. It works and we use it on a daily basis. But there is really no point in using the CPU alongside the GPU for rendering, since the CPU's power is just a fraction of the GPU's, and CPUs can hardly make a difference. This is the reason we haven't added it to the official builds.

      Originally posted by super gnu View Post
      Finally, and on a totally unrelated note: proxies in RT GPU. I assume we don't actually get any memory-saving benefits from proxies in RT, unlike the production renderer, since it would have to load all the proxy files completely into GPU RAM before rendering?
      Yes.

      Originally posted by super gnu View Post
      Trying to answer my own question, I found this immediately:
      http://www.drdobbs.com/parallel/runn...oces/231500166
      Thanks for the link, I had missed that.
      But I have done a personal implementation (much simpler, but it works fairly well) that lets you run the same code on OpenCL, CUDA and CUDA CPU - https://github.com/savage309/GPAPI. The one we use in the office is similar (but far more powerful and complete, of course).
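
      To make the "same code on three backends" idea concrete, here is a hypothetical single-source sketch in the spirit of GPAPI - the macro names are invented for illustration and are not GPAPI's actual API. The kernel body is written once; a few preprocessor definitions map it onto CUDA, OpenCL, or a plain CPU loop:

          // Pick one backend at compile time:
          // -DTARGET_CUDA, -DTARGET_OPENCL, or neither for plain CPU C++.
          #if defined(TARGET_CUDA)
          #  define KERNEL      extern "C" __global__
          #  define GLOBAL_MEM
          #  define GID         (blockIdx.x * blockDim.x + threadIdx.x)
          #elif defined(TARGET_OPENCL)
          #  define KERNEL      __kernel
          #  define GLOBAL_MEM  __global
          #  define GID         get_global_id(0)
          #else // CPU: the loop driver below supplies the index.
          #  define KERNEL      static
          #  define GLOBAL_MEM
          #  define GID         gid
          #  define CPU_BUILD   1
          #endif

          // One kernel body for all three targets.
          KERNEL void scale(GLOBAL_MEM const float* in,
                            GLOBAL_MEM float* out, float k
          #ifdef CPU_BUILD
                            , unsigned gid  // explicit "thread" index on CPU
          #endif
          ) {
              out[GID] = k * in[GID];
          }

          #ifdef CPU_BUILD
          // CPU "launch": a loop stands in for the GPU grid.
          void launch_scale(const float* in, float* out, float k, unsigned n) {
              for (unsigned gid = 0; gid < n; ++gid)
                  scale(in, out, k, gid);
          }
          #endif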
      V-Ray fan.
      Looking busy around GPUs ...
      RTX ON



      • #4
        Thanks for the reply. Yes, it's best not to get too technical with me... I have zero programming knowledge.

        Regarding the use of CPU and GPU together: I appreciate that a CPU is a fraction of a GPU in performance, but it's still significant, especially if you have several dual 10-core Xeons.

        Quoting the GPU benchmark thread, here is a comparison between an AMD card alone and the same card plus a modern CPU:


        W9100 - 2 minutes 55 seconds using 3ds Max 2014 and V-Ray 3.00.07

        W9100 + i7-4930K using APP - 1 minute 53 seconds using 3ds Max 2014 and V-Ray 3.00.07


        And quoting Vlado's comment on this result:

        "CPUs are not too shabby; in fact a K6000 card (which has similar performance to a Titan) is just slightly faster for pure raytracing than a high-end dual Xeon machine. The reason GPU renderers are so much faster is because they don't have all the baggage of years of existing code, plus their code is naturally simpler because of all the restrictions that are there. This is demonstrated by the fact that some of the newer CPU renderers are quite fast too. Of course, a big advantage of GPUs is that you can just put more cards in the system and it gets proportionally faster. Multi-CPU machines on the other hand are very expensive and the performance increase is not there, although one can simply buy more machines. It might also be cheaper to buy one or two graphics cards for an older machine, than it would be to buy one or more high-end CPU render machines.

        Best regards, Vlado"


        - So of course it's infinitely preferable to buy more GPUs, but most of us are heavily invested in CPU hardware. Having to let all of that essentially sit there doing nothing just because we bought a couple of GPUs is a bit of a shame. I want to buy a GPU, even one, and see my render times improve drastically. As it is, even if I buy a Titan X, I'll likely only see a doubling in speed (not insignificant, I know), since my 3x 4.6 GHz 6-cores will be sitting sleeping. If I had invested in a 20-node CPU render farm, this would be an even more thorny issue. If I could simply get a GPU and *add* it to my setup instead of replacing it, it would definitely make moving to RT easier.
        Last edited by super gnu; 16-06-2015, 09:09 AM.



        • #5
          It is better to compare with nVidia GPUs.
          Also, "pure raytracing" means intersecting rays with triangles (no shading, no lights, etc.).
          In the tests we made, the final speed benefit a modern CPU gave was in the single-digit percent range, depending on the configuration - e.g. if the GPU contributes 100 units of throughput and the CPU adds 5 more, the frame finishes only about 5% sooner.
          Other than that, we will discuss the possibility of adding support for CUDA CPU to the official builds, too.
          Last edited by savage309; 16-06-2015, 09:25 AM.
          V-Ray fan.
          Looking busy around GPUs ...
          RTX ON



          • #6
            Originally posted by super gnu View Post
            ...If I could simply get a GPU and *add* it to my setup instead of replacing it, it would definitely make moving to RT easier.
            I recently built a machine with an OC 5960X and a GTX 970. In my tests I can usually get a faster CLEAN result with my CPU compared to RT GPU. I think this is largely because of the adaptive sampler, which can't be used with RT GPU. I would have to add another card to get a speed increase.

            It's nice to know that you can take an older render node or workstation, add a new GPU or two, and potentially make it as fast as a machine with a newer processor. I also see advantages to building a render node with a lower-end CPU and lots of GPUs. Potentially, such a machine could be much faster than an expensive high-end CPU-based option. It will also require fewer licenses. Of course the big limitation is GPU memory, but that will change soon.



            • #7
              Ah, OK. I didn't realise the final percentage was so low... it's certainly a reasonable slice when using an AMD card!

              From my current experience (not a directly comparable test - I have a single GTX 670 4 GB): RT GPU CUDA gives approximately the same kpaths/sec as RT CPU running on two of my 6-core machines. I extrapolated from this: since a 980/Titan X is 2-3x faster than my card according to the RT benchmark scene, it should equal 4-6 of my machines... which means a single Titan X would be double my setup of 3x 6-core machines. Obviously there is some aspect of this that I'm not understanding right. Is it just the case that a CPU running CUDA doesn't work very well?



              • #8
                Originally posted by arobbert View Post
                I recently built a machine with an OC 5960X and a GTX 970. In my tests I can usually get a faster CLEAN result with my CPU compared to RT GPU.
                And wouldn't it be nice if you could use them both together!



                • #9
                  The CPU architecture allows us to do things that we can't do on the GPU.
                  So comparing RT CPU vs RT GPU is biased because of the different approaches/algorithms.
                  We have compared RT CUDA CPU vs RT CUDA GPU (which is the same code running on different hardware), and we got those numbers, showing that the benefit might not be that great.
                  V-Ray fan.
                  Looking busy around GPUs ...
                  RTX ON



                  • #10
                    OK! Understood - thanks for taking the time to explain. So the issue is more to do with running CUDA code on the CPU... If a CPU were only a single-digit percent of a GPU in terms of getting a final, good image out of V-Ray generally, I'd be on GPUs only already!

                    Cheers, Robin.



                    • #11
                      Originally posted by savage309 View Post
                      It is better to compare with nVidia GPUs.
                      I tested the 290X vs the 780, which at the time cost the same, and their RT performance was within about 5 seconds of each other on the test scene.
                      Even Vlado was surprised that AMD had caught up.
                      WerT
                      www.dvstudios.com.au
