Non-RT GPU Calculations

  • Non-RT GPU Calculations

    Is there any way to use the GPUs within a system on top of the CPU(s) to help process a non-RT image faster?
    I'm just wondering if it could be used to approximate things such as general GI, etc.
    LunarStudio Architectural Renderings
    HDRSource HDR & sIBL Libraries
    Lunarlog - LunarStudio and HDRSource Blog

  • #2
    It is not impossible, but we don't do that for the moment.

    Best regards,
    Vlado
    I only act like I know everything, Rogers.


    • #3
If I look at industries other than EC, I notice that we are very, VERY far behind in terms of GPU usage. I'm aware that programming for the GPU is a whole other world, but nonetheless I think we need to catch up there. Not blaming anyone, just saying.
      Software:
      Windows 7 Ultimate x64 SP1
      3ds Max 2016 SP4
      V-Ray Adv 3.60.04


      Hardware:
      Intel Core i7-4930K @ 3.40 GHz
      NVIDIA GeForce GTX 780 (4096MB RAM)
      64GB RAM


      DxDiag


      • #4
        There are many things that can be done on the GPU, that's true. The difficulty is that our code needs to be able to run without a GPU as well (many of our existing customers have renderfarms that do not have GPUs), so we will have to essentially duplicate, and even triplicate the code (for CPU, OpenCL, and CUDA). We could focus only on one solution, but I hope you can see the disadvantages of that. I'm sure we'll have to tackle that at one point though - perhaps by just coming up with a general language that can be easily compiled to anything. Recent versions of the Microsoft compiler also have built-in extensions for GPU rendering, perhaps something to look into as well...

        Best regards,
        Vlado
        I only act like I know everything, Rogers.


        • #5
          Hi there!
I do understand the way V-Ray works and I'm pretty satisfied with v3.0's speed and improvements...
Anyway, I wish that one day we could have a hybrid solution giving us the full power of CPU+GPU inside V-Ray (some, like Thea or Arion, do it, but they're still so slow that I don't see the appeal of those solutions...).

Maybe you could add a hybrid solution with some limitations? (Because currently not all V-Ray features are supported by CUDA rendering.) It would be great!

          Have a nice day !
          Best regards


          • #6
            I understand what you are saying.
            However, I think we would all agree that large server farms are becoming largely inefficient.
            Eventually, cloud processing by services such as Amazon AWS and nVidia will replace these farms.
            I've heard claims that a single average GPU can replace 10-15 workstations right now and I can believe it. I know for a fact that my one computer running GPUs has replaced 12 of my older slaves.

            While I understand that these older clients have a large overhead, if they don't get with more recent technology then they're going to go the way of the dinosaur.
The power users, the people who really know what they're doing and whom the rest generally follow, will have the latest hardware and follow the latest trends.
            At a certain point, I think that these companies running older hardware ought to be cut off as they're negatively impacting development/progress.
            I've known VRay to be progressive with great support from you and that's what has kept customers, but I think it risks falling far behind if the least common denominator has to be supported.
            LunarStudio Architectural Renderings
            HDRSource HDR & sIBL Libraries
            Lunarlog - LunarStudio and HDRSource Blog


            • #7
              Originally posted by jujubee View Post
              However, I think we would all agree that large server farms are becoming largely inefficient. Eventually, cloud processing by services such as Amazon AWS and nVidia will replace these farms.
              In many cases, I expect that will happen. But not in all cases. Also, if you are going to send a job to the cloud, does it really matter if it gets computed on CPUs or GPUs if the price/time ratio is nearly the same?

              I've heard claims that a single average GPU can replace 10-15 workstations right now and I can believe it.
However, I can't. The most powerful GPUs today are 1-6 times faster than a high-end workstation, for production scenes anyway. This is from my own tests with various GPU renderers, not V-Ray RT specifically. Of course, if you put more GPUs in one machine, things are different. On the other hand, there's more performance to be extracted from CPUs as well.

              I know for a fact that my one computer running GPUs has replaced 12 of my older slaves.
              Yes, I'm sure that we will be seeing more of that.

              At a certain point, I think that these companies running older hardware ought to be cut off as they're negatively impacting development/progress. I've known VRay to be progressive with great support from you and that's what has kept customers, but I think it risks falling far behind if the least common denominator has to be supported.
Well, as you see, we are clearly doing stuff with GPUs. And there will be more of it, for sure. My point is that it puts a lot of extra strain on our development. GPU-only renderers can focus all their efforts on GPU code (and even then, usually CUDA only), and that's fine and good, and I wish we could do the same. But the way things are at the moment, we have to pay a lot of attention to our CPU code as well and balance the development efforts between the two.

              Best regards,
              Vlado
              Last edited by vlado; 03-05-2014, 08:52 AM.
              I only act like I know everything, Rogers.


              • #8
                I think the only renderer that claims to utilize both CPU and GPU at the moment is Thea
                Unfortunately, I didn't have a positive experience with it as a plugin as it kept crashing.
                I wrote on their forums searching for help and received no response whatsoever. I also emailed them and no reply.
I have heard positive things from people who use Thea with SU.
                LunarStudio Architectural Renderings
                HDRSource HDR & sIBL Libraries
                Lunarlog - LunarStudio and HDRSource Blog


                • #9
                  Originally posted by jujubee View Post
                  I think the only renderer that claims to utilize both CPU and GPU at the moment is Thea
                  No, it's not; Arion can do that as well, from what I remember.

                  Best regards,
                  Vlado
                  I only act like I know everything, Rogers.


                  • #10
Exactly: both Arion and Thea use CPU+GPU solutions, but: come on! I've tried both of them, and even with 2x E5-2687W Xeons and dual GTX TITANs I got very SLOW renders. I'd prefer my V-Ray a thousand times over, or perhaps Octane Render, but that last one has too many limitations for production rendering...
That's also why I'm dreaming of a full CPU+GPU V-Ray solution: it would be awesome!!!
