
Finalrender 4 turns into GPU + CPU renderer

  • Finalrender 4 turns into GPU + CPU renderer

    Finalrender turns into a GPU + CPU renderer. I guess it will have all the features of the CPU version plus the speed of the GPU.

    Here's a feature list:
    http://www.cebas.com/?pid=news_next&nid=378
    http://forums.cgsociety.org/showthre...?f=59&t=901457
    ::::::::::::::::::::::::::
    http://www.taunushelden.de

  • #2
    It will be interesting to see how that works out - I was under the impression that the bandwidth costs of swapping back and forth between GPU and CPU were prohibitive to say the least.

    • #3
      Originally posted by duke2
      It will be interesting to see how that works out - I was under the impression that the bandwidth costs of swapping back and forth between GPU and CPU were prohibitive to say the least.
      Yeah, we tried that, but the speed-up was not really worth it. The CPU is simply not fast enough to keep the GPUs working at full speed, considering that a GPU can process data 10 to 20 times faster.

      Best regards,
      Vlado
      I only act like I know everything, Rogers.
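
      A minimal CUDA sketch of the imbalance Vlado describes, assuming nothing about V-Ray or finalRender internals (the buffer size and kernel are arbitrary): it times the PCIe copy of a buffer against a trivial kernel over the same data. On typical hardware the copy dominates, so a renderer that shuttles data across the bus every pass spends most of its time feeding the GPU rather than computing.

        // Sketch only: compare PCIe transfer time with on-device compute time.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void scale(float* data, int n)
        {
            // grid-stride loop so any grid size covers the whole buffer
            for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
                 i += blockDim.x * gridDim.x)
                data[i] *= 2.0f;   // trivial per-element work
        }

        int main()
        {
            const int n = 64 * 1024 * 1024;            // 256 MB of floats
            const size_t bytes = n * sizeof(float);

            float* host = (float*)malloc(bytes);       // contents irrelevant for timing
            float* dev = nullptr;
            cudaMalloc(&dev, bytes);

            cudaEvent_t t0, t1, t2;
            cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

            cudaEventRecord(t0);
            cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // cross the bus
            cudaEventRecord(t1);
            scale<<<1024, 256>>>(dev, n);                          // stay on the GPU
            cudaEventRecord(t2);
            cudaEventSynchronize(t2);

            float copyMs = 0.0f, kernelMs = 0.0f;
            cudaEventElapsedTime(&copyMs, t0, t1);
            cudaEventElapsedTime(&kernelMs, t1, t2);
            printf("copy: %.2f ms, kernel: %.2f ms\n", copyMs, kernelMs);

            cudaFree(dev);
            free(host);
            return 0;
        }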

      • #4
        So you're saying it doesn't make sense to mix GPU and CPU rendering?
        Do you think CPU rendering will be obsolete soon? What am I doing then with my 8-core render farm?
        Marc Lorenz
        ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___
        www.marclorenz.com
        www.facebook.com/marclorenzvisualization

        • #5
          From what I gathered from many of the vendors at Siggraph, the limitation is the lack of logic on the GPU. The GPU is fantastic at just blasting through raw data, but when you have several processes that rely on another thread finishing before they can start, the GPU really starts losing its performance advantage. Couple that with the fact that most GPUs have a maximum of about 2GB of onboard RAM (unless you count the new 6GB nVidia Quadro 6000), and it just isn't up to the task for large scenes; the sketch after this post shows the kind of memory check that constraint forces. Many people simply said that the current batch of video cards isn't quite ready for all this data crunching unless it is specifically set up for something like CUDA with a QuadroPlex rendering solution.

          So in the meantime, almost everyone said they wouldn't be throwing away their dual hexa-core CPU render nodes anytime soon. But there are some pretty cool things that may be happening in the next couple of years.

          Vlado may be MUCH more versed in the limitations and problems; I am just trying to share what I took away from most of my conversations, so don't quote me on anything. Most conversations with the developers started going WAY over my head.
          Troy Buckley | Technical Art Director
          Midwest Studios
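
          A quick sketch of that memory check, in the same hedged spirit: sceneBytes is a made-up stand-in for a renderer's estimate of scene size, and no actual renderer API is involved. It simply asks the CUDA runtime how much memory each board has free and reports whether the scene would fit or would have to fall back to the CPU.

            // Sketch only: decide per GPU whether a scene of a given size would fit.
            #include <cstdio>
            #include <cuda_runtime.h>

            int main()
            {
                const size_t sceneBytes = 3ull << 30;   // hypothetical 3 GB scene

                int count = 0;
                cudaGetDeviceCount(&count);
                for (int d = 0; d < count; ++d) {
                    cudaSetDevice(d);
                    size_t freeB = 0, totalB = 0;
                    cudaMemGetInfo(&freeB, &totalB);    // free/total device memory
                    printf("GPU %d: %zu MB free of %zu MB -> %s\n",
                           d, freeB >> 20, totalB >> 20,
                           freeB >= sceneBytes ? "scene fits" : "fall back to CPU");
                }
                return 0;
            }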

          • #6
            I spoke to Edwin about the CPU/GPU implementation in FR at Siggraph, and certainly the disparity of speed between the two solutions seems to preclude a true blending of the two, as Vlado has pointed out.

            But I think I would like to eventually see a mode in V-Ray in which you can send frames to both the GPU and the CPU in an animation. This could be controlled by a noise threshold, much like the animation script we now have in RT. The CPU frames will take a lot more time to render, of course, but at least we would be able to use our existing CPU machines simultaneously with GPUs in the same job as we convert to GPU-only farms in the future; a dispatcher along those lines is sketched after this post.

            Just looking ahead,

            -Alan
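
            A sketch of that hybrid dispatch, with renderOnGpu and renderOnCpu as hypothetical stand-ins (neither V-Ray nor finalRender exposes such an API): one worker per CUDA device plus a couple of CPU workers all pull frames from a shared counter, and every frame is rendered to the same noise threshold, so a fast GPU simply comes back for the next frame sooner while slower CPU nodes still contribute to the same job.

              // Sketch only: GPU and CPU workers pull animation frames from one queue.
              #include <atomic>
              #include <cstdio>
              #include <thread>
              #include <vector>
              #include <cuda_runtime.h>

              static std::atomic<int> nextFrame{0};
              const int totalFrames = 100;
              const float noiseThreshold = 0.01f;  // same target for CPU and GPU frames

              // hypothetical: render until the frame's noise estimate drops below threshold
              void renderOnGpu(int device, int frame, float threshold)
              {
                  cudaSetDevice(device);
                  printf("GPU %d rendered frame %d to noise %.3f\n", device, frame, threshold);
              }

              void renderOnCpu(int frame, float threshold)
              {
                  printf("CPU rendered frame %d to noise %.3f\n", frame, threshold);
              }

              void worker(bool isGpu, int device)
              {
                  // fetch_add hands each frame to exactly one worker; faster workers
                  // naturally take more frames, so nothing waits on the slow nodes
                  for (int f; (f = nextFrame.fetch_add(1)) < totalFrames; ) {
                      if (isGpu) renderOnGpu(device, f, noiseThreshold);
                      else       renderOnCpu(f, noiseThreshold);
                  }
              }

              int main()
              {
                  int gpus = 0;
                  cudaGetDeviceCount(&gpus);

                  std::vector<std::thread> pool;
                  for (int d = 0; d < gpus; ++d) pool.emplace_back(worker, true, d);
                  const int cpuWorkers = 2;  // arbitrary; one per spare CPU node
                  for (int c = 0; c < cpuWorkers; ++c) pool.emplace_back(worker, false, -1);

                  for (auto& t : pool) t.join();
                  return 0;
              }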
