
V-Ray RT on a GPU - SIGGRAPH 2009 Demo


  • #31
    Congratulations Vlado et al. This looks amazing.


    Brett Simms

    www.heavyartillery.com
    e: brett@heavyartillery.com



    • #32
      Amazing work !!!

      Congrats !!!

      Bravo !!!


PS: I hope it will be usable ("salable") soon.
      My Flickr



      • #33
        Is there a reason they chose to use a GeForce card over a Quadro card? Will this technology work with other brands of video cards?



        • #34
Originally posted by devin
          Is there a reason they chose to use a GeForce card over a Quadro card? Will this technology work with other brands of video cards?
To show off that usable real-time performance can be achieved without having to spend $1500+ on a GPU.



          • #35
I'm so behind the times. I've never heard of OpenCL.

            ---------------------------------------------------
            MSN addresses are not for newbies or warez users to contact the pros and bug them with
            stupid questions the forum can answer.



            • #36
              Heh. Yeah. I thought it was a typo at first, until I realized that OpenGL couldn't do this.

Until things get better here and I can lobby for a new system, I think V-Ray RT might be a good compromise from a productivity standpoint. Hopefully a new system and the GPU variant will arrive around the same time. Finally, a reason to talk IT into playing around with SLI.



              • #37
Will this maybe one day be integrated (it could still be two products, but interoperable) into the V-Ray for "insert favourite 3D app" pipeline, so one could use it to calculate the GI super fast and perhaps do the final rendering with regular V-Ray? Does it sample and antialias as well as regular V-Ray, so that regular V-Ray might become obsolete? Does OpenCL have any speed overhead compared to CUDA? Should I shut up?
                Signing out,
                Christian



                • #38
From what I've read, OpenCL can execute programs on just about any processor, not just GPUs. Does this mean that V-Ray RT could blend the work between what the CPU and the GPU each do best?
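
As a purely illustrative sketch (not anything from V-Ray RT itself), this is roughly what that looks like at the OpenCL host-API level: the same enumeration call returns CPU and GPU devices alike, so in principle one program could dispatch work to both. Assumes an OpenCL SDK is installed and the program is linked with -lOpenCL.

/* Illustration only, not V-Ray code: list every OpenCL device, CPU or GPU. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        /* CL_DEVICE_TYPE_ALL returns CPUs, GPUs and accelerators alike. */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("%s (%s)\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
        }
    }
    return 0;
}

Whether V-Ray RT actually splits work between device types is up to Chaos Group, of course; this just shows that OpenCL itself doesn't rule it out.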



                  • #39
This is very exciting, to say the least. Vlado, I assume this will also be used to render out animations? The presentation said that the GPU output is identical to the frame buffer.

Let's think about this... If I recall, the CPU rendered the frame in roughly 2 minutes, or 120 seconds. The GPU was getting 6 frames per second on similar frames (granted, it was a grainy 6 frames per second; I'll account for that below). This would make the GPU-accelerated V-Ray roughly 720 times faster than using the CPU...
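
Spelling that arithmetic out (using the rough numbers quoted above, so treat it as an estimate, not a benchmark):

\[
\text{speedup} \approx \frac{6\ \text{frames/s}}{1/120\ \text{frames/s}} = 6 \times 120 = 720
\]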

If this works on the 285 cards, it should work just fine on a Tesla system. We use the 285s to develop higher-end Tesla-based systems. We have several proven designs for these personal supercomputers (PSCs), but one in particular is perfectly suited for this application:

Description: Personal Supercomputer X 3 Teslas
Processor: i7-series X 1
MPU: 2.67 GHz
                    MPU Cores: 4
                    MPU Threads: 2
                    GPU Per Host Computer: 4
                    Host MPU Cores to GPU: 1
                    (Host MPU cores X Threads) to GPU: 2
                    Tesla: C1060 X 3
                    Graphics Card: Quadro FX 3800
                    OS: Windows Vista 64 bit
                    Memory: 12GB DDR3-1333MHz
                    Hard Drive: 1 x 1TB SATA II 7200 RPM 32MB Cache
                    RAID: Optional
                    Form Factor: workstation or 4u or 5u

From a pure specs standpoint, the GTX 285s are computationally similar to the Teslas, but I would really like to test the 285s vs. the Teslas in this application, because I'm certain the Teslas would be faster...

But let's just assume the Teslas in our PSC above have similar speeds to the GTX cards in the demo: 3 Teslas + 1 FX3800 card (I will assume the FX3800 has 0.75 the computational power of 1 Tesla).

                    3.75 x 720 = 2700 ...wow

This means that in a single workstation you could get speeds up to 2,700 times faster than an i7-based workstation. The system above would run about $14,000... it seems too good to be true. I hope I understood the demo correctly. I did notice that 6 frames/second wasn't producing a true production-quality render. But even if it is 1-2 frames/second, that is still phenomenal performance. And if this will work in DR mode with multiple Tesla-based systems, the sky is the limit... soon we might be saying bye-bye to traditional render farms.

                    checking my numbers again...

                    EDIT:
After looking into this further, the speed difference between using Teslas and the GTX cards is marginal. The system can be built for around $6k-7k with GTX cards. The question is whether this application would benefit from the 4 GB of memory on the Tesla. If not, the GTX-based system, along with an FX3800 or 5800 and a dual Xeon W5580, would be an extremely impressive production machine.

                    -joe
                    www.boxxtech.com



                    • #40
I'm not certain, but I think you need to load all the geometry and textures into GPU memory, so you'll need as much VRAM as you can get to render heavy scenes. You can use normal RAM, of course, but it is slower than using VRAM only.
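
To give a rough idea of the trade-off, here is a hypothetical sketch at the OpenCL level (make_scene_buffer is an invented helper, not anything from V-Ray RT): a buffer created with CL_MEM_COPY_HOST_PTR gets its data copied into the card's VRAM, while CL_MEM_USE_HOST_PTR leaves the data in system RAM that the GPU then has to fetch across the bus, which is much slower per access.

/* Hypothetical illustration, not V-Ray RT code: two ways an OpenCL
 * renderer could hold scene data. */
#include <stddef.h>
#include <CL/cl.h>

cl_mem make_scene_buffer(cl_context ctx, void *scene_data, size_t bytes,
                         int keep_in_host_ram)
{
    cl_int err = CL_SUCCESS;
    cl_mem buf;

    if (keep_in_host_ram) {
        /* Data stays in ordinary system RAM; the GPU reads it over the bus. */
        buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                             bytes, scene_data, &err);
    } else {
        /* Data is copied into the device's own memory (VRAM) up front. */
        buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                             bytes, scene_data, &err);
    }
    return (err == CL_SUCCESS) ? buf : NULL;
}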
                      http://www.ylilammi.com/



                      • #41
Wow - very impressive! So is the final version of V-Ray RT going to be a standalone program, or will it integrate with 3ds Max and any other plugins, such as Forest Pro, that one might have?

                        Very exciting news either way!



                        • #42
                          V-Ray RT will pretty much stay as it is - an ActiveShade integration inside of 3ds Max with a standalone render server (or servers). Whether other 3rd party plugins will be supported is a somewhat more complicated matter.

                          Best regards,
                          Vlado
                          I only act like I know everything, Rogers.



                          • #43
Can you tell us whether models and textures have to be loaded into the video card's memory?
If so, is sharing the main memory possible, or is that too slow?
                            Reflect, repent and reboot.
                            Order shall return.



                            • #44
                              Well, I am quite stunned. Once the industry picks up I think RT is our next investment.

                              To me, there needs to be a way to 'save' the image from the preview window (if there isn't already).
                              Kind Regards,
                              Richard Birket
http://www.blinkimage.com


                              • #45
I don't want to sound like a negative Nancy, but I'd remind everyone of the gap between the SIGGRAPH demo and the product release of V-Ray RT. I wouldn't hold out for this year, for example.

