Heh. Yeah. I thought it was a typo at first, until I realized that OpenGL couldn't do this.
Until things get better here and I can lobby for a new system, I think that V-Ray RT might be a good compromise from a productivity standpoint. Hopefully a new system and the GPU variant will happen around the same time. Finally, a reason to possibly talk IT into playing around with SLI.
Will this maybe one day be integrated (could still be two products, but interoperable) into the V-Ray for "insert favourite 3d app" pipeline, so one could use this to calculate the GI super fast and maybe do the final rendering with regular V-Ray? Does it sample and antialias as well as regular V-Ray, so that regular V-Ray becomes obsolete maybe? Does OpenCL have any overhead compared to CUDA regarding speed? Should I shut up?
From what I've read, OpenCL will execute programs on just about any processor, not just GPUs. Does this mean that V-Ray RT could blend between what the CPU and GPU do best?
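Just to illustrate that point, here's a rough sketch of how OpenCL exposes CPUs and GPUs side by side, using the pyopencl bindings (this assumes pyopencl and a working OpenCL driver are installed; it's only a device listing, nothing V-Ray specific):

    import pyopencl as cl

    # List every OpenCL platform (NVIDIA, AMD, Intel, ...) and the devices it exposes.
    # CPU and GPU devices come through the same API, which is what would let a
    # renderer split work between them.
    for platform in cl.get_platforms():
        print(platform.name)
        for device in platform.get_devices():
            kind = cl.device_type.to_string(device.type)
            print("  %s: %s (%d compute units)" % (kind, device.name, device.max_compute_units))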
This is very exciting, to say the least. Vlado, I assume this will also be used to render out animations? The presentation said that the GPU output is identical to the frame buffer.
Let's think about this... If I recall, the CPU rendered the frame in roughly 2 minutes, or 120 seconds. The GPU was getting 6 frames per second on similar frames (granted, it was a grainy 6 frames/second; I'll account for that below). This would make the GPU-accelerated V-Ray 720 times faster than using the CPU...
If this is working on the 285 cards, it should work just fine on a Tesla system. We use the 285s to develop higher-end Tesla-based systems. We have several proven designs for these personal supercomputers (PSC), but one in particular is perfectly suited for this application:
Description: Personal Supercomputer X 3 Teslas
Processor: i7-series X 1
MPU: 2.67 GHz
MPU Cores: 4
MPU Threads per Core: 2
GPU Per Host Computer: 4
Host MPU Cores to GPU: 1
(Host MPU cores X Threads) to GPU: 2
Tesla: C1060 X 3
Graphics Card: Quadro FX 3800
OS: Windows Vista 64 bit
Memory: 12GB DDR3-1333MHz
Hard Drive: 1 x 1TB SATA II 7200 RPM 32MB Cache
RAID: Optional
Form Factor: workstation, 4U, or 5U
From a pure specs standpoint, the GTX 285s are computationally similar to the Teslas, but I would really like to test the 285s vs. the Teslas in this application because I'm certain the Teslas would be faster...
But let's just assume the Teslas in our PSC above have similar speeds to the GTX cards in the demo: 3 Teslas + 1 FX 3800 card (I will assume the FX 3800 has 0.75 the computational power of 1 Tesla).
3.75 x 720 = 2700 ...wow
This means that in a single workstation you can get speeds up to 2,700 times faster than an i7-based workstation. The system above would run about $14,000... seems too good to be true. I hope I understood the demo correctly. I did notice that the 6 frames/second wasn't a true production-quality render. But even if it is 1-2 frames/second, that is still phenomenal performance. And if this will work in DR mode with multiple Tesla-based systems, the sky is the limit... soon we might be saying bye-bye to traditional render farms.
checking my numbers again...
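Here's the back-of-the-envelope check I'm doing, as a plain Python sketch (the 120 s CPU frame and 6 fps GPU rate are just what I remember from the demo, and the 0.75 Tesla-equivalent for the FX 3800 is my own guess):

    # Rough check of the speedup estimate above; all inputs are assumptions.
    cpu_seconds_per_frame = 120.0   # ~2 minutes per frame on the i7 in the demo
    gpu_frames_per_second = 6.0     # grainy preview rate shown for the GPU

    single_gpu_speedup = cpu_seconds_per_frame * gpu_frames_per_second
    print(single_gpu_speedup)       # 720.0 -> one GTX 285 vs. the CPU

    # 3 Tesla C1060s plus an FX 3800 counted as 0.75 of a Tesla
    gpu_equivalents = 3 + 0.75
    print(gpu_equivalents * single_gpu_speedup)   # 2700.0 -> the "2,700x" figure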
EDIT:
After looking into this further, the speed difference between using Teslas and the GTX cards is marginal. The system can be built for around $6k-7k with GTX cards. The question is: would this application benefit from the 4 GB of memory on the Tesla? If not, the GTX-based system along with an FX 3800 or 5800 and dual Xeon W5580s would be an extremely impressive production machine.
I'm not certain, but I think you need to load all geometry and textures into GPU memory, so you'll need as much VRAM as you can get to render heavy scenes. Of course you can use normal RAM, but it's slower than using VRAM only.
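To put some (made-up) numbers on that, here's a crude sketch of the kind of footprint estimate involved; the bytes-per-vertex, per-triangle, and per-texel figures are my own assumptions, not anything V-Ray publishes:

    # Crude estimate of how much memory a scene needs if all geometry and
    # textures have to sit in VRAM. Per-element sizes are assumptions.
    def scene_vram_mb(triangles, vertices, texture_pixels):
        tri_bytes = triangles * 3 * 4    # three 32-bit vertex indices per triangle
        vert_bytes = vertices * 32       # position + normal + UVs, roughly 32 bytes
        tex_bytes = texture_pixels * 4   # uncompressed RGBA, 8 bits per channel
        return (tri_bytes + vert_bytes + tex_bytes) / (1024.0 * 1024.0)

    # A 10M-triangle scene with sixteen 4K textures comes out around 1.3 GB --
    # already past the 1 GB on a GTX 285, but comfortably inside a 4 GB C1060.
    print(scene_vram_mb(10000000, 5000000, 16 * 4096 * 4096))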
Wow - very impressive! So is the final version of Vray RT going to be a standalone program, or will it integrate with 3ds Max and any other plugins, such as Forest Pro, that one might have?
V-Ray RT will pretty much stay as it is - an ActiveShade integration inside of 3ds Max with a standalone render server (or servers). Whether other 3rd party plugins will be supported is a somewhat more complicated matter.
I don't want to sound like a negative Nancy, but I'd remind everyone of the gap between the SIGGRAPH demo and the product release of V-Ray RT. I wouldn't be holding out for this year, for example.