The same goes for dual-GPU cards, unfortunately (as savage309 points out), but from what I gather this may be changing in both cases soon.
So, please confirm for me... this card will only be able to load 6 GB of a project into the card, since it divides the 12 GB between the 2 GPUs? If that's the case, the only benefit is the form factor? Wow, that's a harder sell.
Yeah, that's correct.
You basically gain compactness, so you can fit more of them in your workstation. Four PCIe slots can give you either 4 GPUs or 8 GPUs.
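(This isn't specific to V-Ray: each GPU on a dual-GPU board has its own memory pool, so the scene has to fit on each device separately.) As a rough illustration, and not V-Ray code, just a minimal CUDA runtime sketch, a program that enumerates the devices will report a dual-GPU 12 GB board as two separate devices of roughly 6 GB each:

// query_gpus.cu -- build with: nvcc query_gpus.cu -o query_gpus
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::fprintf(stderr, "No CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A dual-GPU board shows up as two separate devices here,
        // each reporting only its own half of the advertised memory.
        std::printf("Device %d: %s, %.1f GB\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}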
Though the video I showed with the nVidia cluster is really just playing around; we should do a more impressive one...
Best regards,
Vlado
Thanks Vlado! Very interesting stuff.
I quite like the cluster implementation, especially the option to dynamically add DBR nodes. Do you think that could be ported in any way to Maya, or be controlled from the command line?
Yes, the 3.0 SDK does allow render nodes to be added (or removed) on the fly, we just need to figure out a good UI for this.
Best regards,
Vlado
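Just to illustrate the idea (this is not the actual V-Ray 3.0 SDK API; add_host() and remove_host() below are hypothetical placeholders for whatever the SDK exposes), a minimal command-line driver that adds and removes DR nodes while a render is running might be structured like this:

// dr_hosts.cpp -- hypothetical sketch only; add_host()/remove_host() stand in
// for whatever the V-Ray 3.0 SDK provides for on-the-fly DR node changes.
#include <iostream>
#include <set>
#include <string>

// Hypothetical hooks into the renderer -- NOT actual V-Ray SDK calls.
void add_host(const std::string& host)    { std::cout << "adding "   << host << "\n"; }
void remove_host(const std::string& host) { std::cout << "removing " << host << "\n"; }

int main() {
    std::set<std::string> active;          // render nodes currently in the job
    std::string cmd, host;
    // Simple loop: "add <host:port>", "remove <host:port>", "quit"
    while (std::cin >> cmd && cmd != "quit") {
        std::cin >> host;
        if (cmd == "add" && active.insert(host).second)
            add_host(host);                // attach the node without restarting the render
        else if (cmd == "remove" && active.erase(host))
            remove_host(host);             // detach the node; remaining nodes keep rendering
    }
    return 0;
}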
Wow, that's awesome. It would be great if it could be a standalone UI though... then again, I think that might be hard given different jobs, deadlines and so on... still, nice to know it's coming.
Every time I try RT GPU on an architectural interior, something is not supported or broken.
If you want to use the GPU successfully, it's best to build your scenes with it from the start. Just taking an existing scene made for the regular V-Ray and throwing it at the GPU will almost always run into some little option somewhere that's not supported or that works slightly differently.