Vlado,
Wow, proxy view? That's awesome. I hate to be an ingrate and complain about the displacement not making it. When is the next SP? Or is it a new version?
Thanks
It may be more interesting, but not necessarily successful. Right now, the GPU is really only good for unbiased rendering, and even that comes with limitations on the complexity of lighting and materials (not to mention limited GPU memory). Granted, this will change, but for the time being you will mostly see brute-force unbiased renderers.
Best regards,
Vlado
Has Chaos Group considered using the GPU for brute-force GI and then sending the result back to the regular V-Ray renderer to process as part of the "normal" rendering?
I was just curious if a hybrid technique was doable in a practical way.
The advantage would be that if a time-consuming GI step could be done successfully on the GPU and then used/interpolated by the normal rendering engine, you would have the option of offering a partial GPU solution to end users sooner than building a physically correct 100% GPU solution from the ground up. At least it would seem that way, but I am not a developer. My perspective is that of a single user with no render farm who does stills; maybe this makes no difference if you have a farm or already render on a network of 100 CPUs. I was thinking any speedup to the traditional procedure is good to have as an option, because the traditional procedure is reliable.
If it were a simple step in the rendering pipeline that didn't involve every single reflection and refraction, it would also help with the memory limits people are going to hit on normal GPU cards when they try to render everything in the scene at 3000x4000 on an 896MB card. The GPU GI phase wouldn't need to be 100% perfect, since it is just a step to be interpolated later, preferably from a map file, rather than the final image, which requires more development work to get perfect when GPU rendered.
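For what it's worth, here is a rough sketch of the kind of thing I mean, written as plain Python pseudocode rather than anything V-Ray actually exposes. A (pretend) GPU pass bakes brute-force GI into a sample map, and the CPU side simply interpolates those samples at shading time. All the names (GISample, bake_gi_on_gpu, interpolate_irradiance), the data, and the interpolation weights are invented for illustration and are not real V-Ray APIs or internals.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class GISample:
    position: tuple      # world-space position of the sample
    normal: tuple        # surface normal at the sample
    irradiance: tuple    # RGB irradiance gathered by the (imagined) GPU pass

def bake_gi_on_gpu(num_samples=1000):
    """Stand-in for the expensive GPU pass: scatter samples over the scene
    and store a GI result for each one (placeholder random data here)."""
    samples = []
    for _ in range(num_samples):
        pos = (random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(0, 3))
        nrm = (0.0, 0.0, 1.0)
        irr = (random.random(), random.random(), random.random())
        samples.append(GISample(pos, nrm, irr))
    return samples

def interpolate_irradiance(point, normal, samples, radius=1.0):
    """CPU-side lookup: blend nearby samples, weighted by distance and by how
    closely their normals agree -- the same spirit as irradiance-map
    interpolation, which is why the GPU result would not need to be
    pixel-perfect."""
    weight_sum = 0.0
    result = [0.0, 0.0, 0.0]
    for s in samples:
        d = math.dist(point, s.position)
        if d > radius:
            continue
        n_dot = sum(a * b for a, b in zip(normal, s.normal))
        w = max(0.0, n_dot) * (1.0 - d / radius)
        if w <= 0.0:
            continue
        weight_sum += w
        for i in range(3):
            result[i] += w * s.irradiance[i]
    if weight_sum == 0.0:
        return (0.0, 0.0, 0.0)   # no usable samples; a real renderer would fall back to tracing
    return tuple(c / weight_sum for c in result)

# "Final render" step on the CPU: shade a point using the baked GI map.
gi_map = bake_gi_on_gpu()
diffuse_gi = interpolate_irradiance((0.5, -1.2, 0.0), (0.0, 0.0, 1.0), gi_map)
print("interpolated GI at shading point:", diffuse_gi)
```

The point of the sketch is just that the interpolation step smooths over a rough GI result, so the GPU pass could stay simple and memory-light while the trusted CPU renderer handles everything else.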
Obviously, if everything that V-Ray does now will be on the GPU by year's end, then... never mind.