Not at the moment; the hair uses a built-in shader specifically designed for hair rendering. This will be implemented in one of the next updates, though.
Thanks for the reply. I have to tell you, Vray renders the hair damn fast, especially when there is indirect illumination. I have been testing this morning, and I almost can't believe how much faster it is. I do most of our hair in Maya + Renderman + Shave, using the buffer mode to render. It will be interesting to test the hair passes using Vray in Maya when it is implemented.
This is off topic, but is there a way to tell vray to calculate the indirect illumination based on surface normals instead of the camera? In RM we can specify this as an option when baking GI (of course this is really just AO, as you know). Basically, we can set a bake camera to "see" our entire scene, bake once, and put our camera anywhere in the scene for final renders.
I know that the initial "baking" would take a while, but it would be a real time saver, not to mention eliminate flickering in static scenes.
Photon mapping computes illumination for the whole scene. Irradiance Map and Light Cache are view-dependent methods and shoot rays from the camera. The samples of the Irradiance Map are stored in 3D, so you can use multiple cameras to shoot rays from multiple locations and combine them into a single Irradiance Map solution holding the illumination of the scene as viewed from all those cameras. But you can't really bake the lighting for the entire scene. I guess you could try render to texture?
Computing illumination for the whole scene is generally inefficient, since you'll never see the entire scene from an equal distance.
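To make the multi-camera idea above concrete, here is a minimal sketch of that prepass workflow. The `vray_scene` module and its functions are hypothetical stand-ins for whatever scripting access your host app exposes (MaxScript in 3ds Max, MEL/Python in Maya); the irradiance map mode names ("Single frame", "Incremental add to current map", "From file") and the .vrmap file format are real V-Ray features.

```python
import vray_scene  # hypothetical host-app binding, for illustration only

# Cameras placed so that together they "see" the whole scene.
CAMERAS = ["bakeCam_A", "bakeCam_B", "bakeCam_C"]
IRMAP_FILE = "scene_full.vrmap"

# First camera computes a fresh map; the rest add their samples to it.
vray_scene.set_irradiance_map_mode("Single frame")
for i, cam in enumerate(CAMERAS):
    if i == 1:
        vray_scene.set_irradiance_map_mode("Incremental add to current map")
    vray_scene.set_active_camera(cam)
    vray_scene.render_prepass_only()  # compute GI samples, skip the final image

vray_scene.save_irradiance_map(IRMAP_FILE)

# For final renders, load the combined solution and place the render camera
# anywhere within the region the bake cameras covered.
vray_scene.set_irradiance_map_mode("From file")
vray_scene.set_irradiance_map_file(IRMAP_FILE)
```

The key point is the incremental mode: each prepass adds samples to the map already in memory, so the saved file holds the union of what every bake camera saw.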
VRay computes everything based on surface normals. Are you talking about creating ambient occlusion from the ZDepth buffer, like in games?
I was thinking more like radiosity, only not crappy.
I'm no programmer, so this may seem crazy, and may be silly.
I would just love to be able to calculate the lighting for an entire scene in one go, similar to photon mapping. Then I would not have to calculate based on camera moves. I know that it isn't very efficient, but spending a couple of days "baking" the indirect would be, for me anyway, time saving. The clients always wind up "tweaking" the cameras near the end of the project; if a complete lighting solution was already computed, this would never be a problem. Not to mention finding all the problem areas along a camera path that can cause "holes" in the GI solution when using the nth-frame method (like part of a wall not seen until the camera is right up next to it). In very heavy/complex scenes it becomes a pain moving the camera to all the potential problem areas. I would gladly spend a few days just calculating the GI to avoid these issues.
I am not sure if this makes any sense outside of my mind.
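For reference, the nth-frame method mentioned above is built on the same incremental mode as the multi-camera sketch earlier. Here is a minimal sketch of that fly-through prepass, again with hypothetical `vray_scene` wrapper functions standing in for the host app's scripting:

```python
import vray_scene  # hypothetical host-app binding, for illustration only

START, END, STEP = 0, 500, 10  # prepass every 10th frame of the camera path
IRMAP_FILE = "flythrough.vrmap"

vray_scene.set_irradiance_map_mode("Incremental add to current map")
for frame in range(START, END + 1, STEP):
    vray_scene.set_current_frame(frame)
    vray_scene.render_prepass_only()  # accumulate GI samples into one map

vray_scene.save_irradiance_map(IRMAP_FILE)

# Final pass: render every frame, interpolating from the saved samples only,
# which avoids frame-to-frame flicker.
vray_scene.set_irradiance_map_mode("From file")
vray_scene.set_irradiance_map_file(IRMAP_FILE)
```

Note that this is exactly where the "holes" problem comes from: a surface only visible up close at an in-between frame gets no samples during the stepped prepass. Adding extra prepass frames or extra bake cameras at those spots patches the map.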
Well, you can sort of do this with RTT - you can use it not to bake textures as such, but to make V-Ray go through all the scene objects and produce an irradiance map for them.
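A rough sketch of that RTT trick, under the same assumptions as the earlier snippets (the `vray_scene` functions are hypothetical stand-ins; the idea of discarding the baked textures and keeping only the accumulated GI samples is the point of the workaround):

```python
import vray_scene  # hypothetical host-app binding, for illustration only

IRMAP_FILE = "scene_objects.vrmap"

vray_scene.set_irradiance_map_mode("Incremental add to current map")
for obj in vray_scene.scene_objects():
    # A tiny bake target is enough; we only want the GI samples it forces
    # V-Ray to place on the object's surfaces, not the texture itself.
    vray_scene.render_to_texture(obj, width=32, height=32, discard_output=True)

vray_scene.save_irradiance_map(IRMAP_FILE)
```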