We're in the process of re-evaluating our pipeline from set dressing through animation to lighting and rendering. Over time we've had to tackle scenery that keeps growing in terms of content and scene management, and we're now investigating what to keep in mind to also stay efficient with regard to rendering in V-Ray.
More and more often our scenes get packed with loads of similar meshes (e.g. lots of books on a bookshelf) in large scenery with animated characters. Most (if not all) of our point-cache data is pushed from animation to lighting as Alembic, using custom attributes on the meshes/transforms to identify the exact node/asset so a shader connection can be hooked up automatically in the render scene. In short, our workflow: we publish a "look" for an asset that can be applied automatically in a render scene onto the loaded Alembic, using node IDs that identify what object it is (e.g. which particular mesh of an asset). This look can apply shader sets, V-Ray property sets, render stats and V-Ray attributes to transforms and meshes via these IDs.
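To make the idea concrete, here's a minimal sketch of that ID-based look assignment. All names here (LOOK, apply_look, the shading-group names) are made-up illustrations, not our actual pipeline API; in production this would of course go through maya.cmds to read the custom ID attributes and assign shading groups.

```python
# A published "look": maps an asset/node ID to what the look applies.
# (Hypothetical data; the real look also carries property sets etc.)
LOOK = {
    "bookA_geo": {"shader": "bookA_SG", "vray_attrs": {"vraySubdivEnable": 1}},
    "bookB_geo": {"shader": "bookB_SG", "vray_attrs": {}},
}

def apply_look(scene_nodes, look):
    """Match loaded Alembic nodes to a published look by their ID attribute.

    scene_nodes: {maya_dag_path: id_attribute_value}
    Returns {maya_dag_path: look_entry} for every node whose ID is known.
    """
    assignments = {}
    for node, node_id in scene_nodes.items():
        if node_id in look:
            assignments[node] = look[node_id]
    return assignments

# Example: two Alembic shapes carrying the same custom ID attribute,
# so both receive the same shader assignment.
scene = {
    "|shelf|book1|book1Shape": "bookA_geo",
    "|shelf|book2|book2Shape": "bookA_geo",
}
result = apply_look(scene, LOOK)
```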
Loading a big Alembic (even with all those bookshelves of similar meshes; Alembic "instances" similar geo data inside the .abc file) has rarely been an issue in Maya, aside from some slow time scrubbing for larger datasets. Nevertheless, when starting a V-Ray render a lot of time is spent "updating vray scene" and merely translating the contents of Maya to V-Ray, even though 80% of the meshes are the same books. Because the Alembic is loaded into Maya as regular meshes, it seems V-Ray has a hard time recognizing these "similar shapes" (because each is its own shape in the Maya scene?), which could theoretically be rendered as instances since they are the same geometry with just different transforms.
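The detection we're hoping for amounts to something like this: hash each mesh's local-space geometry so identical shapes collapse to one key, then treat each group as one piece of geometry with many transforms. A rough sketch of that grouping logic (pure Python, just to illustrate the idea; real mesh data would come from the Maya API):

```python
import hashlib

def geo_signature(points, faces):
    """Hash local-space geometry so identical meshes share one key.

    points/faces are assumed to be nested tuples of a mesh's local
    vertex positions and face-vertex indices (illustrative format).
    """
    h = hashlib.sha1()
    h.update(repr(points).encode())
    h.update(repr(faces).encode())
    return h.hexdigest()

def group_instances(meshes):
    """Group meshes that share identical geometry.

    meshes: {shape_name: (points, faces)}
    Returns {signature: [shape_names]} -- each group could be rendered
    as one mesh instanced under many transforms.
    """
    groups = {}
    for name, (points, faces) in meshes.items():
        sig = geo_signature(points, faces)
        groups.setdefault(sig, []).append(name)
    return groups

# Two identical books and one distinct vase -> two groups.
book_geo = (((0, 0, 0), (1, 0, 0), (1, 1, 0)), ((0, 1, 2),))
vase_geo = (((0, 0, 0), (2, 0, 0), (2, 2, 0)), ((0, 1, 2),))
groups = group_instances({
    "book1Shape": book_geo,
    "book2Shape": book_geo,
    "vaseShape": vase_geo,
})
```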
A possible solution would be to load the geometry as a V-Ray proxy (importing the proxy from the .abc). Yet this gives no per-object control to add things to object property sets or to add e.g. OpenSubdiv attributes (creases!). As such, there's no way to optimally manage such a render scene.
I'm very curious to hear what others do to manage their larger projects so they're both artist-friendly and still optimized for translation to the renderer, with good renders and nice render times. Of course there's personal preference here and there, but I have the feeling we might be missing out on some neat tricks that could really get our larger scenery loaded optimally.
In some cases we pushed our "transformation data" into a Maya particle shape and placed similar objects at their positions with a Maya instancer. We found that V-Ray does a very good job of loading that quickly, compared to having the same objects in your scene from an Alembic. Using this we still have full control to specify what V-Ray should do with each individual mesh being "instanced" regarding shaders or V-Ray attributes (even though it's not per instance, the artist can still manage it per object within that instance). I was wondering if there's a way to pass data to the V-Ray proxy to manage the data inside of it in a similar manner to what V-Ray attributes or V-Ray sets can do on individual nodes in a Maya scene.
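For reference, the data we push into the particle shape boils down to parallel per-particle arrays: a position plus an objectIndex selecting which instanced object goes there (which is the layout a Maya instancer consumes). A small pure-Python sketch of preparing that data, with made-up object names; the actual per-particle attributes are set through the particle shape in Maya:

```python
def build_instancer_data(placements):
    """Turn (object_name, position) placements into instancer-style arrays.

    placements: list of (object_name, (x, y, z)) tuples.
    Returns (objects, positions, indices) where objects is the unique
    object list and indices[i] selects which object sits at positions[i],
    mirroring the position + objectIndex per-particle layout.
    """
    objects = []
    positions = []
    indices = []
    for name, pos in placements:
        if name not in objects:
            objects.append(name)
        positions.append(pos)
        indices.append(objects.index(name))
    return objects, positions, indices

# Three placements, but only two unique objects to translate/load.
objects, positions, indices = build_instancer_data([
    ("bookA", (0.0, 1.0, 0.0)),
    ("bookB", (0.2, 1.0, 0.0)),
    ("bookA", (0.4, 1.0, 0.0)),
])
```

This is exactly why the translation is cheap: V-Ray only has to translate each unique object once, and the per-particle arrays are just transforms and indices.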