Hi guys.
Splitting this issue out from my other topic (sample differences between proxy and Maya model in certain setups, which I'm still investigating) in order to have a more focused discussion.
It seems there is quite a large difference in render speed between native Maya geometry and a VRayProxy loading an Alembic file.
On our production assets and scenes, at least, the differences are significant.
In order to debug it myself and to give you something concrete, I've tried to construct a simple scene that illustrates what we see.
I've created a simple scene, which is attached to this post:
- Two polygon spheres (one with 100/100 subdivisions and one with the standard 20/20; 20560 triangles in total)
- Exported to Alembic using Maya defaults (UV Write and Write Creases on); a scripted version of the setup and export is sketched after this list
- The Alembic is then loaded through a VRayProxy node using standard settings
- An ALshader (no inputs, and barely any changes to the default settings) is assigned to the Maya geometry, and to both spheres in the proxy through the VrayMtl node (this time I'm sure it's assigned, because I was smart enough to change the colors)
- A V-Ray dome light is created
- Sampling quality is increased (low noise threshold) to get render times high enough not to be affected by momentary CPU load on the machine, or by the general overhead of starting the renderer
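For reference, here is a minimal Python (maya.cmds) sketch of the sphere setup and Alembic export; the output path is a placeholder, and flag availability (e.g. -writeCreases) depends on the AbcExport plug-in version that ships with your Maya:

```python
# Minimal sketch of the test geometry and Alembic export (Python, maya.cmds).
import maya.cmds as cmds

cmds.loadPlugin('AbcExport', quiet=True)

# Two polygon spheres: one at 100/100 subdivisions, one at the default 20/20
# (together they come to the 20560 triangles mentioned above).
hi_sphere = cmds.polySphere(name='sphere_hi', subdivisionsX=100, subdivisionsY=100)[0]
lo_sphere = cmds.polySphere(name='sphere_lo', subdivisionsX=20, subdivisionsY=20)[0]
tri_count = (cmds.polyEvaluate(hi_sphere, triangle=True) +
             cmds.polyEvaluate(lo_sphere, triangle=True))
print('total triangles: %d' % tri_count)  # expected: 20560

# Export both spheres to one Alembic file with Maya defaults plus
# UV Write and Write Creases, matching the settings described above.
abc_path = '/tmp/proxy_speed_test.abc'  # placeholder path
job = ('-frameRange 1 1 -uvWrite -writeCreases -dataFormat ogawa '
       '-root |%s -root |%s -file %s' % (hi_sphere, lo_sphere, abc_path))
cmds.AbcExport(j=job)
```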
These are the stats I get from rendering this scene, toggling visibility between the proxy and the Maya geo.
I've done multiple tests over the last few days and I get the same behavior every time, so this should not be a random fluke.
- 439 seconds: Maya geometry (default behavior, loaded into the Embree static tree)
- 507 seconds: Maya geometry (forced into the Embree dynamic tree)
- 525/550 seconds: V-Ray proxy (default behavior, loaded into the Embree dynamic tree)
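For completeness, here is a rough sketch of how such a toggle-and-time comparison could be scripted; the group names and camera are placeholders, not the actual names in the attached scene:

```python
# Rough toggle-and-time sketch (Python, maya.cmds).
# 'mayaGeo_grp', 'proxy_grp' and 'persp' are placeholder names.
import time
import maya.cmds as cmds

def timed_render(show, hide, camera='persp'):
    """Show one group, hide the other, render one frame and return the wall time."""
    cmds.setAttr(show + '.visibility', True)
    cmds.setAttr(hide + '.visibility', False)
    start = time.time()
    cmds.render(camera)  # renders a single frame with the current renderer
    return time.time() - start

maya_seconds = timed_render('mayaGeo_grp', 'proxy_grp')
proxy_seconds = timed_render('proxy_grp', 'mayaGeo_grp')
print('maya geo: %.1f s   proxy: %.1f s' % (maya_seconds, proxy_seconds))
```

Timing the render call like this includes scene translation and startup overhead, which is why the sample counts were pushed up as described above, so that the raytracing time dominates.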
The images produced by the two Maya geometry tests are identical, but with the V-Ray proxy you can see a small variation along the edge, probably caused by reflections (a variation we would not visually object to if either method were used out of context from the other).
That there is a visual difference at all hints that the geometry inside the renderer is perhaps not exactly the same.
Sample counts are pretty much the same in all three renders (not exactly identical, but not different enough to affect render times).
From what we see (I think), the difference gets larger the more complicated and heavy the scene is (more raycasts give a larger difference), which hints that it is perhaps an Embree tree speed difference; hence my test cases.
Looking at the times for our production scenes (thousands of objects in a proxy and millions of polygons), the relative difference increases, and in some of our scenes I see almost a 2x difference in render time between proxy and Maya models (though I have to admit that other factors could be in play there compared to this specific test scene, as often happens with complicated scenes).
My questions, then:
- Am I handling my Alembic exports correctly? Are there any settings I should focus on that could have a clear effect?
- Am I handling my V-Ray proxy correctly? Are there any subdivision settings in play in my test scene that I've overlooked?
- Is there a difference in how V-Ray treats (subdivides, tessellates) my proxy geometry, e.g. trying to render it as a subdiv (though it doesn't appear to be smoothed)?
- Are there any debugging flags I can turn on that will give me more insight into the amount of geometry loaded/handled (and how it is perhaps subdivided or otherwise treated)?
- Are there any debugging flags in general that would give me more insight into what could be different between my tests?
- Is the difference in speed between the dynamic and static Embree trees expected? (The Maya geometry shows a difference that can be compared directly, which is weird.)
- Assuming I have done everything correctly and it really is the same geometry, why is there a difference between a proxy node using the dynamic tree and Maya geometry using the same tree?
- Are there other differences in how proxy nodes have their geometry submitted/handled, besides being pushed into static or dynamic Embree trees?
(and I really hope I didn't mess up in my tests this time around)
Thanks for your help in advance,
Jimmi
The full outputs from the renders are in the next post.