So I have V-Ray Next for 3ds Max and four render nodes. Everything seems to work great. Three render nodes are older HP workstations with dual Xeon chips, and my workstation itself is an overclocked i7. The i7 is roughly twice as fast as the older HP workstations. In addition, I just added a 32-core Threadripper render node, and it is about three times as fast as my workstation and six times as fast as my other render nodes. It all works as it should, but I have a curious thing happening.
My animation has gotten into some frames that have a lot of transparency and shadows, and they're taking a lot longer per frame than usual. That part is not unexpected. Those speed relationships between the machines are normally borne out in my per-frame render times, but now, when it gets to a certain especially difficult frame, all of the machines render in roughly the same amount of time. My super-fast Threadripper, my workstation, and my old nodes are each taking about four hours per frame, despite the huge difference in their potential speed. On other frames that aren't so cumbersome, they show the expected relationship of roughly 1 to 3 to 6 in time.
I am rendering using Backburner network rendering, and the renderer is set to bucket mode, so there is no time limit. I'm using basically the default render settings except for increasing the max transparency levels to 1000. I know that seems excessive, but I have a bunch of stacked transparencies, like CAT-scan slices, that need it. That all works just fine in other situations, and in fact it works here too; I just can't understand how these machines with very different speeds, set in bucket mode, would render in roughly the same amount of time. As mentioned above, on less cumbersome parts of this sequence, around 40 minutes per frame, they revert to the expected render-time relationships, which track their speed (and their V-Ray Benchmark scores, for example) very closely.
The render uses about 25 GB of RAM; all boxes have 64 GB.
Please help me understand what is going on.
Morgus