I've been pondering how to find the optimal settings for your bucket size.
Too small and you lose a lot of time to the overhead involved with DR; too big and there's a growing chance you'll end up waiting for the slowest PC in the farm to finish.
-> If you have a DR farm with identical CPUs, then this is easy:
divide your image size by the number of render nodes you've got. That gives the best rendering distribution with a minimum of overhead.
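As a rough sketch of that division (the function name and the band-style split are my own illustration, not a V-Ray setting):

```python
def uniform_bucket_width(image_width: int, num_nodes: int) -> int:
    """One vertical band per render node; round up so the bands cover the image."""
    return -(-image_width // num_nodes)  # ceiling division

# Example: a 1920 px wide image on a 6-node farm -> 320 px wide buckets
print(uniform_bucket_width(1920, 6))  # -> 320
```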
-> It becomes more difficult with a mixed render farm, with differences in speed and brand. Ideally you want the last bucket to end on the fastest CPU available. This isn't always easy to enforce, but a good approximation is to do a test render on each system alone and note down the times.
If PC1 renders in 40s and PC2 in 45s, then you can say PC1 will render 9 buckets in the time PC2 has done 8.
In an ideal world you would then divide your image into 17 pieces. In reality the results may differ slightly, but it's surely a good idea to stay as close as possible to that ideal number.
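The proportional split above can be sketched like this (my own helper, not a V-Ray feature): over a common time span, each node's bucket count is inversely proportional to its benchmark time.

```python
from math import gcd

def bucket_counts(times_s):
    """Buckets each node finishes in the same wall time, from benchmark times."""
    # Common time span = LCM of the per-node benchmark times.
    lcm = times_s[0]
    for t in times_s[1:]:
        lcm = lcm * t // gcd(lcm, t)
    return [lcm // t for t in times_s]

# PC1 at 40 s and PC2 at 45 s: 9 vs 8 buckets, so split the image into 17.
counts = bucket_counts([40, 45])
print(counts, sum(counts))  # -> [9, 8] 17
```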
-> Although the following findings need more testing, I've observed a strange difference in render time when changing the aspect ratio of the buckets. There is a tendency for long vertical buckets to render faster than square ones.
-> Dividing the same image into 16 buckets, once in long vertical bands and once in buckets proportioned to the image (a 4x4 grid), resulted in the vertical ones being noticeably faster (about 10%).
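To make the two 16-bucket layouts concrete, here are their pixel dimensions for an assumed 1600x1200 image (the resolution is my example, not from the test above):

```python
def vertical_bands(w, h, n):
    """n tall buckets spanning the full image height."""
    return (w // n, h)

def square_grid(w, h, cols, rows):
    """cols x rows grid of roughly square buckets."""
    return (w // cols, h // rows)

print(vertical_bands(1600, 1200, 16))  # -> (100, 1200)
print(square_grid(1600, 1200, 4, 4))   # -> (400, 300)
```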
I'd ask Vlado whether the bucket labels are accessible through MaxScript, or whether there could be an option to see the relative performance of each DR machine on the rendered image?