To preface: VRay's distributed rendering is the single best way to work that I've come across since I started in this industry, so this is just an idea for improving an already wonderful tool. The suggestion applies to all VRay rendering, not just DR, and it matters more and more as our machines gain cores (high-end workstations are already sporting 24 threads with dual 6-core Xeons).
99% of the scenes you render have varying information density across the frame. Because of this, getting the right bucket size takes too much fussing, and it adds another layer of complexity on top of setting the sampling properly. It becomes extremely apparent once you start rendering small details with nothing else in the frame. On the film we're working on, for example, I've got scenes with extremely complex vehicles (22M polys unsmoothed, with 2,500 textures) that might only be a couple hundred pixels across. VRay either spends a ton of time rendering blackness (because I want buckets small enough to get full coverage once they hit the ship), or I set them large so the prepasses and the blackness render quickly, but then threads go to waste at the end of the render while two cores are left finishing the ship.
Is there any way to dynamically adjust bucket sizes in VRay? Say I use a bucket size of 64 x 64 and have 60 render threads; once there are only 59 tiles left to render, threads start going to waste. This becomes extremely apparent when you've got 3 buckets still crunching on a detail (like a car headlight) while the other 57 sit dormant.
I have a couple suggestions:
1) What if, once there weren't enough buckets left to keep every thread busy, VRay started subdividing the remaining buckets, provided they're less than 75% complete? (See the first sketch after this list.)
2) Have VRay somehow estimate the complexity of the scene in different areas (the light cache comes to mind as a pretty accurate portrait of how the scene will behave before it's actually rendered) so that it could automatically shrink the buckets in the more complex areas. Basically you'd approximate what the SampleRate pass will look like before the render begins, then assign bucket sizes based on that. (Second sketch below.)
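Just to make suggestion 1 concrete, here's a rough sketch in Python of the scheduling logic I mean. None of these names are real VRay API and the thresholds are made up; it's only the idea of re-splitting the last remaining buckets:

```python
# Rough sketch of suggestion 1 (hypothetical, not VRay API): when idle
# threads outnumber the remaining buckets, split the least-finished
# buckets in half until every thread has work again.

from dataclasses import dataclass

@dataclass
class Bucket:
    x: int                  # top-left corner of the region, in pixels
    y: int
    w: int                  # region size, in pixels
    h: int
    progress: float = 0.0   # fraction of the bucket already rendered

MIN_SIZE = 8             # don't split below this; per-bucket overhead dominates
SPLIT_THRESHOLD = 0.75   # only re-split buckets that are less than 75% done

def split(b: Bucket) -> list[Bucket]:
    """Split a bucket in two along its longer edge; children start from scratch."""
    if b.w >= b.h:
        half = b.w // 2
        return [Bucket(b.x, b.y, half, b.h),
                Bucket(b.x + half, b.y, b.w - half, b.h)]
    half = b.h // 2
    return [Bucket(b.x, b.y, b.w, half),
            Bucket(b.x, b.y + half, b.w, b.h - half)]

def rebalance(pending: list[Bucket], idle_threads: int) -> list[Bucket]:
    """Keep splitting the least-finished pending bucket until there are
    enough buckets for the idle threads (or nothing splittable is left)."""
    while idle_threads > len(pending):
        candidates = [b for b in pending
                      if b.progress < SPLIT_THRESHOLD and min(b.w, b.h) > MIN_SIZE]
        if not candidates:
            break
        victim = min(candidates, key=lambda b: b.progress)
        pending.remove(victim)
        pending.extend(split(victim))  # any partial work on the victim is redone
    return pending
```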
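And a rough sketch of suggestion 2, assuming some cheap per-pixel cost estimate already exists (derived from the light cache, or an approximated SampleRate pass); again, all names and numbers here are invented for illustration:

```python
# Rough sketch of suggestion 2 (hypothetical): size buckets from a cheap
# per-pixel cost estimate, something like an approximated SampleRate pass.
# Cheap/empty regions keep big buckets; expensive regions get split down.

import numpy as np

MAX_BUCKET = 128    # starting tile size for cheap/empty regions
MIN_BUCKET = 16     # floor, so per-bucket overhead stays reasonable
COST_BUDGET = 5e4   # rough target amount of "work" per bucket (made up)

def layout_buckets(cost_map: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return (x, y, w, h) rectangles covering the whole cost map."""
    img_h, img_w = cost_map.shape
    buckets = []

    def visit(x, y, bw, bh):
        cost = cost_map[y:y + bh, x:x + bw].sum()
        if cost <= COST_BUDGET or (bw <= MIN_BUCKET and bh <= MIN_BUCKET):
            buckets.append((x, y, bw, bh))
            return
        # too much estimated work for one bucket: halve the longer edge and recurse
        if bw >= bh:
            half = bw // 2
            visit(x, y, half, bh)
            visit(x + half, y, bw - half, bh)
        else:
            half = bh // 2
            visit(x, y, bw, half)
            visit(x, y + half, bw, bh - half)

    # tile the image with the largest bucket size, then subdivide where needed
    for y0 in range(0, img_h, MAX_BUCKET):
        for x0 in range(0, img_w, MAX_BUCKET):
            visit(x0, y0, min(MAX_BUCKET, img_w - x0), min(MAX_BUCKET, img_h - y0))
    return buckets
```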
Option 1 seems like the better solution to me because you avoid the penalty of small buckets (the few hundredths of a second of overhead each time a bucket is swapped, which adds up) while also avoiding the penalty of large buckets. The only downside would be some wasted render time (if a bucket was already 10% complete and then got split and reassigned).
I guess this is also a good place to ask: what is the hidden rendering order inside each bucket? Do buckets fill in the same pattern the buckets themselves follow across the main image? If so, subdividing could be even easier, since the incomplete part would never be a random section.
Sorry for rambling!