Is this easily possible? I have an image here that renders rather inefficiently when distributed across multiple machines with progressive sampling, even with a higher ray bundle size etc., which shows up as lower CPU loads on the slave machines (<80%). I'm talking about a preview rendering here, not a final one.
What would be cool is if I could render bucket distributed, but starting with a noise threshold of, say, 0.1, and then after each "completed" rendering at that threshold, render again at, say, 0.09. Maybe the previous samples could even be reused so the renderer doesn't have to recalculate everything, i.e. take the result of the 0.1 pass and refine it "further" to 0.09, so not too much CPU time is wasted in the process. The crème de la crème would be if, when you save the VFB image to history, it takes the last completed pass instead of the "half 0.1 and half 0.09" image.
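Just to make the idea concrete, here's a minimal Python sketch of the loop I have in mind. None of this is real V-Ray API; render_pass() and save_to_history() are purely hypothetical placeholders standing in for a distributed bucket render at a given noise threshold and for the VFB history save.

```python
def render_pass(noise_threshold, previous_result=None):
    # Hypothetical stand-in: would launch a distributed bucket render at the
    # given threshold, ideally reusing samples from previous_result so CPU
    # time isn't wasted recalculating everything.
    return f"preview pass completed at noise threshold {noise_threshold}"

def save_to_history(image):
    # Hypothetical stand-in for saving a frame to the VFB history.
    print(f"saved to history: {image}")

def progressive_bucket_preview(start=0.1, step=0.01, stop=0.02):
    last_completed = None
    threshold = start
    while threshold >= stop:
        # Each pass refines the previous completed one rather than restarting.
        last_completed = render_pass(threshold, previous_result=last_completed)
        # Only fully completed passes get saved, never a half-0.1/half-0.09 mix.
        save_to_history(last_completed)
        threshold = round(threshold - step, 4)
    return last_completed

if __name__ == "__main__":
    progressive_bucket_preview()
```

The key point is simply that the saved image always comes from the last fully completed threshold pass, and that each pass picks up where the previous one left off.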
What would you guys think about this?