Hi,
I have been using V-Ray for several months now, and I still do not fully understand how the DMC sampler works. I have run many, many tests, but each of them shows different results, and nothing follows a logical pattern. It seems to me that there are internal compensation mechanisms the user cannot affect: mechanisms that might prevent new users from overshooting render times, but that also prevent advanced users from optimizing their scenes.
I will demonstrate with a simple scenario:
I have a teapot on a plane with a gray VRayMtl applied, and an LDR JPEG spherical map in my environment to light the scene. I have a proper LWF setup and am looking at the scene through a VRayPhysicalCamera with correct exposure settings.
My AA settings are Adaptive DMC, min 1 / max 8 subdivs, adaptive amount 0.75, min samples 8, noise threshold 0.01, and the global subdivs multiplier is 1. GI is brute force for primary bounces and light cache for secondary bounces. The light cache is at default settings.
Now, if I do several renders:
1. 8 brute force subdivs - 17.1 s
2. 16 brute force subdivs - 20.6 s
3. 24 brute force subdivs - 17.1 s
4. 32 brute force subdivs - 16.0 s
5. 48 brute force subdivs - 16.0 s
6. 64 brute force subdivs - 17.0 s
I cannot see any logical pattern here. Does this mean that, when using DMC, I have to hunt for a sweet-spot value in every single scene I work on, for every single supersampled effect?
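For what it's worth, the flat render times above are at least consistent with an adaptive sampler that stops early once the noise threshold is met. Here is a minimal sketch of that idea; it is not V-Ray's actual code, and the `base_noise` value and the assumption that a subdivs value of N nominally allows N*N samples are simplifications I am making for illustration:

```python
def samples_taken(subdivs, noise_threshold, base_noise=0.08):
    """Estimate how many GI samples an adaptive sampler might actually take.

    Noise of a Monte Carlo estimate falls roughly as 1/sqrt(n), so once
    (base_noise / noise_threshold)**2 samples are reached, the threshold
    is satisfied and the sampler can stop, no matter how many samples
    the subdivs value would nominally allow.
    """
    nominal = subdivs * subdivs                   # upper bound set by subdivs
    needed = (base_noise / noise_threshold) ** 2  # samples to reach threshold
    return min(nominal, int(needed) + 1)

if __name__ == "__main__":
    # With noise threshold 0.01, the actual sample count plateaus quickly,
    # which would explain render times that stop growing with subdivs.
    for s in (8, 16, 24, 32, 48, 64):
        print(s, samples_taken(s, noise_threshold=0.01))
```

Under this toy model, subdivs stops mattering once it exceeds what the noise threshold demands, so only the threshold (or the adaptive amount) would actually change render time past that point.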
I have thoroughly read this article http://interstation3d.com/tutorials/...yfing_dmc.html several times, and yet every time I try to apply the knowledge I gained, it simply does not work. The DMC sampler behaves rather unpredictably, so I never manage to squeeze as much optimization out of my scenes as possible.
I used to work with MentalRay for several years, and the pattern of increased value = increased render time held there thanks to its unified sampler. Unfortunately, I cannot apply that approach in V-Ray, because, unlike MentalRay's, V-Ray's adaptive subdivision sampler is incredibly slow.
I will be very thankful for any clarification.