No. You misunderstand me.
You are talking about normalized ranges. You seem to want to squeeze a value range from your scene file into a normalized colour space, not too different from what tone mapping does. What is wrong with having your speculars blown out when the diffuse is what you have exposed your camera for? This happens in real life all the time. You expose your camera for the lighting levels you want, and you still only get a limited range, with both dark patches and burned-out highlights. If you expose your camera for the brightest part of a reflection on a metallic ball outside, I'll bet the rest of the image will be dark.
My point is, your file (an EXR, for example) can still contain a really good value range, but under most plausible circumstances, having details from every part of that range visible on your 8-bit monitor (or an even lower fidelity print) is going to look strange, and what I would call very unconvincing.
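Just to make the numbers concrete, here is a rough sketch of what I mean (illustrative Python only, assuming a plain linear exposure multiplier and a clamp to the 0..1 display range; this is not how V-Ray maps to the display internally):

    # Scene-referred values straight out of the render (what the EXR would store).
    diffuse_wall = 0.8      # the surface we actually care about
    specular_hit = 25.0     # bright reflection of the sun on a metallic ball

    # "Exposing the camera" for the wall: pick a multiplier that puts it
    # roughly at mid-grey on the display.
    exposure = 0.5 / diffuse_wall

    def to_display(value, exposure):
        # Naive display mapping: scale by exposure, then clamp to the 8-bit 0..1 range.
        return min(max(value * exposure, 0.0), 1.0)

    print(to_display(diffuse_wall, exposure))   # 0.5 -> nicely exposed
    print(to_display(specular_hit, exposure))   # 1.0 -> blown out, which is exactly what a real camera would do

The full range is still there in the file; it just doesn't all fit on the display at one exposure, and that's fine.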
But back to the topic. I still haven't heard any reasonable argument for why V-Ray needs us to pre-expose our shaders for bright conditions when we already expose with our camera. There might be a good technical reason why it was implemented this way, whether it was simply easier to program, there wasn't enough development time, or anything really.