Can you help settle an internal discussion: is the V-Ray Sun/Sky model more, equally, or less dynamic than an HDR-based image? This is mainly in reference to the accuracy and depth of the dynamic range. Say you have a 10K HDR: which model has better depth? Thanks.
VRay Sun/Sky vs. HDRI
-
HDRIs really depend on how they're made - the colour mapping of the camera that shoots them can have a big influence on things, as can the tone mapping of whatever software is used to combine the stills into an HDRI. Lastly, it depends on whether you've calibrated the HDRI so that the light values in it are accurate to what they were in real life.
They kind of serve different purposes. The sun/sky is a quick and handy system for a lot of shots, with brightness values that are really accurate to real life. An HDR, on the other hand, is less pure, but it'll have more detail, more variation, and give far more lighting information than the simplicity of the sun/sky will.
Comment
-
Also, an HDR doesn't need a lot of tonal range, as in reality most values don't go over the equivalent of 1.0 in floating point. It's just the light sources themselves that need higher values.
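As a quick numeric sketch of that point (entirely synthetic data, not a real HDR capture - the image and the light-source values are made up), you can count how few channel values actually sit above 1.0 once a couple of bright pixels stand in for light sources:

```python
import numpy as np

# A tiny synthetic "HDR" image: mostly diffuse values below 1.0,
# plus a few overbright pixels standing in for light sources.
rng = np.random.default_rng(0)
hdr = rng.uniform(0.0, 0.8, size=(64, 64, 3)).astype(np.float32)
hdr[0:2, 0:2, :] = 500.0  # hypothetical sun/lamp pixels

# Fraction of channel values that exceed 1.0 - a tiny sliver of the image.
overbright = hdr > 1.0
fraction = overbright.mean()
print(f"{fraction:.4%} of channel values exceed 1.0")
```

The bulk of the tonal information lives below 1.0; the extended range is only needed for those few source pixels.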
At the end of the day, 24-bit is only capable of supporting so many colour variations, so they will both cap out at a trivial level.
Maybe joconnell can confirm, but 24-bit supports something like 17 million levels?
Last edited by grantwarwick; 19-12-2013, 03:50 PM.
admin@masteringcgi.com.au
----------------------
Mastering CGI
CGSociety Folio
CREAM Studios
Mastering V-Ray Thread
Comment
-
Originally posted by joconnell:
Yeah, a hell of a lot more than the human eye, alright - I think even our standard monitors cover a larger gamut. It's kind of an odd question though; it depends what you mean by more "dynamic".
BRB spotting a penny 4K's up in the air!
Comment
-
Originally posted by joconnell:
Lastly, it depends on whether you've calibrated the HDRI so that the light values in it are accurate to what they were in real life.
For me personally it's been the hardest part of shooting and processing my own HDRIs... calibrating the HDRI's pixel values to correspond to real-world luminance values.
Like, just what exactly does a floating point value of 1.0 mean in real life luminance terms?
Comment
-
Originally posted by RockinAkin:
John, out of curiosity, do you have experience doing this?
For me personally it's been the hardest part of shooting and processing my own HDRIs... calibrating the HDRI's pixel values to correspond to real-world luminance values.
Like, just what exactly does a floating point value of 1.0 mean in real life luminance terms?
A value of 1 means nothing in real life. There is no maximum amount of light in real life. In real life there are a couple of factors, and they change per situation. At any given moment, when you look around and see a perfect dark/black (i.e. not enough light at that moment), that's your 0, and when you see something that's perfectly white, that's your 1. But this is different per human: some people can see detail in what you see as 0. They have more range, i.e. dynamic range.
What you can see is also dependent on the global amount of light, let's say your 11% grey. Your eyes measure the amount of light and strike a balance so you can see the most. This is similar to changing ISO. Eyes are very similar to cameras, but the huge difference is that their range is larger. Go take a picture inside a building and the windows are blown out, but you can still see outside. So your eyes have more range.
This all has nothing to do with bit depth. You can have an insane dynamic range but a low bit depth, and you can also have a small range with a very large bit depth. But in all reality it does make more sense to combine a large range with a large bit depth.
Bit depth is the number of steps between your 0 and 1. In the case of 8-bit, that's 2^8 = 256 levels per channel, and 256^3 (RGB) = 16,777,216 colours, or 16.7 million. For 16-bit it's 2^16 = 65,536 levels per channel, and 65,536^3 = 281,474,976,710,656 colours. Way, way out of your eyes' range. Also, the average monitor does not have the capability to present you these colours...
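The per-bit-depth colour counts can be sanity-checked with a couple of lines of plain arithmetic (nothing V-Ray specific here):

```python
# Colour counts per channel bit depth: 2^bits levels per channel,
# cubed for the three RGB channels.
for bits in (8, 10, 16):
    levels = 2 ** bits
    colours = levels ** 3
    print(f"{bits}-bit: {levels:,} levels/channel, {colours:,} colours")
```

Note again that these are step counts between 0 and 1, not dynamic range: the range itself is set by what those steps span.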
Comment
-
Originally posted by Sliver_Creations:
A value of 1 means nothing in real life. There is no maximum amount of light in real life.
Understood - but I wasn't referring to a pixel value of 1.0 as a 'maximum' amount of light - since obviously in an HDR format pixel values can go far above 1.0.
Say I take an HDRI of the interior of a room, and make sure my fastest exposure is pure black and my longest exposure is pure white... we can assume that the entire dynamic range of the room has been captured and no light intensity values are getting clipped on either end.
So when we merge and process the HDRI - just what exactly does a pixel value of 1.0 end up equaling in the real world?
Comment
-
Originally posted by RockinAkin:
Understood - but I wasn't referring to a pixel value of 1.0 as a 'maximum' amount of light - since obviously in an HDR format pixel values can go far above 1.0.
Say I take an HDRI of the interior of a room, and make sure my fastest exposure is pure black, and my longest exposure is pure white... we can assume that the entire dynamic range of the room has been captured and no light intensity values are getting clipped on either end.
So when we merge and process the HDRI - just what exactly does a pixel value of 1.0 end up equaling in the real world?
Comment
-
The reason I'm wondering is because:
Say that I've taken this HDRI of the room, captured its entire dynamic range, and can somehow align the HDRI's pixel values to accurately correspond to real-world luminance.
Then, theoretically, I could drop in a V-Ray Physical Camera, set the exposure settings to match my real-world camera, and anything I render with that HDRI and VRayCam should perfectly match any backplates I've taken on set. (This is of course assuming that the VRayPhysicalCam itself is accurate as well.)
Right now I just have to eyeball my HDRI's intensity setting to match my backplates, but it'd be nice to have a more accurate solution.
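One way to sketch that matching step (all camera settings below are hypothetical, and this ignores response curves, vignetting and filters): compare the relative exposure of the HDRI camera and the plate camera using the standard shutter/ISO/aperture relationship, and use the ratio as an intensity multiplier on the HDRI.

```python
def exposure_factor(f_number: float, shutter_s: float, iso: float) -> float:
    # Relative amount of light recorded: proportional to shutter time
    # and ISO, inversely proportional to aperture area (f-number squared).
    return shutter_s * iso / (f_number ** 2)

# Hypothetical settings: HDRI bracket midpoint vs. backplate exposure.
hdri_cam = exposure_factor(f_number=8.0, shutter_s=1 / 60, iso=100)
plate_cam = exposure_factor(f_number=5.6, shutter_s=1 / 125, iso=200)

# Multiplier to bring the HDRI into the plate's exposure space.
scale = plate_cam / hdri_cam
print(f"HDRI intensity multiplier: {scale:.3f}")
```

This only removes the eyeballing between two known camera setups; it still assumes the HDRI's pixel values were merged in a linear, consistent way in the first place.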
Comment
-
As far as I understand, the way you would match the illumination is by rendering with a V-Ray physical camera that matches the settings of the real camera. If this were perfect, the camera used to take the HDRI images would be equally sensitive, in terms of exposure, as the camera used to shoot the plate, and at HDRI creation time you should give the image the same exposure as the one displayed on the camera when the pictures for the HDRI were taken. Of course, it is very hard to achieve perfection, because you are constrained to similar camera exposure values for shooting the plate, shooting the HDRI, and rendering the image.
Comment
-
It's a bit of a tough one, alright. What you've got to do is have a few measured values at black, grey and white (such as a Macbeth chart), and then ideally another measured value for a brighter source in your scene. The first three are okay to do, since you can buy a chart, photograph it when you're shooting your HDRs, and then just use a colour picker back and forth to colour-correct those values into place. What's slightly harder is the brighter value: any gamma curves you apply to your images are going to affect the overbright values very heavily, so ideally it'd be nice to have a known value well above 1.0 so you can pin things into place.
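A minimal sketch of the grey-card step (the patch value and the spot-meter reading below are made-up numbers, not real measurements): divide a measured luminance by the patch's value in the merged HDR to get a scale, then apply that scale to every pixel, which also lifts the overbright sources into absolute units.

```python
import numpy as np

# Hypothetical measurements: a grey-card patch reads 0.18 in the merged
# HDR, while a spot meter measured it at 46.0 cd/m^2 on set.
hdr_patch_value = 0.18
measured_luminance = 46.0  # cd/m^2

scale = measured_luminance / hdr_patch_value

# Applying the scale maps every pixel into absolute luminance,
# including light-source values well above 1.0.
hdr = np.array([0.02, 0.18, 1.0, 40.0])  # black, grey, white, source
calibrated = hdr * scale
print(calibrated)
```

As noted above, a single scale like this only pins the linear part into place; it can't undo a gamma curve that has already bent the overbright values.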
What's kind of tough is even trying to figure out the response curve of your camera, and the conversion from raw files into TIF files without the conversion software adding anything weird in an attempt to make the photo look better.
Last edited by joconnell; 28-12-2013, 06:12 AM.
Comment