Originally posted by Alex_M
It stems from trying to achieve non-physical results from within a physically based "sandbox" (which we'll call, for simplicity, V-Ray).
It would be analogous, in photographic terms, to trying to alter reality before taking the picture, so as to exceed the capabilities of the camera.
While it's dead obvious that it's not something one can pull off all too well when photographing wildlife or natural landscapes (even in studio settings there are quite precise limits: you'd rather not light your subject with a gigawatt laser...), in CG there is a lot more latitude available.
Stress on "available": it's not there because it HAS to be used, but rather because it CAN be (blame engineers writing software and having fun at it! Oops. ^^).
In the real world, one sets a capture up as best one can, and all of the artistic latitude, vagaries, and added taste and value come AFTER the picture has been taken, in the development and print stages.
I still fondly remember my enlarger and my endless attempts at varying exposure on the film while moving my cupped hands around the lightbulb, so as to beam light just where I needed it, a long, long time ago.
Today we call all of that "Post", and we have a number of tools available (what I did above was, technically, MASKING, go figure) to facilitate the task.
In the viz world it's often overlooked, or thought of as too complex, a step too many, or something that belongs to VFX.
I beg to differ: if you look at VERY old posts on the forum, you will find people loved "S-curves" in Photoshop (not that different from filmic tonemapping in shape and results; it was just the buzzword of the time for selective contrast across an image's brightness domain), and got very pleasant results out of what was essentially, even back then, a (near!) realtime post session.
Today, with the amount and power of free compositing software (Fusion, to name one, and there are more), I find it unthinkable not to tap into all that goodness, especially as the concepts, for most of the usual image manipulation tasks, are very simple and quite easy to learn in any package.
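Just to make the "S-curve" idea concrete: it's nothing more than a remapping of pixel brightness that deepens shadows and compresses highlights while leaving midtones roughly in place. A minimal sketch follows; the smoothstep-style formula and the steepness parameter are my own illustrative choices, not a specific Photoshop or filmic recipe:

```python
def s_curve(x, steepness=2.0):
    """Apply a simple S-shaped contrast curve to a brightness value in [0, 1].

    Values below 0.5 are pushed down, values above 0.5 are pushed up:
    selective contrast across the image's brightness domain.
    """
    if x < 0.5:
        return 0.5 * (2.0 * x) ** steepness
    return 1.0 - 0.5 * (2.0 * (1.0 - x)) ** steepness

pixels = [0.10, 0.25, 0.50, 0.75, 0.90]
print([round(s_curve(p), 3) for p in pixels])  # [0.02, 0.125, 0.5, 0.875, 0.98]
```

Note how 0.5 maps to itself while the extremes spread apart: that's the "S" shape doing its selective-contrast work.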
TL;DR: Don't try to change reality before taking a picture: change the picture instead.
Can you kindly expand on this? Why do you move away from LWF? You've piqued my interest now.
For example, max ray intensity is a form of bias, which however becomes apparent only after the set value is reached: any (secondary) intensity value above the one set in there will be clamped to it.
You will, as such, not notice any difference as long as your (bounced) lighting is dim, but as you experienced in the scene in question, it becomes impossible to get strong bounce lighting if that's active.
That, in effect, breaks the linearity between how intense you set a light, and how intense the renderer is allowed to represent it.
The same, if in a slightly different fashion, happens with the other methods I cited above: your graph, after all the gammas and inverse gammas are applied, will not be linear, and somewhere something will break.
Rendering your image in LWF, and then moving it (with assorted render elements) to Post, will however allow you much the same results, but without the need for a rerender, and most of all it won't send you to an asylum trying to make the render do what -in effect- you instructed it not to.
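The usual comp trick here is that linear render elements sum back to the beauty pass, and scaling a single element before the sum is the post equivalent of a relit render. A minimal sketch under assumptions of my own (a lighting/GI/specular split and these toy per-pixel values; actual element names and counts vary by renderer):

```python
def rebuild_beauty(elements, gains=None):
    """Per pixel, sum linear render elements, each scaled by a gain.

    With all gains at 1.0 this reconstructs the beauty pass; raising one
    gain (e.g. the GI element) gives stronger bounce lighting in post,
    with no rerender needed.
    """
    gains = gains or [1.0] * len(elements)
    return [sum(g * v for g, v in zip(gains, px)) for px in zip(*elements)]

# Hypothetical three-pixel luminance rows for three elements:
lighting = [0.40, 0.20, 0.10]
gi       = [0.10, 0.10, 0.05]
specular = [0.00, 0.30, 0.00]

beauty  = rebuild_beauty([lighting, gi, specular])
boosted = rebuild_beauty([lighting, gi, specular], gains=[1.0, 2.0, 1.0])
print([round(v, 3) for v in beauty])   # [0.5, 0.6, 0.15]
print([round(v, 3) for v in boosted])  # [0.6, 0.7, 0.2]
```

The key precondition is staying linear (LWF) end to end: the sum only holds before any tone curve or gamma is baked in.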
I'm eager to see your findings about this.
Soon, now.
I am the only limiting factor, as of this afternoon. :P
Interesting, I never knew this was the case with the initial implementation of sun & sky.
It was down to the fact that we all came from lighting with low-intensity fixtures and somewhat overbright shaders.
The sun intensity being around 300 float at midday exacerbated the limits of that heritage workflow, and prompted a change of approach.