Originally posted by 3LP
It's not strictly a developer's task to decide on the specific merit of a request, and Vlado has historically been very kind to the user base.
As to how much each of those tools actually gets used on a day-to-day basis, I still have my doubts.
Notice I DO use exposure controls to check my renders while developing them.
But from there to stating that the Lens Effects in the VRay FB are a must (or the camera response curves; Jaysus, it's a flat value list, 1000 values from 0 to 1; is it seriously Chaos' task to do a conversion into a readable LUT?), there's quite a stretch.
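For reference, that conversion really is just a few lines. A minimal sketch, assuming the curve is simply N floats in 0-1 and the target is the plain-text 1D .cube format; the 1000-sample gamma curve and the file name are placeholders, not anything V-Ray ships:

```python
# Minimal sketch: dump a flat response curve (a list of 0..1 floats) as a 1D .cube LUT.
# The gamma-2.2 curve below is only a stand-in input.

def curve_to_cube(values, path, title="response_curve"):
    """Write one value per sample, repeated on R, G and B."""
    with open(path, "w") as f:
        f.write('TITLE "{}"\n'.format(title))
        f.write("LUT_1D_SIZE {}\n".format(len(values)))
        for v in values:
            v = min(max(v, 0.0), 1.0)  # keep entries in the expected 0..1 range
            f.write("{0:.6f} {0:.6f} {0:.6f}\n".format(v))

curve = [(i / 999.0) ** (1.0 / 2.2) for i in range(1000)]  # placeholder curve
curve_to_cube(curve, "response_curve.cube")
```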
Further, and maybe this isn't quite as apparent as it ought to be, LUTs are applied to log-space footage.
I.e. your image clips, and your values aren't linear anymore (between the UI and the render), and there isn't a single thing VRay can do about it (nor saving to EXR, if you elect to keep the LUT look baked in).
So, here's my qualm with it: we'd be adding more complexity for very marginal, and duplicate (i.e. already doable in post), returns, while running the risk of the least skilled users saving out clipped images or sequences that are nigh untreatable after the fact, potentially running into more tech support requests and negative rumour (i.e. "VRay renders clamped"; now, that'd win Chaos plenty of clients).
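To make the clipping point concrete, here's a rough toy example (plain Python, identity LUT, made-up log encode; nothing V-Ray or camera specific): a 1D LUT only covers input 0..1, so linear render values above 1.0 all collapse to the same output unless the footage is log-encoded first.

```python
import math

def lut_lookup(lut, x):
    """Sample a 1D LUT (floats over input 0..1) with clamping and linear interpolation."""
    x = min(max(x, 0.0), 1.0)          # anything outside 0..1 is simply clamped away
    pos = x * (len(lut) - 1)
    i = int(pos)
    j = min(i + 1, len(lut) - 1)
    frac = pos - i
    return lut[i] * (1.0 - frac) + lut[j] * frac

def lin_to_log(x, stops_below=8.0, stops_above=4.0):
    """Toy log encode: squeeze ~12 stops of linear range into 0..1 (illustrative only)."""
    l = math.log2(max(x, 2.0 ** -stops_below))
    return (l + stops_below) / (stops_below + stops_above)

identity_lut = [i / 1023.0 for i in range(1024)]

for linear in (0.18, 1.0, 4.0, 16.0):
    fed_linear = lut_lookup(identity_lut, linear)            # highlights collapse to 1.0
    fed_log = lut_lookup(identity_lut, lin_to_log(linear))   # highlights stay distinct
    print(linear, fed_linear, fed_log)
```

And once that log curve plus LUT is what's sitting in the buffer, the values aren't the linear render anymore, which is exactly the bit you can't undo after saving with the look baked in.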
As for the fact that a red doesn't render as pure red in your VFB, I thought it could have something to do with the lens (or the CCD?).
I might be wrong, but take a stage with a white floor and a big 1 m pure red ball.
Shoot it with a RED Scarlet, a RED Epic Dragon, an Arri Alexa and a Sony CineAlta, with Cooke or Arri/Zeiss lenses.
Overlay all that footage on top of each other: are you really getting a pixel-perfect 1:1 match across all those (let's say) eight different shots?
Not everything is pure, straight maths, because real life isn't pure and straight. Otherwise people wouldn't be adding dirt everywhere, lens distortion, chromatic aberration, film grain and whatever other impurity that ends up making the final pixel different from the pixel you'd get from a purely mathematical render.
Again, I understand you want absolute power and to be able to achieve this in post, but some things are just not feasible in post, or require a lot of workarounds. Example: it's only with deep rendering (which was introduced "reasonably" recently) that it has become possible to do correct/true DOF in post. Before that it just wasn't feasible straight out of the box in one pass: you needed to deconstruct your render into several depth passes to get a correct DOF.
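For the curious, a very crude sketch of what that "slice by depth, blur each slice, recomposite" idea boils down to; numpy/scipy here, with a Gaussian standing in for a real circle-of-confusion kernel, and all names illustrative rather than any production tool's actual method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sliced_dof(rgb, depth, focus, n_slices=8, blur_scale=4.0):
    """rgb: HxWx3 float image, depth: HxW floats. Returns a rough post DOF."""
    z_min, z_max = float(depth.min()), float(depth.max())
    edges = np.linspace(z_min, z_max, n_slices + 1)
    out = np.zeros_like(rgb)
    weight = np.zeros(depth.shape, dtype=rgb.dtype)
    for k in range(n_slices):
        mask = ((depth >= edges[k]) & (depth <= edges[k + 1])).astype(rgb.dtype)
        z_mid = 0.5 * (edges[k] + edges[k + 1])
        # blur grows with distance from the focus plane; small floor avoids a zero kernel
        sigma = max(blur_scale * abs(z_mid - focus) / (z_max - z_min + 1e-6), 0.1)
        out += gaussian_filter(rgb * mask[..., None], sigma=(sigma, sigma, 0))
        weight += gaussian_filter(mask, sigma=sigma)
    # no proper edge/occlusion handling here, which is exactly why this needed
    # several carefully split passes (or, later, deep data) to hold up in production
    return out / np.maximum(weight[..., None], 1e-6)
```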
And it kinda worked better than deeps.
But hey, progress.
Well, nowadays, even if deep rendering is possible, it's still easier to enable DOF in your cam and be done.
The same goes for settings like blades, anisotropy, center bias, etc. Again, all of that comes from "real life" cameras, and it's only recently that the VRay cam lost its "Physical" name; before, it was called "VrayPhysicalCam".
You can get the same effect with a standard camera, and the VRay controls present in other parts of the interface.
So if there were a way to get a more "realistic" render out of VRay easily, why not let you enable that feature, like having a bit of blue or green in your supposed-to-be 100% pure red? By the way, wasn't GGX also introduced because it was better suited and gave a more "real looking" end result? Even if you were able to achieve the same result with five layered VRay blend materials?
No, you couldn't get the same-looking speculars with however many layers of the other BRDFs (Phong, Blinn and Ward) VRay already had.
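If you want numbers for that, here's a quick check with the textbook NDFs (nothing VRay-specific, roughness picked arbitrarily): at the same roughness, GGX keeps far more energy in the tail of the highlight than a Beckmann/Blinn-style lobe, and that falloff is what stacking narrower lobes never quite reproduces.

```python
import math

def d_ggx(cos_h, alpha):
    """Trowbridge-Reitz / GGX normal distribution function."""
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def d_beckmann(cos_h, alpha):
    """Beckmann NDF (closely matches Blinn-Phong-style lobes)."""
    c2 = cos_h * cos_h
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * c2 * c2)

alpha = 0.2  # arbitrary roughness for the comparison
for deg in (0, 10, 20, 30, 45):
    c = math.cos(math.radians(deg))
    print(deg, round(d_ggx(c, alpha), 5), round(d_beckmann(c, alpha), 5))
```

Both lobes share roughly the same peak at 0 degrees, but by 30-45 degrees off the half-vector the Beckmann value has all but vanished while GGX is still clearly non-zero, which is the "glowy" tail people recognise as the GGX look.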
Now, with your background, I can also understand that you don't like to read "Adobe" and "post" in the same sentence. And I get that, but that doesn't mean other people wouldn't like to use those techniques; it's a personal preference for each person out there, and neglecting their wishes just because yours are different doesn't mean they are unworthy of consideration.
I don't like Adobe because their image maths is opaque, and only very recently, and utterly slowly, is it coming up to the bare minimum requirements of modern image manipulation.
And I sure as hell don't want to be a user of theirs for the next ten years, which is how long they'll likely take to make the tools actually usable.
Stan (who doesn't like to be the devil's advocate)
Don't you find debates like this quite entertaining? :P
Don't take my curt writing style for anger or distress, lol...
And keep them coming, it's the whole point!