It would be a huge bummer to see such an excellent thread go south. I believe it's in the best interest of everyone on this forum to learn whether there is a solution, a logged bug, or both.
Streamline backplate integration workflow
-
Edited.
Oh, and Merry Chrimbo and all that.
Last edited by ^Lele^; 23-12-2015, 03:08 PM.
Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
Comment
-
Originally posted by jasonhuang1115:
It would be a huge bummer to see such an excellent thread go south. I believe it's in the best interest of everyone on this forum to learn whether there is a solution, a logged bug, or both.
It's a solution, Jason: make the GI environment black, that's all.
Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
Comment
-
Originally posted by ^Lele^:
It's a solution, Jason: make the GI environment black, that's all.
Comment
-
Good call, Olli, so let's be clear and sum it up:
*) This is the solution to the darkening "bug", which was a wrong setup from the user, not a bug. Setting the GI environment to black is good practice whenever the environment is being duplicated in the dome. The code will change so this will be taken out of people's hands.
*) Mattes always return an albedo of 1.0, so there is no basis for the claim that energy preservation (or, in other words, the V-Ray core) is broken (see my post with the normal renders). As a result, ALL of the GI falling on the matte plane bounces back as if its colour were pure white.
This is also in place to prevent intensity explosions when GI falls on a very bright, reprojected HDRI part: all the shading components stay within the 0-1 bounds, but if GI fell on, and got multiplied by, a value above 1.0, you'd get uncleanable fireflies, and then energy preservation really would be broken for good: there is more to it than getting a teapot to light a dark piece of tarmac.
Clamping the HDRI background before reprojecting it for the GI part would be an option, but it would only serve this specific need and break a number of others (ever tried giving your compers a clipped EXR? Not a fun experience).
*) Neither of the other packages mentioned provides BOTH compositable alphas and projected-albedo GI on mattes: one can have one OR the other, for the very same reasons V-Ray shares with any other raytracer.
There is a "GI Amount" spinner in the matte properties; lowering it will also lower the albedo of matte surfaces. This has nothing to do with the current setup: it's the behaviour mattes have always had, which worked fine as people moved on to compositing.
You can do so per object or per shader (vrayProperties or mtlWrapper, depending on your needs).
Easy enough if all you have is the road plane (pick a colour that's a good average of the plate under it; its value is what you put in the GI multiplier of the mattes, e.g. 0.05 for tarmac, 0.15 for dark concrete, and so on), and a bit more work if what you're lighting is a complex matte-only environment.
In the latter case, however, you'd have plenty of little models (or the light would fall wrong on them), and you could change properties for each (or for none, and multiply rawGI by diffuseFilter for the mattes to get the right lighting in comp; see the sketch below. Render passes ARE the way to go when one's asking a PBR engine for non-physical interactions.).
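To make the rawGI * diffuseFilter idea concrete, here is a minimal numpy sketch. The tiny synthetic arrays simply stand in for real render elements loaded as linear floats, and the 0.23922 grey is the plate value picked later in this post; none of this is V-Ray API code.

```python
import numpy as np

# Toy reconstruction of the matte lighting in comp from two render elements.
# Synthetic arrays stand in for the real EXRs (element names follow the
# usual V-Ray naming, but the data here is made up for illustration).
h, w = 4, 4
raw_gi = np.full((h, w, 3), 0.8, dtype=np.float32)              # GI computed as if albedo were 1.0
diffuse_filter = np.full((h, w, 3), 0.23922, dtype=np.float32)  # plate-matched grey for the matte

# Multiplying the raw GI by the colour the matte should really have gives the
# bounce contribution that matches the plate, with no re-render needed:
matte_gi = raw_gi * diffuse_filter
print(matte_gi[0, 0])   # -> [0.191376 ...], the GI the matte "should" show
```

The same multiply can be redone per shot in comp, so the plate-matched grey can be dialled without touching the 3D scene.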
The debate on the feasibility of changing what gets projected on the matte, and at which stage, is ongoing, along with storing the mattes' GI somewhere in the render elements, but it is not as easy a choice as some here have made it out to be: the considerations go WELL beyond this specific case.
Vlado has mentioned this COUNTLESS times: often, fixing the specific needs of one party breaks things for another.
So the options grow, and users get confused.
And that is if anything CAN actually be done, which isn't a given.
Still, you ought to have what you need to get the job done in the real world.
Should you have issues with a real-world scenario, I would be very interested in seeing such a scene, under NDA.
You can send it to me any time at my chaosgroup address.
P.S.: It would further seem no one has yet noticed the real issue with doing this stuff in camera with reprojections: both we and the others mess up refraction and reflection (of course).
This isn't a bug: it's a result of reflecting/refracting stuff which isn't infinitely far away.
All is well as long as the reflective/refractive objects are flat, the viewing angle is fairly straight on, and the shadow catcher is the road.
A nightmare (for me, at least!) in any other scenario (unless one actually matches the cameras that shot the environment, and the environment lighting the scene is NOT a dome but emissive geometry correctly placed: more renders in passes on top of very accurate shooting and CG setup work).
In fact, in the samples, the vertical fence becomes the flat road, rather than showing the grass.
Two sets of images follow:
The first shows the GI on mattes adjusted to the BG value (picked at 61i, that's 0.23922f; see the snippet below for the conversion), against a non-matte, camera-reprojected, half-diffuse, half-self-illuminated shader (eyeballed to the BG value, more or less).
See how the GI matches.
The next two are Corona (you can see our behaviour matching in the refracted sphere) and us (or Corona, for that matter) without the plane visible (so the sphere refracts the environment as it should).
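For reference, here is how the "picked at 61i, that's 0.23922f" figure above is derived. Whether a further sRGB-to-linear conversion is needed on top of this depends on your colour management setup, so treat that part as an assumption.

```python
# An 8-bit colour pick divided by 255 gives the float value to use as the
# matte GI multiplier (any additional sRGB-to-linear conversion depends on
# your colour management setup and is left out here).
picked_8bit = 61
gi_multiplier = picked_8bit / 255.0
print(round(gi_multiplier, 5))   # 0.23922
```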
Last edited by ^Lele^; 25-12-2015, 10:12 AM.
Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
Comment
-
Just another please!!!! for better backplate and HDRI integration: my job relies on this, and V-Ray makes it difficult.
One button should bring up a dialog asking for the HDRI and backplate, with everything else set up behind the scenes.
Car advertising is huge!! So I can't understand why this isn't easier.
Something we have problems with is refraction: car windows never seem correct, either very dark when seeing the backplate through them or just not looking right.
What's needed is a ground plane that receives shadows from the CG object and, as an option, also reflects into the car as a camera map of the backplate.
The car/CG object needs to render perfectly onto the backplate at render time, so we can see the CG object in its environment, composited onto its backplate, with shadows.
Ideally this should work with both the production and RT renderers.
Also excellent alpha channels for re-comping in post, with correct refraction levels, because although we need a perfect representation at render time, in reality the final render will always be comped back onto the original backplate in post (see the sketch below).
VRED sums this problem up from a simplicity point of view but obviously lacks the features of a full 3D app; just replicate their ease of use while also allowing the full feature set of V-Ray.
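For what it's worth, the "comped back onto the original backplate in post" step asked for above boils down, at its simplest, to a premultiplied "over" of the render onto the plate. A minimal numpy sketch with made-up 1x1 images standing in for the real render and backplate (getting the alpha and refraction levels right is the hard part this post is actually about):

```python
import numpy as np

# Premultiplied "over": comp = render_rgb + backplate_rgb * (1 - render_alpha)
render_rgb   = np.array([[[0.2, 0.2, 0.2]]], dtype=np.float32)   # premultiplied CG render
render_alpha = np.array([[[0.5]]],           dtype=np.float32)   # CG coverage
backplate    = np.array([[[0.4, 0.5, 0.6]]], dtype=np.float32)   # original plate

comp = render_rgb + backplate * (1.0 - render_alpha)
print(comp)   # CG over the plate; only correct if the alpha really is premultiplied
```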
Comment
-
Hello all,
I've been lurking on this thread for a bit and would like to comment specifically on the OP's original topic. I went through the exercise of doing the setup described here in Maya (as I haven't used MAX since it was called 3D Studio R4; I'm not kidding). Anyway, I don't think I share the OP's perspective, because I'm used to everything in Maya being overly complicated, but I do understand the workflow and how common it seems to be. From a VFX perspective, we often use LIDAR-captured set data with multiple camera (re)projections of still photography, used for indirect lighting along with CG set extensions to the same effect. That setup is implicitly complicated, though, and that is to be expected. Just so I understand this use case, I have a few questions:
1.) When talking about the geometry that's used for (re)projection: is it always a flat plane in these setups, or was that just for demonstration purposes?
2.) Is the desire to always have the same render output regardless of whether you're working with a director over your shoulder or outputting to comp? (i.e. the alpha is transparent but the background is visible in the RGB)
3.) When working in comp, are you tweaking the individual render elements and then trying to reassemble a beauty pass from them, or are you just grading the beauty over the plate?
Again, I'm just focused on the workflow (possible bugs aside).
-sparker
Comment
-
Very interesting thread. I will have to re-read it multiple times, probably, to take in everything that is going on.
I'm not going to weigh in on how V-Ray handles mattes, but when it comes to integrating lots of objects (e.g. cars) onto multiple backplates, there are plenty of ways to make the process less painful.
If you have to do a lot of swapping of lights, render settings, etc., then I recommend trying Render Pass Manager; it's not free, but it is definitely worth it: http://rpmanager.com/about.htm
It was mentioned already, but I would like to add that Camera Map Gemini is really worth checking out.
It's free, and it is SO much better that I couldn't imagine trying to use 3ds Max's camera mapping ever again.
Max's native camera mapping is buggy and, as mentioned repeatedly, a pain if you have to set up multiple backplates.
http://www.projectgemini.net/CameraMapGemini/
Comment
-
The entire point of this thread was to make a process that is easy in any other renderer just as easy in V-Ray. That definitely means no manual messing with camera projection, and definitely no messing with render elements or render passes. To do a simple integration of a CG object onto a single backplate, accompanied by a simple matching HDRI, you should never be required to set up a manual camera projection rig, nor should you be forced to render the matte over black only and comp it in post (though you should definitely be able to do so when you need it).
So, to those of you who weren't willing to spend the time reading the entire thread and trying to understand its point, here's a video that sums it up: https://www.youtube.com/watch?v=VMSYerUmxx8
Some later pages contain a solution to most, but not all, of the problems mentioned in the video. The entire point of this thread was not whether it is possible, but how to make it less of a painful process. And on several occasions throughout this thread I specifically mentioned that I am not looking for obscure workarounds, solutions involving render pass compositing, or anything like that. I am looking to cut as many steps from the setup process as possible, not add more.
Comment
-
1.) Do you always use a simple plane for these (re)projections?
2.) Does the desired output always contain the backplate in the RGB over a transparent alpha, or is that only when you're working interactively? (i.e. do you eventually render over black for final output?)
Comment
-
For me, usually a plane is good enough.
If there is a lot of camber in the road it may be bent, or if the car is close to a sidewalk it may need to be a little more detailed than a plane, but 95% of the time it's a plane.
The main thing for me would be the backplate in test renders; it doesn't matter for a production render, as it will be recomped.
But I would need to be able to have it in the production render for lighting etc., so yes, but for the final output it doesn't matter.
Comment
-
Thanks. One idea I have would be to add some functionality to the dome light that allows an infinitely large plane to be implicitly generated at render time, at some specified height ((0, 0, 0) by default), to be used specifically for (re)projection of either the existing HDR or some other image, using the current camera's projection matrix. The matte properties would be hard-wired (which is why I'm asking about output parameters). If this covers 95% of the use cases, I feel it's a win. The only new things here would be this implicit geometry and the specific matte properties it has. Thoughts?
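This is not the proposed V-Ray feature itself, just a plain-geometry sketch (in Python, with a simplified axis-aligned pinhole camera and made-up names) of what such an implicit plane would have to do per sample: intersect the eye ray with a horizontal plane at the chosen height, then reproject the hit point through the plate camera to find which backplate pixel to sample.

```python
import numpy as np

def intersect_ground(ray_origin, ray_dir, plane_height=0.0):
    """Return the hit point of a ray with the plane y = plane_height, or None."""
    if abs(ray_dir[1]) < 1e-8:
        return None                      # ray parallel to the plane
    t = (plane_height - ray_origin[1]) / ray_dir[1]
    return ray_origin + t * ray_dir if t > 0.0 else None

def reproject_to_plate(point, cam_pos, focal_px, width, height):
    """Project a world point through a pinhole plate camera (looking down -z)
    into pixel coordinates; focal_px is the focal length expressed in pixels."""
    p = point - cam_pos
    if p[2] >= 0.0:
        return None                      # behind the plate camera
    x = focal_px * p[0] / -p[2]
    y = focal_px * p[1] / -p[2]
    return (width * 0.5 + x, height * 0.5 - y)

# Example: an eye ray from 2 units above the ground, angled slightly down.
cam_pos = np.array([0.0, 2.0, 0.0])
hit = intersect_ground(cam_pos, np.array([0.0, -0.2, -1.0]))
if hit is not None:
    print(reproject_to_plate(hit, cam_pos, 1500.0, 1920, 1080))  # backplate pixel to sample
```

Hard-wiring the matte properties on top of this geometry is then a separate question, which is where the output-parameter discussion above comes in.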
Comment
-
Originally posted by sparker:
Thanks. One idea I have would be to add some functionality to the dome light that allows an infinitely large plane to be implicitly generated at render time, at some specified height ((0, 0, 0) by default), to be used specifically for (re)projection of either the existing HDR or some other image, using the current camera's projection matrix. The matte properties would be hard-wired (which is why I'm asking about output parameters). If this covers 95% of the use cases, I feel it's a win. The only new things here would be this implicit geometry and the specific matte properties it has. Thoughts?
It'll still produce a fully white alpha when raytracing with GI active.
The current setup (proposed by the OP), with the GI environment overridden to black and the backplate in the secondary matte reprojection slot, works just as well (if one is not overly generous with the size of the ground plane).
Last edited by ^Lele^; 10-01-2016, 05:01 PM.
Lele
Trouble Stirrer in RnD @ Chaos
----------------------
emanuele.lecchi@chaos.com
Disclaimer:
The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.
Comment
-
Originally posted by ^Lele^:
The current setup (proposed by the OP), with the GI environment overridden to black and the backplate in the secondary matte reprojection slot, works just as well (if one is not overly generous with the size of the ground plane).
Comment