V-Ray/Mari HDR Ptex Lighting workflow videos.


  • V-Ray/Mari HDR Ptex Lighting workflow videos.

    Hi everyone, I recorded a few videos to show some cool stuff that's now possible with V-Ray and Mari.

    There are three parts, all recorded in 1080p, so they play really nicely on YouTube.

    1. Introduction

    http://www.youtube.com/watch?v=1d8ypguQjFw

    2. Painting HDR Ptex in Mari

    http://www.youtube.com/watch?v=pdEyQGzRSaQ

    3. Rendering Ptex in V-Ray.

    http://www.youtube.com/watch?v=K3nPBrESJeE


    Hope everyone finds them useful.

    -Scott

  • #2
    Thanks for posting and sharing the workflow. I am enjoying it and learning! The first thing I learned was to put the cool GPU Observer gadget you used onto my Windows 7 desktop.

    I am wondering how you set up and did the image-based modeling with all the rectilinear 3-bracket photos you took of the interior space. Are all the camera positions calibrated in Maya and then brought in via FBX?
    always curious...



    • #3
      Well, I didn't undistort, but I should have. You can use NukeX to undistort like in the video. I found the easiest way was to take measurements and line up the cameras by hand. I would duplicate the cameras and line up all the different camera positions. Once you have at least 3 cameras lined up, you can pretty much just fit geometry to match from all 3 cameras. You can move a camera's pivot in Maya by hitting the move tool and then the Insert key. I would snap or move the camera pivot so the camera is easy to scale and rotate. That makes things really easy to line up with.

      Once you have all the cameras, you want to create a totally new camera and match its FOV and film back settings. I would go to a frame, say 6, find camera 6, parent the new camera under CAM6 and set a keyframe. Then I would unparent that camera, change frames and repeat. By the end you have one camera with all of the positions, and the frame you are on tells you which image it is.

      If you take 1,500 exposures and bracket them in Photomatix, you can rename each result so it's photo_hdr.001.exr, photo_hdr.002.exr, etc. That will make things a lot easier when you undistort all the images inside of Nuke.

      This process was a learning experience for me, and I have since made changes to the workflow. And yes, I am using FBX exports into Mari. Make sure you don't export your scaled cameras; they will not import correctly. There's a rounding-error bug somewhere with FBX camera translation. Best to start with a fresh new camera.
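      Here's a rough Python sketch of how that per-frame baking could be scripted in Maya instead of parenting and unparenting by hand. It's just a sketch: the camera names CAM1..CAMn and masterCam are assumptions for the example, and masterCam is assumed to already have the matching FOV/film back.

      import maya.cmds as cmds

      def bake_lineup_cameras(master='masterCam', prefix='CAM', count=6):
          # For each lineup camera, jump to the matching frame, copy its
          # world-space transform onto the master camera and key it, so the
          # frame number ends up identifying which photo position you're on.
          for i in range(1, count + 1):
              src = '%s%d' % (prefix, i)
              cmds.currentTime(i)
              m = cmds.xform(src, query=True, worldSpace=True, matrix=True)
              cmds.xform(master, worldSpace=True, matrix=m)
              cmds.setKeyframe(master, attribute=['translate', 'rotate'], time=i)

      bake_lineup_cameras(count=6)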

      -Scott



      • #4
        So, does one have to take measurements of pretty much all the objects in the scene, or just the hero object (and then scale up/down accordingly for the other objects)? It's still a bit hard for me to imagine the lining up based on 3 cameras and measurements, as I haven't done this before (it would be great if this process were covered briefly in the first video...). With 150 photos, will that be a very time-consuming process just for lining up cameras?

        I've tried to achieve a similar task using a different workflow that involves ImageModeler and panoramic images. That was before Mari came out, and I had to paint/manipulate the HDR values directly in Nuke. I'm on my way to the 2nd video and looking forward to learning more about Mari and applying its features to this kind of HDR set-reconstruction task for spatially-aware IBL. Seems like the perfect fit so far...
        always curious...



        • #5
          It's really not that bad. You can also get a video camera, track the footage to get a point cloud, and use that information as a starting point for your modeling. The models don't have to have a lot of detail. When you have 3 cameras lined up, you can accurately move cubes around and everything else will pretty much fit in. Take some known measurements of something, though; the more measurements the merrier. They do help, and so does undistorting.



          • #6
            I am also thinking about using ADSK's Photofly to calibrate camera positions based on tonemapped images and then export the camera info to Maya for image-based modeling. With your workflow, maybe I can then swap out the LDR images for the original HDR images for painting in Mari... hmmm... it's getting more and more interesting...
            always curious...



            • #7
              Watched up to the middle of the 3rd video. Love the live demo that even includes a Maya crash caused by stopping RT.
              I see you crank up the diffuse contribution (by about 20 times) of the rect light that is positioned under the hanging lamp. Is that simply a creative control to compensate for the insufficient pixel values of the original lamp HDRI (assuming 3 brackets won't capture the full dynamic range of the light sources)? I hope that with this workflow one can also calibrate HDR-textured lights against gray/chrome ball images photographed on set?
              always curious...



              • #8
                I tried Photofly at first, then ImageModeler. Didn't have much luck with an entire room in Photofly, and it seemed I would have to conform to a special method of shooting for it to work well. I needed more freedom to shoot what I think I need at different angles. There are a few other apps I hear are good, but I ran out of time and brute-forced it.



                • #9
                  Originally posted by jasonhuang1115
                  Watched up to the middle of the 3rd video. Love the live demo that even includes a Maya crash caused by stopping RT.
                  I see you crank up the diffuse contribution (by about 20 times) of the rect light that is positioned under the hanging lamp. Is that simply a creative control to compensate for the insufficient pixel values of the original lamp HDRI (assuming 3 brackets won't capture the full dynamic range of the light sources)? I hope that with this workflow one can also calibrate HDR-textured lights against gray/chrome ball images photographed on set?
                  Yeah, I didn't have any autobracketing software or hardware for my D90, so 3 exposures was all I could do. I probably should have painted more resolution for the lights. It would be easier to grab with a 20mm full-frame camera; I think you would get less resolution from a half-spherical chrome ball image. You would want to grab as much detail as possible for the Ptex. The sampling from Ptex was a bit slow, though. Vlado said he was going to look at the Ptex libraries to see what could be done to speed up GI from Ptex. Using an area light in skylight portal mode is the only control you get over the Ptex if you want to add intensity to the scene. It will use whatever image is behind it to light the scene, and that's how you get nice hard shadows.
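                  As a rough back-of-the-envelope way to think about that multiplier (the exact numbers here are just an illustration, not measured from the set): linear HDR pixel values double with every stop, so a lamp that clips some number of stops above the brightest exposure under-reports its intensity by roughly two to that power.

                  def clip_compensation(missing_stops):
                      # Linear HDR values double per stop, so a light source that clips
                      # `missing_stops` above the brightest bracket is under-recorded by
                      # roughly 2**missing_stops.
                      return 2.0 ** missing_stops

                  print(clip_compensation(4.3))   # ~20x, around the multiplier seen in the video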



                  • #10
                    Great to learn that sky portal light trick from the video! I enjoy the tutorial a lot, as it also covers some informative V-Ray for Maya tips.

                    Have you actually approached this with the video-footage technique mentioned in your earlier post? I thought about that before, but I'm not sure where I can properly generate the point cloud data and how to clean it up into low-res/low-detail geometry for texture projections. Plus, wouldn't that be a very long or tricky shot, because you probably don't want to do a short nodal pan in the center of the room (or maybe you can)? On the other hand, a long shot with enough parallax for camera solving seems to further complicate the point cloud generating process. I might be totally wrong, as these are just some ideas off the top of my head about doing this from video footage.
                    always curious...



                    • #11
                      You could use a camera that shoots video, track the scene in SynthEyes, then add real dimensions to help with the accuracy of the point data you get from a 3D track. If you had an EPIC camera that shoots HDR footage, you could track that footage, model geo and have everything you need to paint with, since EPIC HDR mode shoots up to 22 stops.

                      A nodal pan wouldn't do much, since you want parallax to generate 3D point data. A long shot would work, or cut it up into multiple shots. Remember, you aren't tracking to a plate for a shot to comp; you are generating point cloud data. You could pick up a Leica point survey device, or try a Microsoft Kinect. Or, like I did, just measure stuff and line up the cameras manually. It's really not as bad as you think.



                      • #12
                        Thanks a lot for the reply, Scott. I don't have access to an EPIC camera, but I'll explore the options you mentioned.
                        always curious...



                        • #13
                          No problem. If you have questions you can always find me here.



                          • #14
                            Very interesting videos; they really inspire me to test that workflow. I'd also be interested in the hardware configuration used for the videos. Somewhere you mentioned you're on a Mac Pro with two GTX 580s? Thanks for any info.
                            best regards,
                            sacha



                            • #15
                              Great videos, Scott, really informative and some good tips in there.

                              Would it be much easier to set things like this up with the Nuke -> Mari bridge in 6.3v4 now? It seemed like most of your time was eaten up just lining up the Mari paint-through images. With the bridge, though (assuming the geo lines up with the camera shot), it'll import a camera from Nuke into Mari and set up the projection for you to begin a paint-through.

                              Is the Nuke file generated entirely by hand, or is there some scripting going on to do things like assign file names to the Write nodes and that type of thing? It just feels like it would get very tedious to do something like this with 50+ images if you wanted to generate environments for several different shots... though I suppose that even if your .nk isn't set up like that, some smart coder at a big studio could easily get it automated for pipeline work.
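                              Something along these lines in Nuke's Python API is the kind of scripting I mean (purely a hypothetical sketch; the directory paths and naming are made up, not from the actual .nk):

                              import os
                              import nuke

                              src_dir = '/jobs/env/photo_hdr'          # merged HDR frames
                              out_dir = '/jobs/env/photo_hdr_undist'   # where the processed frames would go

                              for name in sorted(os.listdir(src_dir)):
                                  if not name.endswith('.exr'):
                                      continue
                                  read = nuke.nodes.Read(file=os.path.join(src_dir, name))
                                  # ...any per-image processing (e.g. an undistort) would go in between...
                                  write = nuke.nodes.Write(file=os.path.join(out_dir, name))
                                  write.setInput(0, read)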

                              Oh, and since the skylight portal was already discussed, I just had a question about how the "simple skylight" option functions. Will that simply look through all scene geometry and get a reading from the vrayEnvironmentPreviewTm?

