photogrammetry-based set reconstruction

  • photogrammetry-based set reconstruction

    Hi,

    I wonder what would be a viable low-budget workflow (no FARO or LIDAR involved) for the highest-quality photogrammetry-based set reconstruction of an entire exterior/interior environment.

    Say you take HDRs at multiple survey positions along with many (tens or hundreds of) still photos, how would you solve the camera positions for reconstructing the set in your DCC application? I have seen a couple of demos by Scott Metzger, but those used a FARO, and he solved the camera positions of the still photos via measurements. I am looking for a semi-automatic to automatic solution for solving camera positions for both the stills and the HDR survey points.

    1. PhotoScan: the Pro version is expensive, and I'm not sure whether either version works with spherical images.
    2. PFTrack: manually solving a significant number of still images can be painfully time-consuming and clutters the UI.
    3. NukeX: the still-image solver appears to me to be more for reconstructing partial environments.

    Looking forward to hearing your thoughts.
    cheers.
    always curious...

  • #2
    You don't need the Pro version of PhotoScan if you're planning to remodel over the top of it. I would use PhotoScan to get a point cloud/mesh into Max, have a couple of known measurements to scale it, remodel over it, and then use Scott's method to texture it from photos. Interiors are super easy; exteriors are a little more time-consuming.
    Depending on the location, I'd probably use different sets of photos for the scanning and the texturing.
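    A minimal sketch of the known-measurements-to-scale step outside the DCC (the point indices and the 4 m tape measurement are made-up illustrations, and the function name is hypothetical):

```python
import numpy as np

def scale_cloud_to_measurement(points, idx_a, idx_b, measured_m):
    """Uniformly scale a point cloud so the distance between two
    picked points matches a real-world tape measurement (metres)."""
    solved = np.linalg.norm(points[idx_a] - points[idx_b])
    factor = measured_m / solved
    return points * factor, factor

# Toy cloud: two reference points 2.0 units apart in solve space,
# but the tape measure on set said 4.0 m.
cloud = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
scaled, factor = scale_cloud_to_measurement(cloud, 0, 1, 4.0)
print(factor)     # 2.0
print(scaled[1])  # [4. 0. 0.]
```

    The same factor can then be applied to the solved camera positions so the cameras and the geometry stay registered.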



    • #3
      Thanks for the input, Neilg. How would you register the positions of the HDR panos once those still camera positions are solved, if multiple panos are taken on set and you want to project them onto the reconstructed set in Mari? Through measurements?

      In my past tests, it has been a bit tricky to line up HDR panos with still cameras solved by automatic structure-from-motion (SfM) solutions like Photosynth, 123D, or free SfM applications...

      Originally posted by Neilg
      You don't need the Pro version of PhotoScan if you're planning to remodel over the top of it. I would use PhotoScan to get a point cloud/mesh into Max, have a couple of known measurements to scale it, remodel over it, and then use Scott's method to texture it from photos. Interiors are super easy; exteriors are a little more time-consuming.
      Depending on the location, I'd probably use different sets of photos for the scanning and the texturing.
      always curious...



      • #4
        The nearest thing I've seen to this is the ikeGPS Spike module: it's a laser measure for a smartphone that will be adding point-cloud functionality soon. It's not quite there yet; the current feature set only covers taking a photo of a building/area and then adding points that accurately measure the surface/face of said building. They'll be adding point-cloud options soon, though, which output to a plain XML file.

        What I did recently to get what you're describing was to not bother with photos: I just walked around the area using a small stills camera that shoots video with no rolling-shutter issues. I tracked this through SynthEyes using some manual trackers based on measured areas, and then spat out a dense cloud of trackers to give me a semi-accurate point cloud to line up walls and floors on.

        I was hoping something like the Google Tango phone might be good for smaller sets when it comes out...



        • #5
          Originally posted by joconnell
          The nearest thing I've seen to this is the ikeGPS Spike module: it's a laser measure for a smartphone that will be adding point-cloud functionality soon. It's not quite there yet; the current feature set only covers taking a photo of a building/area and then adding points that accurately measure the surface/face of said building. They'll be adding point-cloud options soon, though, which output to a plain XML file.

          What I did recently to get what you're describing was to not bother with photos: I just walked around the area using a small stills camera that shoots video with no rolling-shutter issues. I tracked this through SynthEyes using some manual trackers based on measured areas, and then spat out a dense cloud of trackers to give me a semi-accurate point cloud to line up walls and floors on.

          I was hoping something like the Google Tango phone might be good for smaller sets when it comes out...
          I thought about that, but I'm wondering how that would work if you were, for example, reconstructing an interior environment. Would you take a single really long shot that covers the walls and shoots around the scene objects? Would that be really difficult to solve due to the length of the take and the camera path? Or would you split it up into multiple takes? Any recommended shooting methodology?

          The goal is to have the set fully textured, as much as possible, with HDR data from panos or MDR data from 3-bracket stills, to drive lighting/reflections and serve as plates when needed. Similar to what Scott did, but without a FARO LIDAR. I suppose multiple approaches can tackle the geometric reconstruction, but solving the still camera and pano survey positions is necessary for texturing purposes.

          Thanks.
          always curious...



          • #6
            Personally I ended up with a minute-long shot. It was nice and gentle, with very little motion blur or fast whip pans, so it'd be no problem for any auto-tracker to solve. I put in manual tracks in sections just so I could set up alignments and measurements, and that was about it. The file was a 1280 x 720, 24 fps QuickTime movie, a few thousand frames long, for an exterior set; you literally just walk around "painting" the set with the camera. Once you've done this you can track, solve, and build your scene, and then you just have to measure the height of the camera off the ground and its distance from any major landmarks as the centre point for your mapping gizmo.
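            The camera-height measurement fixes the solve's arbitrary scale in the same way (a sketch with hypothetical numbers; assumes Y-up and a ground plane at y = 0, and the function name is illustrative):

```python
import numpy as np

def scale_from_camera_height(cam_positions, measured_height_m, ground_y=0.0):
    """Scale a solved camera path so the first camera's height above
    the ground plane matches the height measured on set."""
    solved_height = cam_positions[0][1] - ground_y
    factor = measured_height_m / solved_height
    return cam_positions * factor

# The solver put the first camera 0.8 units above the ground,
# but on set the lens was measured at 1.6 m off the floor.
path = np.array([[0.0, 0.8, 0.0], [0.5, 0.8, 1.0]])
scaled = scale_from_camera_height(path, 1.6)
print(scaled[0][1])  # 1.6
```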



            • #7
              Ooh, another sneaky thing that works totally fine: get one of the light-meter apps for the iPhone or any smartphone, and ideally get one of the little covers like a Luxi so you can take a light reading. If you calibrate this against a proper light meter so yours is accurate, you can then put your phone down in a location on the set and take a light reading using lux as your unit.

              Then, in 3ds Max, reconstruct your set and put your lights in. You can make a V-Ray light meter helper object and place it in the same location as where your phone was. Hit the calculate button on it and it'll give you the lux values in your 3D scene that are falling on the light meter; if you adjust your lights to match the measured value, you'll have accurate brightness, and when twinned with the same f-stop/shutter speed/ISO your DSLR shot a reference still with, you'll get a very accurate light match. One thing of note: lux is only brightness, it won't give you any hue information, so you'll have to sort out tint another way, probably off your grey ball.

              Personally I don't like using the physical camera / exposure for my scenes but it's just another technique that might interest you.
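              As a sketch, the lux-matching adjustment is just a ratio (this assumes the meter reading scales linearly with the light's multiplier, which holds for a single dominant light; the numbers and the function name are made up):

```python
def match_light_intensity(current_multiplier, rendered_lux, measured_lux):
    """Scale a CG light's multiplier so the virtual light meter reads
    the same lux value as the calibrated phone reading taken on set.
    Assumes illuminance scales linearly with the light's multiplier."""
    return current_multiplier * (measured_lux / rendered_lux)

# The virtual meter reads 320 lux with the light at multiplier 30,
# but the on-set reading was 480 lux.
print(match_light_intensity(30.0, 320.0, 480.0))  # 45.0
```

              With several lights contributing, you'd iterate (or adjust each light against its own isolated reading) rather than apply one global ratio.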



              • #8
                Thanks so much, John. Love the sneaky info you share every time. Too bad I have no clue how to make a light meter in Maya to "calculate" incoming light...

                I will try out the video approach. My colleague also mentioned using extracted frames from a survey video to solve the cameras in PhotoScan... We might need to take MDR (3-bracket) stills anyway, as we currently don't have access to a camera that records HDR data, and we want to fill in any projection gaps with more-than-LDR pixel info..
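                For the 3-bracket merge itself, a minimal sketch of combining linearized exposures into a single more-than-LDR image (a standard hat-weighted average, not any specific tool's method; frames are assumed already linear and normalized to [0, 1]):

```python
import numpy as np

def merge_brackets(exposures, times):
    """Merge bracketed linear exposures into one radiance image.
    Each frame is divided by its exposure time, then averaged with a
    hat weight that discounts near-black and near-clipped pixels."""
    exposures = np.asarray(exposures, dtype=np.float64)
    times = np.asarray(times, dtype=np.float64).reshape(-1, 1, 1)
    w = 1.0 - np.abs(exposures - 0.5) * 2.0   # hat weight, peaks at mid-grey
    w = np.clip(w, 1e-4, None)                # avoid divide-by-zero
    return np.sum(w * exposures / times, axis=0) / np.sum(w, axis=0)

# One pixel seen at 0.25 in a 1 s frame and 0.5 in a 2 s frame:
# both imply the same radiance of 0.25 units.
print(merge_brackets([[[0.25]], [[0.5]]], [1.0, 2.0])[0, 0])  # 0.25
```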

                edit: just searched a bit about the light meter and realized it's a VRayLightMeter... Maybe Vlado is willing to port this to Maya?
                Last edited by jasonhuang1115; 17-10-2014, 03:14 PM.
                always curious...

