Sunday, March 6, 2016

Lab 5: Using Oblique and Nadir Data to Create 3D Models.

Introduction 

Imagery collected from UAVs can be either oblique or nadir, and both have varying applications and uses within the remote sensing industry. Nadir implies that the imagery was taken at or very near a perpendicular angle to the surface below. Oblique imagery is when the focus of the image is both below and to the side of the sensor, creating a more profiled view of the area or structure being observed. So far in the labs for this course, students have worked almost exclusively with nadir imagery. In this lab, however, both types of imagery will be utilized and merged into one in order to create accurate 3D models.

There are two types of oblique imagery: high oblique and low oblique. Low oblique implies that the horizon is in the view of the image. When creating low oblique data sets, it's imperative that the images are captured with the horizon level, parallel to the top and bottom edges of the frame, simply because that is how we as humans expect to view the horizon. High obliques, on the other hand, are taken at steeper angles relative to the ground and do not show the horizon. Because the horizon is not in view, there is less importance in capturing images level with the horizon.

In terms of industry uses, nadir imagery is what is primarily relied upon for creating geoaccurate products like orthomosaics and DSMs. Oblique imagery is used more for photographic aesthetics, because it can provide unique angles that are pleasing to the eye. For example, in class we saw how oblique imagery was used to help create a 3D model of a construction site. This technology, when compared across a temporal scale, can show in great detail the progress of a changing area or structure, because it allows the viewer to move across space and view the whole area at varying heights as well as angles. When used in conjunction, the geoaccurate data from nadir vantage points combined with oblique imagery can add detail where it is desired, providing higher accuracy and detail at the same time.

The focus of this lab is how a user can combine nadir and oblique imagery to create 3D models in Pix4D. In doing this, a report will be produced that comments on the product created through the merger of these contrasting angles. In conjunction with the imagery itself, it will also be important to factor in how the conditions of the particular flight mission affect the output, as well as the AOI being captured in the imagery.

Area of Interest

The area of interest for the focal activity of this project, merging nadir and oblique imagery to create a 3D model, is a farm. The area has a number of structures, both small and large, the largest being a barn. The UAV missions that were conducted to collect both the nadir and oblique imagery sets took place in the winter, and many of the surfaces in the AOI are covered in snow. When creating a 3D model, it's very important to understand how the landscape itself will affect the quality of a project, and adjust accordingly. Given that this is sample data that was given to us, we as students did not have the ability to take this into consideration and create a plan of action. Had we the opportunity, it would have been important to consider the following:
  • Because the landscape is snowy, a higher degree of overlap will be necessary.
  • Recommended overlap - frontal: 85%, side: 75% (see the sketch just below this list for how these percentages translate into flight spacing).
  • Set the sensor's settings to maximize contrast in the AOI imagery.
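To make those percentages concrete, here is a minimal sketch (Python, using a hypothetical sensor and flying height rather than the actual mission parameters) of how overlap targets translate into photo trigger spacing and flight line spacing under a simple pinhole-camera footprint model.

```python
def footprint(sensor_w_mm, sensor_h_mm, focal_mm, altitude_m):
    """Ground footprint (across-track, along-track) of one nadir image,
    assuming a pinhole camera with the sensor's short side oriented
    along the flight line."""
    across = altitude_m * sensor_w_mm / focal_mm
    along = altitude_m * sensor_h_mm / focal_mm
    return across, along

# Hypothetical 1-inch sensor (13.2 x 8.8 mm), 8.8 mm lens, flown at 60 m AGL.
across, along = footprint(13.2, 8.8, 8.8, 60.0)

frontal, side = 0.85, 0.75                # the snowy-terrain recommendation
trigger_spacing = along * (1 - frontal)   # distance between exposures
line_spacing = across * (1 - side)        # distance between flight lines

print(f"footprint: {across:.0f} m x {along:.0f} m")   # 90 m x 60 m
print(f"shoot every {trigger_spacing:.1f} m; space lines {line_spacing:.1f} m apart")
# -> shoot every 9.0 m; space lines 22.5 m apart
```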
Now that we have covered the specifics of the overall area, the focal point of the 3D model is the next important thing to discuss - the barn. The barn is the focal structure in the AOI, and even though the nadir imagery also captured areas away from the barn, the oblique images (roughly 30) focused directly on the barn. In this situation, as with any other oblique imagery taken for the purpose of creating a 3D model, to get the best result possible you should try to use very diverse data. Diverse, in this instance, refers to imagery sets taken from various angles and viewpoints. The more vantage points available, the more accurate the resulting model can be. Ideally, when working exclusively with UAV data, the oblique imagery involved should aim to provide vantage points at or near those shown below in figure 1.

figure 1: Ideal oblique imagery vantage points.
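As a rough illustration of what those vantage points look like in practice, here is a small sketch (hypothetical values only, not from this mission, and not a Pix4D or flight-controller API) that generates a ring of oblique camera stations circling a structure - essentially the pattern figure 1 depicts.

```python
import math

def orbit_waypoints(center_xyz, radius_m, altitude_m, n_shots):
    """Evenly spaced oblique camera stations circling a target structure.
    Returns (x, y, z, heading_deg, gimbal_pitch_deg) per station, with the
    camera aimed at the target center."""
    cx, cy, cz = center_xyz
    stations = []
    for i in range(n_shots):
        a = 2 * math.pi * i / n_shots
        x = cx + radius_m * math.cos(a)
        y = cy + radius_m * math.sin(a)
        z = altitude_m
        heading = math.degrees(math.atan2(cy - y, cx - x))   # yaw toward target
        pitch = -math.degrees(math.atan2(z - cz, radius_m))  # tilt down at target
        stations.append((x, y, z, heading, pitch))
    return stations

# e.g. ~30 oblique shots around a barn roof point 8 m up, from 40 m out at 30 m AGL
for wp in orbit_waypoints((0.0, 0.0, 8.0), 40.0, 30.0, 30)[:3]:
    print(["%.1f" % v for v in wp])
```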

In some cases, you can even add images taken at ground level to add another layer of detail that isn't as available from a UAV. This video shows just how in-depth the data collection process can be.
Examples like this are highly data intensive and, as you can see, highly resource intensive. Let us consider this video when analyzing the project designed to capture and model our 3D barn. Below, in figure 2, we can see the DSM and orthomosaic produced by the final merged project, with tie points included.

figure 2: Map overview of farm.
In reference to these maps, one can see how the land itself is predominantly flat, with a slight slope approaching the southeast. On the land, there are a number of objects, both natural and man-made, which protrude from the relatively flat surface. These include buildings, trees, machines, and other structures. From this overhead nadir position, we can also see that the farm is cornered in by a road to both its east and south. The road runs north-south and turns via an elbow to an east-west orientation. Within the area contained by the road we see a number of structures, including a barn, a triple set of silos, a house, and several garages. The tallest features in the area are the three silos, visible as the cluster of three white circles in the DSM map. The other tall objects that seem oddly shaped along the outskirts of the farm area, by the road, are trees. As mentioned before, because the season is winter, snow could be sitting on top of any of these structures, as well as the land, and it clearly is. As such, height values could represent either the actual surface height or the height of the snow resting on top of the normal surface.

Methods

To create a 3D model project out of two previously made projects, there are a few extra steps involved, as one would expect. To get these projects ready to merge, initial processing needs to be completed on both an oblique set of images and a nadir set of images. In the case of the barn project, we had about 150 nadir images and roughly 30 oblique images. Once initial processing for those two separate projects has been completed, they can be merged together to create a 3D model project. Since both sets of images were created using the same sensor, the sensor specifications match up easily. The important thing to remember is to set the input coordinate system's z values to 'distance above WGS 84 ellipsoid' and the output z coordinates to mean sea level.
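For context on that z-value setting: ellipsoidal height and mean sea level height differ by the local geoid undulation N, via H = h - N. Here is a one-line sketch of the conversion (the undulation value below is a plausible one for the upper Midwest, not a number from this project; real work would look N up from a geoid model such as EGM96).

```python
def ellipsoid_to_msl(h_ellipsoid_m, geoid_undulation_m):
    """Orthometric (mean sea level) height from WGS 84 ellipsoidal height:
    H = h - N, where N is the geoid undulation at the location."""
    return h_ellipsoid_m - geoid_undulation_m

# Hypothetical: the camera logs 297.4 m above the ellipsoid where N = -33.2 m,
# so the mean sea level height is about 330.6 m.
print(ellipsoid_to_msl(297.4, -33.2))  # 330.6
```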

3D model of Park Pavilion - Soccer Park


Creating the 3D model of the park pavilion was simple, but the product was not very sharp. This mission was flown almost exclusively with oblique imagery, with very little taken from overhead nadir positions. As such, there are varying levels of distortion around the area of interest, both on the pavilion itself and in the area surrounding it. The distortion of the area around the pavilion makes sense, mainly because it was for the most part captured as foreground to the pavilion. What's most concerning is the distortion of the subject of our AOI, the pavilion itself. Figure 3 exemplifies this distortion, and can be seen below.

figure 3: Park Pavilion - notice the distortion from the predominantly oblique mission.
Creating a 3D model in Pix4D does not create an orthomosaic or DSM, like what you can create when building a 3D map. As such, there is no format in which the model could be brought into ArcMap. Without these two products, geolocation is not reliable by any means, and the model cannot be worked with outside Pix4D in mapping programs like ArcMap.

Nadir and Oblique Merge Project - Winter Farm Land 

The merged project of nadir and oblique imagery of the winter farmland produced better results, but still left more to be desired in terms of quality. What initially stood out as a concern was that the merged project layers didn't overlap and stack perfectly; there seemed to be two surfaces floating just atop one another. Figure 4 below exemplifies what I'm talking about.

Figure 4: Side profile of unoptimized data produced from merged oblique and nadir image sets. 

To create a better representation of the AOI, manual tie points were added to areas where the two projects overlapped the most, around the barn. Three tie points were added to visible corner features that were a part of the barn. Each tie point created used roughly 20 images to calibrate a location for the tie point. The key to this process is to make sure that each tie point is adjusted in imagery from both the nadir and the oblique image sets. Doing this will correct the unwanted layering that was present after the project was completed without GCPs. Figure 5 below shows the location and orientation of these tie points in relation to the barn structure.

figure 5: Tie points used on barn.
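For intuition about what Pix4D is doing when it calibrates a tie point from roughly 20 marked images, here is a minimal sketch (hypothetical geometry, not Pix4D's actual solver) that estimates a 3D point as the least-squares intersection of the camera rays through the marks. Marking the point in both nadir and oblique images is what gives the rays enough angular diversity to pin the point down in all three axes.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of rays: minimizes the summed squared
    distance from the point x to each line (o_i, d_i) by solving
    sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Hypothetical: a nadir camera and an oblique camera both mark a barn
# corner near (10, 5, 3); their rays intersect at that corner.
origins = np.array([[0.0, 0.0, 50.0],    # nadir station overhead
                    [40.0, 5.0, 20.0]])  # oblique station off to the side
target = np.array([10.0, 5.0, 3.0])
print(triangulate(origins, target - origins))  # -> [10.  5.  3.]
```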
Once the tie points are established, re-optimization can be conducted. In this instance, I re-optimized by initially only rerunning the initial processing. Once that was complete and I could see the results produced, I was confident that I could run the point cloud and mesh steps, as well as the DSM/orthomosaic re-optimization, and produce a better product than what was created initially. When the orthomosaic was complete, what I thought would be a properly aligned merge turned out to still not be 100% correct. Figure 6 below shows the adjustment that took place after the tie points were applied in the optimization process (compare with figure 4).

figure 6: Barn distortion after tie point application and re-optimization.
What I believe occurred here, and I must admit this was an oversight on my part, is that I should have used a broader range of tie points. By this I mean I should have used more than three, and I should not have clustered them so tightly. My thinking was that I only needed to put tie points on the barn because that was the area intended to be the focal point of both projects being merged. What I should have done is add more tie points to features on the ground, which would have allowed the orthomosaic to calibrate and better synchronize the merging projects' z values. With a larger spread of tie points, this data could be optimized even further.

Discussion

Nadir and oblique merge project

To create a meaningful 3D project, the data acquisition process must be precise, thorough, and optimized. In this instance, merging two data sets into one showed how issues can arise and alter your data in unexpected ways. In the last lab we conducted for this course, we learned the power of using GCPs to reorient data and provide maximum accuracy over an area. This, as we found out, is even more important when creating a project from two separate flight missions. Just as in the last lab, laying down manual tie points and optimizing the project was the key to creating a meaningful product.

The results could also have been improved by adding a more diverse range of data to cover the pavilion being modeled. As we learned from the Pix4D-sponsored video shown above, the more data and vantage points, the better. In comparing the two projects, redoing this project with both GCPs and more imagery (nadir, high oblique, and low oblique) would likely produce a better result than what was observed.

Pavilion 3D model project


In terms of the distortion involved with the modeled pavilion, several things could account for the heavily distorted model that was created. There is likely more than one source of distortion at play, and I would surmise that the distortion was the result of a combination of these possible errors:
  1. Sun distortion
  2. Inadequate coverage by the sensor
  3. Varying landscape in the foreground
  4. Sensor malfunction
  5. GPS malfunction
  6. Lack of GCPs, resulting in insufficient calibration of images
Of these potential causes, I believe 1, 3, and 5 to be the most likely contributors to the distortion we see in the pavilion. Sun distortion likely played a factor because the pavilion has a relatively dark, metallic roof with various angles. This could cause sunlight and heat to interfere with the data collection process. The evidence for this can be seen in the portions of the pavilion that seem to melt, a typical sign of sun distortion. Also, when the foreground of an oblique image is constantly changing, the software has fewer common features between images to match during the overlap process when creating the 3D mesh. As a result, we get the level of pixelation at various parts of the modeled pavilion that can be seen in figure 3, back up in the methods section.

Referring to the quality report, one major concern I saw in the quality check was that only 48% of the 188 images (roughly 90) were able to be calibrated by the software. That means over half of the images, about 98, were thrown out because they could not be appropriately geolocated. Using GCPs could have helped calibrate these images. Had they been used, there would have been more overlapping data available to create a more complete and better-looking model. Figure 7 below shows the quality report created during the initial processing of this project.
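For the record, a quick check of the arithmetic behind those counts (the rate comes from the quality report; the image counts are rounded):

```python
total_images = 188
calibration_rate = 0.48                              # from the quality report
calibrated = round(total_images * calibration_rate)  # ~90 images used
discarded = total_images - calibrated                # ~98 images thrown out
print(calibrated, discarded)                         # 90 98
```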

Figure 7: Quality report for 3D model creation of the pavilion.

Conclusion 

Working with oblique imagery can be a tedious process in terms of both collection and processing. What has become apparent is that thoroughness is of the utmost importance when trying to create a good 3D product. To be able to use a 3D product, and benefit from it, one must be thorough in creating adequate overlap and geoaccuracy. Doing so will help Pix4D more accurately calibrate the imagery, maximizing the efficiency of your flight missions. It is simply a waste of time and resources if only half the data from a UAV mission can be calibrated because of inadequate georeferencing or overlap. Preventing this can only be accomplished if mission planning is prioritized to a high degree. If rushed or not fully thought out, the product will provide nothing that is worth sharing or using. As such, creating a good 3D map/model product depends on the mission planner providing a diverse array of vantage points, as well as GCPs/tie points, which allow the data to be better calibrated.

In relation to lab 4 (seen just below this lab), I should have done more of what I did in that lab in this one. I should have spread out my tie points more and not placed them all on one structure that occupied a very small amount of the total subject area. Going forward, I will refer often to the lessons learned here so that I do not neglect being thorough with the necessary user input when processing imagery to create respectable 3D products.
