

Bachelor thesis

True orthophoto generation

Rupert Wimmer

March 25th, 2010


Abstract

In this bachelor thesis, methods for generating true orthophoto imagery from aerial and satellite imagery based on digital elevation models are investigated and compared, and an application for generating true orthophotos is developed. The term True Orthophoto refers to a generation process that tries to restore any occluded objects in aerial imagery while at the same time including as many objects as possible in the surface model. New developments in digital image processing continue to increase the interest in orthophotos and result in a demand for greater orthophoto quality. However, occlusions due to rough terrain or significant differences in elevation lead to inconsistencies in accuracy and scale. True orthophotos eliminate these inconsistencies, but most of the existing approaches to generating true orthophotos require a 3D model of the desired earth surface area, which is time- and cost-intensive to produce. The German Aerospace Center takes another route and aims to generate true orthophotos quickly and cheaply, based on automatically generated elevation models. The four general steps of the true orthophoto generation process are (1) rectification of the source images and locating occluded areas, (2) seamline placement based on a distance-to-blindspot algorithm, combined with (3) mosaicking and (4) feathering of the seamlines with multiresolution splines. The overall goals of this thesis are to investigate the problems of orthophotos and to devise solutions in order to implement methods that are capable of creating true orthophoto imagery fully automatically. The results show that the investigated and implemented methods compare well with other true orthophoto applications in terms of image quality and computation time; a performance gain of a factor of 10 is achieved.


Contents

Abstract
Contents
1 Introduction
  1.1 Motivation
  1.2 Problem definition
  1.3 Outline and structure
      General overview of the chapters
2 Orthophotos
  2.1 Creating orthophotos
      Reprojection
      Mosaicking
  2.2 Relief displacements
  2.3 True orthophotos
  2.4 Accuracy of orthophotos
  2.5 Summary
3 The Camera Model
  3.1 Interior orientation
  3.2 Exterior orientation
  3.3 Summary
4 Digital Elevation Models
  4.1 Elevation models
  4.2 Data collection for digital elevation models
  4.3 Surface representation
      Regular Raster Grid
      Triangulated Irregular Network
  4.4 DEM generation by stereo image matching
  4.5 Summary
5 Design description
  5.1 Limits of other true orthophoto applications
  5.2 Creating true orthophotos - Step by step
      Rectification
      Locating occluded pixels
      Seamline placement
      Mosaicking
  5.3 Implementation
6 Raytracing the elevation model
  6.1 Data storage
  6.2 Bounding box optimization
  6.3 Global and local maximum heights
  6.4 Raytracing with the Bresenham algorithm
      Parallel processing
      Rectification
      Summary
7 Mosaicking
      Mosaicking and Merging methods
      Mosaicking by Nearest Feature Transform
      Seamline feathering
      Generating the Gaussian Pyramid
      Generating the Laplacian pyramids
      Summation and splinning overlapped images
      Summary
8 Experimentation and Evaluation
      Performance
      Pros and cons
      Using a simpler DEM
      Considering all images for Nearest Feature Transform
  8.6 Summary
9 Conclusion
      Evaluation
      Outlook
Acknowledgements
References
List of Abbreviations
List of Figures
A Appendix
  A.1 Content of companion CD
  A.2 Enblend user guide
      A.2.1 Raytracing user guide


1 Introduction

This chapter gives an overview of the general purpose and objectives of this thesis. The motivation and goals of the project are presented, along with a brief description of the following chapters.

1.1 Motivation

New developments in digital image processing continue to increase the interest in digital, accurate, undistorted and true-to-scale images - so-called orthophotos, a very common part of spatial datasets. Orthophotos can be used to measure true distances and are commonly used for tasks that require greater detail and timeliness than maps provide. The availability of higher-resolution imagery results in a demand for greater quality and accuracy of orthophotos. With today's high-resolution aerial photography, only limited accuracy is achieved when using traditional orthophoto production. Rough terrain or significant differences in elevation lead to inconsistencies in accuracy and scale with the normal orthophoto method, which cannot handle occlusions. These limitations can cause problems for users who are unaware of them and incorrectly use the orthorectified imagery as a true and accurate map. The increasing detail of orthophotos makes these limitations more and more evident. The demand for greater quality and accuracy requires new methods and algorithms to overcome the limitations of normal orthophotos. Ever-increasing computer processing power makes it more feasible to create true orthophotos on a large scale, and hence the German

Aerospace Center wanted to extend its existing image processing software with true orthophoto generation to meet the demand for accurate true orthophotos.

Various researchers have recently investigated true orthophoto generation, but most of their approaches require a manually created 3D model of the desired earth surface area, whose production is very time- and cost-intensive. To meet the demand for fast and cheap true orthophotos, this study takes another route and works with automatically generated elevation models.

1.2 Problem definition

The aim of this bachelor thesis is the design and implementation of software for true orthophoto generation. The generation process, based on aerial or satellite images and digital elevation models, is supposed to be as automatic as possible. The overall goals of this thesis are:

- Devise a method to create true orthophotos.
- Investigate problems and solutions for generating orthophotos.
- Implement methods, optimized for quality and computing time, that are capable of creating true orthophoto imagery fully automatically.
- Evaluate the solutions through test methods.

1.3 Outline and structure

This bachelor thesis is partially based on the master thesis of M. O. Nielsen [Nie04]. Whenever that preparatory thesis is referenced, the important results are presented here, so this thesis can be read without prior knowledge of [Nie04]. The first chapters cover the basic theory of generating orthophotos and the difference to true orthophotos. Next, methods to create true orthophotos are introduced. The key steps are explained, tested and evaluated independently in the following chapters.

During the work on this thesis, a software module for producing true orthophotos was developed as part of the existing image processing software XDibias, and the existing software Enblend [Md04] was used for mosaicking. The developed software can be found on the companion CD.

General overview of the chapters

Chapter 2, Orthophotos: Introduces the concept of orthophotos and the procedure to create them. This is then extended to true orthophotos and the differences are pointed out. Finally, the accuracy of orthophotos is analyzed.

Chapter 3, The Camera Model: The mathematical model for the interior and exterior orientation of a camera lens system, which is important for the true orthophoto generation, is presented.

Chapter 4, Digital Elevation Models: The basic concept of digital surface models and the different model representations are described. A description of stereo-matched elevation models, which are used in this project, is given.

Chapter 5, Design description: A step-by-step method to create true orthophotos is investigated and specified.

Chapter 6, Raytracing the elevation model: Methods for efficiently tracing rays between the camera and the surface model are developed in this chapter. Since a tremendous number of calculations is required for processing large aerial images, performance is an important issue.

Chapter 7, Mosaicking: Methods for seamline placement and feathering are presented, tested and evaluated.

Chapter 8, Test results: The implemented method is tested on a set of data. Pros and cons are illustrated with close-ups and the results are commented on.

Chapter 9, Conclusion: This chapter takes the entire thesis into consideration again, summarizes it and draws the final conclusion. In addition, it presents suggestions for future work and performance optimizations.


2 Orthophotos

A photograph shows an image of the world projected through a perspective center onto the image plane. Because of this so-called central projection and the fact that aerial images are normally shot vertically, objects at the same ground position but with different heights are placed at different positions in the photograph (figure 2.1). As an effect of these relief displacements, objects at a high position (and consequently closer to the camera) appear bigger in the photograph and occlude objects at a lower height.

Figure 2.1: Illustrating the difference between orthographic and perspective projection

Aerial and satellite images are often combined with spatial data in Geographic Information Systems (GIS), used as reference maps in city planning, or as part of realistic terrain visualizations in flight simulators. Therefore the images have to be adjusted for topographic relief, lens distortion and camera tilt, and recalculated with an underlying Digital Elevation Model (DEM). All this is done during the orthorectification process, which tries to eliminate the perspective nature of the image by computing an orthogonal projection for every single point of the image instead of projecting the rays through one point onto the image plane. The orthophoto is true to scale, is referenced to a world coordinate system and can consequently serve directly as a map. Orthophotos are highly up to date, can be merged into one large image of an enormous area and can be generated more often than typical topographic maps because of their low cost.

Figure 2.2: Illustrating the cause of relief displacements [M+01]

2.1 Creating orthophotos

The orthophoto generation process requires knowledge of the terrain as well as of the camera model and the camera's position and orientation during exposure. These camera parameters can be obtained in several ways, but the most common is to use digital cameras with direct georeferencing by GPS and IMU measurements (investigated in chapter 3). Another way is photogrammetry, which provides algorithms known as bundle adjustment to minimize the errors of an image and to determine the needed parameters. A further, now obsolete, way to extract the parameters is to manually fit the image over some known Ground Control Points (GCP) without considering the camera model (sampling). The points establish a relation between unique points in the source images and points in the terrain whose positions are known from a GIS. GCPs are typically used in bundle adjustment, too.

Reprojection

Reprojection is the first step of orthophoto rectification, where rays are reprojected from the image onto the model of the terrain. The reprojection can be done in two ways: forward or backward reprojection. The forward method projects the source image onto the terrain (figure 2.3). The intersection point (X, Y, Z) of the projection with the terrain is then stored in the orthophoto. If the upper left corner of the orthoimage is placed at (X_0, Y_0), the pixel coordinate of a point in the orthoimage is:

\[
\begin{bmatrix} \text{column} \\ \text{row} \end{bmatrix}
= \frac{1}{GSD}
\begin{bmatrix} X - X_0 \\ Y_0 - Y \end{bmatrix}
\tag{2.1}
\]

where GSD is the Ground Sample Distance, which is the pixel size and consequently the distance between two pixels (from pixel center to the next pixel center). The equation also takes into account that the world coordinate system has its Y coordinate pointing upwards / north, while a pixel coordinate system has its Y axis pointing downwards.

Figure 2.3: Main principle of forward and backward projection [Nie04]

Through the forward projection, regularly spaced points in the source image are projected to a set of irregularly spaced points on the terrain. To store the pixels, they have to be interpolated into the regular pixel array of a digital image. This interpolation is the reason why backward projection is preferred. Instead of projecting a point of the source image onto the terrain, a pixel of the output ortho image is projected back to the source image. In this case, the interpolation is done in the source image, which is easier to implement, and the interpolation can be done right away for each output pixel. In addition, only the needed pixels of the orthophoto are reprojected.

For the backward projection, the row / column coordinate of an orthophoto pixel needs to be converted to the world coordinate system. The Z coordinate is found at this point in the terrain. The pixel-to-world transformation is done by:

\[
\begin{bmatrix} X \\ Y \end{bmatrix}
= \begin{bmatrix} X_0 \\ Y_0 \end{bmatrix}
+ GSD \begin{bmatrix} \text{column} \\ -\text{row} \end{bmatrix}
\tag{2.2}
\]

To identify the point in the source image that corresponds to the found X, Y, Z coordinate, the camera needs to be modeled. A description of the camera model and the equations needed for this calculation can be found in chapter 3.
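As a minimal illustration of equations (2.1) and (2.2), the following C sketch converts between world coordinates and orthophoto pixel coordinates. The types and function names are hypothetical and not taken from the XDibias module; only the upper left corner (X_0, Y_0) and the GSD as defined above are assumed.

```c
#include <math.h>

/* Hypothetical helper types; the actual XDibias data structures differ. */
typedef struct { double x0, y0, gsd; } OrthoGeo;   /* upper left corner and pixel size */
typedef struct { double x, y; } WorldXY;
typedef struct { long column, row; } PixelRC;

/* Equation (2.1): world coordinate -> orthophoto pixel (note the flipped Y axis). */
static PixelRC world_to_pixel(const OrthoGeo *g, WorldXY w)
{
    PixelRC p;
    p.column = (long)floor((w.x - g->x0) / g->gsd);
    p.row    = (long)floor((g->y0 - w.y) / g->gsd);
    return p;
}

/* Equation (2.2): orthophoto pixel -> world coordinate.
 * Depending on the raster convention, a half-pixel offset to the pixel
 * center may additionally be required. */
static WorldXY pixel_to_world(const OrthoGeo *g, PixelRC p)
{
    WorldXY w;
    w.x = g->x0 + g->gsd * (double)p.column;
    w.y = g->y0 - g->gsd * (double)p.row;
    return w;
}
```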

Mosaicking

Orthophotos often cover an enormous area and therefore require the rectification of several source images that are merged together afterwards. This process is called mosaicking and involves several steps:

- Seamline generation
- Color matching
- Feathering and dodging of the seamlines

The line where the images are stitched together is called a seamline and can be generated either automatically or manually. The goal of this process is to mosaic the images along places where they look very similar, so that in the best case the seamlines are not recognizable. Manual seamline placement is often done along the centerlines of roads. There are several ways to place seamlines automatically. The simplest is to place the lines along the center of the overlap. Another way is to subtract the images from each other and place the line along the minimum difference between the two images, performing a so-called least-cost trace [Nie04]. To create a high-quality orthophoto, the mosaicked images should have the same color and brightness near the seamlines in order to conceal them. Several techniques can be used to hide seamlines. Color matching and dodging try to remove the radiometric differences between the images by analyzing and comparing the overlapping sections. Feathering tries to hide the remaining differences by making a smooth cut that slowly fades from one image to the other.

2.2 Relief displacements

Figure 2.4: Relief displacements [Nie04]

The earth curvature for satellite images and the flight altitude for aerial images cause relief displacements due to the central projection. At the nadir point there are no relief displacements, but they increase with the distance to the nadir. In addition, errors in the elevation model also result in horizontal errors caused by uncontrolled relief displacements. The horizontal error Δ_hor (relief displacement) can be found through a geometric analysis of a vertical offset Δ_ver (building or object), the flight altitude above the base of the object H, the distance to the image center r_t and the camera constant f, as illustrated in figure 2.4. From this figure the following relation is derived:

\[
\frac{f}{r_t} = \frac{H}{D + \Delta_{hor}} = \frac{H - \Delta_{ver}}{D}
\tag{2.3}
\]

Isolating Δ_hor results in:

\[
\Delta_{hor} = \frac{r_t \, \Delta_{ver}}{f}
\tag{2.4}
\]

Figure 2.5 illustrates that a higher flying altitude results in smaller relief displacements. A real-world example is shown in figure 2.6.

Figure 2.5: For images taken with the same kind of lens, the relief displacements decrease with a higher altitude but increase with the distance to the nadir point [Nie04].

Figure 2.6: The two images are taken from roughly the same position but with different altitudes and lenses. The building is about 70 meters tall and the relief displacements differ significantly due to the flight altitudes and lenses. [Nie04]
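A small worked sketch of equation (2.4). The 70 m building height is taken from figure 2.6, but the values for r_t and f are made up purely for illustration and are not measurements from that figure.

```c
#include <stdio.h>

/* Equation (2.4): relief displacement on the ground, Delta_hor = r_t * Delta_ver / f. */
static double relief_displacement(double r_t, double delta_ver, double f)
{
    return r_t * delta_ver / f;
}

int main(void)
{
    /* Assumed example: a 70 m tall building imaged 60 mm from the image
       center with a camera constant of 100 mm. */
    double d = relief_displacement(0.060, 70.0, 0.100);
    printf("relief displacement: %.1f m\n", d);   /* prints 42.0 m */
    return 0;
}
```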

2.3 True orthophotos

Orthophotos are usually created using a bare-earth elevation model and do not consider occlusions. However, due to rapid changes in elevation, the correspondingly larger relief displacements of taller buildings can be so large that they occlude the terrain and objects next to them (figure 2.8).

At the German Aerospace Center the image processing software XDibias is used. It consists of several modules for almost any kind of image processing. The orthophoto module is one of them; by means of forward projection it generates a normal orthophoto on the basis of a DEM. This approach is capable of handling different types of cameras, but leads to unwanted stretched objects (figure 2.7) in occluded areas, resulting from interpolations in the orthophoto. The interpolation has to be done because of gaps in the orthophoto caused by occluded areas, which in turn result from the different heights of objects.

Figure 2.7: Object stretching with forward projection due to occluded areas

In this project, backward projection is used in order to eliminate the interpolation in the orthophoto and to simplify the detection of occluded areas. The backward projection rectifies buildings and objects back to their original position, but also leaves a copy of the object on the terrain. This copy left on the terrain - a so-called ghost image - is caused by a lack of information: rays are projected back from the elevation model to both the occluded area and the occluding object without detecting that occluded data is being rectified. Therefore, the wrong image data is placed in the occluded areas, as illustrated in figure 2.11b.

A true orthophoto also reprojects the source images over a digital elevation model, but takes occluded areas into account and fills them with data from other images through raytracing, seamline placement and mosaicking. An orthophoto is understood as true when the generation process tries to restore any occluded objects while at the same time including as many objects as possible in the surface model. To include everything that is visible in the source images, like vegetation, people, cars, traffic lights, etc., in the surface model would be an impossible task. In the general understanding, true orthophotos are based on surface models that only include terrain, buildings and bridges. A similar definition is found in [A+98]:

Figure 2.8: Because of the perspective projection, rapid elevation changes and tall buildings hide objects next to them.

[...] the term true orthophotos is generally used for an orthophoto where surface elements that are not included in the digital terrain model are also rectified to the orthogonal projection. Those elements are usually buildings and bridges.

A different definition, which defines the true orthophoto only on the basis of removing ghost-image artifacts, is found in [K+04]:

[...] the term True Ortho means a processing technique to compensate for double mapping effects caused by hidden areas. It is possible to fill the hidden areas by data from overlapping aerial photo images or to mark them by a specified solid color.

In order to restore the occluded areas - or blindspots - correctly and to fill them with data automatically, imagery of these areas is required. This supplemental information can be gained from pictures of the same area taken from different perspectives (figure 2.10). These pictures show the occluded areas, and by combining them, full coverage can be achieved.

Figure 2.10: Combination of several images for full coverage

Aerial images are typically captured with sufficient overlap, as illustrated in figure 2.9. This means that seamlines have to be generated for every blindspot, resulting in a significantly higher number of seamlines compared to regular orthophotos. The consequence is a high demand on the mosaicking process and a good color-matching algorithm, since the match must fit around all the numerous seamlines.

Figure 2.9: Possible seamline placement in some orthophotos

Before choosing the true orthophoto generation process over the ordinary orthophoto generation, an informed decision has to be made. For images taken at a high altitude with a small scale or resolution, true orthophoto generation makes no sense, because relief displacements at the subpixel level, or even displacements of 2-3 pixels, do not really matter. Consequently, true orthophoto generation is only interesting for images of high detail or low altitude, for tall buildings and rough terrain, or for off-nadir images, which are often captured by high-resolution satellites. Additionally, the kind of lens matters; normal-angle lenses produce smaller relief displacements than wide-angle lenses. A further interesting field for true orthophoto generation is sideways-looking satellite images, especially of mountains.

2.4 Accuracy of orthophotos

The accuracy of an orthophoto depends on several parameters. Orthophotos are a product derived from other data and consequently depend on the quality of this data.

Figure 2.11: (a) Original source image. The building has not been moved to its correct position yet. (b) Image orthorectified with the existing orthophoto generation process. The building is rectified to its correct position, but a ghost image is left at the source position. (c) Image with visibility mask. (d) True orthophoto with merged imagery.

In detail, these are:

- the quality and resolution of the source images,
- the inner and outer orientation of the images and
- the accuracy of the digital elevation model.

The general visual quality of a true orthophoto mainly depends on the source images. Some of the parameters that affect the quality of the images are:

- the weather - a parameter that cannot be influenced,
- the quality of the camera and lens and
- the resolution, precision and overall quality of digital scanning (if film is used).

Nowadays, camera models and lenses used for mapping are of very high quality, with resolutions of up to 100 megapixels and a ground resolution of eight centimeters per pixel. For this project, the imagery used was taken with digital cameras and has a resolution of either 8 or 25 centimeters per pixel. The error of the inner orientation of these aerial cameras is negligible, and for the outer orientation the deviation is at most about one pixel, due to bundle adjustment, transformations and different sources.

This project works with a DEM from stereo-processed imagery. The advantage of such a DEM is that it is generated from the source images themselves and consequently fits them perfectly for true orthophoto generation. On top of that, stereo-matched DEM generation is very cheap compared to most previous approaches in other true orthophoto generation processes [Nie04], which use manually modelled buildings.

According to equation 2.4, inaccuracies due to a poor DEM increase linearly with the distance from the nadir point (for example, when measuring the surface with a laser, vertical errors in the DEM may occur at sharp edges, such as a roof, which are not hit exactly and therefore do not return the correct altitude); consequently a constant error cannot be assumed for orthophotos. Ordinary orthophotos often use only the central part of each image, which firstly is made possible by the overlap of neighboring images and secondly, and more importantly, removes the main part of the uncontrolled relief displacement effect. For true orthophotos, however, it is difficult to give a good overall estimate of the mean accuracy because they are normally heavily mosaicked. Hence, it all depends on the final mosaic pattern.

One method of estimating a mean standard deviation, integrated over the entire image area, is given by [Nie04]:

\[
\sigma_{dg} = \frac{\Delta_{ver}}{f} \sqrt{\frac{a^2 + b^2}{3}}
\tag{2.5}
\]

The clipping area for the smaller central part used for ordinary orthophotos is scaled by a and b. For a true orthophoto, the effective area is much larger and the side lengths of the image are 2a and 2b. The probability that the edges of an image are not used is the same as for the central part. Therefore it is not possible to predict a good measure for the standard deviation prior to the mosaicking process.

2.5 Summary

In this chapter, the concept of orthophotos was introduced, the steps needed to generate them were outlined and, most importantly, the cause of relief displacements and the problems they create before and after orthorectification were explained. Next, true orthophotos were defined as an orthorectification that determines occluded areas and fills them by mosaicking with overlapping imagery. Finally, the accuracy of orthophotos and true orthophotos was described and the difficulty of estimating this accuracy was pointed out.

3 The Camera Model

To work with remote sensing imagery, for instance to merge images into a large mosaic or for cartographic purposes, a relation to a world coordinate system is required. Therefore, the light rays need to be modelled in order to trace rays from the object space to the image plane or the other way around. Knowledge of the orientation and position of the camera as well as of its inner geometry is needed to accomplish this raytracing. Normally, the camera model is split into two sets of orientations: the interior and the exterior orientation. The relationship between the image coordinates ξ and η of an image point p and the coordinates X, Y, Z of an object point P is illustrated in figure 3.2 and is generally formulated as:

\[
(\xi, \eta) = f\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
\tag{3.1}
\]

The ideal way to take aerial images would be with a pinhole camera, which lets light in through a very small hole and projects the image of the world, scaled down by f/h, onto a surface at the back of the camera (figure 3.1a). The distance from the pinhole to the back is f (also known as the focal length or the camera constant [Nie04]) and h is the distance from the pinhole to the imaged object. However, the smaller the pinhole, and hence the better the resolution, the longer the exposure time. The exposure time can increase to several hours, which makes a pinhole camera practically unusable for most types of photography and especially for aerial images.

Like the pinhole camera, push broom is a technology for obtaining images with optical cameras. It is usually used for passive remote sensing from space. In a push broom sensor,

a line of sensors arranged perpendicular to the flight direction of the spacecraft is used. Different areas of the surface are imaged as the spacecraft flies forward. Subsequently, the single lines are merged into a two-dimensional picture (figure 3.1). [Wik09]

The parametric description of cameras is given in the interior orientation section below, and the exterior orientation section at the end of this chapter describes how to eliminate distortions based on the orientation and position of the camera as well as on the earth curvature.

Figure 3.1: (a) Pinhole camera: Rays simply pass through the hole without any bending, which makes the camera simple to model and results in a clear image. (b) Pushbroom camera: Single lines of the earth surface are imaged while the spacecraft flies forward and are merged into a two-dimensional picture afterwards.

3.1 Interior orientation

The interior orientation was a very important part of the model for analog cameras and early digital cameras. With new developments and technologies, manufacturers now offer camera systems with negligible distortions due to interior orientation. Since the camera systems used for this study, UltraCam X and DMC, are highly accurate, have algorithms and procedures implemented to fix the already trivial distortions on the fly, and thereby provide imagery for which only the exterior orientation has to be considered, only a brief description is given here. The interior orientation is described more precisely in [Kra07] and [Nie04].

Within a camera system, distortions may occur due to the lens, the focal length and the distance between the principal point in the image plane and the image center. To eliminate them, the interior orientation of the camera has to be known. The three constants are specific to the camera and are normally determined by the manufacturer in the laboratory or during test flights. The center of the photograph is found by intersecting lines between opposite pairs of fiducial marks and is also referred to as the fiducial center. The Principal Point is given with respect to the center of the photograph. The manufacturer ensures that, as closely as possible, the fiducial center coincides with the Principal Point (ξ_0 = η_0 = 0), also known as the Principal Point of Autocollimation (PPAC), so that the origin of the image coordinate system is the center of the image plane.

When the image space rays are not parallel to the incoming object space rays, this is caused by a distortion in the lens. The distortion consists of several components, of which the radial distortion is usually the largest. The radial distortion can be modelled with an odd polynomial, and by measuring several points in the image, the result is a set of distortions with respect to the distance to the Principal Point of Best Symmetry (PPBS), which is the origin of the radial distortions and is located very close to the fiducial center and the PPAC. The camera constant f is determined during the calibration process as well and is the length that produces a mean overall distribution of lens distortion [Kra07]. The focal point is therefore located directly above the PPBS at a distance corresponding to the focal length. [Nie04]

3.2 Exterior orientation

To be able to reconstruct the rays, the geometry of the image-forming system must be known. The exterior orientation of a camera specifies the orientation and position of the camera in the object space and can be determined in several ways. The one used in this project is based on a Global Positioning System (GPS) combined with an Inertial Measurement Unit (IMU), which is highly accurate and fast. The GPS provides an absolute position in the object space every second or faster, and the IMU measures the orientation of the camera. The inclusion of control points [Kra07], for which the image coordinates and

the object coordinates are known, and bundle adjustment can be used to further increase the accuracy of the exterior and interior orientation. Frame cameras like the UltraCam X have one exterior orientation per image, but line cameras have an exterior orientation for each line, since the satellite is moving while acquiring the single lines.

Figure 3.2: Relation between image and object coordinates. [Kra07]

With O at coordinates (X_0, Y_0, Z_0) as the position of the perspective center (camera location) of a three-dimensional bundle of rays, PP as the principal point with coordinates ξ_0, η_0, f as the focal length and M as the fiducial center, the relation between camera space (ξ, η) and object space (X, Y, Z) consists of a scaling, a translation and a rotation in three dimensions (ω, φ, κ). These operations are expressed in the collinearity equations [Kra07]:

\[
\xi = \xi_0 - f\,\frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}
\]
\[
\eta = \eta_0 - f\,\frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}
\tag{3.2}
\]

The parameters r_ik appearing in equation 3.2 are the elements of the rotation matrix R, which describes the three-dimensional attitude, or orientation, of the image with respect to the XYZ object coordinate system. The individual values of R and how to determine them depend on the GPS/IMU system used. If at least one coordinate is known in the object coordinate system, the reverse calculation from camera to object coordinates can be done with equation 3.3 [Kra07]:

\[
X = X_0 + (Z - Z_0)\,\frac{r_{11}(\xi - \xi_0) + r_{12}(\eta - \eta_0) - r_{13}\,f}{r_{31}(\xi - \xi_0) + r_{32}(\eta - \eta_0) - r_{33}\,f}
\]
\[
Y = Y_0 + (Z - Z_0)\,\frac{r_{21}(\xi - \xi_0) + r_{22}(\eta - \eta_0) - r_{23}\,f}{r_{31}(\xi - \xi_0) + r_{32}(\eta - \eta_0) - r_{33}\,f}
\tag{3.3}
\]
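The collinearity equations translate directly into code. The following C sketch is only an illustration of equations (3.2) and (3.3); the struct and function names are hypothetical, and the rotation matrix R is assumed to be provided by the GPS/IMU processing described above.

```c
/* Exterior orientation: perspective center, principal point, camera constant
 * and rotation matrix R (r[i][k] corresponds to r_(i+1)(k+1)). */
typedef struct {
    double X0, Y0, Z0;   /* perspective center O */
    double xi0, eta0;    /* principal point */
    double f;            /* camera constant / focal length */
    double r[3][3];      /* rotation matrix R */
} Orientation;

/* Equation (3.2): object point (X, Y, Z) -> image coordinates (xi, eta). */
static void object_to_image(const Orientation *o, double X, double Y, double Z,
                            double *xi, double *eta)
{
    double dX = X - o->X0, dY = Y - o->Y0, dZ = Z - o->Z0;
    double den = o->r[0][2] * dX + o->r[1][2] * dY + o->r[2][2] * dZ;
    *xi  = o->xi0  - o->f * (o->r[0][0] * dX + o->r[1][0] * dY + o->r[2][0] * dZ) / den;
    *eta = o->eta0 - o->f * (o->r[0][1] * dX + o->r[1][1] * dY + o->r[2][1] * dZ) / den;
}

/* Equation (3.3): image coordinates (xi, eta) plus a known Z -> object point (X, Y). */
static void image_to_object(const Orientation *o, double xi, double eta, double Z,
                            double *X, double *Y)
{
    double dxi = xi - o->xi0, deta = eta - o->eta0, dZ = Z - o->Z0;
    double den = o->r[2][0] * dxi + o->r[2][1] * deta - o->r[2][2] * o->f;
    *X = o->X0 + dZ * (o->r[0][0] * dxi + o->r[0][1] * deta - o->r[0][2] * o->f) / den;
    *Y = o->Y0 + dZ * (o->r[1][0] * dxi + o->r[1][1] * deta - o->r[1][2] * o->f) / den;
}
```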

3.3 Summary

The focus of this chapter was the camera model in general and its two main parts, the exterior and interior orientation, in detail. The two orientations are mandatory for an accurate trace of rays from the object space to the camera, through the lens and onto the image plane. The distortion of the lens is removed by means of the two orientations.


4 Digital Elevation Models

Information about the geometric shape and altitude of the objects contained in the source images is mandatory for the orthorectification process. The imagery and the knowledge of the camera model describe the orientation of the camera during the exposure and the distortions within the images, but to determine occluded areas, a georeferenced model of the earth surface, including the altitudes of the objects, is required to intersect the camera rays with.

4.1 Elevation models

A digital elevation model is a mathematical representation of an existing or virtual object and its environment - in the case of this thesis, the earth surface. According to [KE02], a DEM is a generic concept that may refer to the ground elevation but also to any layer above the ground, such as vegetation, bridges or buildings. Depending on the usage of an elevation model, there are different levels of detail. When the information is limited to the ground elevation, the DEM is called a Digital Terrain Model (DTM) and only provides information about the elevation of any point on the ground or water surface. If the pixel information contains the highest elevation of each point, whether from the ground or from above-ground objects, the DEM is called a Digital Surface Model (DSM). Simple DSMs only contain the roof edges and ignore roof structures. More advanced DSMs give a more exact representation of the surface by considering chimneys and roof ridges as well.

An even more advanced surface model that considers eaves and details on the walls would require terrestrial photogrammetry, which would be very expensive and is only necessary for 3D imagery. For true orthophotos it is an unimportant detail, because only the topmost object is visible in orthogonal aerial images and needed for this project. If, for example, a roof covers a balcony below it, the balcony will not be visible in a correct true orthophoto. Figure 4.1 illustrates these different types of surface models.

Figure 4.1: Four levels of detail of surface models: (a) Terrain, (b) Roof edges, (c) Roof ridges, chimneys and edge of eaves, (d) Wall details and eaves

4.2 Data collection for digital elevation models

This section gives a brief description of the numerous ways digital elevation models may be produced. They are frequently obtained by remote sensing rather than by direct survey. One powerful and common technique for generating DEMs is scanning the earth surface with a laser: using the time-of-flight principle, the laser illuminates a point on the earth surface and the distance to the object point is measured from the runtime of the reflected light. Further explanations can be found in [ST08].

Alternatively, stereoscopic pairs of images can be employed using the digital image correlation method. Two optical images are acquired from different angles during the same pass of an airplane or an earth observation satellite. Analog camera images normally have 30 percent sidelap and 60 percent forward overlap, as figure 4.2a illustrates. For dense city areas this coverage is often not enough for stereo matching, due to the lack of available perspectives. With digital cameras and their easy and convenient way of taking images, 60 percent sidelap and 60 percent forward overlap have become standard. Consequently, for any point (except the corners marked in figure 4.2b) several stereoscopic image pairs are available and the coordinates of any point can be derived from the exposure positions known from GPS/IMU and the known

camera angles. Therefore a significantly cheaper elevation model can be generated.

Figure 4.2: (a) 30 percent sidelap and 60 percent forward overlap. (b) 60 percent sidelap and 60 percent forward overlap.

Older methods for generating DEMs often involve interpolating digital contour maps that may have been produced by direct survey of the land surface or by manual stereo plotting of aerial imagery. The quality of a digital elevation model is a measure of how accurate the elevation is at each pixel (absolute accuracy) and how accurately the morphology is represented (relative accuracy). Numerous factors play an important role for the quality of DEM-derived products:

- terrain roughness,
- sampling density,
- grid resolution / pixel size,
- interpolation algorithm (e.g. for vegetation),
- point location accuracy,
- grid structure.

4.3 Surface representation

For data processing purposes the DEM has to be represented in a way that allows the information of each pixel to be read easily and quickly, since raytracing involves a large number of picture points

to work with. Two surface representations are well known and most common: the Regular Raster Grid (RG) and the Triangulated Irregular Network (TIN).

Regular Raster Grid

According to [KE02], one of the main advantages of RGs is that they have the geometry of an image, where the pixels are the nodes of the regular raster grid (figure 4.3) and the gray values of the pixels represent the elevations. Therefore, grids should preferably be stored as images, for data size reasons and reading performance. The transformation from the image coordinates of pixel (i, j) to the corresponding 3D coordinates (x, y, z) can be expressed as:

\[
\begin{aligned}
x &= i\,\Delta x + x_0 \\
y &= j\,\Delta y + y_0 \\
z &= f(i, j)
\end{aligned}
\tag{4.1}
\]

Here f(i, j) is the height at pixel (i, j), (x_0, y_0) are the spatial coordinates of the pixel in the first row and first column of the image, and (Δx, Δy) are the spatial sampling of the grid, or grid size, along the x and y axes.

Figure 4.3: Raster image of a grid

These simple calculations are the huge advantage of RGs, because they provide a simple and fast way of locating the correct grid point instead of interpolating between a triangle's vertices as with TINs [M+01]. The benefit of fast and simple calculations due to regularly spaced grid cells is, however, also the big limitation of RGs. The grid has only one height at any point, and therefore rapid elevation changes cannot occur within one grid cell. Consequently, the accuracy depends on the spatial sampling. In addition, flat terrain areas are split into several grid cells instead of being merged into one large cell.
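A minimal sketch of the grid-to-world transformation from equation (4.1). The struct and names are hypothetical; it only assumes the grid origin, the grid spacing and a flat height array as described above, with i as the column and j as the row index.

```c
/* Hypothetical regular-raster-grid DEM: origin, spacing and a flat height array. */
typedef struct {
    double x0, y0;        /* world coordinates of the pixel in row 0, column 0 */
    double dx, dy;        /* grid spacing along the x and y axes
                             (dy may be negative if rows increase southwards) */
    int    width, height; /* number of columns and rows */
    float *z;             /* heights f(i, j), stored row by row */
} RasterDem;

/* Equation (4.1): pixel (i = column, j = row) -> 3D coordinates (x, y, z). */
static void grid_to_world(const RasterDem *dem, int i, int j,
                          double *x, double *y, double *z)
{
    *x = i * dem->dx + dem->x0;
    *y = j * dem->dy + dem->y0;
    *z = dem->z[j * dem->width + i];   /* f(i, j): height at pixel (i, j) */
}
```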

Figure 4.4: DEM image, where the gray level is related to the altitude of the pixel (dark is low altitude - in this case 280 m, white is high - in this case 320 m)

Triangulated Irregular Network

Another method of representing geographical features is a triangulated irregular network, which connects irregularly spaced and located spot elevations in an area with lines (edges) to form a continuous system of triangles [E+00]. Each point (vertex) is connected to at least three other points, and the network is most commonly generated based on Delaunay triangulation, a method which attempts to ensure the most efficient triangulation by connecting each point only to its nearest neighbors (figure 4.5). This approach maximizes the minimum angles of all the triangles instead of creating long and narrow ones. To handle abrupt changes in the surface, like cliffs, either a very dense network is needed (figure 4.6) or a modified algorithm that is capable of dealing with breaklines, which supplement the points in the surface with lines. Breaklines are placed along edges in the terrain and a constraint is added to the algorithm that prevents the edges of the triangles from traversing them.

Having triangles and irregularly spaced points overcomes both of the RG disadvantages. Since data points are placed irregularly, they only need to be collected where there is a

variation in terrain. Over a large, relatively flat or low-slope area, only a few points are needed to describe the form; in areas of greater relief and higher, changing slopes, more points can be measured and stored. The benefit of TINs is that important points such as local high points and peaks, low points, stream centerlines, etc. can be measured and incorporated into the model.

Figure 4.5: In a correct Delaunay triangulation (b), the circumcircle does not contain any other points. Therefore (a) is not a valid Delaunay triangulation.

Figure 4.6: An example of a simple TIN without breaklines (a) and a surface with breaklines (b). [EH01]

The downsides of TIN models are, firstly, that the data structure describing the triangulation is relatively complex: tables of points, lines and faces have to be maintained, points have to be linked into lines (edges), and lines into faces (triangles). In addition, the z elevation value has to be interpolated for a given (x, y), since the point is most likely located within a triangle. Secondly, TINs cannot handle vertical objects, since this would require more than one height per point; a local height for the triangle and a global height for calculations would be needed.
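For comparison with the grid case, the following C sketch shows one common way to interpolate the z value at a point (x, y) inside a TIN triangle, using barycentric weights of the three vertices. This is purely illustrative and not part of the developed application, which works with regular raster grids.

```c
typedef struct { double x, y, z; } Vertex3;

/* Interpolate z at (x, y) inside the triangle (a, b, c) with barycentric weights.
   Returns 0 and leaves *z untouched if the triangle is degenerate or the
   point lies outside the triangle. */
static int tin_interpolate_z(Vertex3 a, Vertex3 b, Vertex3 c,
                             double x, double y, double *z)
{
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    if (det == 0.0)
        return 0;                                  /* degenerate triangle */

    double wa = ((b.y - c.y) * (x - c.x) + (c.x - b.x) * (y - c.y)) / det;
    double wb = ((c.y - a.y) * (x - c.x) + (a.x - c.x) * (y - c.y)) / det;
    double wc = 1.0 - wa - wb;

    if (wa < 0.0 || wb < 0.0 || wc < 0.0)
        return 0;                                  /* point outside the triangle */

    *z = wa * a.z + wb * b.z + wc * c.z;
    return 1;
}
```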

4.4 DEM generation by stereo image matching

The requirement for this project was an easy, fast and as fully automated as possible true orthophoto generation. To meet this requirement, the true orthophotos generated with the application developed in this project are based on regular raster grids. The software is independent of the elevation model generation, but works best with digital elevation models generated from stereo satellite or aerial imagery. The benefit of this approach is that the source images are also the source images for the DEM generation and consequently match perfectly for the orthorectification and the visibility mask generation. The DEM generation process consists of the following main steps, which are implemented as parts of XDibias [d+09]:

1. Stereo matching in epipolar geometry
2. Forward intersection and outlier removal
3. Interpolation and orthorectification

According to [Tao09], the stereo matching is done pixelwise with Semi-Global Matching (SGM), using Mutual Information (MI) to compensate radiometric differences between the input images. MI is a cost function that provides a pixelwise probability for every possible gray value combination, indicating how well these gray values correlate for the stereo images. It is, however, generally ambiguous, and wrong matches can easily have a lower cost than correct ones, due to noise for instance. Therefore, an additional constraint is added that enforces smoothness by penalizing changes of neighboring disparities. The pixelwise cost and the smoothness constraints are expressed by defining the energy E(D), which depends on the disparity image D [Hir07]:

\[
E(D) = \sum_{p} \Bigl( C(p, D_p)
  + \sum_{q \in N_p} P_1\, T\bigl[\,|D_p - D_q| = 1\,\bigr]
  + \sum_{q \in N_p} P_2\, T\bigl[\,|D_p - D_q| > 1\,\bigr] \Bigr)
\tag{4.2}
\]

The first term is the sum of all pixel matching costs for the disparities of D. The second term adds a constant penalty P_1 for all pixels q in the neighborhood N_p of p for which the disparity changes slightly (by one). The third term adds a larger constant penalty P_2 for all larger disparity changes [Tao09].
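For illustration only, the following C sketch evaluates the energy of equation (4.2) for a given disparity image and a precomputed pixelwise cost. It is not the SGM implementation used in XDibias (which aggregates costs along image paths rather than evaluating E(D) directly); the array layout and names are assumptions.

```c
#include <stdlib.h>

/* Evaluate E(D) from equation (4.2) for a given disparity image.
 * cost[p * ndisp + d] is the pixelwise matching cost C(p, d) (e.g. from MI),
 * disp[p] is the disparity D_p, and N_p is taken as the 4-neighborhood. */
static double energy(const float *cost, const int *disp,
                     int width, int height, int ndisp,
                     double p1, double p2)
{
    static const int nx[4] = { 1, -1, 0, 0 };
    static const int ny[4] = { 0, 0, 1, -1 };
    double e = 0.0;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int p = y * width + x;
            e += cost[p * ndisp + disp[p]];          /* data term C(p, D_p) */

            for (int n = 0; n < 4; n++) {            /* smoothness terms over N_p */
                int qx = x + nx[n], qy = y + ny[n];
                if (qx < 0 || qx >= width || qy < 0 || qy >= height)
                    continue;
                int d = abs(disp[p] - disp[qy * width + qx]);
                if (d == 1)      e += p1;            /* small disparity change */
                else if (d > 1)  e += p2;            /* larger disparity change */
            }
        }
    }
    return e;
}
```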

Figure 4.7: Stereo matching results from aerial UltraCam X images. (a) Small part of an aerial image. (b) Disparity against one image. (c) Reprojected disparity. (d) Merged reprojection.

After the stereo matching, the disparity is reprojected into a cartographic coordinate system (figure 4.7b). The reprojections of all disparity images are merged using a median filter (figure 4.7d). Occlusions, matching failures or moved objects lead to holes in the merged DEM, which are filled by inverse distance weighted interpolation. SGM is a good trade-off between reconstruction quality and computation speed.

4.5 Summary

This chapter introduced the concept of digital elevation and surface models. First, the different types of DEM, their differences in level of detail and the objects they include were described. Then the most common surface representations were described: the regular grid and the triangulated irregular network. Finally, the approach of creating a DEM through stereo matching was presented.

5 Design description

This chapter characterizes the general methods of the true orthophoto generation devised and implemented in this project. The process is a step-by-step procedure, and each step is described in detail in the following chapters. The approach in this study is to use regular grids instead of the triangulated networks used in [Nie04], and, in addition, the images are not matched with a regular color-matching algorithm but with multiresolution splines, so that the limitations of other true orthophoto generation processes are overcome.

5.1 Limits of other true orthophoto applications

In [Nie04] and in most other true orthophoto applications, manually created TINs are used for the true orthophoto generation. Creating a 3D TIN is very time-consuming and very expensive because an automated generation is not possible and it therefore has to be done manually. As a result of the manual generation, the model is very accurate and includes the undersides of eaves and walls. But for the generation of true orthophotos, undersides of eaves and walls are not necessary, since they are not visible in the final true orthophoto anyway. It is desirable to reduce the cost and effort of true orthophoto generation by using DSMs generated automatically by stereo matching or laser scanning. Existing applications that work with such elevation models often cannot handle eaves or would require a pre-processing step that eliminates them. Furthermore, they are based on the Microsoft Windows operating system, but as part of the image processing software XDibias the true orthophoto application had to be Linux-based.

Since the frequently used algorithms for color matching, like histogram matching and hue matching, need a reference image to which the other images are adapted, the process is not fully

automated: the matching algorithm needs to be told which image is the reference image. Multiresolution spline mosaicking, on the other hand, treats every image equally, and therefore no manual adjustment has to be done.

One of the most difficult tasks in creating true orthophotos is feathering the seamlines. In [Nie04], a 3x3 mean filter is applied several times to smooth the seamlines. The final image is smooth and the seamlines are feathered, but the image is also blurred. To overcome the blurriness, this study feathers seamlines based on multiresolution splines, which adapt the transition zone from one image to the other to the different spatial frequencies.

5.2 Creating true orthophotos - Step by step

The complete true orthophoto generation process can be broken down into these crucial steps:

1. Rectification of the images to orthophotos.
2. Locating the occluded pixels (visibility mask).
3. Seamline placement.
4. Mosaicking.

The true orthophoto process is illustrated in the diagram in figure 5.1.

Rectification

The orthophoto rectification is a commonly accepted method of tracing each pixel of the output image back to a pixel in the input image. When resampling an image, the trace rarely hits the center of a pixel in the input image. Therefore, methods are required which interpolate between pixels, such as nearest neighbor, bilinear and bicubic interpolation; they can all be found in [C+01]. In this project, it is possible to choose between nearest neighbor and bilinear interpolation. Nearest neighbor was implemented due to its simplicity, since it simply selects the value of the pixel that is closest to the incoming ray. Bilinear interpolation uses the four nearest pixel values, located in diagonal directions from the point hit by the ray, in order to find the appropriate pixel value for the desired output pixel.

Figure 5.1: General approach of the true orthorectification process.

The rectification method is illustrated in chapter 2, the required mathematics and knowledge of the camera are described in chapter 3, and in chapter 6 the actual raytracing implementation is characterized.

Locating occluded pixels

The most important step of the true orthophoto generation is locating the occluded pixels. A regular orthophoto can be generated without this information, but for mosaicking purposes and to guarantee high accuracy and correct scale, the location of every occluded pixel is mandatory. Therefore, any ray in the DEM that is blocked by another object on its path from the point on the surface to the camera has to be registered. Since the rays have to be traced for the orthorectification process anyway, it makes sense to combine the creation of the visibility mask with the rectification step. The raytracing of the elevation model is described in chapter 6.

Figure 5.2: Possible case for occluded pixels

Seamline placement

To merge the images properly, a transition or seamline has to be placed between the images. The placement is usually based on a scoring algorithm and can be done in various ways. In this study, the Nearest Feature Transform, also known as Distance Transformation, is used, so that the transition line is placed as far as possible from the blindspots and can fade out in all directions, as near as possible to the middle of the overlap area. The seamline placement is treated in chapter 7 as part of the mosaicking process.
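As a rough illustration of the idea behind a distance transform, the following C sketch computes a city-block (L1) distance map from the blindspot pixels with the classic two-pass algorithm. It is only a sketch of the general technique under assumed data layouts; the Nearest Feature Transform actually used for seamline placement is described in chapter 7.

```c
#include <limits.h>

#define BIG (INT_MAX / 4)   /* "infinite" distance that cannot overflow when incremented */

/* Two-pass city-block distance transform.
 * mask[i] != 0 marks a blindspot pixel; dist[i] receives the L1 distance
 * of every pixel to the nearest blindspot. */
static void distance_transform(const unsigned char *mask, int *dist,
                               int width, int height)
{
    /* Forward pass: propagate distances from the top-left corner. */
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int i = y * width + x;
            int d = mask[i] ? 0 : BIG;
            if (x > 0 && dist[i - 1] + 1 < d)      d = dist[i - 1] + 1;
            if (y > 0 && dist[i - width] + 1 < d)  d = dist[i - width] + 1;
            dist[i] = d;
        }
    }
    /* Backward pass: propagate distances from the bottom-right corner. */
    for (int y = height - 1; y >= 0; y--) {
        for (int x = width - 1; x >= 0; x--) {
            int i = y * width + x;
            if (x < width - 1 && dist[i + 1] + 1 < dist[i])      dist[i] = dist[i + 1] + 1;
            if (y < height - 1 && dist[i + width] + 1 < dist[i]) dist[i] = dist[i + width] + 1;
        }
    }
}
```

Seamlines can then be placed along the ridges of such a distance map, i.e. as far away from the blindspots as possible.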

Mosaicking

The final true orthophoto will be heavily mosaicked due to all the occluded pixels. If the processed images have relatively large differences in color and brightness, the seamlines will be visible and the final result is poor. The images could be color-matched prior to the rectification process, but to accelerate and optimize the process, multiresolution splines are used. In chapter 7, the multiresolution spline mosaicking is investigated and its advantages over regular color matching and feathering are pointed out.

Figure 5.3: Possible case of a mosaicked image

5.3 Implementation

The true orthophoto application is implemented in two parts, in order to separate the two main stages, orthorectification and mosaicking, and to provide the opportunity to mosaic orthorectified imagery of different ages. The orthorectification and visibility mask generation are implemented as an XDibias module. The mosaicking is performed using the Enblend [Md04] program. Initially the goal was to merge the visibility mask generation into the existing orthophoto generation module; since the approaches to orthorectification and raytracing differ in the direction of reprojection, the true orthophoto generation became a separate module.

The software is written in C and runs on a Linux-based operating system. The first part is designed for multithreading due to its extensive number of calculations. This module needs the source image and the DEM as input and creates a rectified image with marked occluded areas as output. The second part takes the generated orthorectified images with marked occluded areas and handles the crucial steps of seamline placement, feathering and mosaicking. This module merges all input images into one large ortho image, while trying to fill the occluded areas with information from overlapping images.

The true orthophoto application can be found on the enclosed CD.

6 Raytracing the elevation model

Figure 6.1: When raytracing the output pixel back to the source image, the DEM provides the topmost surface point (Z coordinate) of a certain pixel (X, Y coordinates). The ray is checked for visibility between the point and the camera. The rightmost ray is occluded by the tower.

Orthophotos can be generated in two ways: with forward or with backward projection. Because of the simpler implementation and the advantage of interpolating in the source images instead of the output image, backward projection is used for the raytracing in this project. The output pixels are thereby traced from the object space through the camera lens and onto the image plane. The digital elevation model provides for each output pixel the X and Y as well as the Z coordinate. At this position the raytracing back to the camera

and onto the image plane starts. When creating a true orthophoto, the ray must additionally be checked for intersections with other points in the object space. These steps are illustrated in figure 6.1.

An 8 cm resolution true orthophoto of 1 km² contains roughly 156 million pixels; even at 25 cm resolution, 1 km² contains 16 million pixels and the same number of ray traces. Therefore, an efficient way of performing the raytracing is needed. Some orthophoto applications speed up this process by doing a ray trace only for every 2-3 pixels and then interpolating between them. This can result in jagged lines along roof edges, where there are rapid changes of height in the surface model. The method is sufficient with DTMs, where the surface is smoother, and it increases the speed significantly. [Nie04]

The preparatory thesis [Nie04] uses TINs in a binary tree data structure to perform the raytracing and achieves acceptable results. This bachelor thesis is based on elevation models using regular data grids. Therefore no binary tree has to be built and the output pixels can easily be iterated over. To optimize the raytracing, several optimizations such as bounding boxes, multithreading, global and local maximum heights as well as a modified version of the Bresenham raytracing algorithm [Bre65] are devised; all but the local maximum heights are implemented.

6.1 Data storage

In this section a brief description of the data storage of the elevation models, the source images and the true orthophoto is given. All three of them are stored as folders on the hard drive, which include the actual image and some metadata, for example the world coordinates of the upper left pixel of the DEM. Each image consists of channels, lines and columns, which are imported into a one-dimensional array in the application. A regular image consists of at least three channels (red, green, blue) and stores for each pixel in each channel the intensity of that color. The three channels are combined to generate a true color image. The DEM has only one channel, which stores the altitude. To get the value of a certain pixel in one of the image arrays, the index has to be calculated with the following equation:

\[
idx = r \cdot w \cdot tch + w \cdot ch + c
\tag{6.1}
\]

where idx is the index, r is the row of the pixel, w is the width of a row, tch is the total number of channels of the image, ch indicates the channel of the pixel and c is the column of the pixel. Because of this flat indexing, the memory management is faster than with a three-dimensional array, which speeds the application up.
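A minimal sketch of the flat-array access from equation (6.1); the function name is hypothetical, and the layout simply mirrors the equation (one block of w pixels per channel per row).

```c
/* Equation (6.1): flat index of (row r, column c) in channel ch of an image
 * with row width w and tch channels, stored in a one-dimensional array. */
static long pixel_index(long r, long c, int ch, long w, int tch)
{
    return r * w * tch + w * ch + c;
}

/* Example: read the altitude of DEM pixel (row, col); a DEM has a single channel.
 *   float z = dem_data[pixel_index(row, col, 0, dem_width, 1)];
 */
```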

6.2 Bounding box optimization

Figure 6.2: The orthophotos (A-D) and the DEM can be arranged in various ways relative to each other. This image shows some arrangements and the required offsets for the bounding boxes (gray areas).

To avoid useless calculations, to save memory and to speed up the raytracing, a bounding box of the DEM and the output image is computed. To create the bounding box, first the minimum and maximum X and Y coordinates of the source image are calculated, so that the arrangement of the DEM and the orthophoto relative to each other can be determined (figure 6.2). Only the rows and columns within the bounding box are required for the raytracing. Besides the intersection of the DEM and the orthophoto, the bounding box also takes the airplane/satellite position into account. Offsets are implemented so that if, for example, the airplane position is inside the DEM but outside the bounding box, only the pixels within the bounding box are traced and none outside of it.
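A sketch of how such a bounding box could be derived, assuming both the DEM and the output orthophoto are described by their world-coordinate extents. The structures and names are hypothetical and the additional airplane/satellite offsets mentioned above are ignored.

```c
/* World-coordinate extent of a raster (DEM or orthophoto). */
typedef struct { double xmin, xmax, ymin, ymax; } Extent;

/* Row/column range of the orthophoto that actually has to be traced. */
typedef struct { long col0, col1, row0, row1; int empty; } BoundingBox;

/* Intersect the DEM extent with the orthophoto extent and convert the result
 * into orthophoto pixel ranges (x0/y0 = upper left corner, gsd = pixel size). */
static BoundingBox bounding_box(Extent dem, Extent ortho,
                                double x0, double y0, double gsd,
                                long width, long height)
{
    BoundingBox b = { 0, 0, 0, 0, 0 };
    double xmin = dem.xmin > ortho.xmin ? dem.xmin : ortho.xmin;
    double xmax = dem.xmax < ortho.xmax ? dem.xmax : ortho.xmax;
    double ymin = dem.ymin > ortho.ymin ? dem.ymin : ortho.ymin;
    double ymax = dem.ymax < ortho.ymax ? dem.ymax : ortho.ymax;

    if (xmin >= xmax || ymin >= ymax) {   /* no overlap: nothing to trace */
        b.empty = 1;
        return b;
    }
    b.col0 = (long)((xmin - x0) / gsd);
    b.col1 = (long)((xmax - x0) / gsd);
    b.row0 = (long)((y0 - ymax) / gsd);   /* world Y decreases with increasing row */
    b.row1 = (long)((y0 - ymin) / gsd);

    /* Clamp to the image size. */
    if (b.col0 < 0) b.col0 = 0;
    if (b.row0 < 0) b.row0 = 0;
    if (b.col1 > width - 1)  b.col1 = width - 1;
    if (b.row1 > height - 1) b.row1 = height - 1;
    return b;
}
```

The raytracing loop then only iterates over rows row0..row1 and columns col0..col1.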

If, for instance, the bounding box is only a fourth of the size of the elevation model, only a fourth of the pixels are traced; this simple modification therefore improves the raytracing significantly.

6.3 Global and local maximum heights

Figure 6.3: Illustration of the different heights to trace against.

Raytracing consists of a tremendous number of calculations, since every intersected point or pixel has to be checked to determine whether it belongs to an object in the object space or is just air. As described above, images may have over 100 million pixels, each of which has to be traced, and each trace involves many intersection calculations. The regular flight height for aerial images is about 1500 meters. If, for instance, the traced pixel is at a height of 280 meters and the distance in the X and Y directions to the airplane position is 100 meters, then with a ground resolution of 25 cm a single pixel requires up to 400 calculations (if the ray does not intersect an object). For 100 million pixels that makes, in the worst case, 40 billion calculations and, assuming 1 µs per calculation, around 11 hours of total computation time. Therefore, a method is needed to speed up the raytracing.

By applying a global maximum height, the computation time can be decreased significantly. The global maximum height is the maximum altitude of any object in the bounding box, and since the DEM has to be imported anyway, determining it is easily done. Instead of checking the entire ray up to the airplane, the ray only has to be checked until it intersects the maximum height layer. For an image with flat terrain and two-story buildings the height differs by about 30 meters throughout the image, and the mean distance to the intersection point with the maximum height layer is 5 meters. Doing the same calculation as above leads to 2 billion calculations for 100 million pixels, a computation time of 33 minutes and a performance gain of around 20 times. However, this optimization only works if the altitudes of the objects differ by just a few meters; for rough terrain or skyscrapers the computation time rapidly increases again.

To compensate for rough terrain, local maximum heights are introduced. The idea is to break the bounding box down into cells of an adjustable size and to store a local maximum height for each cell. It would be a mistake to check the ray only against the local height of its own cell and not against the local heights of all intersected cells (figure 6.3). Figure 6.4 illustrates rays for certain points and the cells that have to be checked for those points.

Figure 6.4: With local maximum heights, only the pixels of certain cells have to be checked for occlusions instead of all points on the ray back to the camera.

Prior to the actual raytracing of a point P, the cells C_{R_P}(i) intersected by the ray R_P are calculated. To decide whether a cell C_{R_P}(i) has to be traced point by point, the ray height RH at the first intersection point MP_{R_P}(C_{R_P}(i)) of the ray with that cell is compared with the local maximum height MLH(C_{R_P}(i)):

    C_{R_P}(i) = 0   if RH(MP_{R_P}(C_{R_P}(i))) > MLH(C_{R_P}(i))
    C_{R_P}(i) = 1   if RH(MP_{R_P}(C_{R_P}(i))) <= MLH(C_{R_P}(i))    (6.2)

A cell is traced point by point only if C_{R_P}(i) is 1. This approach offers the opportunity to gain performance by checking only certain points along the ray instead of all of them; especially for rough terrain or large altitude differences the raytracing is accelerated. The local maximum heights are not implemented in the application version on the enclosed CD and were not considered for the experimentation in chapter 8, since not enough testing could be done before the deadline to ensure that the algorithm is bug free. Depending on the source images, short tests indicated a performance gain of up to 10 times, which would correspond to a computation time of about 3-8 minutes for 100 million pixels. A sketch of how the required maxima can be collected is given below.
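The maxima themselves are cheap to collect while the DEM is imported. The following C sketch (grid layout, types and names are assumptions made for this illustration, not the thesis code) gathers the global maximum and the per-cell local maxima in a single pass over the DEM:

    #include <float.h>

    /* dem[r*w + c] holds the altitude of a regular grid of w x h cells.
     * cell is the adjustable cell size in pixels; local_max must provide
     * ceil(w/cell) * ceil(h/cell) entries.  Returns the global maximum. */
    static float height_maxima(const float *dem, int w, int h,
                               int cell, float *local_max)
    {
        int cells_x = (w + cell - 1) / cell;
        int cells_y = (h + cell - 1) / cell;
        float global_max = -FLT_MAX;

        for (int i = 0; i < cells_x * cells_y; ++i)
            local_max[i] = -FLT_MAX;

        for (int r = 0; r < h; ++r)
            for (int c = 0; c < w; ++c) {
                float z = dem[r * w + c];
                int   i = (r / cell) * cells_x + (c / cell);
                if (z > local_max[i]) local_max[i] = z;
                if (z > global_max)   global_max   = z;
            }
        return global_max;
    }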

6.4 Raytracing with the Bresenham algorithm

The Bresenham algorithm is an image processing algorithm for rasterizing straight lines or circles onto bit-mapped graphics [Bre65]. To trace the ray, the continuous world coordinates of the object space have to be transformed into discrete pixel coordinates. If this were done for each pixel and each step, a large number of slow floating-point multiplications and divisions would be required. An algorithm that transforms the coordinates once and then traces in pixel coordinates, with integer additions as the most complex operations, therefore accelerates the computation significantly. The Bresenham algorithm fulfills these requirements, is easy to implement and minimizes rounding errors.

The basic variant of the algorithm expects a straight line in the first octant, that is, a line with a slope between 0 and 1 from (x_start, y_start) to (x_end, y_end) (figure 6.5). Then dx = x_end - x_start and dy = y_end - y_start with 0 < dy <= dx. For octant 1 the iteration is based on dy instead of dx as in octant 0. If the slope lies in octants 2-7, x or y is not increased by 1 but by the signum of the corresponding difference, and for these octants the iteration runs backwards instead of forwards [Wik10b].

Figure 6.5: Slopes in different octants [Wik10b].

A step in the fast direction (the direction with the larger difference between end and start point; dx in the case of figure 6.6a) is done in every iteration. Once the deviation from the ideal line becomes too large, a step in the slow direction is performed as well. The iteration at which to take the slow step is determined by means of an error variable e, which is decreased by the smaller value (dy) at every step in x direction.

Figure 6.6: (a) and (b) illustrate the rasterization of a straight line with the Bresenham algorithm; (b) shows the states of the error variable [Wik10a].

If e < 0, a step in y has to be done and the larger value dx is added to e. Due to the repeated crossover subtractions and additions, the division of the slope triangle is broken down into basic operations only [Wik10a]. Furthermore, the error variable has to be initialized wisely. Consider the case dy = 1, for which the step in y direction has to be done at the middle or shortly after dx/2. Mathematically this means that

    y = y_start + (x - x_start) · dy/dx    (6.3)

is transformed into

    e = dx · (y - y_start) - dy · (x - x_start)    (6.4)

If, for instance, one step in x direction is done, the error variable is decreased by 1 · dy. If e < 0 after the decrease, a step in y direction follows and e is increased by dx, which yields e >= 0 because dx >= dy. The following listing describes the Bresenham algorithm for all octants in pseudo code [Wik10a]:

    void bresenham(int istartx, int istarty, int iendx, int iendy)
    {
        int dx, dy, incx, incy, pdx, pdy, ddx, ddy, ef, es, ix, iy, i, err;

        /* measure distances in both directions */
        dx = iendx - istartx;
        dy = iendy - istarty;

        /* determine direction (sign) of the increments */
        incx = sgn(dx);
        incy = sgn(dy);
        if (dx < 0) dx = -dx;
        if (dy < 0) dy = -dy;

        /* determine which distance is greater */
        if (dx > dy) {
            /* x is the fast direction */
            pdx = incx; pdy = 0;        /* parallel step                 */
            ddx = incx; ddy = incy;     /* diagonal step                 */
            ef = dy; es = dx;           /* error decrement and threshold */
        } else {
            /* y is the fast direction */
            pdx = 0; pdy = incy;
            ddx = incx; ddy = incy;
            ef = dx; es = dy;
        }

        /* initialize */
        ix = istartx;
        iy = istarty;
        err = es / 2;

        for (i = 0; i < es; ++i) {
            /* update error term */
            err -= ef;
            if (err < 0) {
                err += es;
                /* diagonal step: also advances the slow direction */
                ix += ddx; iy += ddy;
            } else {
                /* parallel step in the fast direction only */
                ix += pdx; iy += pdy;
            }
            SetPixel(ix, iy);           /* here: check the height of this pixel */
        }
    }
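For the raytracing, the SetPixel call in the listing is the place where the height of the visited DEM pixel has to be compared with the height of the ray. The following C fragment is only a rough sketch of that coupling under simplifying assumptions (a linear ray-height increment per step and an assumed accessor dem_height); it is not the implementation of this thesis:

    #include <stdlib.h>   /* abs */

    extern float dem_height(int col, int row);   /* DEM altitude of a grid cell (assumed) */

    /* Walks the DEM cells from the traced cell (x0, y0) with height z0 towards
     * the cell (x1, y1) below the projection centre, where the ray reaches the
     * height z1 (for example the global maximum height layer).  The ray height
     * is incremented linearly per Bresenham step; the point counts as occluded
     * as soon as the terrain rises above the ray. */
    static int ray_occluded(int x0, int y0, double z0, int x1, int y1, double z1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int steps = dx > dy ? dx : dy;
        int err = dx - dy;
        double z = z0, dz = steps > 0 ? (z1 - z0) / steps : 0.0;

        for (int i = 0; i < steps; ++i) {
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; x0 += sx; }   /* step in x */
            if (e2 <  dx) { err += dx; y0 += sy; }   /* step in y */
            z += dz;
            if (dem_height(x0, y0) > z)
                return 1;                            /* surface blocks the ray */
        }
        return 0;                                    /* ray left unobstructed */
    }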

6.5 Parallel processing

Figure 6.7: Illustration of the fork and join procedure with OpenMP for three threads: the global and local height maxima are determined first, then threads A, B and C raytrace interleaved sets of rays.

Nowadays not only supercomputers have more than one computing core; regular computers and notebooks also have at least two and in some cases four or eight cores. The advantage of multi-core systems is that processes and threads can be computed simultaneously and therefore in a fraction of the time needed by a single-core system. To use the extra cores efficiently, the software has to be designed for parallel processing, since operating systems and CPU instructions are not yet capable of distributing operations on their own.

A process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system, a process may consist of multiple threads of execution that execute instructions concurrently. A thread results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases a thread is contained inside a process. The differences between threads and the processes of a multitasking operating system are [N+96]:

- processes are typically independent, while threads exist as subsets of a process
- processes carry considerable state information, whereas multiple threads within a process share state as well as memory and other resources
- processes have separate address spaces; threads share theirs
- processes interact only through system-provided inter-process communication mechanisms (like semaphores or message queues [Nor96])

- context switching between threads in the same process is typically faster than context switching between processes, due to less overhead.

Lately another technology was introduced, called Hyperthreading, Intel's approach to hardware-based multithreading. The idea is to utilize the cores of a CPU better by filling gaps in the pipeline with instructions of another thread. Such gaps occur, for instance, due to a cache miss, and a second process or thread can compute in the meantime. According to Intel a performance gain of up to 33 percent is possible [Cor04].

Not all algorithms are suitable for parallel processing. If, for instance, the calculations of one iteration depend on those of the previous one, the threads have to be synchronized very often. In that case synchronization and thread switching produce a large overhead, and the multithreaded program will be slower than a single-threaded one. The ideal data for parallel processing are completely independent and only have to be synchronized at the end of all calculations.

In this study, parallel processing is used to determine the global and local maximum altitudes as well as for the raytracing itself. The Bresenham algorithm includes calculations that build on prior calculations and cannot be parallelized efficiently; instead, multiple rays are traced concurrently. The columns of a row of the bounding box are parallelized, and at the end of the row the tracing results are joined in order to export the finished row to the true orthophoto image file. Since shared data is only read, race conditions do not have to be considered. All other variables are created within a thread, are therefore not shared with the other threads and cannot be manipulated unnoticed.

The implementation was easily done with the application programming interface OpenMP (Open Multi-Processing), which supports multi-platform shared-memory multiprocessing programming in C on many architectures, including Linux. It consists of a set of compiler directives, library routines and environment variables that influence run-time behavior. OpenMP was defined by a group of major computer hardware and software vendors and gives programmers a simple and flexible interface for developing applications for desktops as well as supercomputers [NF02]. Most compilers have OpenMP implemented, and the iterations of a for-loop are computed concurrently with the compiler directive #pragma omp parallel for. After the for-iteration the threads are joined and only the master thread remains to continue with the single-threaded parts of the software. A minimal sketch of this pattern is shown below.
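As a minimal sketch of this pattern (the loop body and all names are placeholders for illustration, not the thesis code), the columns of one output row could be distributed over the available cores like this:

    #include <omp.h>

    extern void trace_ray(int row, int col);   /* traces one output pixel (assumption) */

    void trace_row(int row, int first_col, int last_col)
    {
        /* Each iteration traces an independent ray, so the columns of the row
         * can be distributed over the available threads; shared data (DEM and
         * source image) is only read, per-ray variables stay thread-private. */
        #pragma omp parallel for
        for (int col = first_col; col < last_col; ++col)
            trace_ray(row, col);

        /* Implicit join: from here on only the master thread continues,
         * e.g. to write the finished row to the true orthophoto file. */
    }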

6.6 Rectification

Figure 6.8: Illustration of orthorectification: the ray does not always hit the center of a pixel of the source image.

If a pixel is not occluded, it has to be filled with the correct data. With the equations of chapters 3 and 4, the target position of the ray in the source image is determined. Due to distortions, the ray often does not hit the center of a source pixel, as figure 6.8 illustrates, so the pixel value of the output image has to be resampled. Several methods are common; in this thesis Nearest Neighbor and Bilinear Interpolation are used [db+08].

The nearest neighbor algorithm simply selects the value of the nearest point and does not consider the values of the other neighboring points at all. In this thesis the nearest point is found by rounding the floating-point position to integer values. Figure 6.9a illustrates the equation below:

    o(ox, oy) = i(ix, iy)          if fx - ix < 0.5 and fy - iy < 0.5
                i(ix + 1, iy)      if fx - ix >= 0.5 and fy - iy < 0.5
                i(ix, iy + 1)      if fx - ix < 0.5 and fy - iy >= 0.5
                i(ix + 1, iy + 1)  if fx - ix >= 0.5 and fy - iy >= 0.5    (6.5)

where o is the output image with coordinates (ox, oy), i is the source image, (fx, fy) is the floating-point target position in the source image and (ix, iy) are its integer parts.

The bilinear interpolation calculates the value of a point P = (x, y) by means of the four neighboring points Q11, Q12, Q21 and Q22. It is an extension of linear interpolation to functions of two variables on a regular grid: a linear interpolation is first performed in one direction and then again in the other.

Figure 6.9: (a) Nearest Neighbor Transformation. (b) Bilinear Interpolation.

The value of the unknown function f at the point P is found by

    f(x, y) ≈ f(0, 0)(1 - x)(1 - y) + f(1, 0)x(1 - y) + f(0, 1)(1 - x)y + f(1, 1)xy    (6.6)

if a coordinate system is chosen in which f is known at the four neighboring points Q11, Q12, Q21 and Q22 as (0, 0), (0, 1), (1, 0) and (1, 1). This is accomplished by computing f(x, y) only with the fractional parts of x and y. For the evaluation the nearest neighbor algorithm was used, since it requires fewer calculations and the quality gain of bilinear interpolation is negligible, as figure 6.10 illustrates.

Figure 6.10: (a) Nearest Neighbor Transformation example. (b) Bilinear Interpolation example.

A compact sketch of both resampling variants is given below.
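The following C sketch shows both variants for a single channel; the accessor get_pixel and the surrounding names are assumptions made for this illustration:

    #include <math.h>

    extern double get_pixel(int col, int row);   /* source image value, one channel (assumed) */

    /* Nearest neighbor: round the floating-point source position (cf. equation 6.5). */
    static double resample_nearest(double fx, double fy)
    {
        return get_pixel((int)floor(fx + 0.5), (int)floor(fy + 0.5));
    }

    /* Bilinear interpolation over the four neighboring pixels (cf. equation 6.6):
     * only the fractional parts of fx and fy enter the weights. */
    static double resample_bilinear(double fx, double fy)
    {
        int    ix = (int)floor(fx), iy = (int)floor(fy);
        double x  = fx - ix,        y  = fy - iy;

        return get_pixel(ix,     iy)     * (1.0 - x) * (1.0 - y)
             + get_pixel(ix + 1, iy)     * x         * (1.0 - y)
             + get_pixel(ix,     iy + 1) * (1.0 - x) * y
             + get_pixel(ix + 1, iy + 1) * x         * y;
    }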

6.7 Summary

One way of intersecting a surface model was introduced and highly optimized through bounding boxes, global and local maximum heights and parallel processing, and two methods to resample the values of non-occluded pixels were described. The performance depends strongly on the source images, but compared to the raytracing library of [Nie04] a performance gain of 20 times was accomplished for city imagery with altitude differences of 60 meters, an optimization that makes raytracing feasible even for large images.


7 Mosaicking

To generate large-scale ortho imagery, multiple images have to be merged to form a mosaic. Adjacent images are usually assembled along seamlines that are placed, automatically or manually, roughly along the middle of the overlapping areas. For orthophotos, the seamlines are often placed along roads or flat terrain so that no buildings or other objects are intersected, which would result in visible seams due to relief displacements. True orthophotos, in contrast, have the advantage that relief displacements are mostly removed, depending on the quality of the DEM, so placing the seamlines along roads is not necessary. However, seamline placement is more crucial for true orthophotos than for orthophotos, since they have significantly more seamlines due to the large number of occluded areas. Radiometric differences are an inherent part of imagery and the reason for clearly visible seamlines. Since the images rely on the light of the sun, the relative angle to the sun can also have a great influence, as illustrated in figure 7.1. To avoid poor results, the seamlines have to be feathered and a smooth transition has to be guaranteed.

Figure 7.1: The overall brightness of an image depends on the reflection of the surfaces and, more importantly, on the angles of airplane, surface and sun relative to each other. The gray-scale below the illustration shows the amount of light reflected.

7.1 Mosaicking and Merging methods

The mosaicking methods presented in this section rely on a pixel-by-pixel score method, inspired by the methods presented in [Nie04] and [BA83]. The method used in this thesis is cutline generation by Nearest Feature Transform (NFT), also known as Distance Transformation, combined with multiresolution spline merging as implemented in the open source enblend program [Md04].

7.2 Mosaicking by Nearest Feature Transform

The first step after rectifying is to mosaic the images. In this study the Nearest Feature Transform is used, due to its simplicity and overall good results. Moreover, the NFT ensures that for each pixel the information of the image with the largest distance to the blindspots is used, so inaccuracies in the surface model are compensated. It is a method that maps binary images into distance images (one channel), where the distance to the nearest object corresponds to the color level. In this case, the visibility mask is the binary image with the blindspots as objects (figure 7.2). A small sketch of such a distance transform is given at the end of this section.

Figure 7.2: Nearest Feature Transformation of a blindspot image (blindspots superimposed in white). The more red, the greater the distance to a blindspot.

For each source image, a corresponding blindspot distance map is created, in which the score of each pixel indicates the distance to the blindspots. The distance maps are used to determine the source image to take the pixel data from. For now, enblend [Md04] can only merge two images at a time, and therefore the order of the merging process could be of some importance, since no color-matching or histogram-matching algorithm is used in this study. If, for instance, the perspectives of the merged images differ only a little, the final true orthophoto will be heavily mosaicked and the feathering algorithm might be overstrained, which could result in a poor final image; but if the two images with the most different perspectives are always merged first, the blindspots will be filled mainly by one image instead of five or ten, and consequently less feathering has to be done. However, the imagery tested in chapter 8 shows that the feathering algorithm smooths every seamline precisely, so the order is not important. The only requirement is that the images overlap. Figure 7.3 illustrates a mosaic of two images.

Figure 7.3: Distance map for joining two images at a late stage, where the dark areas correspond to one image and the white areas to the other (surface outlines are superimposed).
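As a sketch of the underlying idea (this is a generic two-pass city-block distance transform for illustration; enblend's actual Nearest Feature Transform implementation differs), a blindspot distance map could be computed as follows:

    #include <limits.h>

    /* Two-pass distance transform: mask[r*w + c] != 0 marks a blindspot,
     * dist receives for every pixel the city-block distance to the nearest
     * blindspot. */
    void blindspot_distance(const unsigned char *mask, int *dist, int w, int h)
    {
        const int INF = INT_MAX / 2;

        /* forward pass: consider the upper and left neighbors */
        for (int r = 0; r < h; ++r)
            for (int c = 0; c < w; ++c) {
                int d = mask[r * w + c] ? 0 : INF;
                if (r > 0 && dist[(r - 1) * w + c] + 1 < d) d = dist[(r - 1) * w + c] + 1;
                if (c > 0 && dist[r * w + c - 1] + 1 < d)   d = dist[r * w + c - 1] + 1;
                dist[r * w + c] = d;
            }

        /* backward pass: consider the lower and right neighbors */
        for (int r = h - 1; r >= 0; --r)
            for (int c = w - 1; c >= 0; --c) {
                int d = dist[r * w + c];
                if (r < h - 1 && dist[(r + 1) * w + c] + 1 < d) d = dist[(r + 1) * w + c] + 1;
                if (c < w - 1 && dist[r * w + c + 1] + 1 < d)   d = dist[r * w + c + 1] + 1;
                dist[r * w + c] = d;
            }
    }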

7.3 Seamline feathering

As figure 7.4a shows, without any adjustment and feathering the seamlines are clearly visible and the overall result is poor. Therefore a method is required that joins the images smoothly; the merging has to handle the radiometric differences of the input images. In traditional orthophoto mosaicking, the color values are first adjusted to match a reference image and then merged with simple feathering.

Figure 7.4: (a) A true orthophoto without seamline feathering, so the seamlines are obvious. (b) The same image feathered with the multiresolution spline [BA83]; no seamlines are visible.

The disadvantages of this method are, firstly, that matching requires additional calculations besides the feathering and, secondly, that the reference image might have radiometric distortions, which would then be passed on to all other images. In this thesis the promising multiresolution splines [BA83] are used instead. The approach is to distort the surfaces gently, so that they can be joined together with a smooth seam while still preserving as much of the original image information as possible [BA83]. This means that no reference image is required and a smooth transition is guaranteed.

Figure 7.5: The weighted average method may be used to avoid seams. Example weighting functions are shown here in one dimension. The width of the transition zone T is a critical parameter for this method [BA83].

An image consists of different spatial frequencies. Spatial frequency is a measure of how often a certain structure repeats per unit of distance. In image processing applications it is often measured in lines per millimeter, and differences in these frequencies convey different information about the appearance of a stimulus: high spatial frequencies represent abrupt spatial changes in the image, such as edges, and generally correspond to fine detail, while low spatial frequencies represent global information about shape and smooth areas like grass [Bar04]. So, to make a seamline really smooth, the various spatial frequencies of an image have to be joined differently.

Figure 7.5 describes the merging of two images through a weighted average (Hl(i) for the left image and Hr(i) for the right image) within a transition zone T. If the transition zone were the same for every spatial frequency, the resulting image would be highly distorted, since high spatial frequencies have to be joined within a smaller zone than low spatial frequencies such as grass, which have to be joined slowly and smoothly. Therefore the image should be decomposed into a set of band-pass component images for the different spatial frequencies. A separate spline with an appropriately selected T can then be performed in each band. Finally, the splined band-pass components are recombined into the desired mosaic image [BA83].

7.3.1 Generating the Gaussian Pyramid

The key to a good overall result is to blend image features across a transition zone proportional in size to the spatial frequency of the features. This is accomplished by blending two images together one spatial frequency level at a time, where each level uses a different blending mask or distance map: for the high-frequency levels a sharp blend mask is used, so that fine details are blended over a narrow region, while for the low-frequency levels a wide blend mask is used, so that coarse details are blended over a large region. To do so, a sequence of low-pass filtered images G_0, G_1, ..., G_N is obtained by repeatedly convolving a small weighting function with the image [BA83]. As figure 7.6 shows, G_0 is the original image, and from there on the value of each node in the next level (G_1 from G_0, G_2 from G_1 and so on) is computed as a weighted average of a 5 x 5 subarray of the current level.

Figure 7.6: A one-dimensional graphical representation of the iterative REDUCE operation used in pyramid construction [BA83].

If this approach is visualized, the result looks like a pyramid. Sample density and resolution decrease from level to level of the pyramid, which can be described, for 0 < l <= N, by

    G_l(i, j) = REDUCE[G_{l-1}](i, j) = \sum_{m,n=1}^{5} w(m, n) G_{l-1}(2i + m, 2j + n)    (7.1)

where i and j identify the pixel and w(m, n) is a pattern of weights used to generate each pyramid level. Figure 7.7 illustrates different levels and a collapsed version of the highest level, which clearly shows a very smooth transition zone for the lowest spatial frequency. A small sketch of the REDUCE operation is given below.

Figure 7.7: (a) original image (sharpest); (b), (c) and (d) intermediate levels; (e) top level for the smoothest transition; (f) the mask of (e), scaled up to the original size using multiple EXPAND operations [Md04].
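A one-dimensional sketch of REDUCE with the usual 5-tap weighting kernel might look as follows; the kernel values correspond to the common choice a = 0.4 of Burt and Adelson, and the border handling is a simplification made for this illustration:

    /* One REDUCE step of a one-dimensional Gaussian pyramid: the coarser level
     * has half as many samples, each a weighted average over five samples of
     * the finer level.  Border samples are clamped for simplicity. */
    static void reduce_1d(const double *fine, int n_fine,
                          double *coarse, int n_coarse)
    {
        static const double w[5] = { 0.05, 0.25, 0.40, 0.25, 0.05 };   /* a = 0.4 */

        for (int i = 0; i < n_coarse; ++i) {
            double sum = 0.0;
            for (int m = -2; m <= 2; ++m) {
                int j = 2 * i + m;
                if (j < 0)        j = 0;
                if (j >= n_fine)  j = n_fine - 1;
                sum += w[m + 2] * fine[j];
            }
            coarse[i] = sum;
        }
    }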

7.3.2 Generating the Laplacian pyramids

Images broken up into components based on spatial frequency are called Laplacian pyramids. They contain the highest spatial frequency components at the lowest level and the lowest spatial frequency components at the top level; intermediate levels contain features decreasing in one-octave steps in spatial frequency from high to low.

A Laplacian pyramid is made by repeatedly applying a high-pass filter to the image. The high-pass filter picks out all of the high spatial frequency components of the image and passes everything else on to the next level. This process can be compared to subtracting each level of the pyramid from the next lower level. Because these arrays differ in sample density, it is necessary to interpolate new samples between those of a given array before it is subtracted from the next lower array [BA83]. Let G_{l,k} be the image obtained by expanding G_l k times. Then

    G_{l,0} = G_l    (7.2)

and the interpolation can be described as

    G_{l,k}(i, j) = EXPAND[G_{l,k-1}](i, j) = 4 \sum_{m,n=-2}^{2} w(m, n) G_{l,k-1}((2i + m)/2, (2j + n)/2)    (7.3)

where only terms for which (2i + m)/2 and (2j + n)/2 are integers contribute to the sum. G_{l,l}, that is G_l expanded l times, has the same size as the original image G_0. L_0, ..., L_N are defined as a sequence of band-pass images, for 0 <= l < N, with

    L_l = G_l - EXPAND[G_{l+1}] = G_l - G_{l+1,1}    and    L_N = G_N    (7.4)

7.3.3 Summation and splining overlapped images

The final image is obtained as a combination of expanding and summing. Starting with the image at the top pyramid level, L_N is first expanded and added to L_{N-1} to recover G_{N-1}, and so forth. This can be written as [BA83]:

    G_0 = \sum_{l=0}^{N} L_{l,l}    (7.5)

Figure 7.8: (a) highest spatial frequency; (b), (c) and (d) intermediate levels; (e) top level for the smoothest transition, as (f) shows [Md04].

The complete algorithm consists of the following steps (a small sketch of the per-level blending is given after this list):

1. Construct the Laplacian pyramids LA and LB for the images A (left) and B (right).
2. If the center line for level l of the final image is at i = 2^{N-l}, the final Laplacian pyramid is calculated by

       LS_l(i, j) = LA_l(i, j)                       if i < 2^{N-l}
                    (LA_l(i, j) + LB_l(i, j)) / 2     if i = 2^{N-l}
                    LB_l(i, j)                        if i > 2^{N-l}    (7.6)

3. The final image is then obtained by expanding and summing the levels of LS.
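In the generalized form used with an arbitrary blend mask (as figure 7.9 suggests, where the Gaussian pyramid of the distance map provides the per-level weights), each Laplacian level of the two images is mixed with the corresponding mask level. The following C sketch illustrates only that per-level mixing; the array names and the assumption of mask values in [0, 1] are made for this illustration:

    /* Blend one Laplacian level of images A and B with the corresponding
     * Gaussian level of the blend mask (mask value 1 means "take A").
     * n is the number of samples on this level. */
    static void blend_level(const double *la, const double *lb,
                            const double *mask, double *ls, int n)
    {
        for (int k = 0; k < n; ++k)
            ls[k] = mask[k] * la[k] + (1.0 - mask[k]) * lb[k];
    }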

7.4 Summary

In this chapter a process for seamline placement and for mosaicking images into a seamless true orthophoto was described. The NFT is a good foundation for the multiresolution spline, since it places the transition line as far as possible from the blindspots, so that the spline has enough space to fade out and to feather the seamlines very smoothly. The enblend program [Md04] contains an efficient implementation of this algorithm, which can be applied to large images.

Figure 7.9: (a) shows the two original images and (b) the final image. (e), (i), (m), (q), (u) are the Laplacian pyramid of image A; (f), (j), (n), (r), (v) the Laplacian pyramid of image B; (d), (h), (l), (p), (t) the Gaussian pyramid of the distance map; (c), (g), (k), (o), (s) the Laplacian pyramid of the final image [Md04].

8 Experimentation and Evaluation

The previous chapters described the approach of this thesis to generate true orthophotos. Throughout this chapter the developed application is tested on imagery of the city of Terrassa in Spain and of Vaihingen an der Enz in Germany, and the remaining problems are illustrated and commented on. The tested imagery was captured with the UltraCam X or the DMC camera and has a ground resolution of either 8 cm or 25 cm. The software is tested with stereo-matching-based as well as laser-scanned DEMs, and the differences caused by the DEM are pointed out. Due to the size and resolution of the images, only close-up results are shown in this thesis; the true orthophotos in full resolution can be found on the companion CD. Below an overview of the generated true orthophotos is shown.

(a) Terrassa: 34 million pixels, pixel size 0.25 m. (b) Vaihingen: 3.18 million pixels, pixel size 0.08 m.

8.1 Performance

As mentioned in chapter 6, the feasibility of large-scale true orthophoto imagery depends on the computation time of the orthorectification. To evaluate the single performance optimizations, the computation time for some reference images was recorded. The reference images had sizes from 3.18 million pixels upwards and a ground resolution of either 0.08 m or 0.25 m. Additionally, an image covering an area of about 630 m x 1550 m with a ground resolution of 0.25 m and roughly 15.7 million pixels was raytraced for a comparison with the raytracing library of [Nie04]. Table 8.1 lists the individual computation times and shows that a performance gain of 20 times compared to [Nie04] was accomplished. The experimentation was done on what is comparable to a standard laptop at the time of writing this thesis, with the following specifications:

Processor: Intel Centrino Core 2 Duo, 2.4 GHz, 800 MHz front side bus, 4 MB L2 cache
Memory: 2048 MB DDR2 RAM
Operating System: Ubuntu 9.10

Table 8.1 shows the processing time of some images depending on the optimizations activated; from left to right another optimization is applied on top:

Size | Res | Master thesis | BBox | GMH | MT
3.18 million pixel | 0.08 m | 719 s | 132 s | 91 s
6.75 million pixel | 0.25 m | 2596 s | 101 s | 64 s
15.7 million pixel | 0.25 m | 424 s | 256 s
... million pixel | 0.25 m | 1 hr | 615 s | 371 s
... million pixel | 0.25 m | 703 s | 624 s

Size is the image size and Res the ground resolution of the processed photograph. The column Master thesis contains the computation time of the raytracing library of [Nie04], BBox the computation time of the Bresenham algorithm plus bounding box, GMH the computation time with the global maximum height and MT the computation time with parallel processing and all optimizations activated.

The Bresenham algorithm without any optimization was only tested for images with less than 7 million pixels, since the computation time was not feasible for larger imagery. The biggest performance gain comes from the global maximum heights: for the 3.18 million pixel image the performance was 7 times faster, for the 6.75 million pixel image 26 times, and the gain keeps increasing. The performance gain due to the global maximum altitude grows roughly exponentially, since both the number of pixels to trace and the number of tracing steps per pixel increase. Also interesting is the performance gain of the 7 million pixel image compared to the 3 million pixel image: it shows that the shape of the elevation model has a huge impact on the performance. If the difference between the global mean height and the global maximum height is small, the raytracing is very fast, since only a few steps have to be done.

To assess the total processing time of the true orthophoto generation, orthorectification as well as mosaicking, 15 images of Vaihingen an der Enz with a size of 3.18 million pixels and a ground resolution of 0.08 m were used. The processing times are given below:

Orthorectification and locating blindspots: 25 minutes (about 100 seconds per image)
Mosaicking and feathering: 2 minutes

8.2 Pros ...

The overall result of the true orthophoto generation is very good. Narrow backyards are fully visible, all roof tops are moved to their correct position and no walls are visible. Even tall objects with large relief displacements are rectified correctly, and the large occluded areas are filled with data from other images. Figure 8.1a shows a rectified tall object. [Nie04] is based on 3D models that only consist of buildings and terrain and do not include trees or cars. Stereo-matched DEMs, in contrast, do include trees and cars that are visible in both source images of the SGM, so trees do not look odd and there are no cut-through cars. The narrow backyards of Terrassa are also clearly visible, which in normal orthophotos is only the case for areas very close to the nadir point (figure 8.1b).

Figure 8.1: (a) A rectified tall building (50 meters tall); some occluded areas remain due to too few perspectives. (b) Narrow backyards visible in a true orthophoto.

8.3 ... and cons

Some minor problems are still left in the true orthophoto generation. These remaining errors are small and usually only noticeable if they are looked for. Deviations in the DEM can cause poor rectifications: terrain pixels might be treated as roof pixels and then also get resampled to the roof's rectified position (figure 8.2a). The other way around is possible too, as figure 8.2b illustrates: if the SGM process identifies a roof point wrongly, the result is holes in the roof.

Figure 8.2: (a) Terrain pixels in a roof. (b) Holes in a roof.

If sufficient overlap and perspectives are not available, occluded areas may remain occluded in the final true orthophoto. This is a huge problem, as illustrated below: some parts of the city of Terrassa have many occluded pixels, because only one image provides coverage there.

Figure 8.3: Remaining blindspots in one part of Terrassa, due to coverage by only one image.

8.4 Using a simpler DEM

The developed application was also evaluated on other sets of source data. The true orthophoto generation based on a laser-scanned DEM demonstrates the importance of a correct and accurate elevation model (figure 8.4). The visibility mask derived from the DEM is correct, as figure 8.4a illustrates, but laser-scanned DEMs cut off sharp edges, so that objects on the terrain are modeled smaller than they really are. Therefore the source image does not fit properly over the DEM, as figure 8.4b illustrates: the calculated visibility masks are smaller than they are supposed to be and roof parts are pulled down to the terrain level. On top of that, the center of the roof is not placed at the actual center of the building and walls are visible. The mosaicking process treats the pulled-down roof parts as correct data and might use these parts to fill occluded areas. The result is a poor true orthophoto with plenty of ghost images left.

Figure 8.4: (a) shows the DEM merged with the visibility mask. (b) points out the displacement of the visibility mask. Due to the different sources of DEM and ortho image and the poor accuracy at sharp edges, the objects tend to be smaller.

8.5 Considering all images for Nearest Feature Transform

It was interesting to see what the mosaic pattern would look like if enblend [Md04] considered all images at the same time instead of two at a time. As expected, the image would be even more mosaicked, because for every pixel the image farthest from any blindspot is taken. The result could be poorer than with the current approach, since even single pixels, instead of whole areas, would be taken from different images. A master image cannot be clearly identified, as figure 8.5 shows, but the order in which the images were considered can be made out. To decrease the mosaicking, including the distance to the nadir, as mentioned in [Nie04], could be helpful.

8.6 Summary

The implemented methods were tested on different sets of source images and elevation models. The overall result is good, and the small remaining errors were pointed out in this chapter. The significant errors are all caused by limitations or inaccuracies of the used DEM.

Figure 8.5: Illustration of a Nearest Feature Transform of 15 images.

Sufficient overlap and available perspectives are crucial to avoid remaining occluded areas.


ifp Universität Stuttgart Performance of IGI AEROcontrol-IId GPS/Inertial System Final Report Universität Stuttgart Performance of IGI AEROcontrol-IId GPS/Inertial System Final Report Institute for Photogrammetry (ifp) University of Stuttgart ifp Geschwister-Scholl-Str. 24 D M. Cramer: Final report

More information

MASI: Modules for Aerial and Satellite Imagery

MASI: Modules for Aerial and Satellite Imagery MASI: Modules for Aerial and Satellite Imagery Product Descriptions and Typical Applied Cases Dr. Jinghui Yang jhyang@vip.163.com Sept. 18, 2017 File Version: v1.0 VisionOnSky Co., Ltd. Contents 1 Descriptions

More information

By Colin Childs, ESRI Education Services. Catalog

By Colin Childs, ESRI Education Services. Catalog s resolve many traditional raster management issues By Colin Childs, ESRI Education Services Source images ArcGIS 10 introduces Catalog Mosaicked images Sources, mosaic methods, and functions are used

More information

Announcements. Mosaics. How to do it? Image Mosaics

Announcements. Mosaics. How to do it? Image Mosaics Announcements Mosaics Project artifact voting Project 2 out today (help session at end of class) http://www.destination36.com/start.htm http://www.vrseattle.com/html/vrview.php?cat_id=&vrs_id=vrs38 Today

More information

Files Used in this Tutorial

Files Used in this Tutorial Generate Point Clouds and DSM Tutorial This tutorial shows how to generate point clouds and a digital surface model (DSM) from IKONOS satellite stereo imagery. You will view the resulting point clouds

More information

ACCURACY ANALYSIS AND SURFACE MAPPING USING SPOT 5 STEREO DATA

ACCURACY ANALYSIS AND SURFACE MAPPING USING SPOT 5 STEREO DATA ACCURACY ANALYSIS AND SURFACE MAPPING USING SPOT 5 STEREO DATA Hannes Raggam Joanneum Research, Institute of Digital Image Processing Wastiangasse 6, A-8010 Graz, Austria hannes.raggam@joanneum.at Commission

More information

DIGITAL ORTHOPHOTO GENERATION

DIGITAL ORTHOPHOTO GENERATION DIGITAL ORTHOPHOTO GENERATION Manuel JAUREGUI, José VÍLCHE, Leira CHACÓN. Universit of Los Andes, Venezuela Engineering Facult, Photogramdemr Institute, Email leirac@ing.ula.ven Working Group IV/2 KEY

More information

Map Compilation CHAPTER HISTORY

Map Compilation CHAPTER HISTORY CHAPTER 7 Map Compilation 7.1 HISTORY Producing accurate commercial maps from aerial photography began in the 1930s. The technology of stereomapping over the last 70 years has brought vast technological

More information

TO Ka Yi, Lizzy 6 May

TO Ka Yi, Lizzy 6 May TO Ka Yi, Lizzy 6 May 2017 1 Contents Basic concepts Practical Issues Examples 2 Data Acquisition Source of Energy Data Products Interpretation & Analysis Digital Propagation through the atmosphere Data

More information

The raycloud A Vision Beyond the Point Cloud

The raycloud A Vision Beyond the Point Cloud The raycloud A Vision Beyond the Point Cloud Christoph STRECHA, Switzerland Key words: Photogrammetry, Aerial triangulation, Multi-view stereo, 3D vectorisation, Bundle Block Adjustment SUMMARY Measuring

More information

Tutorial (Beginner level): Orthomosaic and DEM Generation with Agisoft PhotoScan Pro 1.3 (without Ground Control Points)

Tutorial (Beginner level): Orthomosaic and DEM Generation with Agisoft PhotoScan Pro 1.3 (without Ground Control Points) Tutorial (Beginner level): Orthomosaic and DEM Generation with Agisoft PhotoScan Pro 1.3 (without Ground Control Points) Overview Agisoft PhotoScan Professional allows to generate georeferenced dense point

More information

Chapter 1: Overview. Photogrammetry: Introduction & Applications Photogrammetric tools:

Chapter 1: Overview. Photogrammetry: Introduction & Applications Photogrammetric tools: Chapter 1: Overview Photogrammetry: Introduction & Applications Photogrammetric tools: Rotation matrices Photogrammetric point positioning Photogrammetric bundle adjustment This chapter will cover the

More information

Semi-Automatic Approach for Building Reconstruction Using SPLIT-MERGE-SHAPE Method

Semi-Automatic Approach for Building Reconstruction Using SPLIT-MERGE-SHAPE Method Semi-Automatic Approach for Building Reconstruction Using SPLIT-MERGE-SHAPE Method Jiann-Yeou RAU, Liang-Chien CHEN Tel: 886-3-4227151 Ext. 7651,7627,7622 Fax: 886-3-4255535 {jyrau, lcchen} @csrsr.ncu.edu.tw

More information

OPTIMIZED PATCH BACKPROJECTION IN ORTHORECTIFICATION FOR HIGH RESOLUTION SATELLITE IMAGES

OPTIMIZED PATCH BACKPROJECTION IN ORTHORECTIFICATION FOR HIGH RESOLUTION SATELLITE IMAGES OPTIMIZED PATCH BACKPROJECTION IN ORTHORECTIFICATION FOR HIGH RESOLUTION SATELLITE IMAGES Liang-Chien Chen *, Tee-Ann Teo, Jiann-Yeou Rau Center for Space and Remote Sensing Research, National Central

More information

CORRECTING RS SYSTEM DETECTOR ERROR GEOMETRIC CORRECTION

CORRECTING RS SYSTEM DETECTOR ERROR GEOMETRIC CORRECTION 1 CORRECTING RS SYSTEM DETECTOR ERROR GEOMETRIC CORRECTION Lecture 1 Correcting Remote Sensing 2 System Detector Error Ideally, the radiance recorded by a remote sensing system in various bands is an accurate

More information

AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA

AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA Changjae Kim a, Ayman Habib a, *, Yu-Chuan Chang a a Geomatics Engineering, University of Calgary, Canada - habib@geomatics.ucalgary.ca,

More information

Stereoscopic Models and Plotting

Stereoscopic Models and Plotting Stereoscopic Models and Plotting Stereoscopic Viewing Stereoscopic viewing is the way the depth perception of the objects through BINOCULAR vision with much greater accuracy. رؤيه البعد الثالث و االحساس

More information

Minimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc.

Minimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc. Minimizing Noise and Bias in 3D DIC Correlated Solutions, Inc. Overview Overview of Noise and Bias Digital Image Correlation Background/Tracking Function Minimizing Noise Focus Contrast/Lighting Glare

More information

DENSE 3D POINT CLOUD GENERATION FROM UAV IMAGES FROM IMAGE MATCHING AND GLOBAL OPTIMAZATION

DENSE 3D POINT CLOUD GENERATION FROM UAV IMAGES FROM IMAGE MATCHING AND GLOBAL OPTIMAZATION DENSE 3D POINT CLOUD GENERATION FROM UAV IMAGES FROM IMAGE MATCHING AND GLOBAL OPTIMAZATION S. Rhee a, T. Kim b * a 3DLabs Co. Ltd., 100 Inharo, Namgu, Incheon, Korea ahmkun@3dlabs.co.kr b Dept. of Geoinformatic

More information

TRIMBLE BUSINESS CENTER PHOTOGRAMMETRY MODULE

TRIMBLE BUSINESS CENTER PHOTOGRAMMETRY MODULE TRIMBLE BUSINESS CENTER PHOTOGRAMMETRY MODULE WHITE PAPER TRIMBLE GEOSPATIAL DIVISION WESTMINSTER, COLORADO, USA July 2013 ABSTRACT The newly released Trimble Business Center Photogrammetry Module is compatible

More information

ON THE USE OF MULTISPECTRAL AND STEREO DATA FROM AIRBORNE SCANNING SYSTEMS FOR DTM GENERATION AND LANDUSE CLASSIFICATION

ON THE USE OF MULTISPECTRAL AND STEREO DATA FROM AIRBORNE SCANNING SYSTEMS FOR DTM GENERATION AND LANDUSE CLASSIFICATION ON THE USE OF MULTISPECTRAL AND STEREO DATA FROM AIRBORNE SCANNING SYSTEMS FOR DTM GENERATION AND LANDUSE CLASSIFICATION Norbert Haala, Dirk Stallmann and Christian Stätter Institute for Photogrammetry

More information

Merging LiDAR Data with Softcopy Photogrammetry Data

Merging LiDAR Data with Softcopy Photogrammetry Data Merging LiDAR Data with Softcopy Photogrammetry Data Cindy McCallum WisDOT\Bureau of Technical Services Surveying & Mapping Section Photogrammetry Unit Overview Terms and processes Why use data from LiDAR

More information

HEIGHT GRADIENT APPROACH FOR OCCLUSION DETECTION IN UAV IMAGERY

HEIGHT GRADIENT APPROACH FOR OCCLUSION DETECTION IN UAV IMAGERY HEIGHT GRADIENT APPROACH FOR OCCLUSION DETECTION IN UAV IMAGERY H. C. Oliveira a, A. F. Habib b, A. P. Dal Poz c, M. Galo c a São Paulo State University, Graduate Program in Cartographic Sciences, Presidente

More information

L7 Raster Algorithms

L7 Raster Algorithms L7 Raster Algorithms NGEN6(TEK23) Algorithms in Geographical Information Systems by: Abdulghani Hasan, updated Nov 216 by Per-Ola Olsson Background Store and analyze the geographic information: Raster

More information

Photogrammetric Procedures for Digital Terrain Model Determination

Photogrammetric Procedures for Digital Terrain Model Determination Photogrammetric Procedures for Digital Terrain Model Determination Hartmut ZIEMANN and Daniel GROHMANN 1 Introduction Photogrammetric procedures for digital terrain model (DTM) data determination fall

More information

Creating an Event Theme from X, Y Data

Creating an Event Theme from X, Y Data Creating an Event Theme from X, Y Data In Universal Transverse Mercator (UTM) Coordinates Eastings (measured in meters) typically have 6 digits left of the decimal. Northings (also in meters) typically

More information

ENVI Automated Image Registration Solutions

ENVI Automated Image Registration Solutions ENVI Automated Image Registration Solutions Xiaoying Jin Harris Corporation Table of Contents Introduction... 3 Overview... 4 Image Registration Engine... 6 Image Registration Workflow... 8 Technical Guide...

More information

NATIONWIDE POINT CLOUDS AND 3D GEO- INFORMATION: CREATION AND MAINTENANCE GEORGE VOSSELMAN

NATIONWIDE POINT CLOUDS AND 3D GEO- INFORMATION: CREATION AND MAINTENANCE GEORGE VOSSELMAN NATIONWIDE POINT CLOUDS AND 3D GEO- INFORMATION: CREATION AND MAINTENANCE GEORGE VOSSELMAN OVERVIEW National point clouds Airborne laser scanning in the Netherlands Quality control Developments in lidar

More information