TRUE-ORTHOPHOTO GENERATION FROM UAV IMAGES: IMPLEMENTATION OF A COMBINED PHOTOGRAMMETRIC AND COMPUTER VISION APPROACH


L. Barazzetti a,*, R. Brumana a, D. Oreni a, M. Previtali a, F. Roncoroni b

a Politecnico di Milano, Department of Architecture, Built Environment and Construction Engineering (ABC), Via Ponzio 31, 20133 Milano, Italy - (luigi.barazzetti, raffaella.brumana, daniela.oreni, mattia.previtali)@polimi.it
b Politecnico di Milano, Polo Territoriale di Lecco, Via Previati 1/c, 23900 Lecco, Italy - fabio.roncoroni@polimi

* Corresponding author.

Commission V

KEY WORDS: Automation, Matching, True-Orthophoto, UAV

ABSTRACT:

This paper presents a photogrammetric methodology for true-orthophoto generation with images acquired from UAV platforms. The method is an automated multistep workflow made up of three main parts: (i) image orientation through feature-based matching and collinearity equations / bundle block adjustment, (ii) dense matching with correlation techniques able to manage multiple images, and (iii) true-orthophoto mapping for 3D model texturing. It allows automated data processing of sparse blocks of convergent images in order to obtain a final true-orthophoto where problems such as self-occlusions, ghost effects, and multiple texture assignments are taken into consideration. The different algorithms are illustrated and discussed along with a real case study concerning the UAV flight over the Basilica di Santa Maria di Collemaggio in L'Aquila (Italy). The final result is a rigorous true-orthophoto used to inspect the roof of the Basilica, which was seriously damaged by the earthquake in 2009.

Fig. 1. Some phases of the true-orthorectification process with the UAV images acquired over the Basilica di Santa Maria di Collemaggio. From left to right: image orientation, dense model generation, true-orthophoto mapping and 3D reprojection on a BIM.

1. INTRODUCTION

Orthophotos are common products of photogrammetric applications. They are useful for both expert operators and beginners because they combine geometry and photorealism in order to provide a metric visualization of the area. Aerial orthophotos, i.e. those generated from expensive airborne sensors, can be created following the standard photogrammetric pipeline that comprises (i) image orientation, (ii) re-projection with a digital terrain model, and (iii) image mosaicking. Image orientation is carried out starting from a set of ground control points and tie points and a mathematical formulation based on collinearity equations (bundle block adjustment). Digital terrain models (DTMs) can already be available, or they can be generated from images (with dense image matching techniques) or from LiDAR data. Although aerial orthophotos do not provide 3D information, they can be generated following the described production chain, which can be assumed as a proven technique for most aerial surveys. In addition, the great success of web services (e.g. Google Earth, Google Maps, Bing Maps, etc.) has increased the demand for orthophotos (and the success of photogrammetric applications), leading to the development of new algorithms and sensors. It is well known that orthophoto quality depends on image resolution, accuracy of camera calibration and orientation, and DTM accuracy (Kraus, 2007). As digital cameras produce high resolution images (centimetre level), one of the most important consequences in orthophoto production concerns the spatial resolution of the DTM: standing objects (e.g. buildings, vegetation, etc.) have a radial displacement in the final orthophoto. The spatial error of orthophotos becomes more significant in the case of images gathered with UAV platforms.
This is mainly due to their better geometric resolution and the details visible in flights at lower altitudes (Eisenbeiss, 2008; 2009). Here, breaklines and discontinuities become more important and influence the quality of the final orthophoto. A possible solution (for the case of aerial images) was proposed by Amhar et al. (1998). The product was termed true-orthophoto and can be generated by using a digital surface model (DSM). From a theoretical point of view, the generation of a true-orthophoto does not significantly differ from that of classical orthophotos.

On the other hand, a true-orthophoto raises additional issues, among which the handling of occlusions. Indeed, true-orthophotos must consider the different standing objects and must be able to manage their multiple self-occlusions in order to avoid ghost effects. A solution is the combined use of images captured from different points of view (Rau et al., 2002; Biasion et al., 2004) so that areas occluded in some images can be filled by other views. Different implementations and research activities were carried out on this topic, obtaining different methods based on variable data sources (e.g. 3D City Models, spatial databases, Dense Digital Surface Models (DDSMs), dense image matching from multiple aerial images, etc.; see Brown, 2003; Dequal and Lingua, 2004; Schickler, 1998; Barazzetti et al., 2008; 2010a).

This paper illustrates a rigorous photogrammetric procedure able to produce a true-orthophoto from a set of unoriented images acquired with a UAV platform. It is based on the preliminary creation of a detailed model of the object with dense image matching techniques. The 3D model can be textured with the different images (a colour correction and a self-occlusion algorithm are used). The last step is the projection of the 3D object on the reference plane for true-orthorectification. More theoretical details along with a real case study are given in the following sections.

2. ALGORITHM AND DATA OVERVIEW

The initial requirement to run the algorithm consists in a set of images (and their camera calibration parameters) and some ground control points (with both image and ground coordinates) used to fix the datum and control network deformations. The method is a sequence of in-house algorithms able to derive the orientation parameters of the images, create a 3D model of the object by means of a 3D mesh, reproject the different images according to specific constraints, and perform true-orthophoto mapping.

The case study presented is a survey in an urban area (L'Aquila) and is part of a project for the restoration of the Basilica di Santa Maria di Collemaggio. The 2009 L'Aquila earthquake caused serious damage to the Basilica (Fig. 2) and a restoration work is currently in progress. The photogrammetric survey of the Basilica was carried out with the UAV platform AscTec Falcon 8. The system is equipped with an RGB camera Sony NEX-5N, photogrammetrically calibrated. The Falcon 8 (70 cm x 60 cm, weight 2 kg) has 8 motors and is able to fly up to 20 minutes with a single battery. The electronic equipment includes a GPS antenna and a system of accelerometers determining the system roll, pitch and yaw. The communication system allows the ground station to receive telemetry data and video signals from the on-board sensors. The average flying height over the Basilica was 60 m (Fig. 3), obtaining a pixel size (on the roof) of about 13.5 mm, i.e. more than sufficient to obtain a true-orthophoto with scale factor 1:100 (see the short cross-check after the package list below).

Fig. 3. The UAV flight over the Basilica di Collemaggio.

The whole photogrammetric block is made up of 52 images acquired with the software AscTec AutoPilot Control. The software allows the operator to import a georeferenced image where waypoints can be added (manually or in an automated way by defining the overlap). The flight plan is then transferred to the Falcon, which flies autonomously (the user only has to take off and land). The global pipeline for photogrammetric image processing can be synthetically described as a multistep process made up of the following phases, processed with the implemented packages: image orientation (ATiPE), mesh generation (MGCM+), and true-orthophoto mapping (ToMap).
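As a cross-check of the ground pixel size of about 13.5 mm quoted above for the 60 m flying height, the ground sampling distance (GSD) can be estimated from the camera geometry. The short sketch below is illustrative only: the focal length and pixel pitch are plausible values for a Sony NEX-5N class camera, not parameters reported in the paper.

```python
# Illustrative ground sampling distance (GSD) check, not the authors' code.
# The pixel pitch and focal length below are assumed, typical values for a
# Sony NEX-5N class camera; they are NOT taken from the paper.
def ground_sampling_distance(flying_height_m, focal_length_mm, pixel_pitch_um):
    """GSD = pixel pitch * flying height / focal length (metres per pixel)."""
    return (pixel_pitch_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

gsd = ground_sampling_distance(flying_height_m=60.0,
                               focal_length_mm=21.0,   # assumed lens setting
                               pixel_pitch_um=4.8)     # assumed sensor pitch
print(f"GSD = {gsd * 1000:.1f} mm per pixel")          # about 13.7 mm
```

With these assumed values the GSD comes out close to the 13.5 mm figure reported in the paper.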
These packages allow the user to automate the typical photogrammetric chain and provide rigorous photogrammetric reconstructions with the same level of automation as computer vision software. In addition, they give all the statistics needed to inspect the result and have dedicated modules to check global accuracy (e.g. the use of check points). More details about the different steps and their implementation are reported in the following sections.

Fig. 2. Some Google Earth images acquired in 2006, 2009 (after the earthquake), and 2011.

3. IMAGE ORIENTATION

Exterior orientation (EO) parameters are estimated with the ATiPE algorithm (Barazzetti et al., 2010b), which performs automated image matching and photogrammetric bundle block adjustment via collinearity equations. The input elements of ATiPE are the images, the full set of interior orientation parameters, and a visibility map between the images (optional). All images are normally used with their calibration parameters in order to avoid self-calibration, which is generally not appropriate and reliable in practical 3D modelling projects (Fraser, 1997; Remondino and Fraser, 2006; Cronk et al., 2006). The visibility map might contain information about the overlap between all images and can be derived (i) from GNSS/INS data with an approximate DTM/DSM or (ii) with a preliminary and quick orientation procedure performed on low resolution images.
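The GNSS/INS option (elaborated further in the next section) can be sketched under the simplifying assumption of a planar DSM: project each image footprint onto the ground plane and keep the pairs whose footprints overlap. The snippet below is only an illustration of this idea, not the ATiPE implementation; the shapely dependency, the camera conventions and the 20% overlap threshold are assumptions.

```python
# Sketch: estimate which image pairs overlap, given exterior orientation and a
# planar "DSM" (Z = const). Not the ATiPE code; all conventions are assumed.
import itertools
import numpy as np
from shapely.geometry import Polygon

def footprint_on_plane(X0, R, c, sensor_w, sensor_h, z_ground):
    """Project the four image corners onto the plane Z = z_ground.
    X0: camera centre (3,); R: world-to-camera rotation (3, 3);
    c, sensor_w, sensor_h: principal distance and sensor size (same units).
    Assumes the camera actually looks towards the ground plane."""
    corners = [(-sensor_w / 2, -sensor_h / 2), (sensor_w / 2, -sensor_h / 2),
               (sensor_w / 2, sensor_h / 2), (-sensor_w / 2, sensor_h / 2)]
    pts = []
    for x, y in corners:
        ray = R.T @ np.array([x, y, -c])        # viewing ray in the world frame
        t = (z_ground - X0[2]) / ray[2]         # intersection with Z = z_ground
        pts.append((X0[0] + t * ray[0], X0[1] + t * ray[1]))
    return Polygon(pts)

def visibility_map(cameras, c, sensor_w, sensor_h, z_ground, min_overlap=0.2):
    """Return the index pairs whose ground footprints overlap sufficiently."""
    polys = [footprint_on_plane(X0, R, c, sensor_w, sensor_h, z_ground)
             for X0, R in cameras]
    pairs = []
    for i, j in itertools.combinations(range(len(polys)), 2):
        overlap = polys[i].intersection(polys[j]).area
        if overlap / min(polys[i].area, polys[j].area) >= min_overlap:
            pairs.append((i, j))
    return pairs
```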

Points are matched with the SIFT (Lowe, 2004) or SURF (Bay et al., 2008) operators along with the robust estimation of the fundamental matrix for outlier rejection (see Fig. 4).

Fig. 4. The flowchart of the image orientation phase: input data (images and calibration parameters), compression and RGB-to-grey conversion, visibility map creation (from images or from GPS/INS and DSM), creation of the M image pairs, keypoint detection (SIFT/SURF), keypoint matching (quadratic or kd-tree), outlier detection (robust estimation of the fundamental/essential matrix), image pair concatenation, point reduction, and bundle adjustment of the image coordinates.

The use of FBM (feature-based matching) techniques like SIFT and SURF for the detection of image points allows data processing of complex close-range blocks. A large variety of deformations, for instance scale variations, radiometric changes, convergent angles, and wide baselines, can be taken into consideration in order to obtain a good set of image points. Normally, the image points detected in a fully automated way are more than sufficient to estimate the EO parameters. However, two opposite situations could occur: (i) a great number of image points is the final result of the feature-based matching; (ii) the image block is composed of tens of images, which must be progressively analyzed.

The former, which seems a good result, has a significant drawback: if too many points are used during the bundle adjustment, it is impossible to obtain a solution due to the high computational cost. This is the usual case of well-textured bodies with images having the typical aerial configuration (e.g. the UAV block proposed in this paper). Here, the camera is translated and rotated around its optical axis during the acquisition of the images. SIFT and SURF are completely invariant to these effects and often provide too many points, more than those strictly necessary for a traditional manual orientation. These problems are also increased by the use of high resolution images, processed without any preliminary geometric image compression. To overcome this drawback, an ad-hoc procedure for tie point decimation was implemented. After the matching of all image pair combinations, points can be reduced according to their multiplicity (i.e. the number of images in which the same point can be matched). A regular grid is projected onto each image, and for each cell only the point with the highest multiplicity is stored. Obviously, the same point must be kept for the other images.

The second limit listed here is related to the number of images. For blocks made up of several tens of photos the CPU time can significantly increase. In fact, (n² − n)/2 image pair combinations must be analyzed for a block of n images, with a consequent processing time proportional to the global number of combinations. However, only a limited number of pairs share tie points, therefore the remaining ones should be removed. The method used to discard these useless couples of images is a visibility map, which must be estimated at the beginning of data processing. As mentioned, the visibility map contains the connections between all image pairs sharing tie points, and can be estimated as follows: (i) visibility from images: if high-resolution images are employed, a preliminary processing is rapidly performed with compressed images (e.g., less than 2 Mpixel); this provides the image combinations of the whole block, and the same matching procedure is then repeated with the original images taking into account the produced map; (ii) visibility from GNSS/INS data: these values, combined with an approximate DSM of the area, allow the estimation of the overlap between the images; this method is faster than the previous one, but it can only be applied to images with a configuration similar to that of an aerial block, and in some cases the DSM can be approximated with a plane.

The orientation results for the image block acquired over the Basilica di Collemaggio are shown in Fig. 5.

Fig. 5. Camera position / attitude and 3D points after the image orientation step.

The block is similar to a classical aerial one (although some convergent images were acquired) and has several strips with a total of 52 images, which produce 1,326 image combinations. For each image pair, the SIFT keypoints were compared in less than 5 seconds with a kd-tree approach.
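For a single image pair, the kind of keypoint matching with robust fundamental-matrix outlier rejection described above can be sketched with OpenCV as follows. This stands in for, but is not, the ATiPE code; the ratio-test and RANSAC thresholds are assumed values.

```python
# Sketch of pairwise SIFT matching with robust fundamental-matrix outlier
# rejection (OpenCV). Illustrative only; thresholds are assumed values.
import cv2
import numpy as np

def match_pair(path_a, path_b, ratio=0.8, ransac_px=1.0):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # kd-tree (FLANN) matching followed by Lowe's ratio test
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 64})
    good = []
    for pair in flann.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # robust estimation of the fundamental matrix rejects the remaining outliers
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, ransac_px, 0.999)
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers], F
```

In a full block this routine would be run only on the image pairs retained by the visibility map, rather than on all (n² − n)/2 combinations.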

Eighteen ground control points (targets and points on the roof measured by means of a total station) were included in the adjustment to fix the datum. The sigma-naught after the Least Squares adjustment (collinearity equations are the mathematical model employed; Granshaw, 1980; Mikhail et al., 2001) is ±0.67 pixels, whereas the RMS of the image coordinates was 0.67 pixels. The RMSE values on 5 check points were 5.3 mm (X), 6.1 mm (Y), and 11.2 mm (Z). This first algorithm completes exterior orientation parameter estimation (along with 3D coordinates, i.e. a preliminary sparse reconstruction) used to run the dense matching phase.

4. MESH GENERATION

Images, exterior orientation and additional parameters allow one to run the implemented algorithm for mesh generation from multiple images (see Fig. 6 for the flowchart). The algorithm, coined MGCM+, combines (i) LSM based on intensity observations (Gruen, 1985) with (ii) collinearity conditions used as geometrical constraints for the determination of all object point coordinates (Baltsavias, 1991; Gruen and Baltsavias, 1988). The use of the collinearity constraint and the opportunity to simultaneously match multiple scenes increase matching reliability. The adaptation synthetically described here deals with 3D blocks of convergent images (even 360° scenes).

Fig. 6. The flowchart of the dense matching phase: input data (images, EO and additional parameters), generation of an initial model (exterior orientation parameters and initial seed points), a check on whether enough seed points are available (if not, patch densification via PMVS to obtain a dense seed model), model segmentation, DSM/TIN generation of the initial approximate model, and MGCM matching producing the dense point cloud and mesh.

An important requirement to run dense matching is an approximate model of the object. The importance of a good seed model is remarkable not only for the geometric quality of the final product, but also in terms of CPU time, because it can limit the search along the 3D projective ray given by the collinearity equations. This results in a reduction of the number of trials during the translation of the correlation window. An initial seed model can be derived from the 3D coordinates of the tie points extracted during the matching and orientation steps. An alternative solution is instead the use of a package for multi-view stereo (Hiep et al., 2009; Hirschmueller, 2008) that does not need initial approximations. Both solutions are available in our MGCM+ implementation: the method runs the patch-based matching approach proposed by Furukawa and Ponce (2010). Their procedure was incorporated into a new matching pipeline in order to generate a low resolution initial model. This choice is motivated by the robustness of the method, which combines multiple images during the dense matching step: if at least three images are processed simultaneously, blunders and spurious points can be removed by analyzing the local data redundancy.

The MGCM+ algorithm assumes an affine deformation between the template and each slave. The relationship describing the intensity value of each pixel in the template is given by the discrete function f(x,y), and the n slaves are represented by the functions g_1(x,y), ..., g_n(x,y). An intensity observation equation for each pixel of the template and the corresponding pixel on the slave i is written as follows:

f(x,y) - e_i(x,y) = g_i^0(x,y) + g_{x_i} da_{10i} + g_{x_i} x da_{11i} + g_{x_i} y da_{12i} + g_{y_i} da_{20i} + g_{y_i} x da_{21i} + g_{y_i} y da_{22i}    (1)

where the unknown quantities are the corrections da_{ji} to the parameters of the affine transformation. The coefficient g_i^0(x,y) is the observed value in the approximate position on the slave, while g_{x_i} and g_{y_i} are the partial derivatives of the function g(x,y). The function e_i(x,y) gives the residual error with respect to the affine model.
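For one template/slave patch pair, the intensity observations of eq. (1) can be assembled into a small least-squares problem as sketched below. This is a bare single-iteration illustration under assumed conventions (numpy gradients, slave already resampled at the approximate position, no collinearity coupling), not the MGCM+ implementation.

```python
# Sketch: one least-squares iteration of the affine LSM observations of eq. (1)
# for a single template/slave patch pair. Not the MGCM+ implementation.
import numpy as np

def lsm_affine_step(template, slave_patch):
    """template, slave_patch: equally sized 2D arrays (the slave already
    resampled at the approximate affine position). Returns the corrections
    (da10, da11, da12, da20, da21, da22) to the affine parameters."""
    h, w = template.shape
    y, x = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(slave_patch.astype(float))   # g_y and g_x derivatives

    # one row per pixel: [g_x, g_x*x, g_x*y, g_y, g_y*x, g_y*y]
    A = np.column_stack([gx.ravel(), (gx * x).ravel(), (gx * y).ravel(),
                         gy.ravel(), (gy * x).ravel(), (gy * y).ravel()])
    l = (template.astype(float) - slave_patch.astype(float)).ravel()   # f - g0

    da, *_ = np.linalg.lstsq(A, l, rcond=None)
    return da   # used to update the affine transform and resample the slave again
```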
The MGCM+ algorithm combines the intensity observation equations (1) with the collinearity condition. In fact, for a pinhole (central perspective) image the constraint between the generic object point X_P = [X_P Y_P Z_P]^T and its corresponding 2D point (x_p, y_p) on the image is given by the well-known collinearity equations:

x_p = -c \frac{r_1^T (X_P - X_0)}{r_3^T (X_P - X_0)} = F_x, \qquad y_p = -c \frac{r_2^T (X_P - X_0)}{r_3^T (X_P - X_0)} = F_y    (2)

where c is the principal distance, X_0 is the vector expressing the perspective centre coordinates, and R = [r_1 r_2 r_3]^T is the rotation matrix. Image coordinates (x_p, y_p) are computed with respect to the principal point. If both interior and exterior orientation (EO) parameters are known, the unknown parameters are the shifts (Δx, Δy) and the object point coordinates X_P. After a preliminary linearization, the observation equations become:

\frac{\partial F_x}{\partial X} dX + \frac{\partial F_x}{\partial Y} dY + \frac{\partial F_x}{\partial Z} dZ + F_x(0) - x_p - \Delta x_p = 0, \qquad \frac{\partial F_y}{\partial X} dX + \frac{\partial F_y}{\partial Y} dY + \frac{\partial F_y}{\partial Z} dZ + F_y(0) - y_p - \Delta y_p = 0    (3)
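Eq. (2) translates almost directly into code, as in the sketch below; the rotation convention and the signs are assumptions, and the partial derivatives needed in eq. (3) can be obtained by differentiating (analytically or numerically) the same function. This is an illustration, not the authors' implementation.

```python
# Sketch: collinearity projection of eq. (2). Conventions (rotation direction,
# sign of the principal distance) are assumed, not taken from the paper.
import numpy as np

def collinearity_project(X_p, X_0, R, c):
    """Project object point X_p (3,) into an image with perspective centre X_0,
    world-to-camera rotation R (rows r1, r2, r3) and principal distance c.
    Returns (x_p, y_p) relative to the principal point."""
    d = R @ (np.asarray(X_p, float) - np.asarray(X_0, float))
    x_p = -c * d[0] / d[2]    # F_x
    y_p = -c * d[1] / d[2]    # F_y
    return x_p, y_p
```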

The shifts allow one to link both sets of equations (1) and (3), because Δx_p = da_{10} and Δy_p = da_{20} for the same set of images and point P. Therefore, the resulting joint system can be solved using conventional Least Squares methods.

One of the most important aspects of MGCM+ is the opportunity to work with multiple convergent images. This means that an interactive selection of the master (and slaves) is carried out during data processing: the master changes with the different portions of the object. The initial seed model is split into several sub-regions from which a mobile master approach provides multiple point clouds in the same reference system (Fig. 7). The different sub-regions are independently processed and the final point clouds are then merged (this solution is also well suited to parallel computing). The automated choice of the master was implemented by considering different issues. First of all, a selection based on the information derived from the approximate model is accomplished. For a specified 3D point, all images in which the point is visible are included with a simple back-projection. The selection of the master is then carried out inside this set.

Fig. 7. Densification of the seed model (left) and selection of the master image, i.e. image CS4 in this case (right).

The local normal direction is compared to the optical axes of all images where the point is visible. The image whose optical axis is closest to the surface normal direction is chosen as master. This strategy can easily handle objects that are not completely visible in a single image, without requiring intermediate user interaction. Fig. 8 shows the dense matching result for the Basilica. The point cloud (top) is interpolated to obtain a mesh (bottom), i.e. a continuous surface representation for texture mapping.

Fig. 8. The point cloud (top) and the final mesh (bottom).

5. TRUE-ORTHOPHOTO MAPPING

The last step of the proposed pipeline is carried out with the ToMap (True-orthophoto Mapping) package implemented by the authors. As described in the previous paragraph, the input data for this algorithm are the images along with their EO parameters and a detailed model (Fig. 9). The implemented procedure for RGB texture-mapping can be split into two almost independent steps: a geometric and a radiometric part. The geometric part includes both visibility analysis and texture assignment. The radiometric adjustment concerns colour/brightness correction.

Fig. 9. The flowchart for true-orthophoto generation: input data (images, mesh, EO and additional parameters), visibility analysis, texture assignment, colour correction, and true-orthophoto generation (final GeoTIFF image).

Starting from an object modelled with a mesh, the visibility analysis is carried out to detect occluded areas. If a triangle T0 is not visible from viewpoint Ij, there is at least an occluding triangle Ti. The occlusion can be detected by an intersection between the two back-projected triangles in the image space. The problem relies on the identification of both occluded and occluding triangles through the reciprocal distance between the vertices of the triangles and the image projection centre. The visibility algorithm has a direct impact on CPU time because the check should be repeated for all the triangles of the mesh, leading to a complexity O(n²). A strategy able to reduce the number of triangles uses two different procedures: view frustum culling and back-facing culling. The view frustum culling (Fig. 10) is based on the preliminary identification of the triangles outside the camera view frustum. An additional reduction of CPU time can be obtained with the back-facing culling, which exploits the normal vector of the different triangles and the optical axis of the oriented images.

Fig. 10. View frustum and back-facing culling principles (only the blue part of the box must be textured).
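The two culling tests can be sketched as follows, mirroring the projection of eq. (2); the winding convention for the triangle normals and the in-front test are assumptions for illustration, not the ToMap code.

```python
# Sketch of the two triangle-culling tests (view frustum and back-facing).
# Illustrative only, not the ToMap implementation; conventions are assumed.
import numpy as np

def backface_cull(vertices, faces, X_0):
    """Keep triangles whose outward normal (counter-clockwise winding assumed)
    points towards the camera centre X_0."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)              # un-normalised face normals
    view = X_0 - (v0 + v1 + v2) / 3.0                 # centroid-to-camera vectors
    return np.einsum("ij,ij->i", normals, view) > 0   # True = facing the camera

def frustum_cull(vertices, faces, X_0, R, c, half_w, half_h):
    """Keep triangles with at least one vertex projecting inside the image format
    (half_w, half_h: half the sensor size, same units as c)."""
    keep = np.zeros(len(faces), dtype=bool)
    for k, tri in enumerate(faces):
        for v in vertices[tri]:
            d = R @ (v - X_0)
            if d[2] >= 0:                  # behind the camera (assumed convention)
                continue
            x, y = -c * d[0] / d[2], -c * d[1] / d[2]
            if abs(x) <= half_w and abs(y) <= half_h:
                keep[k] = True
                break
    return keep
```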

The second part of the texture-mapping algorithm concerns colour correction. First of all, the best image for a specific triangle must be determined. Two texture quality parameters are considered: the resolution of the image in object space and the camera viewing direction. The image where the quality parameters reach a maximum is used as the main texture source. Texture coordinates are calculated by back-projecting the triangle coordinates from object space to the selected image. As the algorithm treats the different triangles separately, the brightness levels can differ and colour differences can be found in the textured model. The inhomogeneity of the texture can be reduced using the colour/brightness correction. The radiometric correction is not performed in the traditional RGB space: the L*a*b* colour space is chosen instead. Indeed, the L*a*b* colour space is designed to approximate human vision. In particular, the L component matches the human perception of lightness, whereas the a and b channels define the colour plane. The colour/brightness correction exploits the overlapping areas between two (or more) images. Homologous points are used to estimate the whole correction with an interpolation. The results for the Basilica are shown in Fig. 11.

Starting from the textured digital model, the true-orthophoto is obtained by projecting the model texture on the defined projection plane. In this case the orthographic projection was carried out in the XY plane defined by the local reference system (geodetic network).

Fig. 11. The final true-orthophoto from the UAV project.

The final GSD is 13.5 mm, i.e. more than sufficient to obtain a true-orthophoto with scale factor 1:100. As can be seen, the algorithm detected occluded areas that remain invisible. It avoids the ghost effect and creates black areas in the final true-orthophoto. Finally, the GeoTIFF file format has georeferencing parameters that allow automated processing in BIM environments (Brumana et al., 2013; Oreni et al., 2012; Murphy et al., 2013). Fig. 12 shows the result with the BIM generated from a combined photogrammetric (with close-range images) and laser scanning survey. This BIM is currently one of the most important tools of the restoration work.

Fig. 12. The BIM along with the projected true-orthoimage.

CONCLUSIONS AND OUTLOOKS

This paper presented a three-step solution for the generation of true-orthophotos from a set of UAV images. The different steps are encapsulated into three in-house software packages that allow the operator to complete the typical photogrammetric pipeline for image processing: image orientation, surface reconstruction, and true-orthophoto generation. The case study reported in this paper showed the potential of UAV information combined with rigorous algorithms for data processing. It should be mentioned that the UAV survey of the Basilica di Collemaggio was a powerful tool to inspect the current condition of the roof and complete the terrestrial reconstruction. The final true-orthophoto is one of the boards of the project Ripartire da Collemaggio, financed by Eni (www.eni.com).

The method is not limited to UAV images: it can handle blocks of terrestrial and aerial images that follow the central perspective camera model. As things stand at present, data processing is carried out with RGB images. On the other hand, the UAV survey with the Falcon 8 is not limited to this kind of data: a sequence of thermal images (Fig. 13) was acquired with a FLIR camera. TIR information is currently under investigation in this project (Previtali et al., 2013). The use of a geometrically rectified true-orthophoto offers a support for the precise localization of the TIR images. These images will be taken into consideration in future activities.

Fig. 13. Two thermal images acquired with the Falcon 8.

ACKNOWLEDGEMENTS

This work was supported by the project Ripartire da Collemaggio financed by Eni (www.eni.com).

REFERENCES

Amhar, F., Jansa, J., Ries, C., 1998. The Generation of the True-Orthophotos Using a 3D Building Model in Conjunction With a Conventional DTM. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 32(4), pp. 16-22.

Bay, H., Ess, A., Tuytelaars, T. and van Gool, L., 2008. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3): 346-359.

Baltsavias, E.P., 1991. Multiphoto Geometrically Constrained Matching. Ph.D. thesis, Inst. of Geodesy and Photogrammetry, ETH Zurich, Switzerland, Mitteilungen No. 49, 221 pp.

Barazzetti, L., Brovelli, M., Scaioni, M., 2008. Generation of true-orthophotos with LiDAR dense digital surface models. The Photogrammetric Journal of Finland, Vol. 21, No. 1, pp. 26-34.

Barazzetti, L., Brovelli, M., Valentini, L., 2010a. LiDAR digital building models for true orthophoto generation. Applied Geomatics, Volume 2, Issue 4, pp. 187-196.

Barazzetti, L., Remondino, F. and Scaioni, M., 2010b. Orientation and 3D modelling from markerless terrestrial images: combining accuracy with automation. Photogrammetric Record, 25(132), pp. 356-381.

Biasion, A., Dequal, S., Lingua, A., 2003. A new procedure for the automatic production of true orthophotos. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 35, pp. 538-543.

Brown, J., 2003. Aspects on True-Orthophoto Production. Proc. of Photogrammetric Week '03, Fritsch, D., Hobbie, D. (eds.), Wichmann, Stuttgart, Germany, pp. 205-214.

Brumana, R., Oreni, D., Raimondi, A., Georgopoulos, A., Bregianni, A., 2013. From survey to HBIM for documentation, dissemination and management of built heritage. The case study of St. Maria in Scaria d'Intelvi. 1st International Congress of Digital Heritage, pp. 497-504.

Cronk, S., Fraser, C., Hanley, H., 2006. Automatic metric calibration of colour digital cameras. Photogrammetric Record, 21(116): 355-372.

Dequal, S., Lingua, A., 2004. True orthophoto of the whole town of Turin. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 34(5/C15), pp. 263-268.

Eisenbeiss, H., 2008. The autonomous mini helicopter: a powerful platform for mobile mapping. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 37(B1): 977-983.

Eisenbeiss, H., 2009. UAV photogrammetry. Diss. ETH No. 18515, Institute of Geodesy and Photogrammetry, Zurich, Switzerland, Mitteilungen No. 105, p. 235.

Granshaw, S.I., 1980. Bundle adjustment methods in engineering photogrammetry. Photogrammetric Record, 10(56): 181-207.

Gruen, A., 1985. Adaptive least squares correlation: a powerful image matching technique. South African Journal of Photogrammetry, Remote Sensing and Cartography, 14(3): 175-187.

Gruen, A. and Baltsavias, E.P., 1988. Geometrically Constrained Multiphoto Matching. PE&RS, 54(5), pp. 663-671.

Hiep, V., Keriven, R., Labatut, P., Pons, J., 2009. Towards high-resolution large-scale multi-view stereo. In: Proc. of CVPR 2009, Kyoto, Japan.

Hirschmueller, H., 2008. Stereo processing by semi-global matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2): 328-341.

Kraus, K., 2007. Photogrammetry: Geometry from Images and Laser Scans. Walter de Gruyter, 459 pages.

Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2): 91-110.

Mikhail, E.M., Bethel, J.S., and McGlone, J.C., 2001. Introduction to Modern Photogrammetry. John Wiley & Sons Inc., U.S.A.

Murphy, M., McGovern, E., Pavia, S., 2013. Historic Building Information Modelling: adding intelligence to laser and image based surveys. ISPRS Journal of Photogrammetry and Remote Sensing, 76, pp. 89-102.
Oreni, D., Cuca, B., Brumana, R., 2012. Three-dimensional virtual models for better comprehension of architectural heritage construction techniques and its maintenance over time. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7616 LNCS, pp. 533-542.

Previtali, M., Barazzetti, L., Brumana, R., Roncoroni, F., 2013. Thermographic analysis from UAV platforms for energy efficiency retrofit applications. Journal of Mobile Multimedia, 9(1-2), pp. 66-82.

Rau, J.Y., Chen, N.Y., Chen, L.C., 2002. True Orthophoto Generation of Built-Up Areas Using Multi-View Images. PE&RS, 68(6), pp. 581-588.

Remondino, F. and Fraser, C., 2006. Digital camera calibration methods: considerations and comparisons. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(5): 266-272.

Schickler, W., 1998. Operational Procedure for Automatic True Orthophoto Generation. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 32(4), pp. 527-532.

Furukawa, Y. and Ponce, J., 2010. Accurate, dense, and robust multi-view stereopsis. IEEE Trans. PAMI, 32(8): 1362-1376.

Fraser, C.S., 1997. Digital camera self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing, 52: 149-159.