Object Tracking with LiDAR: Monitoring Taxiing and Landing Aircraft


Zoltan Koppanyi * and Charles K. Toth

Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Ave, Columbus, OH 43210, USA; toth.2@osu.edu
* Correspondence: koppanyi.1@osu.edu

Received: 16 January 2018; Accepted: 1 February 2018; Published: 3 February 2018

Featured Application: Tracking aircraft during taxiing, landing and takeoffs at airports; acquiring statistical data for centerline deviation analysis and veer-off modeling as well as for airport design in general.

Abstract: Mobile light detection and ranging (LiDAR) sensors used in car navigation and robotics, such as Velodyne's VLP-16 and HDL-32E, allow for sensing the surroundings of the platform with high temporal resolution to detect obstacles, track objects and support path planning. This study investigates the feasibility of using LiDAR sensors for tracking taxiing or landing aircraft close to the ground to improve airport safety. A prototype system was developed and installed at an airfield to capture point clouds to monitor aircraft operations. One of the challenges of accurate object tracking with Velodyne sensors is the relatively small vertical field of view (30°, 41.3°) and coarse vertical angular resolution (2°, 1.33°), resulting in a small number of points on the tracked object. The point density decreases with the object–sensor distance, and is already sparse at moderate ranges of 30–40 m. The paper introduces our model-based tracking algorithms, including volume minimization and cube trajectories, to address the optimal estimation of object motion and tracking based on sparse point clouds. Using a network of sensors, multiple tests were conducted at an airport to assess the performance of the demonstration system and the algorithms developed. The investigation focused on monitoring small aircraft moving on runways and taxiways, and the results indicate that velocity and positioning accuracies of better than 0.7 m/s and 17 cm, respectively, were achieved. Overall, based on our findings, this technology is promising not only for aircraft monitoring but also for other airport applications.

Keywords: mobile LiDAR; object tracking; entropy minimization; object recognition; airport safety

1. Introduction

Remote sensing technologies have long been used for mapping the static parts of natural and man-made environments, including the shape of the Earth, transportation infrastructure, and buildings. As sensing technologies continue to advance, more powerful remote sensing systems have been developed [1], allowing us to observe and monitor the dynamic components of the environment. Light detection and ranging (LiDAR), as a remote sensing technology, is already successfully applied to track pedestrian movements [2–4] and ground vehicle traffic from stationary [5] or on-board sensors [6,7]. Similar to ground vehicle or pedestrian traffic monitoring, airfield operations, such as taxiing, takeoffs and landings, can be observed and tracked using LiDAR sensors installed on the airfield. Safety at airports is a serious concern worldwide: based on the Boeing Company's annual report [8], 30% of fatal accidents between 2007 and 2016 occurred during landing and takeoff, and 10% during taxiing or other ground operations. Aircraft motion around airports is tracked by radar,

but airfield surveillance primarily relies on human observations from control towers. Clearly, active remote sensing in general is able to reduce the human error factor and operates 24/7, even in bad visibility conditions [9]. This feasibility study examines the achievable accuracy of a LiDAR-based monitoring system installed at an airfield to track and derive the trajectory of taxiing aircraft. At close range, LiDAR allows for directly capturing a more detailed and accurate 3D representation of a taxiing or landing aircraft than existing radar or camera systems due to its higher spatial resolution. The 3D data enables accurately estimating driving patterns or trajectories, detecting aircraft types, and monitoring aircraft operations in general, and can be applied to pilot education and training. Analysis of a large number of trajectories can be used for creating stochastic models of airplane behavior during various operations, and this information can then be used for airport infrastructure and operation planning. Various researchers have already investigated LiDAR technology for improving airport safety [9,10], and our earlier studies [11,12] were the first attempts to use LiDAR sensors to track aircraft ground movements. As a continuation of our previous work, in this study, we apply a more complex sensor network, rigorously address sensor georeferencing, and, for the first time, track airplanes during landings.

The main contribution of the paper is the introduction of a novel algorithm to derive the motion parameters and extract the trajectory of objects from sparse point clouds. Unlike terrestrial laser scanners, which are able to produce large and detailed point clouds over a long scan time, the mobile LiDAR sensors applied in autonomous vehicle or robotic applications capture sparse point clouds, but with high temporal resolution, typically at 5–20 Hz. These sensors make complete 360° scans around the rotation axis and capture data along multiple planes, producing high resolution in these planes. For instance, the Velodyne VLP-16 [13] and HDL-32E [14] LiDAR sensors used in this study have a 30° and 41.3° field of view, respectively, and their 16 and 32 planes represent angular resolutions of 2° and 1.33°, which is much coarser than the resolution around the rotation axis. These sensor characteristics translate to only a few dozen points per scan captured from a Cessna-sized airplane at a 40-m object–sensor distance. Note that the minimum distance at which sensors may be deployed around runways and taxiways is regulated by airport authorities for safety reasons. Clearly, the challenge is to track aircraft from these low-resolution LiDAR scans. The algorithm addressing this issue and presented in this paper could be of interest for autonomous vehicle applications or robotics to track objects that are farther away from the LiDAR sensor.

The structure of the paper is as follows. In the next Background subsection, a classification and brief description of existing tracking algorithms, such as center of gravity (COG) and iterative closest point (ICP) approaches, is presented. Then, Section 2 presents the hardware components of the demonstration system and the procedure for sensor georeferencing. This section also introduces the proposed algorithms to estimate the parameters of the underlying motion model from point clouds of low point density, including details of the algorithms and implementation challenges. In Section 3, two tests are presented using the demonstration system installed at an airfield. During the two tests, data from taxiing and landing aircraft were captured. Section 4 presents the results of the two tests, followed by Section 5, which provides a performance analysis and comparison of the algorithms as well as qualitative and quantitative error assessments. The paper ends with a conclusion and appendices. The terminology and the most common definitions used throughout the paper can be found in Appendix A.

Background

LiDAR, as well as other sensors used for tracking objects, can be deployed on mobile or static platforms. The first case is the typical setup for mobile mapping, autonomous vehicles, and robotics. This is a challenging case, because the platform is moving, as is the tracked object, and consequently, the sensor has to be accurately georeferenced at all times. In contrast, the static setup is a simpler case, since the sensor georeferencing has to be done only once. This arrangement is commonly used in surveillance or transportation applications for traffic monitoring. In this study, the static sensor configuration is applied.

Several algorithms exist for tracking objects, such as pedestrians or vehicles, in 3D LiDAR data; they can be divided into model-free and model-based approaches according to Wang et al. [15]. The model-free approaches do not assume any properties of the tracked object, such as its shape or the motion model that it follows. In contrast, model-based approaches use prior information about the object to achieve more reliable tracking. Obviously, model-free algorithms are more general, but model-based methods perform better. The methods introduced in this study are model-based and consider the type of the object, assuming certain types of motion models.

The general workflow of object tracking in point clouds includes object segmentation, recognition and tracking. The dynamic elements of each scan are separated from the static background during the segmentation step, and then the object is recognized in the model-based approach [2,3]. Afterwards, the trajectory or motion parameters are extracted from consecutive scans, and thus, the captured points from the moving objects are known epoch-by-epoch. One can easily estimate the trajectory by calculating, for example, the center of gravity (COG) of the segmented points from each scan. Other solutions are based on point cloud registration algorithms. The iterative closest point (ICP) method [16] can be used to estimate, for instance, the six degrees of freedom (rigid body) transformation matrix between consecutive and segmented scans (see, for instance, [17]). The transformation matrices contain the rotation and translation parameters that can be used for computing the trajectory. In another approach, instead of using the whole segmented scan, common feature points or point sets can be extracted from the scans, and the transformation matrix is estimated based on these correspondences [18]. The tracking can be improved by considering previous scans using Bayesian or Markov approaches [18,19], or the final trajectory can be refined with Kalman filtering or other smoothing techniques.

2. Prototype System and Methods

2.1. System Overview

The prototype sensor network consists of sensor stations deployed along the runway or taxiway to be monitored. The sensor stations have Velodyne LiDAR sensors and a GPS antenna, rigidly attached to the column of a standard airport light fixture. Besides georeferencing, GPS provides accurate timing for the LiDAR sensors using the 1PPS (pulse per second) and NMEA (National Marine Electronics Association) GPS signals. In addition, each sensor station has a data logging component, deployed farther away from the runway or taxiway for safety reasons. The adjustable sensor configuration allows for various sensor alignments. For example, Figure 1a,b show horizontally and vertically aligned VLP-16 sensors, respectively, where Z is the rotation axis and κ is the horizontal rotation angle or azimuth, ranging from 0° to 360° with angular resolution Δκ. The vertical and horizontal alignments mean that the sensor's rotation plane is either perpendicular or parallel to the horizontal plane, respectively. The narrower field of view (δ) is 30° and 41.3°, with 2° and 1.33° angular resolution (Δδ), for the VLP-16 and HDL-32E systems, respectively. In the installation shown in Figure 1c, the left side shows two vertically aligned VLP-16s and one horizontally aligned HDL-32E scanner, while the right side shows two vertically aligned VLP-16 scanners. The alignment of the sensors was optimized to cover the largest field of view, i.e., to capture the maximum number of points from the tracked object for a given number of scanners. A simulation study on this subject was conducted using Blender (see [12,20]). Since the Velodyne VLP-16 sensors have a smaller vertical field of view with coarser angular resolution, these sensors are aligned vertically. While with this arrangement the horizontal coverage of the runway is shorter, more points can be captured from the aircraft body. The HDL-32E LiDAR is installed horizontally, due to its better vertical angular resolution and larger field of view. The minimum distance at the airport where our tests were conducted was specified as 20 m and 30 m next to taxiways and runways, respectively.

The two sensor stations, 40 m apart from each other, provide an effective runway coverage of about 70 m (see Figure 1d).

Figure 1. Sensor system overview: (a) horizontally and (b) vertically aligned VLP-16 sensor, (c) components of the demonstration sensor network, including an installation at the airfield utilizing three VLP-16 scanners and one HDL-32E, and (d) sensor coverage (the sensors' fields of view).
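To make the sparsity problem concrete before turning to georeferencing, the expected number of returns from an aircraft can be approximated from the sensor geometry alone. The following is a minimal sketch under simplifying assumptions (a flat, fully visible target perpendicular to the line of sight); the fuselage dimensions and the 0.2° azimuth resolution are illustrative assumptions, not measured values from this study.

```python
import math

def expected_points(width_m, height_m, distance_m, res_az_deg, res_el_deg):
    """Rough count of LiDAR returns from a flat width x height target,
    assuming the target faces the sensor and nothing occludes it."""
    # Angular extent of the target as seen from the sensor
    az_extent = math.degrees(2 * math.atan(width_m / (2 * distance_m)))
    el_extent = math.degrees(2 * math.atan(height_m / (2 * distance_m)))
    # Number of beams falling into that angular window
    return max(1, int(az_extent / res_az_deg)) * max(1, int(el_extent / res_el_deg))

# Illustrative Cessna-sized side profile (~8 m x 2 m) seen by a VLP-16
# (0.2 deg resolution around the rotation axis, 2 deg across the 16 planes)
for d in (20, 30, 40):
    print(f"{d} m: ~{expected_points(8.0, 2.0, d, 0.2, 2.0)} points per scan")
```

At 40 m, this yields a few dozen points per scan from a single sensor, consistent with the sparse clouds shown later in Figure 3.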

2.2. Sensor Georeferencing

In a sensor network, the point clouds acquired by the individual sensors have to be transformed into a common reference frame that could be local or global. When rigidly connected sensors are used together, the coordinate system of one of the sensors is considered as a local frame, and the relative translations and rotations of the other sensors are estimated. This procedure is called point cloud co-registration. There are many co-registration approaches, and most of these methods require a measurable overlap between the point clouds, which may not be feasible in the system configuration proposed here. Therefore, in contrast to co-registration, the coordinates and rotations of the sensors are independently estimated in a global system. This procedure is generally referred to as sensor georeferencing.

Sensor georeferencing includes the determination of the position and orientation in a reference frame. The coordinates of the LiDAR sensor location are measured with a conventional land surveying technique using a total station. The sensor orientation is usually indirectly estimated by using object space targets. In our case, for estimating the three attitude angles, a 1.8 m × 1.2 m calibration board is used as a target (see Figure 2). This smooth and flat calibration board is large enough to provide a sufficient number of points obtained from it. Prisms attached to the four corners of the board were used to measure their global coordinates with a total station. The board was observed at various locations and orientations in the field of view of the sensors (see Figure 2).

Figure 2. Sensor georeferencing: calibration board with surveying prism (upper left corner) measured with the total station, and the sensor station.

The boresight estimation is based on comparing the total station measured target locations and orientations to the same parameters estimated from the point clouds of the targets. Obviously, there is no chance that the prism locations are measured by the point cloud, but by fitting a plane, the orientation of the board surface can be estimated. Consequently, the normal vectors of the board in the sensor and global systems can be calculated from the LiDAR points and the corner measurements, respectively. Using the two results, finding the unknown orientation parameters can be formulated as a least squares estimation problem:

minimize (w.r.t. α, β, γ)  ∑ᵢ₌₁ⁿ ‖ R(α, β, γ) nᵢˢ − nᵢᴳ ‖²,  (1)

where nᵢᴳ ∈ ℝ³ is the normal vector of the board in the global system at the ith position, nᵢˢ ∈ ℝ³ is the normal vector of the board in the sensor coordinate system at the ith position, n is the number of board positions, and R(α, β, γ) ∈ SO(3) is the rotation matrix as a function of the α, β, γ ∈ [0, 2π] orientation parameters; for detailed notations and definitions, see Appendix A. The Levenberg–Marquardt method is used to solve Equation (1).
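As an illustration, Equation (1) can be solved with an off-the-shelf Levenberg–Marquardt implementation. The sketch below parameterizes R(α, β, γ) as a product of axis rotations and fits it to simulated normal-vector pairs; the array layout and the use of scipy are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(angles, n_sensor, n_global):
    """Stacked residuals R(alpha, beta, gamma) n_i^S - n_i^G of Equation (1)."""
    R = Rotation.from_euler("xyz", angles).as_matrix()
    return (n_sensor @ R.T - n_global).ravel()

# Simulated data: unit normals of the board at five poses in the global frame
rng = np.random.default_rng(0)
n_global = rng.normal(size=(5, 3))
n_global /= np.linalg.norm(n_global, axis=1, keepdims=True)
R_true = Rotation.from_euler("xyz", [0.10, -0.05, 0.80]).as_matrix()
n_sensor = n_global @ R_true  # rows satisfy n_global_i = R_true @ n_sensor_i

# Levenberg-Marquardt solution of the least squares problem
fit = least_squares(residuals, x0=np.zeros(3), args=(n_sensor, n_global), method="lm")
print(np.degrees(fit.x))  # estimated alpha, beta, gamma in degrees
```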

The positions and orientations of the sensors, along with time synchronization, allow for transforming all points captured at different epochs into the same global coordinate system.

2.3. Aircraft Segmentation from LiDAR Scans

The first step of object tracking is the separation of the dynamic and static content of the scene, and then the identification of the dynamic objects. A reference or baseline scan of the background can be obtained from scans that include no aircraft, and then an occupancy grid is created from the averaged background scan. During tracking, this grid is compared to the incoming scan, and thus, the dynamic content can be easily filtered. The performance is further improved by defining a region of interest in the scans, such as the runway or taxiway in view. In our tests, only one airplane is present at any epoch, and therefore, this segmentation is a relatively easy task. It is noteworthy that almost always just one plane is in the network's field of view for landing scenarios, due to mandatory aircraft separation. Obviously, there can be more than one plane in the case of taxiing. In the latter case, airplanes can be segmented from each other by simply tracking their motion in the region of interest. At the end of the segmentation process, the 3D coordinates of the aircraft body points with their timestamps are available for each scan.

The density of the captured scans depends on the object–sensor distance. Figure 3 shows segmented data captured by five LiDAR sensors at 23 m and 33 m object–sensor distances. The various colors indicate different times, and thus, the point cloud in the figures shows all scans of the plane while crossing the sensor network's field of view. Clearly, the shape of the airplane is barely recognizable in Figure 3a, and not recognizable in Figure 3b. These scans contain a very small number of points from the aircraft body due to the large object–sensor distance, and thus, dedicated algorithms are needed to extract the motion information or trajectories.

Figure 3. Top-down views of an airplane captured at different times by the sensor network. The two subfigures illustrate the point densities at different object–sensor distances; color differentiates the scans: (a) at 23 m object–sensor distance, the average number of points per scan is 130 (26 per sensor), and (b) at 33 m object–sensor distance, the average number of points per scan is 20 (4 points from each sensor).
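A minimal sketch of the occupancy-grid background filter described above, assuming georeferenced scans stored as N×3 numpy arrays; the 2D grid, the 0.5 m cell size, and the function names are our illustrative choices.

```python
import numpy as np

def build_background_cells(background_scans, cell=0.5):
    """Collect the set of 2D grid cells occupied by the static background."""
    pts = np.vstack(background_scans)[:, :2]       # X, Y only for a 2D grid
    cells = np.floor(pts / cell).astype(np.int64)
    return set(map(tuple, cells))

def segment_dynamic(scan, background_cells, cell=0.5):
    """Keep only the points whose grid cell is not part of the background."""
    cells = np.floor(scan[:, :2] / cell).astype(np.int64)
    keep = np.array([tuple(c) not in background_cells for c in cells])
    return scan[keep]

# usage: cells = build_background_cells(aircraft_free_scans)
#        aircraft_points = segment_dynamic(current_scan, cells)
```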

2.4. Motion Models

This study uses a model-based object tracking approach to address the issue of the small number of captured points. Three types of motion models, namely, constant velocity (CV), constant acceleration (CA), and polynomial motion models, are used to describe the dynamics of the taxiing or landing airplanes. The models for one coordinate axis are shown in Table 1; details of these models can be found in Appendix B. For all three models, the following two assumptions are made. First, the tracked plane follows a rigid body motion, and second, the heading of the aircraft body points towards the motion direction. The second condition is approximate for landing aircraft, as side wind may introduce an error in the heading and aircraft attitude. These assumptions mean that all points from the aircraft have the same motion parameters. The faster the aircraft moves, the better the assumptions hold. We also assume 2D motion for taxiing and 3D motion for landing. Note that, in the simplest case, the trajectory of the CV model is a straight line, and thus, the limitation of this model is that it cannot be used for tracking turns, accelerations or decelerations. However, this motion is still valid within a short time window. The CA model is able to describe more complex motion, namely curvilinear motion, but it is still inadequate to model sharp turns. The general or polynomial model allows for describing arbitrary trajectories within the assumptions presented above.

Table 1. Motion models used in this study; more details can be found in Appendix B.

  Constant Velocity: s = s₀ + vₛ t
  Constant Acceleration: s = s₀ + vₛ t + aₛ t²/2
  General or Polynomial: s = s₀ + ∑ᵢ₌₁ⁿ aₛ,ᵢ tⁱ

2.5. Object Space Reconstruction with Various Motion Models

If the motion model is chosen and the parameters of the model, such as the velocity, acceleration or the coefficients of the polynomial model, are known, then all points of the aircraft body taken at different epochs can be transformed back to a common frame at zero time, resulting in a reconstructed point cloud, in other words, in a reconstruction of the aircraft body (see the end of Appendix B). Thus, a motion model with known parameters can be interpreted as a time-varying transformation. Figure 4 shows an example of the reconstruction assuming CV motion at 4.5, 6.5, 8.5 and 10.5 m/s velocities. Note that as the velocity parameter in the X direction approaches the real value, the reconstructed point cloud becomes sharper and more consistent, and finally, the aircraft body is clearly recognizable at 10.5 m/s. The idea behind the proposed approach of finding the motion parameters is to measure the consistency, or order, of the reconstruction. Obviously, once the motion parameters are known, the trajectory can be simply calculated using the model.

2.6. Estimating the Motion Model with Volume Minimization

The proposed algorithm, called volume minimization (VM), adjusts the motion parameters to find the optimal reconstruction, i.e., the one in which the reconstructed point cloud of the aircraft occupies the minimum volume. In this way, the volume of the reconstruction is used as a metric for measuring point cloud consistency:

minimize (w.r.t. x)  Vol( ⋃_{pₜ ∈ P} T(pₜ, t, x) ),  (2)

where pₜ = [x, y, z, t] ∈ P is a point from the point cloud P ∈ C = 2^𝒫 at time t ∈ ℝ, T : (𝒫, ℝ, ℝⁿ) → 𝒫 depicts the motion model that transforms a point captured at time t back to the zero epoch with the given x ∈ ℝⁿ motion parameters, and finally, Vol : C → ℕ is a function that measures the volume of a point cloud. During volume minimization, the space is decimated into cubes, and the volume is measured by the number of cubes occupied by the reconstructed point cloud, as shown in Figure 5. Figure 4 shows the cubes from a top view, and clearly illustrates that as the velocity approaches the real value and the reconstruction becomes more realistic, the number of occupied cubes decreases, demonstrating that the proposed metric is consistent with the reconstruction quality.
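For the CV model, the objective of Equation (2) reduces to a few lines of array code: shift every point back to epoch zero with the candidate velocity and count the occupied cubes. The sketch below, with timestamped points stored as an N×4 array [x, y, z, t], is an illustration of the metric rather than the authors' implementation.

```python
import numpy as np

def occupied_cubes(points_xyzt, velocity, cube=1.0):
    """Equation (2) for a constant-velocity model: reconstruct the body at
    t = 0 and measure its volume as the number of occupied cubes."""
    xyz, t = points_xyzt[:, :3], points_xyzt[:, 3:4]
    reconstruction = xyz - t * velocity               # p - v t, per point
    cells = np.floor(reconstruction / cube).astype(np.int64)
    return len(np.unique(cells, axis=0))              # Vol = cube count

# A 1D sweep over the X velocity reproduces the behavior of Figure 4:
# the cube count is smallest near the true velocity.
# best = min(np.arange(0.0, 20.0, 0.1),
#            key=lambda v: occupied_cubes(cloud, np.array([v, 0.0, 0.0])))
```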

It is noteworthy that, instead of the volume, one can use entropy, similar to [21,22]. For our dataset, we found that the volume as a metric requires less computation and has better convergence properties than entropy.

Figure 4. The idea of volume minimization: the figures show reconstructions (red dots) assuming a constant velocity (CV) model with different velocities; green points are the raw data, black rectangles show the occupied cubes, and the titles show the velocities and the number of cubes obtained.

The objective function of Equation (2) for the CV model is presented in Figure 6 in order to analyze the problem space. Note that the figure shows the function at different scales. Although the objective function at large scale, shown in Figure 6a, seems quasi-convex, the medium scale in Figure 6b reveals that this function is highly non-convex. The complexity of the objective function depends on the cube size, the number of points captured from the aircraft, and the applied motion model. Obviously, the objective function is non-smooth due to the volume calculation presented above. This can be clearly seen at small scale in Figure 6c, which shows that the objective function is a multivariate step function. Consequently, one possible solution is to approximate the objective function with a smooth function and solve the problem with one of the smooth optimization techniques, although this solution is very likely to be sub-optimal. Alternatively, non-smooth optimization techniques might be applicable.

In this study, we implemented simulated annealing (SA) for solving Equation (2) [23]. The inputs of the algorithm are the initial search boundaries for all motion parameters and the cube size. The algorithm is relatively robust against the choice of the search space due to the favorable characteristics of the objective function at large scale. This allows for finding a value close to the optimum in a couple of iterations, even if the search space is large. The cube size, however, has to be chosen carefully. If the size is too large, then it is not adequate to describe the volume of the aircraft, and the reconstruction will be blurry. If it is too small, then not enough points will be located in the cubes. In other words, one can easily imagine a situation where the cube size is so small that each point occupies only one cube at almost all chosen motion parameter values, which clearly results in the same objective (cost) values. In addition, the large-scale characteristics of the objective function highly depend on the cube size, and this property can be lost if the cube size is too small. For a Cessna-sized airplane, a 1 m cube size performed well on our dataset.

Figure 5. Measuring the volume of the reconstruction: the space is decimated with cubes, and the volume is represented by the number of cubes occupied by the reconstruction.

Figure 6. Illustration of the problem space for the CV motion model at various scales: (a) large, (b) medium, and (c) small scale.

In the following, the details of the SA parameterization are presented; the pseudocode of the algorithm can be found in Appendix C. A geometric cooling function is applied with a 0.92 base. The initial search boundaries (r₀) are 100 m/s and 100 m/s² for the velocities and accelerations, respectively. The search boundaries (rᵢ) decrease with the cooling function, and randomly picked neighboring values within the bounds are evaluated at each iteration. The acceptance probability has an exponential distribution over the absolute difference between the candidate and the current best solutions, i.e., exp(−|ΔE|/Tᵢ). The stopping criterion is satisfied when the search bounds are smaller than 0.01 for all parameters. The overall running time depends on the cube size, the applied motion model, and the point cloud size. The algorithm typically terminates in fewer than 100 iterations; note that SA does not guarantee optimality. This means a roughly 1–3 s runtime on an Intel Core i7 4.4 GHz CPU. Since the current implementation is not optimized, a less than 1 s execution time is very likely achievable with code optimization and a faster CPU, allowing for real-time applications.

The results provided by the SA algorithm can be assessed visually based on the consistency of the reconstruction. For instance, Figure 5 shows a reconstruction where the edges of the reconstructed point cloud are sharp and the major parts of the airplane are clearly recognizable, indicating that the parameter estimation performed well. For the source code, see the Supplementary Materials.

2.7. Refinement of Volume Minimization: Motion Estimation with Cube Trajectories

The presented volume minimization algorithm performs well for the constant velocity and constant acceleration motion models. Unfortunately, it is not stable for higher-order trajectories, supposedly because the objective function is more complex. In this section, we present a solution, called cube trajectory estimation (CT), that is used for refining the trajectory provided by the volume minimization algorithm in order to solve for higher-order general motion models.

After obtaining the initial reconstruction by assuming a CV or CA motion model and using volume minimization, the proposed CT algorithm decimates this reconstruction into cubes, similar to the space decimation presented for the volume calculation in the previous section (see Figure 7). Those points that are located in the same cube can be assumed to be captured from the same spot of the aircraft body but at different times. Thus, these points represent correspondences. The X, Y, and Z coordinate components of the cube trajectories can be calculated for each cube based on these points; these trajectories are shown as black lines in Figure 7. Next, the velocity of each cube is derived based on the time differences of its points. This allows the visualization of these velocities on a time–velocity diagram (see Figure 8). Assuming that all cubes, and thus all points of the aircraft, have the same motion, a chosen general motion model can be fitted to the velocity instances with regression (see Figure 8). The residuals from the regression provide feedback and are used later to assess the errors.

Figure 7. Illustration of the cube trajectory (CT) algorithm: the initial reconstruction (red dots) is calculated from the data (blue dots), then this reconstruction is decimated into cubes, and then the trajectories (black lines) of all cubes are determined based on the points in the cubes.

Figure 8. Time–velocity graph of the cubes from a curvilinear motion: the cube velocities along the X and Y directions are depicted with blue and red dots, respectively; in this example, a constant acceleration (CA) motion model is used, thus the parameters of a line are estimated with regression.

Note that the correspondences rely on the initial reconstruction. If the initial point cloud is not close to the real solution, then the correspondences might not be correct. Thus, executing the steps presented above results in a reconstruction where the correspondences are different than they were in the initial reconstruction, requiring the repetition of the entire trajectory estimation until the correspondences do not change. Since the refinement requires only a simple regression computation, the runtime with an optimized code is one or two orders of magnitude less than that of the volume minimization algorithm.
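A sketch of one CT iteration under the assumptions above: points sharing a cube of the initial reconstruction are treated as correspondences, a velocity is formed per cube from its earliest and latest point, and each velocity component is regressed over time with numpy's polynomial fit (degree 1 corresponds to the CA model of Figure 8). The function names and the data layout are our illustrative choices.

```python
import numpy as np

def cube_velocities(recon_xyz, raw_xyz, t, cube=1.0):
    """Group points by the cube of their reconstructed position and derive
    one (time, velocity) sample per cube from the raw points."""
    keys = map(tuple, np.floor(recon_xyz / cube).astype(np.int64))
    groups = {}
    for key, point, ti in zip(keys, raw_xyz, t):
        groups.setdefault(key, []).append((ti, point))
    samples = []
    for members in groups.values():
        members.sort(key=lambda m: m[0])
        (t0, p0), (t1, p1) = members[0], members[-1]
        if t1 - t0 > 1e-3:                    # a time spread is needed for a velocity
            samples.append((0.5 * (t0 + t1), (p1 - p0) / (t1 - t0)))
    return samples

def fit_motion(samples, degree=1):
    """Regress each velocity component over time; residuals give the feedback."""
    times = np.array([s[0] for s in samples])
    vels = np.vstack([s[1] for s in samples])
    return [np.polyfit(times, vels[:, k], degree) for k in range(vels.shape[1])]
```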

3. Field Tests

The tests were conducted at The Ohio State University (OSU) Don Scott Airport (IATA: OSU, ICAO: KOSU) in Columbus, OH, USA. The airport is used for government and private purposes as well as provides education services. The airport is capable of accepting small-size single/twin engine airplanes, helicopters and corporate jets. Prior to data acquisition, a local surveying network with three control points was established to support the surveying of the sensor stations and the pavement markings on the runway, and for sensor georeferencing (see Section 2.2). The local network is tied to the UTM (Universal Transverse Mercator) projection using positions determined at the control points from GPS observations. All positions and orientations are derived with respect to this global coordinate system.

Two tests were conducted to demonstrate and evaluate the performance of the proposed system; these tests are summarized in Table 2. The goal of the first test was to assess the performance of the system for tracking taxiing aircraft, i.e., modeling 2D motion. The prototype system was deployed next to one of the taxiways at the OSU airport. The sensor network consisted of two sensor stations. The first station had two vertically aligned VLP-16 sensors, and the other station was equipped with a vertically aligned VLP-16 and a horizontally aligned HDL-32E scanner. Figure 9a shows the sensor configuration and the fields of view. The distance between the two sensor stations was about 40 m, and the stations were deployed 20 m from the centerline. This sensor configuration allows for 70 m effective runway coverage, measured on the centerline. During the three-hour data acquisition, 18 taxiing Cessna-type and corporate jet airplanes were tracked; the velocity of the taxiing airplanes varied from 8–12 m/s.

The goal of the second test was to capture landing aircraft, i.e., to track in 3D. The system was deployed next to the 9L/27R runway; Figure 9b shows the locations of the sensor stations and the fields of view. The sensor configuration is slightly different than in Test #1, as one of the stations was equipped with an extra VLP-16. The sensor stations were located at a 40 m distance from each other, the stations were deployed 33 m from the centerline, and the effective coverage measured along the centerline was 100 m. During the six-hour data acquisition, 34 landing Cessna-type airplanes were tracked; the velocity of the landing airplanes was around 20 m/s.

Figure 9. Test sensor configurations: (a) Test #1 sensor network (taxiing), and (b) Test #2 sensor network configuration (landing).

Table 2. Overview of the field tests (* projected to the runway centerline).

  Test #1 (Taxiing). Goal: 2D performance at taxiing; Sensors: 3 vertical VLP-16s, 1 horizontal HDL-32E; Field of view *: 70 m; Object–sensor distance: 20 m
  Test #2 (Landing). Goal: 3D performance at landing; Sensors: 4 vertical VLP-16s, 1 horizontal HDL-32E; Field of view *: 100 m; Object–sensor distance: 33 m

4. Results

The first step of the data processing is sensor georeferencing, described in Section 2.2. Figure 10 shows the arrangement for one sensor using the calibration board at five different locations; the points captured by the sensor in the sensor coordinate system are shown in red, and the red lines are the surface normals of the boards calculated from these points. The blue lines are the normals of the boards at the five locations in the global coordinate system; these normals are derived from the board corners (blue dots) measured with a total station. Note that the global coordinate system is connected to a local coordinate system whose origin is the sensor position calculated with GPS. The transformation parameters between the two systems are estimated based on Equation (1). In Figure 10, the green color depicts the boards after transforming the sensor points into the global frame. The standard deviation of the residuals of the parameter estimation is 0.3°.

Once all individual sensor data were transformed into the global coordinate system, the dynamic content, i.e., the aircraft body, is separated from the static background according to Section 2.3. The three algorithms listed in Table 3 are compared for Test #1. The COG algorithm is a straightforward model-free object tracking method. In our implementation, the COG point for each scan is calculated as the average (center of gravity) of the object points. Then, the trajectory is derived by first smoothing the COG points with a Gaussian filter, and then resampling with spline interpolation in order to densify the trajectory points. The second investigated method is the model-based volume minimization introduced in Section 2.6; a 2D CA motion is assumed due to the taxiing scenario in Test #1. The third method, the CT algorithm, uses the output of the VM algorithm assuming CV motion as the initial reconstruction. Then, the CT algorithm refines this solution by estimating a 2D fourth-degree polynomial motion model. Figure 11 illustrates the points of the airplane body at different times and then the reconstruction by the CT algorithm.
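A sketch of the COG baseline described above, assuming one segmented scan per epoch; scipy's Gaussian filter and cubic spline stand in for the smoothing and resampling steps, and the parameter values are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.ndimage import gaussian_filter1d

def cog_trajectory(scans, times, out_times, sigma=2.0):
    """Per-scan centers of gravity, Gaussian-smoothed, then spline-resampled."""
    cog = np.array([s.mean(axis=0) for s in scans])       # one 3D point per scan
    smooth = gaussian_filter1d(cog, sigma=sigma, axis=0)  # filter each axis over time
    return CubicSpline(times, smooth, axis=0)(out_times)  # densified trajectory
```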

13 Appl. Sci. 2018, 8, of 22 Appl. Sci. 2018, 8, x FOR PEER REVIEW 13 of 22 and methods. n The reconstruction quality of by reconstructions CT algorithm. can Figure be 12 easily presents compare reconstructions by visual assessment. for three For methods. Appl. Sci. 2018, The8, quality x FOR PEER of REVIEW reconstructions can be easily compare by visual assessment. For trajectory 13 of 22 trajectory solutions, a reference point of airplane body has to be chosen, and lowermost point solutions, a reference point of airplane body has to be chosen, and lowermost point of front methods. of front The wheel quality was of selected reconstructions here. Figure can 13 presents be easily compare trajectories by visual calculated assessment. by three For wheel methods. was selected here. Figure 13 presents trajectories calculated by three methods. trajectory solutions, a reference point of airplane body has to be chosen, and lowermost point of front wheel was selected here. Figure 13 presents trajectories calculated by three methods. Figure 10. Sensor georeferencing: points from calibration board in sensor coordinate system Figure 10. Sensor georeferencing: points from calibration board in sensor coordinate system depicted with red, norm of board surface is shown with red lines, blue dots are board depicted with red, norm of board surface is shown with red lines, blue dots are board corners measured with total station, blue lines are norms of board surface in global corners Figure 10. measured Sensor georeferencing: with total station, points blue from lines calibration are norms board of in board sensor surface coordinate system global coordinate system, green points depict points after transforming board points from sensor coordinate depicted with system, red, green norm points of depict board points surface after is shown transforming with red lines, boardblue points dots from are sensor board to global coordinate system, and red circle marks sensor position. to corners global measured coordinate with system, total station, and red blue circle lines marks are norms sensorof position. board surface in global coordinate system, green points depict points after transforming board points from sensor Table 3. Investigated algorithms for Test #1. to global coordinate system, and red circle marks sensor position. Method Table 3. Investigated algorithms for Parameters Test #1. Table The 3. Investigated COG points algorithms are filtered for with Test #1. a Gaussian kernel and Center of Gravity Method (COG) resampled with spline interpolation Parameters Method Parameters Volume Center ofminimization Gravity (COG) with The COG points are filtered with a Gaussian kernel and The Cube COG resampled size points is 1 m, are with or filtered spline parameters with a Gaussian interpolation are presented kernel in and Center Section 2.6. CA motion of Gravity model (COG) resampled with spline interpolation Refinement Volume Minimization with Cube with CA Volume Minimization with Initial Cube reconstruction size is 1 m, or using parameters a CV model are presented and SA, motion in Section model 2.6. is motion model Trajectories Cube fourth degree size is 1 m, polynomial or parameters are presented in Section 2.6. CA motion model Initial reconstruction using a CV model and SA, motion model is Refinement with Cube TrajectoriesInitial fourth-degree reconstruction polynomial using a CV model and SA, motion model is Trajectories fourth degree polynomial Figure 11. 
Figure 11. Green points mark the raw dataset captured from a taxiing aircraft during Test #1, and red points show the reconstructed airplane body using the CT method.

Landing airplanes were captured from a 33 m object-sensor distance during Test #2. The point density is about 20 points per scan. This is a very low point density for the COG algorithm, and thus, the algorithm was not able to produce any acceptable results; subsequently, only the results of the CT algorithm are presented. Figure 14 shows the raw airplane points in green and the reconstruction in blue, and Figure 15 visualizes the reconstructed airplane body in perspective view. Finally, Figure 16 shows the estimated coordinates of a landing aircraft. The red line is the initial VM solution assuming a CV motion model. The blue line is the refined CT solution.

Figure 12. Reconstructions by the three algorithms for Test #1 from the data presented in Figure 11: (a) center of gravity (COG), (b) volume minimization with CA motion model (VM-CA), and (c) cube trajectories (CT).

Figure 13. Trajectory solutions for the three methods.

Figure 14. Reconstruction from sparse point clouds (average point density is 36 points/scan or 7.2 points/scan/sensor), using the CT algorithm for a landing Cessna airplane during Test #2: captured airplane points are shown in green, the reconstruction is in blue.

Figure 15. Reconstruction from sparse point clouds (total number of points is 4320) from the data presented in Figure 14: the shape of the plane, including the wings and tails, is recognizable.

Figure 16. Estimated coordinates of a landing Cessna airplane using the VM-CV (red) and CT (blue) algorithms for the X, Y, and Z directions; the X direction is along the centerline of the runway.

5. Discussion

The authors introduced the VM algorithm in an earlier study and used this algorithm assuming a 2D constant velocity model [11]. In that study, we showed that VM outperforms the ICP algorithm based on data captured by a single sensor. The analysis, using GPS installed on the airplane as reference, showed that the RMSEs (root-mean-square errors) are 0.72 m/s with VM and 5.29 m/s with ICP, even at short (10 m) object-sensor distances. The main reason that ICP tends to fail is that it requires corresponding points between two point clouds; point clouds acquired by Velodyne scanners have relatively low point density or resolution, especially when the object-sensor distance is large. Consequently, capturing good corresponding points is not possible.

While VM performed well for the CV model in our earlier studies, it later turned out that the VM approach cannot adequately handle more complex motions due to the structure of the objective function at small scale. In our next study, we tried to address this issue by approximating a general motion model with the constant velocity model applied over shorter time periods [12]. This solution performed well, yet it was not robust. This study introduced cube trajectories to improve motion estimation.

Results have to be compared in terms of accuracy and reliability under various aircraft motions.
In the following, two indicators are used for qualitatively assessing and comparing the performance. The reconstruction quality is one indicator that can be used to compare the various algorithms and to assess the performance in general. If the reconstruction is good, the aircraft and its details are recognizable, indicating that the derived motion pattern is also close to the actual trajectory.
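This visual indicator has a simple quantitative counterpart in the spirit of the volume objective used by VM: a crisp reconstruction occupies fewer voxels than a fuzzy one. Below is a minimal sketch, assuming the reconstruction is an N x 3 NumPy array; the function name and voxel size are illustrative.

```python
import numpy as np

def occupied_voxel_count(points, voxel_size=0.1):
    """Count the voxels occupied by a reconstructed point cloud.
    Fewer occupied voxels indicate a tighter (less fuzzy) reconstruction."""
    # Quantize coordinates to voxel indices, then count unique voxels.
    voxels = np.floor(points / voxel_size).astype(np.int64)
    return len(np.unique(voxels, axis=0))

# A blurred copy of the same cloud occupies more voxels:
rng = np.random.default_rng(0)
crisp = rng.uniform(0.0, 2.0, size=(4000, 3))
fuzzy = crisp + rng.normal(scale=0.3, size=crisp.shape)
assert occupied_voxel_count(fuzzy) > occupied_voxel_count(crisp)
```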

A fuzzy, blurry reconstruction indicates that the estimated motion is not adequate. The reconstructions of the COG, VM-CA and CT algorithms in Figure 12 show that VM and CT outperform COG, and that CT provides a better reconstruction than VM: the VM-CA reconstruction is fuzzy compared to the reconstruction provided by the CT algorithm. The reason for the underperformance of the COG method is that different body points are backscattered at different epochs, and consequently, the COG does not accurately represent the center of gravity of the airplane body. Clearly, COG tends to fail when the scans contain a small number of points from the tracked object. This is also the case for Test #2, when the object-sensor distance is around 33 m, and thus, not enough points are captured from Cessna-size airplanes. In addition, the sensor network is installed on one side of the runway, which means that the sensors capture points from one side of the airplane. This clearly pulls the COG toward the sensors. In contrast to COG, CT is able to provide a fairly good quality reconstruction (see Figure 15), indicating that the algorithm performs well even if the scans contain a small number of points.

The shape of the trajectory and the velocity profiles can also be used to assess the performance. For instance, in Figure 13 for Test #1, the trajectory derived from COG is not realistic; it is not possible that the airplane could have followed such a curvy trajectory. Note that the large tail at the end of the trajectory, which is definitely not feasible motion, is caused by the lack of a sufficient number of captured points. In contrast, the VM-CA and CT results are feasible trajectories. However, it is unclear whether the small waves of the CT trajectory are the actual motion of the airplane, or the result of overfitting due to the fourth-degree motion model. Nevertheless, the largest difference between the two trajectories is about 10 cm, which is within the error envelope detected in our previous studies.

The trajectory along the X, Y, and Z directions for Test #2 can be seen in Figure 16. This figure shows decreasing altitude (see the Z component), as expected for a landing aircraft. Also note that the CV model, depicted with red in Figure 16, is a line; however, it is very unlikely that this motion component behaves in such a way. The CT solution, depicted with blue, follows a more realistic trajectory. Unfortunately, a reference solution, for instance from GPS, is not available for a quantitative analysis of the methods. However, the residuals from the CT algorithm can describe the internal accuracy between the measured cube velocities and the underlying motion model. The average and standard deviation of the absolute residuals from the CT algorithm are presented in Table 4.

Table 4. Quantitative statistics of the CT method.

Test | Average of Absolute Residuals (m/s): X, Y, Z | Standard Deviation of Absolute Residuals (m/s): X, Y, Z
#1 Taxiing | |
#2 Landing | |

The statistics of the average absolute residuals ( m/s) are similar to those we experienced earlier for simpler motions.
The X velocity errors are dominant for both cases, and significantly larger than the Y component in the case of the taxiing data; note that the X axis is the direction of landing and taxiing. The accuracies of the Y and Z components are similar. The standard deviation is also relatively low ( m). Note that the residuals are velocities, and do not describe the positioning accuracy directly. The sampling frequency of the sensors is 10 Hz, and thus, the positioning error is expected to be around one to seven centimeters based on these velocity residuals.
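Since the residuals are velocities, converting them into a positioning error is a single multiplication by the sampling interval; assuming the residual range implied by the text (roughly 0.1 to 0.7 m/s, an inferred figure), the bound works out as

$$\Delta p \approx v_{\mathrm{res}}\,\Delta t, \qquad \Delta t = \frac{1}{10\ \mathrm{Hz}} = 0.1\ \mathrm{s},$$
$$0.1\ \mathrm{m/s} \times 0.1\ \mathrm{s} = 1\ \mathrm{cm}, \qquad 0.7\ \mathrm{m/s} \times 0.1\ \mathrm{s} = 7\ \mathrm{cm},$$

matching the one-to-seven-centimeter range stated above.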

Limitations and Possible Applications

The prototype system allows for real-time tracking of aircraft during taxiing, landing or takeoffs. Figure 17 shows an example of a user interface, where a landing aircraft is tracked, and the demonstration system provides velocity, altitude and position, including error envelopes, along with video feedback. Currently, the main limitation of the developed system is the small observation space, which is an affordability issue. Covering larger sections along runways and taxiways would require a large number of sensors, which would increase the price of the system and the operation costs. However, mobile LiDAR scanners, such as Velodyne sensors, are expected to become cheaper in the coming years due to the increasing needs for mass production boosted by the autonomous car industry. There are other applications where the proposed system is applicable at airports. For instance, LiDAR might offer a tracking solution at night or in bad visibility conditions, when optical sensing is compromised. Obviously, LiDAR will fail in extreme weather conditions, such as heavy rain or snow. With improving resolution and coverage, aircraft types can be detected and identified. Figure 18 shows the reconstruction of a Cessna and a Learjet, where the details seem to be adequate for aircraft model identification. Note that scale is preserved in LiDAR data, making object identification easier. Clearly, machine learning or other algorithms can be developed to identify the types of incoming airplanes captured by the LiDAR sensor network [9]. In addition, LiDAR, as a remote sensing technology, can also be used for monitoring the surroundings to detect and identify hazardous objects to improve the safety of apron operations (see [9,10]).

Figure 17. Aircraft tracking: (a) current video frame, (b) altitude profile with blue dashed line showing the error envelope in cm, red line is the runway surface, and (c) 2D trajectory.
detectfigure and identify 19 shows trajectories of all landing airplanes during Test #2. hazardous objects to improve safety of apron operations (see [9,10]). (a) (b) (c) Figure 17. Aircraft tracking: (a) current video frame, (b) altitude profile with blue dashed line showing error envelope in cm, red line is runway surface, and (c) 2D trajectory.

Besides the possible applications presented above, the original intent of this effort was to develop a LiDAR-based prototype system to demonstrate the feasibility that airplanes can be identified and tracked in an airport environment without any cooperation from the aircraft. Note that aircraft navigation systems provide this information to the crew, but this data is not shared with airport authorities. Using remote sensing technology provides a non-cooperative alternative for collecting a huge amount of motion parameters that can improve airport safety by helping to understand and assess the risk of pilots' driving patterns on runways and taxiways (e.g., centerline deviation, wingspan separation). This information can be used in aircraft and airport planning, e.g., for checking whether an airport meets standardized criteria, or for providing data for the modification of airport standards and pilot education. To develop adequate risk models for taxiing behavior, Chou et al. used positioning gauges to obtain data at Chiang Kai-Shek International Airport [24]. Another study from FAA/Boeing investigated 747s' centerline deviations at the JFK and ANC airports [25–27]. They used laser diodes at two locations to measure the location of the nose and main gears. These approaches allow only for accurate on-ground aircraft tracking, as opposed to the LiDAR-based tracking system, which is also capable of tracking during landing or takeoffs with high accuracy. Figure 19 shows the trajectories of all landing airplanes during Test #2.

Figure 18. Reconstruction of different aircraft types: (a) Cessna and (b) Learjet.

Figure 19. Altitude profiles of 29 landing airplanes from Test #2.

6. Conclusions

The paper discussed the feasibility of using a LiDAR sensor network to accurately track taxiing and landing aircraft. The sensors used, Velodyne VLP-16 and HDL-32E, provide high sampling rates to capture an aircraft during taxiing or landing at relatively high speed; however, the captured point clouds are sparse. The main reason for the small number of LiDAR points captured from the aircraft body is the large, ~20–30 m object-sensor distance. This issue is addressed in
the paper by presenting two novel model-based tracking algorithms. First, the VM algorithm is developed to find the parameters of a simple motion model, such as CV or CA, by minimizing the reconstruction volume. Then, the CT algorithm uses these reconstructions as an initial solution to find the parameters of a general motion model. The proposed method outperformed the COG method, and could achieve one- to seven-centimeter tracking accuracy. The VM and CT algorithms

perform really well on the acquired datasets. However, the dynamics of the motion in the presented examples is low, and therefore, it is necessary to assess the performance for more complex trajectories. Currently, sharp turning scenarios may not be handled with the presented algorithm. In the future, we plan to extend the algorithms by considering these motions in the model to be able to handle more complex maneuvers. The developed approach might also be used in other applications outside of the aviation domain, such as tracking vehicle traffic.

Supplementary Materials: C++ code with Matlab interface and sample data are available online.

Acknowledgments: The authors would like to thank The Ohio State University Airport as the test site for this research and the Federal Aviation Administration, who supported this work through the Center of Excellence for General Aviation, known as PEGASAS (Partnership to Enhance General Aviation Safety, Accessibility, and Sustainability). Although the FAA has sponsored this project, it neither endorses nor rejects the findings of this research.

Author Contributions: C. T. conceived the experiment. Z. K. conceived the sensor georeferencing. Z. K. and C. T. designed and performed the experiment. Z. K. conducted the data processing and analysis, and developed the algorithms. Z. K. and C. T. wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

This appendix explains some basic definitions used throughout the paper. A point captured by a LiDAR sensor is given with its X, Y and Z coordinates as well as the time when it was captured, i.e., $p = [x, y, z, t] \in P$, where $P$ is the set of points. A set $P_C \in 2^P$ is a point cloud, where $C$ is the set of point clouds and $2^P$ is the power set of $P$. Consequently, a point cloud may contain no points or many points. A scan is a point cloud collected by the sensor when it makes a complete 360° scan starting from 0°. Note that the times of the points in a scan might be different, but have to be within the $\delta t$ sampling time; this is the time needed to make a full 360° scan. The reverse direction is not valid: points within the sampling time might not correspond to the same scan. The data collected by the sensor can be split into scans, and consequently, the collected data is a series of consecutive scans. The reconstruction or reconstructed point cloud is a point cloud that contains several transformed scans. In the paper, scans are typically transformed based on a chosen motion model, resulting in a reconstruction.

Appendix B

The applied motion models are very simple, yet this appendix summarizes these basic formulas for the sake of completeness. The general 2D motion of a pointwise particle can be described by time-variable polynomial functions:

$$x(t) = x_0 + a_1 t + a_2 t^2 + \dots + a_n t^n, \quad y(t) = y_0 + b_1 t + b_2 t^2 + \dots + b_n t^n, \tag{A1}$$

where $a_1, \dots, a_n$ and $b_1, \dots, b_n$ are the parameters of the motion. This model can be extended with the Z coordinate in the case of 3D motions, such that

$$z(t) = z_0 + c_1 t + c_2 t^2 + \dots + c_n t^n. \tag{A2}$$

The model can be simplified, resulting in the constant acceleration (CA) model for curvilinear motions:

$$x(t) = x_0 + v_x t + a_x t^2, \quad y(t) = y_0 + v_y t + a_y t^2, \quad z(t) = z_0 + v_z t + a_z t^2, \tag{A3}$$

where $x_0, y_0, z_0$ is the starting position, $v_x, v_y, v_z$ are the velocities, $a_x, a_y, a_z$ are the accelerations along the X, Y, Z directions, and $t$ is the time. Finally, the 3D constant velocity (CV) model is

$$x(t) = x_0 + v_x t, \quad y(t) = y_0 + v_y t, \quad z(t) = z_0 + v_z t. \tag{A4}$$

A point $p = [x, y, z, t] \in P$ from a cloud $P_C \in C$ can be transformed back to zero time. For instance, for the general motion model, knowing the motion parameters $a_1, \dots, a_n$, $b_1, \dots, b_n$, $c_1, \dots, c_n$, the transformation takes the following form:

$$x_0 = x(t) - a_1 t - \dots - a_n t^n, \quad y_0 = y(t) - b_1 t - \dots - b_n t^n, \quad z_0 = z(t) - c_1 t - \dots - c_n t^n. \tag{A5}$$

Applying these equations to all captured points results in a point cloud called the reconstructed point cloud or reconstruction.

Appendix C

Algorithm A1 presents the steps of the simulated annealing method used to solve the volume minimization defined in Equation (2).

Algorithm A1 Simulated annealing algorithm for volume minimization
1. Input:
2.   P ∈ C (point cloud), T(P, R, R^n) → P (motion model), r_0 ∈ R^n (initial search boundaries),
3.   t_init ∈ R (initial temperature), r_min ∈ R (minimum temperature), i_max ∈ Z+ (maximum iteration number),
4.   c ∈ [0, 1] (cooling coefficient), n_n ∈ Z+ (number of neighbors)
5. Initialization:
6.   i ← 0; t ← t_init
7.   x, x_best ← (x_l + x_u)/2
8.   v ← Vol(P)
9. Iterations:
10. while r_i > r_min
11.   t ← c^i                      // decrease the temperature
12.   r_i ← r_0 · t                // search radius for the inner iteration
13.   k ← 0
14.   while k < n_n
15.     x′ ← x_best + r_i · rnd()  // pick a neighbor randomly
16.     P′ ← ⋃_{p∈P} T(p, t, x′)   // transform the cloud based on the motion model
17.     v′ ← Vol(P′)               // calculate the volume
18.     if v′ < v or exp((v − v′)/t) > rnd()   // accept an improvement, or a worse neighbor per the Metropolis criterion
19.       x ← x′
20.       v ← v′
21.     end if
22.     k ← k + 1
23.   end while
24.   x_best ← x                   // update the best solution
25.   i ← i + 1
26. end while
27. Output: x_best
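To complement the pseudocode, here is a minimal runnable sketch of the same annealing loop, pairing the zero-time transform of Equation (A5), specialized to the CV model, with an occupied-voxel count standing in for Vol; the parameter values, helper names and synthetic example are illustrative assumptions, not the paper's C++ implementation.

```python
import numpy as np

def reconstruct_cv(points, params):
    """Zero-time transform (Equation (A5)) under a CV model:
    p0 = p(t) - v * t, with params = [vx, vy, vz]."""
    xyz, t = points[:, :3], points[:, 3:4]
    return xyz - t * np.asarray(params)

def volume(xyz, voxel_size=0.25):
    """Occupied-voxel count as a simple stand-in for Vol(P)."""
    return len(np.unique(np.floor(xyz / voxel_size).astype(np.int64), axis=0))

def anneal_volume_min(points, x0, r0=10.0, c=0.95, r_min=1e-3,
                      n_neighbors=20, seed=1):
    """Simulated annealing over the motion parameters (cf. Algorithm A1)."""
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    x, v = x_best, volume(reconstruct_cv(points, x_best))
    i, radius = 0, r0
    while radius > r_min:
        temp = c ** i                    # decrease the temperature
        radius = r0 * temp               # shrink the search radius
        for _ in range(n_neighbors):
            x_new = x_best + radius * rng.uniform(-1.0, 1.0, size=x.shape)
            v_new = volume(reconstruct_cv(points, x_new))
            # Accept improvements, or worse neighbors per the Metropolis rule.
            if v_new < v or np.exp((v - v_new) / max(temp, 1e-12)) > rng.random():
                x, v = x_new, v_new
        x_best = x                       # update the best solution
        i += 1
    return x_best

# Synthetic example: a rigid body translating at 8 m/s along X, 10 Hz scans.
rng = np.random.default_rng(0)
body = rng.uniform(-1.0, 1.0, size=(200, 3))
scans = [np.hstack([body + [8.0 * t, 0.0, 0.0], np.full((200, 1), t)])
         for t in np.arange(0.0, 2.0, 0.1)]
v_hat = anneal_volume_min(np.vstack(scans), x0=[0.0, 0.0, 0.0])
```

The Metropolis term exp((v − v′)/t) occasionally accepts worse neighbors, which is what lets the search escape the small-scale local minima of the volume objective mentioned in the Discussion.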

References

1. Toth, C.; Jóźków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115. [CrossRef]
2. Navarro-Serment, L.E.; Mertz, C.; Hebert, M. Pedestrian Detection and Tracking Using Three-Dimensional LADAR Data. In Field and Service Robotics; Springer Tracts in Advanced Robotics; Springer: Berlin, Germany, 2010.
3. Cui, J.; Zha, H.; Zhao, H.; Shibasaki, R. Laser-based detection and tracking of multiple people in crowds. Comput. Vis. Image Underst. 2007, 106. [CrossRef]
4. Sato, S.; Hashimoto, M.; Takita, M.; Takagi, K.; Ogawa, T. Multilayer lidar-based pedestrian tracking in urban environments. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, June 2010.
5. Tarko, P.; Ariyur, K.B.; Romero, M.A.; Bandaru, V.K.; Liu, C. Stationary LiDAR for Traffic and Safety Applications: Vehicles Interpretation and Tracking; USDOT: Washington, DC, USA.
6. Börcs, A.; Nagy, B.; Benedek, C. On-board 3D object perception in dynamic urban scenes. In Proceedings of the 2013 IEEE 4th International Conference on Cognitive Infocommunications (CogInfoCom), Budapest, Hungary, 2–5 December 2013.
7. Toth, C.K.; Grejner-Brzezinska, D. Extracting dynamic spatial data from airborne imaging sensors to support traffic flow estimation. ISPRS J. Photogramm. Remote Sens. 2006, 61. [CrossRef]
8. Statistical Summary of Commercial Jet Airplane Accidents, Worldwide Operations. Available online: (accessed on 16 December 2017).
9. Mund, J.; Frank, M.; Dieke-Meier, F.; Fricke, H.; Meyer, L.; Rother, C. Introducing LiDAR Point Cloud-based Object Classification for Safer Apron Operations. In Proceedings of the International Symposium on Enhanced Solutions for Aircraft and Vehicle Surveillance Applications, Berlin, Germany, 7–8 April.
10. Mund, J.; Zouhar, A.; Meyer, L.; Fricke, H.; Rother, C. Performance Evaluation of LiDAR Point Clouds towards Automated FOD Detection on Airport Aprons. In Proceedings of the 5th International Conference on Application and Theory of Automation in Command and Control Systems, Toulouse, France, 30 September–2 October.
11. Koppanyi, Z.; Toth, C. Estimating Aircraft Heading based on Laserscanner Derived Point Clouds. In ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Munich, Germany, March 2015; Volume II-3/W4.
12. Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D. Monitoring Aircraft Motion at Airports by LiDAR. In ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, July 2016; Volume III-1.
13. VLP-16 Manual. Available online: (accessed on 15 December 2017).
14. HDL-32E Manual. Available online: (accessed on 15 December 2017).
15. Wang, D.Z.; Posner, I.; Newman, P. Model-free Detection and Tracking of Dynamic Objects with 2D Lidar. Int. J. Robot. Res. 2015, 34. [CrossRef]
16. Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4. [CrossRef]
17. Moosmann, F.; Stiller, C. Joint self-localization and tracking of generic objects in 3D range data. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013.
18. Dewan, A.; Caselitz, T.; Tipaldi, G.D.; Burgard, W. Motion-based detection and tracking in 3D LiDAR scans.
In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, May 2016.
19. Kaestner, R.; Maye, J.; Pilat, Y.; Siegwart, R. Generative object detection and tracking in 3D range data. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, May 2012.

20. Dierenbach, K.; Jozkow, G.; Koppanyi, Z.; Toth, C. Supporting Feature Extraction Performance Validation by Simulated Point Cloud. In Proceedings of the ASPRS 2015 Annual Conference, Tampa, FL, USA, 4–8 May 2015.
21. Saez, J.M.; Escolano, F. Entropy Minimization SLAM Using Stereo Vision. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, April 2005.
22. Saez, J.M.; Hogue, A.; Escolano, F.; Jenkin, M. Underwater 3D SLAM through entropy minimization. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, May 2006.
23. Aarts, E.; Korst, J.; Michiels, W. Simulated Annealing. In Search Methodologies; Springer: Boston, MA, USA, 2005.
24. Chou, C.P.; Cheng, H.J. Aircraft Taxiing Pattern in Chiang Kai-Shek International Airport. In Airfield and Highway Pavement: Meeting Today's Challenges with Emerging Technologies, Proceedings of the Airfield and Highway Pavements Specialty Conference, Atlanta, GA, USA, 30 April–3 May 2006; American Society of Civil Engineers: Reston, VA, USA. [CrossRef]
25. Sholz, F. Statistical Extreme Value Analysis of ANC Taxiway Centerline Deviation for 747 Aircraft; FAA/Boeing Cooperative Research and Development Agreement; Federal Aviation Administration: Washington, DC, USA.
26. Sholz, F. Statistical Extreme Value Analysis of JFK Taxiway Centerline Deviation for 747 Aircraft; FAA/Boeing Cooperative Research and Development Agreement; Federal Aviation Administration: Washington, DC, USA.
27. Sholz, F. Statistical Extreme Value Concerning the Risk of Wingtip to Wingtip or Fixed Object Collision for Taxiing Large Aircraft; Federal Aviation Administration: Washington, DC, USA.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.


More information

DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER

DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER S17- DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER Fumihiro Inoue 1 *, Takeshi Sasaki, Xiangqi Huang 3, and Hideki Hashimoto 4 1 Technica Research Institute,

More information

Design and Development of Unmanned Tilt T-Tri Rotor Aerial Vehicle

Design and Development of Unmanned Tilt T-Tri Rotor Aerial Vehicle Design and Development of Unmanned Tilt T-Tri Rotor Aerial Vehicle K. Senthil Kumar, Mohammad Rasheed, and T.Anand Abstract Helicopter offers the capability of hover, slow forward movement, vertical take-off

More information

TLS Parameters, Workflows and Field Methods

TLS Parameters, Workflows and Field Methods TLS Parameters, Workflows and Field Methods Marianne Okal, UNAVCO GSA, October 20 th, 2017 How a Lidar instrument works (Recap) Transmits laser signals and measures the reflected light to create 3D point

More information

Robotics. Lecture 5: Monte Carlo Localisation. See course website for up to date information.

Robotics. Lecture 5: Monte Carlo Localisation. See course website  for up to date information. Robotics Lecture 5: Monte Carlo Localisation See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review:

More information

FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES

FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing

More information

Inertial Navigation Static Calibration

Inertial Navigation Static Calibration INTL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 2018, VOL. 64, NO. 2, PP. 243 248 Manuscript received December 2, 2017; revised April, 2018. DOI: 10.24425/119518 Inertial Navigation Static Calibration

More information

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007 Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem

More information

Driven Cavity Example

Driven Cavity Example BMAppendixI.qxd 11/14/12 6:55 PM Page I-1 I CFD Driven Cavity Example I.1 Problem One of the classic benchmarks in CFD is the driven cavity problem. Consider steady, incompressible, viscous flow in a square

More information

Anisotropic model building with well control Chaoguang Zhou*, Zijian Liu, N. D. Whitmore, and Samuel Brown, PGS

Anisotropic model building with well control Chaoguang Zhou*, Zijian Liu, N. D. Whitmore, and Samuel Brown, PGS Anisotropic model building with well control Chaoguang Zhou*, Zijian Liu, N. D. Whitmore, and Samuel Brown, PGS Summary Anisotropic depth model building using surface seismic data alone is non-unique and

More information

STEREO IMAGE POINT CLOUD AND LIDAR POINT CLOUD FUSION FOR THE 3D STREET MAPPING

STEREO IMAGE POINT CLOUD AND LIDAR POINT CLOUD FUSION FOR THE 3D STREET MAPPING STEREO IMAGE POINT CLOUD AND LIDAR POINT CLOUD FUSION FOR THE 3D STREET MAPPING Yuan Yang, Ph.D. Student Zoltan Koppanyi, Post-Doctoral Researcher Charles K Toth, Research Professor SPIN Lab The University

More information

LaserGuard LG300 area alarm system. 3D laser radar alarm system for motion control and alarm applications. Instruction manual

LaserGuard LG300 area alarm system. 3D laser radar alarm system for motion control and alarm applications. Instruction manual LaserGuard LG300 area alarm system 3D laser radar alarm system for motion control and alarm applications Instruction manual LaserGuard The LaserGuard program is the user interface for the 3D laser scanner

More information

Mapping Project Report Table of Contents

Mapping Project Report Table of Contents LiDAR Estimation of Forest Leaf Structure, Terrain, and Hydrophysiology Airborne Mapping Project Report Principal Investigator: Katherine Windfeldt University of Minnesota-Twin cities 115 Green Hall 1530

More information

LOAM: LiDAR Odometry and Mapping in Real Time

LOAM: LiDAR Odometry and Mapping in Real Time LOAM: LiDAR Odometry and Mapping in Real Time Aayush Dwivedi (14006), Akshay Sharma (14062), Mandeep Singh (14363) Indian Institute of Technology Kanpur 1 Abstract This project deals with online simultaneous

More information

TRAINING MATERIAL HOW TO OPTIMIZE ACCURACY WITH CORRELATOR3D

TRAINING MATERIAL HOW TO OPTIMIZE ACCURACY WITH CORRELATOR3D TRAINING MATERIAL WITH CORRELATOR3D Page2 Contents 1. UNDERSTANDING INPUT DATA REQUIREMENTS... 4 1.1 What is Aerial Triangulation?... 4 1.2 Recommended Flight Configuration... 4 1.3 Data Requirements for

More information

Development of a Test Field for the Calibration and Evaluation of Kinematic Multi Sensor Systems

Development of a Test Field for the Calibration and Evaluation of Kinematic Multi Sensor Systems Development of a Test Field for the Calibration and Evaluation of Kinematic Multi Sensor Systems DGK-Doktorandenseminar Graz, Austria, 26 th April 2017 Erik Heinz Institute of Geodesy and Geoinformation

More information

POINT CLOUD ANALYSIS FOR ROAD PAVEMENTS IN BAD CONDITIONS INTRODUCTION

POINT CLOUD ANALYSIS FOR ROAD PAVEMENTS IN BAD CONDITIONS INTRODUCTION POINT CLOUD ANALYSIS FOR ROAD PAVEMENTS IN BAD CONDITIONS Yoshiyuki Yamamoto, Associate Professor Yasuhiro Shimizu, Doctoral Student Eiji Nakamura, Professor Masayuki Okugawa, Associate Professor Aichi

More information

Geometric Correction

Geometric Correction CEE 6150: Digital Image Processing Geometric Correction 1 Sources of Distortion Sensor Characteristics optical distortion aspect ratio non-linear mirror velocity detector geometry & scanning sequence Viewing

More information

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2

More information

TLS Parameters, Workflows and Field Methods

TLS Parameters, Workflows and Field Methods TLS Parameters, Workflows and Field Methods Marianne Okal, UNAVCO GSA, September 23 rd, 2016 How a Lidar instrument works (Recap) Transmits laser signals and measures the reflected light to create 3D point

More information

Exterior Orientation Parameters

Exterior Orientation Parameters Exterior Orientation Parameters PERS 12/2001 pp 1321-1332 Karsten Jacobsen, Institute for Photogrammetry and GeoInformation, University of Hannover, Germany The georeference of any photogrammetric product

More information

Georeferencing & Spatial Adjustment

Georeferencing & Spatial Adjustment Georeferencing & Spatial Adjustment Aligning Raster and Vector Data to the Real World Rotation Differential Scaling Distortion Skew Translation 1 The Problem How are geographically unregistered data, either

More information