Drone Net Architecture for UAS Traffic Management Multi-modal Sensor Networking Experiments


Sam Siewert, Mehran Andalibi, Stephen Bruder, Iacopo Gentilini, Jonathan Buchholz
Embry-Riddle Aeronautical University (ERAU), 3700 Willow Creek Rd, Prescott, AZ

Abstract

Drone Net is a conceptual architecture to integrate passive sensor nodes in a local sensor network along with traditional active sensing methods for small Unmanned Aerial System (suas) traffic management. The goal of the proposed research architecture is to evaluate the feasibility of using multiple passive sensor nodes integrating Electro-Optical/Infrared (EO/IR) and acoustic arrays networked around a UAS Traffic Management (UTM) operating region (Class G uncontrolled airspace for general aviation). The Drone Net approach will be further developed based on the feasibility analysis provided here, to compare to and/or be used in addition to RADAR (Radio Detection and Ranging) and Automatic Dependent Surveillance-Broadcast (ADS-B) tracking and identification in future experiments. We hypothesize that this hybrid passive plus active sensing approach can better manage non-compliant small UAS (without ADS-B transceivers) along with compliant UAS and general aviation in sensitive airspace, urban locations, and geofenced regions. Numerous commercial interests are developing UTM instrumentation for compliant and non-compliant drone detection and countermeasures, but performance in terms of the ability to detect, track, classify (bird, bug, drone, general aviation), identify, and localize aerial objects has not been standardized or developed well enough to compare multi-sensor solutions. The proposed Drone Net open system reference architecture is designed for passive nodes organized in a network, which can be integrated with RADAR and ADS-B. Here we present preliminary proof-of-concept results for two primary methods of truth comparison used to generate performance in terms of true and false positives and negatives for detection, classification, and identification. The first ground truth method designed and evaluated uses suas Micro Air Vehicle Link (MAVLink) ADS-B data along with EO/IR range detection experiments. The second ground truth method requires human review of triggered detection image capture and allows for truth performance assessment for non-compliant suas and other aerial objects (birds and bugs). The networked passive sensors have been designed to meet Class G and geo-fence UTM goals as well as assist with urban UTM operations. The approach can greatly complement NASA UTM collaboration and testing goals for 2020 and the last fifty foot challenge for package delivery UAS operations. The EO/IR system has been tested with basic motion detection for general aviation and suas in prior work, which is now being extended to include more sensing modalities and more advanced machine vision and machine learning development via the networking of the nodes and ground computing. The paper details the hardware, firmware and software architecture, and the preliminary efficacy of the two ground truth methods used to compute standard performance metrics.

Submitted to IEEE Aerospace, Big Sky.

TABLE OF CONTENTS
1. INTRODUCTION
2. DRONE NET CONCEPT
3. SYSTEM ARCHITECTURE
4. SOFTWARE ARCHITECTURE
5. HYPOTHESIS TO TEST
6. METHOD
7. RELATED RESEARCH
8. LOCALIZATION DRONE DETECTION ANALYSIS AND TRUTH MODEL
9. DRONE NET TRUTH MODELS
10. FUTURE PLANNED ACOUSTIC WORK
11. FUTURE PLANNED ACTIVE SENSING WORK
12. SUMMARY
ACKNOWLEDGEMENTS
REFERENCES
BIOGRAPHY

1. INTRODUCTION

The goal of the proposed work is to create a baseline to compare methods of UTM small Unmanned Aerial System (suas) observation and tracking, with contingency scenario triggering for non-compliant suas, and to establish local and regional Drone Net sensor systems. The system and software architecture is a new construction and concept for which our team first wanted to assess the feasibility of using passive instruments (EO/IR and acoustic) either in place of

or in addition to active RADAR and ADS-B methods of UTM. Ultimately, we would like to make a direct comparison to ADS-B and/or RADAR, but we felt it was necessary to first architect and assess the concept in terms of feasibility for UTM detection, tracking, and identification of UAS with passive methods alone. Drone Net is being architected, designed, and prototyped by a collaborating team led by Embry-Riddle with support from students at University of Colorado Boulder in the Embedded Systems Engineering program. Drone Net integrates passive sensor nodes in a local sensor network on the ground and on cooperative suas along with traditional active sensing methods for suas traffic management. The goal of Drone Net is to evaluate the use of multiple passive sensor nodes integrating Electro-Optical/Infrared (EO/IR) and acoustic arrays networked around a UAS Traffic Management (UTM) operating region (Class G uncontrolled airspace for general aviation (GA)). Drone Net is intended to support UTM operations concepts including real-time de-confliction (see-and-avoid suas-to-suas, and between GA and suas), contingency alerts, and event logging for non-cooperative suas and operation of suas in both urban and rural Class G scenarios. Drone Net in particular is a fully open hardware, firmware and software architecture such that anyone can implement the sensor network or elements of it to collect data and share this data broadly to support machine vision and machine learning research for UTM real-time automation, which is described as UTM autonomicity in the UTM concept of operations [23]. The open architecture also provides a baseline that can allow for performance assessment of commercial UTM solutions and comparison of active and passive sensing instruments. For the work described in this paper, the Drone Net approach will be compared to and/or used in addition to RADAR (Radio Detection and Ranging) and Automatic Dependent Surveillance-Broadcast (ADS-B) tracking and identification.

In this paper, we present a preliminary feasibility analysis for two truth models to be used in performance comparison: 1) human review of imaging and acoustics, and 2) geometric and physical computation of observability. Active sensing systems using LIDAR, RADAR, and ADS-B have limitations that we discuss in this paper in detail, including range, time reference precision, sample rate, cost, and durability/reliability for long term use. While flight nodes do have specific navigation requirements that are unique compared to ground nodes, we have found significant overlap and commonality in requirements; the main difference noted is that LIDAR is of high value on the flight nodes, but all nodes require not only GPS or ADS-B localization, but also inertial sensing. With a network of passive sensors placed on the perimeter and interior of an aerial column, and cooperative flight nodes, the goal for this work is to supplant the need for ground RADAR and improve upon ADS-B alone, or simply to show that Drone Net in addition to RADAR can provide better situational awareness for UTM. We hypothesize that the Drone Net approach can better manage non-compliant suas (without ADS-B transceivers) along with compliant UAS and general aviation in sensitive airspace, urban locations, and rural geolocations.
Drone Net objectives include passive instrument reference design, networking of instrument nodes, a clear comparison of performance for UTM automation machine vision and learning algorithms, and a standard for data collection, processing, and storage for broader use by the UTM community. To start, we have implemented a prototype EO/IR instrument used to collect data to establish the feasibility results presented in this paper, but we also outline plans for two additional instruments and a flight configuration for EO/IR with added LIDAR. To summarize, our objectives include:

1) Design and test 3 passive instruments for ground and flight use in a wireless network of sensors including:
   a. EO/IR: narrow field LWIR (10-14 micron), 32 degree horizontal field of view (HFOV) and 26 degree vertical field of view (VFOV), with one or two narrow field panchromatic area scan cameras, 13 degree HFOV, 9.8 degree VFOV, all interfaced via USB-3 to System-on-Chip (SoC) processing. The EO/IR has been prototyped and under test for the past year at ERAU with the ability to catalog common aerial objects within a 100 meter radius column [29].
   b. All-sky visible camera: 180 degree hemispherical detection with 6 cameras arranged with overlapping conical wide fields of view (fish-eye), all streaming MPEG Transport Streams (MPTS) to the EO/IR for processing and to control tilt/pan for detecting aerial objects.
   c. Acoustic array: perimeter, intensity probe and/or beam-forming array microphones [26] with analog input into the EO/IR for processing or streaming over MPTS.
2) Provide software automation for a clear comparison of standard performance metrics for multi-sensor fusion and machine learning approaches to detection, classification, and identification of aerial objects.
3) Create catalogs of aerial objects by type (classification) based on detection (motion trigger baseline, salient object for machine learning) and build a database of aerial objects with EO/IR and acoustic data along with active sensing data for comparison.
4) Capture aerial object event logs for cataloged objects for UTM tests and experiments.
5) Fuse aerial identification and tracking information from other sources including aggregators such as flightradar24, Air Traffic Control (ATC), and military.

6) Provide a recipe for creation of a campus (urban) Drone Net at ERAU and a version that can be transported to other UTM test sites.
7) Publish a shared architecture, system reference design, and Software Requirements Specification and Design (SRS/SRD) so others can replicate using off-the-shelf hardware, firmware (embedded Linux and Zephyr [34]), and software (TensorFlow [22], RDBMS, and POSIX compliant operating systems and file systems).
8) Provide a public-facing site for collaboration on machine learning and machine vision algorithm development for UTM use.
9) Define system, firmware, and software architecture for use in UTM experiments and determine range limitations of each sensor along with performance when combined with competing software algorithms for data fusion, machine vision, and machine learning.

Based on the architecture presented here, along with the EO/IR reference design and partially completed Drone Net prototype system and software, we have captured data to determine the feasibility of two truth models for which we have provided a comparison of True/False Positives (TP, FP) and True/False Negatives (TN, FN) for the presence of a test compliant suas (ADS-B and flight navigation data available), including:

1) EO/IR Truth model 1: human review to determine TP, FP, TN, FN.
2) EO/IR Truth model 2: geometric observability based on suas navigation logging and ADS-B transmissions with ground ADS-B logs to determine TP, FP, TN, FN.

The analysis of the data collected from the two flight experiments presented here provides evidence for our assertion that these two truth models can be used effectively to compare a wide range of machine vision and machine learning software algorithms for detection, classification, localization (from multiple sensor samples), and identification of aerial objects in a Class G air column. Collection of TP, FP, TN, FN over a range of machine vision and learning sensitivity parameter settings can be used to create Receiver Operator Characteristic (ROC) curves as we demonstrated in prior work [29], as well as Precision/Recall (PR) and F1 measure statistics for detection performance. Further analysis to classify and identify target types (suas, GA, natural, other) can be summarized with confusion matrix analysis. The long term goal of this analysis is to support creation and curation of a high quality public database to catalog aerial objects and to make available event logs from UTM testing. We have basic tools to reduce the fatigue of human review, but also have plans for future work on gamification of the human review of image and acoustic classification data. Finally, we also present the viability of acoustic detection and classification of suas to complement EO/IR, with a preliminary and basic characterization of suas acoustic spectral analysis outdoors, with and without background noise. Overall, the goal for this paper is to document the Drone Net architecture, establish feasibility of the approach, and present our hypothesis that a low-cost network of ground and flight sensors may be more effective than ADS-B alone and/or ground RADAR, or at least provide better situational awareness for UTM than the active methods alone.

2. DRONE NET CONCEPT

The Drone Net concept includes the elements shown in Figure 1. The work presented here has focused on feasibility testing of ground EO/IR and characterization of acoustic viability, with the intent to expand nodes in a wireless communication network of nodes in a Drone Net Sensor Network in an air column of 1 km diameter to start with.
We intend to expand the range as we progress to the single pixel limits of EO/IR with pointed narrow field optical gain and high resolution detectors. For feasibility, we have not yet concerned ourselves with range, and in fact, flying our suas Beyond Visual Line of Sight (BVLOS) is a future goal to expand the size of our test column, which is a shared UTM goal. The Passive Sensing Network will include at least the EO/IR, a future all-sky camera (in development), and a future acoustic array (also in development) at locations as indicated around the perimeter and within the ground footprint of the column. Cooperative suas are equipped with LIDAR (for proximity operations and collision avoidance) and an EO/IR identical to the ground unit, but with a gimbaled mount (compared to tripod tilt/pan). It is imagined that the flight nodes can become part of Drone Net with a DSRC communication protocol (to be determined), but that non-cooperative, non-compliant suas may also be in this column along with natural aerial objects (birds, bugs, meteors, debris, ground clutter, etc.). The active sensors, RADAR and LIDAR, will be used on the ground as secondary validation of our results and to support our hypothesis that the network of passive sensors can compete with RADAR, or even outperform it in terms of classification and identification performance at lower cost. The flight segment includes passive EO/IR and IMU navigation, as well as active ADS-B and LIDAR to support suas-to-suas and GA see-and-avoid, along with the ability to provide as-flown geometric truth data from navigation logs and ADS-B broadcasts. Flying a higher fidelity IMU allows us to grade the value of ADS-B in such a small column given the protocol's limited sample rate (2 Hz) and precision limits with geodetic data (limited fractional digits of latitude and longitude). Additionally, this approach allows for experimentation with a potential improvement to the ADS-B protocol, or ADS-B++.
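To give a rough sense of the single-pixel limit mentioned above, the sketch below estimates the range at which a small quadcopter subtends roughly one pixel for the stated fields of view; the detector pixel counts and the ~0.3 m target size are assumptions for illustration, not measured parameters of the Drone Net instruments.

```python
import math

def single_pixel_range(target_size_m, hfov_deg, h_pixels):
    """Range at which a target spans roughly one pixel for a pinhole camera.

    The instantaneous field of view (IFOV) is approximated as HFOV divided
    by the horizontal pixel count; the target subtends one pixel when its
    angular size equals the IFOV (small-angle approximation).
    """
    ifov_rad = math.radians(hfov_deg) / h_pixels
    return target_size_m / ifov_rad

# Illustrative numbers only: detector widths are assumptions, and ~0.3 m is
# an assumed characteristic size for a small quadcopter.
print(single_pixel_range(0.3, 32.0, 640))   # assumed 640-wide LWIR detector, ~340 m
print(single_pixel_range(0.3, 13.0, 2048))  # assumed 2048-wide visible camera, ~2.7 km
```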

Figure 1. Drone Net Concept Diagram

To enable data sharing for de-confliction with ground node information and to experiment with ADS-B++, we plan to incorporate IEEE Dedicated Short Range Communication (DSRC) between ground nodes and cooperative suas, as well as between the ground nodes in a local area network. The Drone Net concept is intended to support near term goals for EO/IR and acoustic characterization and feasibility analysis, but also longer-term goals including:

1) Development, competitive comparison, and use of machine vision and machine learning for detection, classification, and ultimately identification.
2) Field testing by hosting and participating in UTM experiments including urban, rural, see-and-avoid, and cooperative and non-cooperative de-confliction, with DSRC.
3) Aggregation and curation of high quality suas catalogs with event logging of test data including EO/IR images, acoustic spectrum analysis, flight IMU logs, and LIDAR point clouds, along with active sensing RADAR and ADS-B data.
4) Simulation of suas and passive ground instruments to predict detectability and to guide future experiments and placement of sensor nodes.
5) Re-play of ADS-B and flight navigation data in a simulation to create an observation truth model for EO/IR and acoustic sensors (using MATLAB).
6) Improvement of sensor nodes, placement, and DSRC networking based on simulation and re-play analysis.
7) Comparison of proposed commercial and research suas UTM, safety/security, and counter-UAS systems with a common performance analysis and data baseline.

Key contributions of Drone Net include reference instrument designs, open source software (embedded and server), and the public machine learning database of images, cataloged aerial objects, and event logs. The open source software approach will allow for rapid evaluation of competing machine vision and machine learning algorithms using open source libraries such as OpenCV [35] and numerous machine learning stacks such as TensorFlow [22].

3. SYSTEM ARCHITECTURE

The Drone Net system architecture is intended to be scalable on a wireless network based upon per-node processing, where the data shared is not raw video, acoustic, or image streams, but rather detections with classification meta-data. The DSRC messaging will include events for new aerial object detection (re-detection), conflict contingency events, tracking messages during active observing (e.g., EO/IR tilt/pan azimuth and elevation updates), and some opportunistic uplink of images to the Local Drone Net Machine Learning Server for novel aerial objects and to establish identification of both compliant and non-compliant suas and GA, as well as characterization of natural objects, debris, and clutter.
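As an illustration of the kind of compact detection meta-data a node might exchange instead of raw streams, the sketch below defines a hypothetical event record; the field names and serialization are assumptions for this sketch, not a defined Drone Net message format.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class DetectionEvent:
    """Illustrative detection event shared over the node network.

    Field names are assumptions for this sketch; the architecture only
    specifies that detections and classification meta-data, not raw video or
    acoustic streams, are exchanged between nodes.
    """
    node_id: str          # reporting ground or flight node
    utc_time: float       # GPS-disciplined timestamp, seconds
    azimuth_deg: float    # pointing of the narrow-field EO/IR assembly
    elevation_deg: float
    track_id: int         # local track number for re-detections
    class_label: str      # e.g. "suas", "ga", "natural", "other"
    confidence: float     # classifier score in [0, 1]
    thumbnail_uri: str    # optional uplink reference for novel objects

event = DetectionEvent("erau-ground-01", time.time(), 212.4, 17.8,
                       7, "suas", 0.83, "")
payload = json.dumps(asdict(event))  # serialized for DSRC/802.11 transport
```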

Figure 2. Drone Net Block Diagram

The Drone Net ground and flight elements are shown in Figure 2 as a block diagram showing DSRC data flow to and from processing elements. Each EO/IR node includes a networked SoC for machine vision and deployed machine learning applications that can at least detect and classify aerial objects (identification will likely require Drone Net Local Server inferential lookup from a database of cataloged aerial objects). The flight configuration is envisioned to include a LIDAR pre-processor (likely a Raspberry Pi 3) and a machine vision SoC for see-and-avoid and processing of detected aerial objects it can downlink for cataloging and/or event notifications to Drone Net. Compliant suas are also envisioned to request notification uplinks (perhaps via publish/subscribe parameters) for Drone Net supported sense-and-avoid and de-confliction or contingency compliance through UTM protocols [23].

Ground node configuration

The ground instruments for Drone Net include three passive sensor node elements that will be deployed together, since the EO/IR enclosure also provides the embedded SoC machine vision processing and acoustic sampling and pre-processing. For experiments they can be unit tested alone using a Linux laptop, and collaborators can implement a subset of instruments in a basic ground node. To summarize, the ground node includes:

1) EO/IR: GP-GPU System-on-Chip processing, a single narrow field LWIR and one or two visible camera systems with built-in IMU, compass, tilt/pan, ADS-B receiver, and DSRC (802.11) for communication with local-area nodes.
2) All-sky: 6 visible camera systems with a wired/wireless network for MPTS streaming to the EO/IR node for detection and pointing of the narrow field camera assembly.
3) Acoustic array: 6 or more wired microphones cabled to the EO/IR node for audio capture and analysis.
4) Integrated EO/IR instrumentation for self-localization using GPS and inertial sensors for elevation and roll.
5) Observation of other nodes in the system via an infrared stimulator (light emitting diode) to determine the EO/IR camera system azimuth (pan) pointing.
6) Communication via the wireless DSRC protocol to share detection information and to coordinate time based on GPS.

The all-sky camera and acoustic array will be interfaced to the EO/IR SoC computer using MPTS for video and audio streaming, enabling the EO/IR computer to detect and determine azimuth and elevation of aerial objects, thus providing the EO/IR tilt/pan tracking in real-time. Through the use of simulation and field tests, the optimal configuration of cameras and microphones will be determined as shown in Figure 2c.
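The pointing hand-off from the all-sky subsystem to the narrow-field assembly can be pictured with the sketch below, which converts an all-sky detection pixel into azimuth and elevation; it assumes a single, ideal zenith-pointing equidistant fisheye with the image top aligned to true north, whereas the actual unit uses six overlapping cameras that would each need a measured calibration and mounting rotation.

```python
import math

def allsky_pixel_to_az_el(u, v, cx, cy, pixels_per_degree):
    """Convert an all-sky detection pixel (u, v) to azimuth/elevation.

    Assumes an ideal equidistant ("f-theta") fisheye centered at (cx, cy)
    with the optical axis at zenith and image top pointing north; a real
    lens needs a measured calibration in place of pixels_per_degree.
    """
    dx, dy = u - cx, v - cy
    radius = math.hypot(dx, dy)
    zenith_angle = radius / pixels_per_degree        # degrees from straight up
    elevation = 90.0 - zenith_angle
    azimuth = math.degrees(math.atan2(dx, -dy)) % 360.0  # clockwise from north
    return azimuth, elevation

# Hypothetical 1500x1500 all-sky frame spanning 180 degrees across the image.
az, el = allsky_pixel_to_az_el(910.0, 620.0, 750.0, 750.0,
                               pixels_per_degree=1500.0 / 180.0)
# az/el become the tilt/pan command for the narrow-field EO/IR assembly.
print(az, el)
```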

Flight node configuration

The compliant flight instruments for Drone Net include two passive sensors, the LWIR and visible cameras, as well as the active LIDAR sensor. Only a single camera is needed, as the flight node can derive depth information using structure from motion.

Figure 3. (a) EO/IR Ground Node Block Diagram, (b) EO/IR + LIDAR Flight Node Block Diagram, and (c) all-sky-camera and microphone ground array interfaced to the EO/IR SoC

To summarize, the flight node includes:

1) EO/IR: GP-GPU System-on-Chip processing, a single narrow field LWIR and visible camera system with built-in IMU, compass, tilt/pan, ADS-B receiver, and DSRC (802.11) for communication with local-area nodes.
2) An interface between the flight SoC and the ground via wireless DSRC, but also a data link to the flight control autopilot for optical navigation.
3) Wireless DSRC to ground for publish/subscribe notification to de-conflict with GA and other suas, and notification of contingencies that require immediate ground safe recovery operations.
4) LIDAR scan processing for obstacle avoidance and proximity navigation in urban environments.

A typical system configuration of Drone Net EO/IR ground nodes is shown in Figure 4, which allows for full localization and orientation of each node. Each node is able to observe the infrared light emitting diode, which will be tripod fixed, such that the azimuth of the tilt/pan EO/IR camera assembly can be determined based upon the known GPS location of the observed node. This can be done on start-up and as needed to calibrate before each node enters an active tilt/pan tracking mode.
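A minimal sketch of the azimuth reference computation implied by this calibration step is shown below: given surveyed (latitude, longitude, height) positions for the observing node and the neighbor node carrying the infrared LED, it returns the geodesic azimuth and elevation of the LED, which correspond to the camera pan/tilt when the LED is centered in the image. The node coordinates are hypothetical values for illustration.

```python
import numpy as np

A = 6378137.0            # WGS-84 semi-major axis, m
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat)**2)   # prime vertical radius
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def enu_vector(obs_llh, tgt_llh):
    """Line-of-sight vector from observer to target in local East-North-Up."""
    lat, lon = np.radians(obs_llh[0]), np.radians(obs_llh[1])
    d = geodetic_to_ecef(*tgt_llh) - geodetic_to_ecef(*obs_llh)
    r = np.array([[-np.sin(lon),                np.cos(lon),               0.0],
                  [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
                  [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)]])
    return r @ d

def azimuth_elevation(obs_llh, tgt_llh):
    e, n, u = enu_vector(obs_llh, tgt_llh)
    az = np.degrees(np.arctan2(e, n)) % 360.0           # clockwise from true north
    el = np.degrees(np.arctan2(u, np.hypot(e, n)))
    return az, el

# Hypothetical surveyed node positions (lat, lon, height) for illustration only.
observer = (34.615, -112.450, 1540.0)
neighbor = (34.618, -112.445, 1542.0)
print(azimuth_elevation(observer, neighbor))
```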

Figure 4. Drone Net Node Azimuth Determination Strategy

4. SOFTWARE ARCHITECTURE

The Drone Net software architecture includes ground and flight system services operating in real-time, running on embedded Linux using POSIX real-time extensions for predictable response. Figure 5 is a Data Flow Diagram (DFD) showing the major services with message-based (publish/subscribe) communication between flight nodes and ground nodes via DSRC flight and ground messaging services, providing an interface between segments and transport of the messages over the Internet to the Cloud for aggregation of data in a public database.

The flight node services segment includes:
1) LIDAR interface and 3D model to represent the proximal world of the suas for urban operations.
2) EO/IR image fusion and aerial object detection for see-and-avoid features and testing.
3) ADS-B receiver interface and conflict prediction based on compliant nearby aerial objects.
4) Flight control interface for experiments with see-and-avoid and automatic de-confliction and contingency safe recovery operations.
5) A rule oriented inference service for decision making to balance obstacle avoidance, see-and-avoid, de-confliction, contingency safe recovery, and normal flight operations.

The ground node services segment includes:
1) Event Log for DSRC relayed flight node events, ground node EO/IR detection and tracking events, and events aggregated in the local ML Server.
2) An Aerial Object Catalog for newly observed aerial objects from flight or ground nodes.
3) EO/IR image fusion, detection, and tracking interface to the EO/IR subsystem.
4) Detection and azimuth, elevation estimation interface to the all-sky camera subsystem that produces tilt/pan pointing commands for EO/IR.
5) Pointing to control tilt/pan and to determine which detected object should be tracked if multiple are active in the same column of air.
6) Spectral analysis of acoustic data from the microphone array with an interface to the Aerial Object Catalog to record acoustic signatures of detected aerial acoustic sources.

The ground system may be composed of any number of EO/IR ground nodes with all-sky and acoustic array subsystems (optional), and each node in turn interfaces to one local area ML Server, which includes:
1) Motion based detection (difference frames with statistical filtering; see the frame-differencing sketch after this list) for the all-sky camera to provide azimuth and elevation localization of aerial objects entering the monitored column of air. The all-sky camera has a hemispherical view with 6 megapixel cameras. Acoustic cues may also assist with azimuth and elevation localization, but the range of detection for both requires further analysis.
2) Tilt and pan based upon all-sky camera azimuth and elevation localization of detected aerial objects, and tracking of objects of interest by maintaining the centroid of the object in the center of the narrow field of view. Based upon the fixed location and tilt/pan of the camera, we believe this can be accomplished with a simple Histogram of Oriented Gradients (HOG) and thresholds with LWIR and visible.
3) Classification of detected and tracked objects as suas, GA, natural, and other, based upon a Convolutional Neural Network (CNN) or other methods of machine learning that can be deployed to the embedded SoC.
4) Uplink of each uniquely detected aerial object to the Local Drone Net ML Server for cataloging and potentially for full identification.
5) See-and-avoid detection and tracking of non-compliant suas and GA using machine learning and HOG tracking.
6) Ground node master event log and catalog with optional raw data collection for verification.
7) Machine learning training, validation, test, and deployment based on master event log and catalog data.
8) Uplink of the master event log and catalog to the Cloud for assessment by other Drone Net geo-locations.
9) Download of other geo-location event logs and catalogs.
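The sketch below is a minimal frame-differencing detector in the spirit of the motion-detect baseline used in this paper; the threshold scale and minimum blob area are assumed tuning parameters (the sensitivity knobs that would be swept to produce ROC curves), and this is not the authors' exact implementation.

```python
import cv2
import numpy as np

def motion_detect(prev_gray, gray, diff_sigma=3.0, min_area=25):
    """Simple frame-differencing detector with erosion filtering.

    A pixel is flagged when its absolute difference from the previous frame
    exceeds diff_sigma standard deviations above the mean difference;
    erosion removes isolated noise before connected components are found.
    """
    diff = cv2.absdiff(gray, prev_gray)
    thresh = float(diff.mean() + diff_sigma * diff.std())
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=1)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background); keep blobs above a minimum pixel area.
    return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
            for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

cap = cv2.VideoCapture(0)  # or a recorded LWIR/visible stream
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = motion_detect(prev, gray)
    prev = gray
```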

Figure 5. Envisioned Real-Time Software Architecture

5. HYPOTHESIS TO TEST

Drone Net is being built to support general research for UTM, ML/MV, and sensor fusion, and to compare active and passive sensing, but overall our group is testing the following hypotheses:

1) A network of passive sensors can effectively monitor, track, catalog, and identify aerial objects in a column of air, including general aviation, suas, and natural objects, with lower cost, better performance, and longer term reliability compared to ADS-B and RADAR alone.
2) Networked ground and flight Drone Net nodes can communicate to de-conflict suas from each other and from general aviation with greater success than ADS-B and RADAR alone.
3) Human review truth and geometric flight navigation data can be used to compare ground sensor network proposals for UTM to optimize overall Class G shared airspace use with minimal conflict.

Based upon prior work and issues with ADS-B alone, the cost of RADAR, and problems with reliable GPS availability in urban UTM environments, we believe passive sensing networks will have an overall advantage over the active methods of ADS-B and RADAR alone.

6. METHOD

The general method of cataloging, tracking, and logging events related to aerial objects makes use of MV/ML detection and tracking. Many methods for detecting and tracking aerial objects from ground fixed nodes (that do tilt/pan) and moving aerial nodes have been investigated [32]. In general, while this is more challenging than detection of a moving object in a fixed field of view, many ML algorithms combined with MV show good to excellent performance [20]. The problem is selecting the best performing MV/ML combination tailored to EO/IR and acoustic sensors, so Drone Net has focused upon a common architecture and methods of performance comparison first. For any given MV/ML algorithm to be tested, the performance is graded based upon:

1) Accuracy of frames captured for detected aerial objects compared to continuous frame capture at 10 Hz to provide a baseline of comparison, in terms of True Positives (TP) and False Positives (FP) for the detection compared to review of all frames.
2) ML detection, classification, and identification images reviewed for accuracy over a range of ML threshold settings (sensitivity), and for ROC, PR, F-measure, and confusion matrix analysis.
3) Automated geometric analysis or human review of the full image capture used to verify detection.

First we describe automated geometric analysis using a navigational truth model with re-simulation of events, covered in more detail in Section 9. Figure 6 shows an example of a coordinated view of the ground nodes from a test suas in (a) and the ground node view of the suas in (b). Based upon the geometry of the test, our simulation can recreate what the ground node should be able to observe using geometric localization.

Geometric analysis with localization

The aforementioned navigation sensors placed on each ground Drone Net node and aerial node provide the necessary information to determine observability of a cooperative suas within the field of view of a given sensor (e.g., EO/IR camera). Knowing the static geodetic coordinates (latitude $L_c$, longitude $\lambda_c$, and height $h_c$) of a given ground node (i.e., camera), and thus the position vector in earth-centered earth-fixed (ECEF) coordinates $\mathbf{r}^{e}_{ec}(L_c, \lambda_c, h_c)$, and the similar instantaneous position of an aerial node $\mathbf{r}^{e}_{eb}(L_b, \lambda_b, h_b)$, the relative position vector $\mathbf{r}^{e}_{cb} = \mathbf{r}^{e}_{eb}(L_b, \lambda_b, h_b) - \mathbf{r}^{e}_{ec}(L_c, \lambda_c, h_c)$ can be described in a locally-level (i.e., tangential) coordinate frame as $\mathbf{r}^{t}_{cb} = C^{t}_{e}(L_c, \lambda_c)\,\mathbf{r}^{e}_{cb}$. Finally, knowing the static orientation of the camera, a normalized pointing vector in the camera coordinate frame becomes $\hat{\mathbf{r}}^{c}_{cb} = C^{c}_{t}(\theta, \phi, \psi)\,\mathbf{r}^{t}_{cb} / \lVert \mathbf{r}^{t}_{cb} \rVert$. This vector can be transformed into horizontal and vertical angles (spherical coordinates) in order to determine if the target (aerial node) is within the horizontal/vertical field of view (FOV) of the camera.

To simulate the drone flight and its projection in the camera, a model was made in MATLAB, where the ADS-B data from the UAS acquired in real time (or recorded offline) was used for 3D visualization of the UAS in the world coordinate frame. Namely, latitude, longitude, and altitude were used for the UAS location, and pitch, roll, and yaw for the UAS orientation, as can be seen in Fig. 7(a).

Figure 6. (a) LWIR and (b) visible image example thumbnails

Figure 7. (a) 3D visualization of the UAS in the world coordinate frame, and (b) projection of the UAS onto the camera projection plane
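A minimal sketch of the final field-of-view test is shown below, operating on target azimuth/elevation angles already derived from the relative position vector above; the sample track values and camera pointing are hypothetical, and the simple angular comparison ignores edge cases near zenith.

```python
def in_fov(target_az_deg, target_el_deg, cam_az_deg, cam_el_deg,
           hfov_deg=32.0, vfov_deg=26.0):
    """True if a target at (azimuth, elevation) falls inside the camera FOV.

    cam_az/cam_el describe the tilt/pan pointing of the narrow-field
    assembly; the azimuth offset is wrapped to (-180, 180] before comparison.
    """
    d_az = (target_az_deg - cam_az_deg + 180.0) % 360.0 - 180.0
    d_el = target_el_deg - cam_el_deg
    return abs(d_az) <= hfov_deg / 2.0 and abs(d_el) <= vfov_deg / 2.0

def observable(track, cam_az, cam_el):
    """Mark each (time, az, el) truth sample as observable or not."""
    return [(t, in_fov(az, el, cam_az, cam_el)) for t, az, el in track]

# Hypothetical truth samples (time, azimuth, elevation) re-projected to the
# ground node from ADS-B or OEM navigation, as described in this section.
track = [(0.0, 210.0, 15.0), (0.5, 214.0, 16.5), (1.0, 231.0, 22.0)]
print(observable(track, cam_az=212.0, cam_el=17.0))
```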

The camera has been considered to be at the origin of the local world coordinate frame. To project the UAS key points onto the camera projection plane, the UAS key points used in part (a) are first projected to the world coordinate frame:

$$\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \begin{bmatrix} R_1 & t_1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_b \\ Y_b \\ Z_b \\ 1 \end{bmatrix} \quad (1)$$

where subscripts $W$ and $b$ represent the coordinates of the key points in the world and UAS body frames, respectively. Denoting roll, pitch, yaw, latitude, longitude, and altitude by $\phi$, $\theta$, $\psi$, Lat, Long, and Alt, respectively, the rotation matrix $R_1$ and translation vector $t_1$ are given by:

$$R_1 = R_X(\phi)\, R_Y(\theta)\, R_Z(\psi) \quad (2)$$

$$t_1 = \begin{bmatrix} (R_E + \mathrm{Alt}) \cos(\mathrm{Lat}) \cos(\mathrm{Long}) \\ (R_E + \mathrm{Alt}) \cos(\mathrm{Lat}) \sin(\mathrm{Long}) \\ \left(\mathrm{Alt} + (1 - e^2) R_E\right) \sin(\mathrm{Lat}) \end{bmatrix} \quad (3)$$

where $e$ is the Earth eccentricity and $R_E$ is the Earth radius of curvature calculated by:

$$R_E = \frac{R_0}{\sqrt{1 - e^2 \sin^2(\mathrm{Lat})}} \quad (4)$$

with $R_0$ being the Earth equatorial radius. In Eq. (3) for vector $t_1$, we need to subtract the ECEF coordinates of the camera from the UAS Earth-Centered Earth-Fixed (ECEF) coordinates to acquire local world frame coordinates, since the ECEF origin is at the Earth center. Then, the world coordinates of the UAS key points are projected to the camera frame via:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R_2 & t_2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \quad (5)$$

Denoting camera pan, tilt, and roll by $\eta$, $\beta$, and $\lambda$, respectively, and considering the z axis of the camera frame to be the optical axis pointing outward, with the positive x axis toward the left and the y axis downward, the rotation matrix $R_2$ and translation vector $t_2$ are given by:

$$R_2 = R_Y(\eta)\, R_X(\beta)\, R_Z(\lambda)\, C^{c}_{w} \quad (6)$$

$$t_2 = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T \quad (7)$$

where the last matrix on the right-hand side of Eq. (6), $C^{c}_{w}$, changes the coordinate frame from world to camera, and the vector $t_2$ is zero because the camera is at the origin of the world frame. Finally, the key points described in the camera frame are projected onto the camera projection plane using the camera intrinsic parameters. Let $x_n$ be the normalized image projection:

$$x_n = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} X_C / Z_C \\ Y_C / Z_C \end{bmatrix} \quad (8)$$

Let $r^2 = x^2 + y^2$ and $k_c = [k_c(1)\; k_c(2)\; k_c(3)\; k_c(4)\; k_c(5)]$ be the vector of camera distortion parameters. The effect of the tangential and radial distortion parameters is calculated as:

$$x_d = \begin{bmatrix} x_d(1) \\ x_d(2) \end{bmatrix} = \left(1 + k_c(1) r^2 + k_c(2) r^4 + k_c(5) r^6\right) x_n + \begin{bmatrix} 2 k_c(3) x y + k_c(4)(r^2 + 2 x^2) \\ k_c(3)(r^2 + 2 y^2) + 2 k_c(4) x y \end{bmatrix} \quad (9)$$

Then, the pixel coordinates of the key points are calculated with the camera intrinsic matrix as:

$$\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} f_c(1) & \alpha f_c(1) & c_c(1) \\ 0 & f_c(2) & c_c(2) \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_d(1) \\ x_d(2) \\ 1 \end{bmatrix} \quad (10)$$

where $c_c(1)$ and $c_c(2)$ are the pixel coordinates of the principal point of the camera projection plane, $\alpha$ is the skew coefficient, and $f_c(1)$ and $f_c(2)$ are the focal lengths in the x and y directions expressed in pixels. The focal length in each direction in pixels is given by:

$$f_c(\mathrm{pixels}) = \frac{f_c(\mathrm{mm})}{p} \quad (11)$$

where $p$ is the pixel pitch in that direction. The FLIR LWIR camera used in this study had a focal length of 19 mm, horizontal and vertical fields of view of 32° and 26°, and an image size of [ ] pixels, respectively.
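The normalized projection, distortion, and intrinsic steps of Eqs. (8)-(10) can be condensed into the sketch below; the focal length in pixels follows Eq. (11) under an assumed 17 micron pixel pitch, and the principal point and zero distortion coefficients are placeholders rather than calibrated values for the FLIR camera.

```python
import numpy as np

def project_points(points_cam, fc, cc, kc, alpha=0.0):
    """Project 3-D points in the camera frame to pixel coordinates.

    Implements the normalized projection, radial/tangential distortion, and
    intrinsic matrix of Eqs. (8)-(10). fc and cc are 2-vectors, kc is the
    5-vector of distortion coefficients, alpha is the skew coefficient.
    """
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x**2 + y**2
    radial = 1.0 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3
    xd = radial * x + 2.0 * kc[2] * x * y + kc[3] * (r2 + 2.0 * x**2)
    yd = radial * y + kc[2] * (r2 + 2.0 * y**2) + 2.0 * kc[3] * x * y
    xp = fc[0] * xd + alpha * fc[0] * yd + cc[0]
    yp = fc[1] * yd + cc[1]
    return np.column_stack([xp, yp])

# Illustrative parameters only: focal length in pixels from Eq. (11) with an
# assumed 17 micron pixel pitch for the 19 mm lens; principal point assumed,
# distortion set to zero.
fc = np.array([19e-3 / 17e-6] * 2)               # ~1118 pixels
cc = np.array([320.0, 256.0])                    # assumed principal point
kc = np.zeros(5)
pts = np.array([[2.0, -1.0, 80.0], [0.5, 0.2, 120.0]])  # UAS key points, m
print(project_points(pts, fc, cc, kc))
```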

The relation between the field of view, focal length, and image size in each direction is given by:

$$\mathrm{FOV} = 2 \arctan\left(\frac{p \cdot \mathrm{img\_sz}}{2 f}\right) \quad (12)$$

with img_sz being the image size in that direction in pixels. The result of applying Eqs. (1) to (12) can be seen in part (b) of Fig. 7, where a convex hull is used to connect the projected key points together.

The simulation to recreate the flight path and corresponding observability of an suas is based upon time coordination of sensors in each segment as follows:

1) Flight sensors: a second suas is observable based on cooperative suas navigation shared data or ground-based observation of a non-cooperative suas.
2) ADS-B flight-to-ground: the target is observable based on camera location, pointing, and field of view for the ADS-B location at the corresponding time.
3) suas navigation: improved knowledge of suas location with better temporal and spatial resolution compared to ADS-B.
4) Ground sensor: the target is observable based on camera system location, orientation, and field of view over time (fixed or tilt/pan), to determine when a cooperative suas should be observable based on ADS-B or suas shared navigation.

While NASA UTM is pursuing goals to which Drone Net aligns well, we are not aware of a similar comprehensive research effort with an open design to explore suas detection, tracking, and identification with resilience quite like Drone Net. Industry efforts are in progress to build similar system solutions, but our goal is to provide an open reference and a high quality database from testing, and to share our results and methods.

7. RELATED RESEARCH

The Drone Net architecture was inspired by previous work on smart camera systems [14, 15] and a previous experiment to assess the viability of using a software defined multi-spectral camera to catalog aerial objects [29]. Results from this prior work, along with NASA UTM operations concepts for rural and urban suas [23] as well as a wide variety of industry drone privacy protection and security instrument systems, inspired our group to create the Drone Net open architecture and design. Prior work on acoustic detection of GA and drones [25, 27] as well as products such as acoustic cameras [36] inspired us to consider how acoustic sensing could be combined with EO/IR in networks. Analysis completed by Sandia Labs [28] convinced us that a focus on passive sensors, and EO/IR in particular, is a promising approach for the detection of suas with small cross section, which makes RADAR less effective, and lowers initial cost and cost of operation over long periods of time. Related RADAR research [24] has shown that RADAR requires costly X-band and S-band systems in order to obtain small cross section shape and track information critical to high performance detection (in terms of ROC) and to support the classification and identification goals we are pursuing. Overall, RADAR can detect suas, but based on prior work, we believe that comparison of Drone Net EO/IR to RADAR is valuable research and provides significant cost and operational ease-of-use advantages that can be demonstrated with the analysis methods we have presented in this paper. We have consulted many excellent sources for methods of image, sensor, and information fusion, including LWIR with visible image fusion at the pixel and feature level [16], and plan to pursue pixel level fusion within our EO/IR devices based on multi-spectral stimulation and calibration due to the well documented challenges known for LWIR + visible coarse feature level fusion [17].
Finally, we intend to leverage existing machine learning methods with a focus on open source software such as TensorFlow [22], but also want to open up our architecture for simple replication and use by collaborators to explore a wide range of Machine Learning (ML) and Machine Vision (MV) algorithms used in flight and on the ground. Detection and tracking with moving cameras (flight) and gimbaled or tilt/pan fixed cameras (flight and ground) has well known challenges that require more advanced algorithms than the motion detect baseline we have used in this paper [33]. Overall we recognize that Drone Net is a complex and involved project, so our main goals are to produce an open architecture and reference designs as well as a high quality public database of drone images, acoustics, and detection, classification, and identification training and validation sets for broad use.

8. LOCALIZATION DRONE DETECTION ANALYSIS AND TRUTH MODEL

Using ADS-B data, OEM navigation, or our own high fidelity navigation system, we track the flights of our test suas and then reconstruct the trajectory in a MATLAB simulation. If the projected UAS falls within the image boundaries, it is counted as seen in the MATLAB model. The re-simulation of the flight can in fact be used as a method to compare what any ground node should be able to observe in terms of the suas of interest, and to generate a synthetic view from that ground node as the Figure 7 example shows. Figure 8 shows a 3D reconstruction of the flight in ground node relative coordinates that can be correlated to the ground track shown in Figure 9. The ground track can likewise be ground node relative or in absolute geodetic latitude and longitude. The geodetic ground track in Figure 9 was collected by the suas OEM navigation system. Overall, we have three sources of navigation data including: ADS-B (coarse, accurate and precise locations, but relatively low precision time), OEM

navigation (good, but accuracy unknown), and our own HF navigation, which is best (but under development). ADS-B has limitations including bandwidth, a sample rate of 2 Hz or less, and limited digits of precision for geodetic state (Earth fixed, compared to relative). As such, we plan to build the HF navigation as a supplement to ADS-B for our experiments.

An Automatic Dependent Surveillance-Broadcast (ADS-B) out system transmits GPS derived position and speed with additional information (such as identification, callsign, and timestamps) at a nominal 1 Hz update rate with a range of approximately 100 miles. A variety of message formats are available; however, the traffic report message is the most germane to the localization task, an example of which is shown in Fig. 10.

Figure 8. 3D visualization of one portion of the UAS trajectory that can be seen in the simulation model

The message itself does not provide an explicit timestamp; however, based on an assumed 1 Hz update and the time since last contact/communication, a crude time epoch of origination can be deduced (±1 sec). Such imprecise timing is a challenge when considering the use of this sensor as a ground truth source for a 30 frame per second sensor; however, it is certainly suitable for performing data association based on an array of detections corresponding to a track. In order to provide ground truth for a compliant UAS, a high-precision, high-bandwidth position, velocity, and attitude (PVA) solution will be obtained from the integration of GPS, inertial sensors, and a barometric altimeter. In the event that it is infeasible to instrument the compliant UAS, an alternative approach will be to access the OEM GPS data from the telemetry stream.

Figure 9. (a) Top view of one UAS trajectory generated by the MATLAB model, and (b) top view of the UAS trajectory generated by the DJI Mavic software (track segment shown in (a))

9. DRONE NET TRUTH MODELS

Truth models for Drone Net have varying degrees of reliability and are based either upon geometric analysis and re-simulation of compliant/cooperative suas navigational data, or upon human review. The geometric truth models are based upon one of three navigational sources:

1) ADS-B transmit/receive data, which is accurate in terms of localization, but has a very low sample rate, and time is based upon receipt rather than the sampled state. This is sufficient for de-confliction of suas with GA, but may not be sufficient for suas-to-suas de-confliction, and is of limited value to our truth analysis based on the low sample rate and potential error in time correlation.
2) OEM navigational log data, which is of unknown accuracy and suas specific, but generally at a higher sample rate than ADS-B and with clear low-latency time stamping. We have used the OEM navigational data as our truth for this paper.
3) High Fidelity (HF) navigational log data, a planned enhancement to include a low-drift and high precision inertial measurement unit and GPS receiver, with the goal of outperforming the OEM and ADS-B in terms of localization accuracy over time.
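To put the ADS-B sample-rate and timing limits above in perspective, the back-of-the-envelope sketch below combines the distance flown between reports with the deduced-epoch timing uncertainty; the ~15 m/s ground speed is an assumed value for a small quadcopter, not a measurement from our flights.

```python
def adsb_position_ambiguity(ground_speed_mps, update_rate_hz=1.0,
                            timing_uncertainty_s=1.0):
    """Worst-case along-track position ambiguity when ADS-B reports are used
    as truth for a frame-accurate sensor: distance flown between reports plus
    distance covered within the deduced-epoch timing uncertainty."""
    between_reports = ground_speed_mps / update_rate_hz
    from_timing = ground_speed_mps * timing_uncertainty_s
    return between_reports + from_timing

# Assumed ~15 m/s cruise speed for a small quadcopter (illustrative only).
print(adsb_position_ambiguity(15.0))   # roughly 30 m of along-track ambiguity
```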

ADS-B is the only navigation truth method that supplies identification as well as localization data. For our testing, identification could be added to the DSRC messaging to supplement ADS-B. In general ADS-B is trusted, but it could be inaccurate or spoofed in real world scenarios.

Figure 10. ADS-B Log by ICAO Identification with Geodetic Location and Implied Relative Samples over Time

The other major method for truth analysis used in this paper, and planned for future Drone Net experiments, is human review. Human review leverages human visual intelligence and, with well designed review tools, can also take advantage of human behavioral intelligence (e.g., a flight trajectory, shape, and behavior common to an insect rather than an suas or GA). This was demonstrated in prior Drone Net related testing [29]. The human review does not rely upon ADS-B identification and therefore could potentially identify ICAO spoofing or inaccuracies and provide classification that includes non-compliant, non-cooperative suas and natural aerial objects. Overall, the post flight analysis for Drone Net experiments is shown in Figure 11.

Figure 11. Drone Net Truth Modeling and Analysis Methods
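The post-flight analysis of Figure 11 ultimately reduces to tabulating detector output against one of these truth sources. A minimal per-frame bookkeeping sketch is shown below; the boolean labels are assumed to come from human review or the geometric re-simulation, and sweeping the detector sensitivity and re-running it yields the points of an ROC curve such as Figure 12.

```python
def score_frames(frames):
    """frames: iterable of (detected, truth_observable) boolean pairs, one
    per captured frame, where truth comes from human review or the geometric
    re-simulation described above."""
    tp = sum(d and t for d, t in frames)
    fp = sum(d and not t for d, t in frames)
    fn = sum((not d) and t for d, t in frames)
    tn = sum((not d) and (not t) for d, t in frames)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "precision": precision, "recall": recall, "F1": f1}

# Example with illustrative per-frame labels (detector output vs. truth).
print(score_frames([(True, True), (True, False), (False, True), (False, False)]))
```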

In prior Drone Net related work [29], we have compared salient object detection algorithms to simple motion detection with statistical filtering and adjustable thresholds for statistical change to produce a Receiver Operator Characteristic. This requires repetition of experiments to provide a sensitivity analysis of the False Positive (FP) rate as a function of True Positive (TP) accuracy. Related measures including Precision Recall (PR) and F-measure present the overall TP, FP, True Negative (TN), and False Negative (FN) data for a test or series of tests with variation of the parameters used in detection, classification, and identification.

Figure 12 shows a basic human review analysis of two tests completed in a feasibility study on October 29. While the Motion Detect (MD) algorithm is a basic detection scheme that does not yet incorporate machine learning, it shows that our method of performance analysis will provide a valid method of comparing detection methods. Our goals for this paper were to establish the feasibility of the methods of analysis and the Drone Net architecture, and we plan to repeat this experiment many times in the future to perfect and fully characterize it and to automate the production of ROC, PR, F-measure, and confusion matrix analysis for our open design solution as well as competitive solutions from industry [37]. We believe that with the demonstration of feasibility provided here we can extend this work to compare many different types of passive and active sensors, sensor networks, and sensor fusion systems. Likewise, we can hold the sensor configuration constant and test a variety of MV/ML algorithms on flight and ground EO/IR nodes to compare the detection, classification, and identification performance of each. Finally, we can also compare our concepts for HF navigation and improved ADS-B, ADS-B++, with the architecture and experimental methods we have prototyped and evaluated for feasibility. We expect HF navigation combined with our geometric analysis and MATLAB re-simulation of events and HFOV, VFOV to provide our best truth model, but we will analyze all truth models to compare them. Specific scenarios can lead to errors from any of the three navigational truth models (limited sampling rates, spoofing, multiple objects in a field of view at the same time, instrument errors, and signal integrity issues, to name a few). Therefore, we believe it is of high value to collect and process data from multiple sources. Human review has high value when comparing results to MV/ML, as humans have irrefutably high visual intelligence [31], but with multi-spectral images (sensing not natural for humans), there are also misconceptions and sources of error in human review. We intend to address this through gamification and automation of human review in order to promote a statistically significant outcome and to build a high quality database of human reviewed images and acoustic samples.

Figure 12. Detection performance (ROC, true positive rate vs. false positive rate) for Motion Based Differencing with Erosion Filter (MD), compared to a random (RAND) baseline [29]

10. FUTURE PLANNED ACOUSTIC WORK

The idea of combining acoustic data with EO/IR came out of our investigation of acoustic cameras, and based upon related research we have decided to explore acoustic characterization to determine if an acoustic camera will be of value to Drone Net.
Here we present our preliminary results with a focus on spectral analysis used to classify suas acoustic sources and to provide a secondary method for azimuth and elevation angle location of sources to guide our narrow field EO/IR pointing. On one hand, acoustic sensors are passive, have non-line-of-sight capability, and are small, low-power, and inexpensive. Acoustic microphones configured in an array are capable of classifying/identifying and estimating the azimuth and elevation angles to detected targets of interest at distances of several kilometers [1]. Many UAS are relatively small in size, so they can be difficult to detect optically. In addition, they can maneuver at low altitudes and may not have considerable metal parts or a large RADAR cross section, so they can elude RADAR detection. UAS powered by small electric motors might not have sufficient thermal emittance to be detected during daytime against sun glare or hot background objects [2]. On the other hand, there are several limitations to what acoustic sensors can do for drone detection and classification:

(1) If the UAS is in the far field, only the Direction Of Arrival (DOA) can be detected and the UAS cannot be localized.
(2) UAS classification based on acoustic signature requires powerful recording microphones and significant signal processing to separate the UAS acoustic wave from the background noise, and it needs a significant database of acoustic signatures for training any classifier, which is not publicly available.
(3) Wind and temperature gradients change the direction of travel of acoustic waves; they can be bent upward or scattered and not be captured by the microphone.
(4) Reflection of acoustic waves from the terrain might interfere destructively with the direct acoustic waves.
(5) Absorption of the higher frequencies in the acoustic waves by the atmosphere, and interference from background noise, make it difficult to extract the original acoustic wave and reduce the effective distance over which the acoustic event can be recorded, as can be seen in Fig. 13.
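As a concrete illustration of the spectral analysis used in Figs. 13 and 14, the sketch below computes a log-power spectrogram from a recording and makes a crude blade-passing-frequency estimate; the file name, window sizes, and 1 kHz search band are assumptions for this sketch, not the processing used to produce the figures.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical file name; any mono recording of the suas will do.
fs, audio = wavfile.read("mavic_hover.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                  # fold stereo to mono

f, t, sxx = spectrogram(audio, fs=fs, nperseg=4096, noverlap=2048)
sxx_db = 10.0 * np.log10(sxx + 1e-12)           # log power, as in Figs. 13-14

# Crude blade-passing-frequency estimate: strongest bin below 1 kHz, averaged
# over time. For a two-blade rotor, BPF = 2 x shaft rotation rate.
band = f < 1000.0
bpf_hz = f[band][np.argmax(sxx_db[band].mean(axis=1))]
shaft_rpm = bpf_hz / 2.0 * 60.0
print(bpf_hz, shaft_rpm)
```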

The spectrograms for our UAS flying in real world conditions, with background noise and wind, and for a typical UAS moving in an anechoic chamber are shown in parts (a) and (b) of Fig. 13, respectively. A spectrogram is a logarithmic plot of the squared magnitude of the short-time Fourier transform of a signal, which describes the frequencies present in the signal at different time points [38]. While in (b) the change in frequency due to the drone's direction of motion with respect to the microphone (i.e., the Doppler effect) can be seen in the vertical harmonic lines, it is not visible in our UAS acoustic signal spectrogram due to the interference from the background noise and wind. Also, as the frequency increases in (a), the signal is less observable due to atmospheric absorption.

In Fig. 14(a), the rotors were spinning at about 7200 RPM (equivalently 120 Hz), so the Blade Passing Frequency (BPF) was 240 Hz (each motor shaft has 2 blades), which can be seen in Fig. 14(b) along with its harmonics.

Figure 13. (a) Spectrogram of the DJI Mavic acoustic wave in outdoor conditions subjected to moderate wind, considerable ambient noise, and flying on one side of the microphone, and (b) spectrogram of a typical UAS acoustic wave in an anechoic chamber, where the Doppler effect is clearly observed

Figure 14. (a) RPM of the UAS motors (front right, front left, rear right, rear left) vs. time, and (b) spectrogram of the DJI Mavic acoustic wave where the BPF harmonics can be clearly seen

11. FUTURE PLANNED ACTIVE SENSING WORK

As shown in Figure 1, our long term goal is to compare our passive sensing methods to active methods. The best advantage of RADAR is detection range, which can be over kilometers, although this is not well established for suas with small cross section. Furthermore, RADAR provides track, and some RADAR systems can provide basic shape information. We want to use RADAR as a comparison in order to test our hypothesis that passive sensing, with a network of sensors that can form a ring and/or be laid out in a mesh, can perform as well as RADAR for detection, but perhaps far better for classification and identification of non-cooperative and non-compliant suas and other aerial objects (e.g., birds). ADS-B is expected to work well for GA, but for suas operating at higher density, in urban and rural environments with high ground clutter, we would like to compare performance. For ground nodes, LIDAR is costly and has limited range.

For flight Drone Net nodes, we believe LIDAR has much more value compared to the ground, especially for urban scenarios that can be GPS limited or denied, to support missions such as parcel delivery. Our future plans include combining active sensors either to test our hypothesis (the case with ground RADAR) or to enhance flight node operations.

12. SUMMARY

We have shown feasibility for the use of two truth models to assess the performance of sensor networks that employ MV/ML for detection, tracking, classification, and identification of both compliant and non-compliant suas. We have also characterized the acoustics of our test suas (DJI Mavic) to establish that acoustic signatures can provide additional information to establish suas type and could enhance our goals and objectives. Finally, we have provided a reference for the Drone Net hardware, firmware, and software architecture so that we can proceed with development of reference designs for flight and ground nodes, with an open invitation for other researchers to contribute, improve, and collaborate on this approach to improving suas safe operations in Class G shared airspace consistent with UTM operational concepts and goals.

ACKNOWLEDGEMENTS

The authors thank the Embry-Riddle Aeronautical University Accelerate Research Initiative program for funding to build and pursue the feasibility experiments presented in this paper.

REFERENCES

[1] Benyamin, Minas, and Geoffrey H. Goldman. Acoustic Detection and Tracking of a Class I UAS with a Small Tetrahedral Microphone Array. No. ARL-TR. Army Research Lab, Adelphi, MD.
[2] Pham, Tien, and Leng Sim. Acoustic Data Collection of Tactical Unmanned Air Vehicles (TUAVs). No. ARL-TR. Army Research Lab, Adelphi, MD.
[3] Zelnio, Anne M., Ellen E. Case, and Brian D. Rigling. "A low-cost acoustic array for detecting and tracking small RC aircraft." Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop (DSP/SPE), IEEE 13th. IEEE.
[4] Massey, Kevin, and Richard Gaeta. "Noise measurements of tactical UAVs." 16th AIAA/CEAS Aeroacoustics Conference, AIAA 3911 (2010).
[5] de Bree, Hans-Elias, and Guido de Croon. "Acoustic Vector Sensors on Small Unmanned Air Vehicles." The SMi Unmanned Aircraft Systems, UK (2011).
[6] Sadasivan, S., M. Gurubasavaraj, and S. Ravi Sekar. "Acoustic signature of an unmanned air vehicle exploitation for aircraft localisation and parameter estimation." Defence Science Journal 51.3 (2001): 279.
[7] Jeon, Sungho, et al. "Empirical Study of Drone Sound Detection in Real-Life Environment with Deep Neural Networks." arXiv preprint (2017).
[8] Kloet, N., S. Watkins, and R. Clothier. "Acoustic signature measurement of small multi-rotor unmanned aircraft systems." International Journal of Micro Air Vehicles 9.1 (2017).
[9] Intaratep, Nanyaporn, et al. "Experimental study of quadcopter acoustics and performance at static thrust conditions." 22nd AIAA/CEAS Aeroacoustics Conference.
[10] Heilmann, Gunnar, Dirk Doebler, and Magdalena Boeck. "Exploring the limitations and expectations of sound source localization and visualization techniques." INTER-NOISE and NOISE-CON Congress and Conference Proceedings, Melbourne, Australia.
[11] Liu, Hao, et al. "Drone Detection Based on an Audio-Assisted Camera Array." Multimedia Big Data (BigMM), 2017 IEEE Third International Conference on. IEEE.
[12] Bougaiov, N., and Yu Danik. "Hough Transform for UAV's Acoustic Signals Detection."
[13] Shi, Weiqun, et al. "Detecting, Tracking, and Identifying Airborne Threats with Netted Sensor Fence." Sensor Fusion - Foundation and Applications. InTech.
[14] S. Siewert, V. Angoth, R. Krishnamurthy, K. Mani, K. Mock, S. B. Singh, S. Srivistava, C. Wagner, R.
Claus, M. Demi Vis, "Software Defined Multi-Spectral Imaging for Arctic Sensor Networks," SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXII, Baltimore, Maryland, April.
[15] S. Siewert, J. Shihadeh, Randall Myers, Jay Khandhar, Vitaly Ivanov, "Low Cost, High Performance and Efficiency Computational Photometer Design," SPIE Sensing Technology and Applications, SPIE Proceedings, Volume 9121, Baltimore, Maryland, May.
[16] Piella, G. (2003). A general framework for multiresolution image fusion: from pixels to regions. Information Fusion, 4(4).
[17] Blum, R. S., & Liu, Z. (Eds.). (2005). Multi-sensor image fusion and its applications. CRC Press.
[18] Sharma, G., Jurie, F., & Schmid, C. (2012, June). Discriminative spatial saliency for image classification.

In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on (pp. ). IEEE.
[19] Richards, Mark A., James A. Scheer, and William A. Holm. Principles of Modern Radar. SciTech Publishing.
[20] Panagiotakis, Costas, et al. "Segmentation and sampling of moving object trajectories based on representativeness." IEEE Transactions on Knowledge and Data Engineering 24.7 (2012).
[21] flightradar24.com, ADS-B, primary/secondary RADAR flight localization and aggregation services.
[22] Abadi, Martín, et al. "TensorFlow: Large-scale machine learning on heterogeneous distributed systems." arXiv preprint (2016).
[23] Kopardekar, Parimal, et al. "Unmanned aircraft system traffic management (UTM) concept of operations." AIAA Aviation Forum.
[24] Mohajerin, Nima, et al. "Feature extraction and radar track classification for detecting UAVs in civilian airspace." Radar Conference, 2014 IEEE. IEEE.
[25] de Bree, Hans-Elias, and Guido de Croon. "Acoustic Vector Sensors on Small Unmanned Air Vehicles." The SMi Unmanned Aircraft Systems, UK (2011).
[26] Case, Ellen E., Anne M. Zelnio, and Brian D. Rigling. "Low-cost acoustic array for small UAV detection and tracking." Aerospace and Electronics Conference (NAECON), IEEE National. IEEE.
[27] Zelnio, Anne M., Ellen E. Case, and Brian D. Rigling. "A low-cost acoustic array for detecting and tracking small RC aircraft." Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop (DSP/SPE), IEEE 13th. IEEE.
[28] Birch, Gabriel Carisle, John Clark Griffin, and Matthew Kelly Erdman. UAS Detection Classification and Neutralization: Market Survey. No. SAND. Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States).
[29] S. Siewert, M. Vis, R. Claus, R. Krishnamurthy, S. B. Singh, A. K. Singh, S. Gunasekaran, "Image and Information Fusion Experiments with a Software-Defined Multi-Spectral Imaging System for Aviation and Marine Sensor Networks," AIAA SciTech 2017, Grapevine, Texas, January 2017.
[30] Geiger, Andreas, et al. "Vision meets robotics: The KITTI dataset." The International Journal of Robotics Research (2013).
[31] Deng, Jia, et al. "ImageNet: A large-scale hierarchical image database." Computer Vision and Pattern Recognition (CVPR), IEEE Conference on. IEEE.
[32] Aker, Cemal, and Sinan Kalkan. "Using Deep Networks for Drone Detection." arXiv preprint (2017).
[33] Zhu, Yukun, et al. "segDeepM: Exploiting segmentation and context in deep neural networks for object detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[34] Amiri-Kordestani, Mahdi, and Hadj Bourdoucen. "A Survey On Embedded Open Source System Software For The Internet Of Things." (2017).
[35] Pulli, Kari, et al. "Real-time computer vision with OpenCV." Communications of the ACM 55.6 (2012).
[36] Hansen, R. K., and P. A. Andersen. "A 3D underwater acoustic camera - properties and applications." Acoustical Imaging. Springer US.
[37] Hearing, Brian, and John Franklin. "Drone detection and classification methods and apparatus." U.S. Patent No. 9,697, Jul.

BIOGRAPHIES

Sam Siewert has a B.S. in Aerospace and Mechanical Engineering from the University of Notre Dame and M.S. and Ph.D. degrees in Computer Science from the University of Colorado Boulder. He worked in the computer engineering industry for twenty-four years before starting an academic career. Half of his time was spent on NASA space exploration programs and the other half on commercial product development for high performance networking and storage systems.
In 2014 Dr. Siewert joined Embry-Riddle Aeronautical University as full-time faculty and retains an adjunct professor role at the University of Colorado Boulder.

Mehran Andalibi received his Ph.D. in Mechanical Engineering from Oklahoma State University. He is currently an Assistant Professor in the Department of Mechanical Engineering at Embry-Riddle Aeronautical University, AZ. His research interests are the application of computer vision in intelligent systems, including vision-based navigation of autonomous ground robots; detection, tracking, and classification of unmanned aerial vehicles; and the development of vision-based medical devices.

Stephen Bruder, Ph.D., is a subject matter expert in GPS-denied navigation with 20+ years of experience and more than 50 peer-reviewed publications. He is currently an Associate Professor at Embry-Riddle Aeronautical University, a member of the ICARUS research group, and a consultant in the area of aided navigation systems. Dr. Bruder has served as principal investigator on aided navigation projects for MDA, AFRL, NASA, SNL, USSOCOM, and others, including the development of GPS-denied navigation algorithms for unmanned ground vehicles (SNL) and a satellite-based auto-calibrating inertial measurement system (MDA).

Iacopo Gentilini received his M.S. in Mechanical Engineering in 2010 and a Ph.D. in Mechanical Engineering from Carnegie Mellon University. He is currently an Associate Professor of Aerospace and Mechanical Engineering at Embry-Riddle Aeronautical University. His interests include mixed-integer non-linear programming, robotic path planning in redundant configuration spaces, and cycle-time and energy-consumption minimization for redundant industrial robotic systems.

Jonathan Buchholz is currently an undergraduate student in Mechanical Engineering at Embry-Riddle Aeronautical University. His interests include aerial robotics, machine learning, and autonomous systems.
