Reality Modeling Drone Capture Guide


Discover the best practices for photo acquisition, leveraging drones to create 3D reality models with ContextCapture, Bentley's reality modeling software. Learn the limits of capturing vertical (nadir) images, techniques for obliques, and how to add robustness to your photogrammetric drone projects.

Materials

Drones

There are two main categories of drones: fixed wing unmanned aerial vehicles (UAVs) and multirotor drones.

Fixed Wing UAVs (e.g. Topcon Sirius Pro)

These drones are used for medium- or large-scale cartography and terrain modeling. Fixed wing UAVs have a high level of autonomy and can quickly cover large distances. However, these drones usually cannot capture oblique imagery, which lowers the quality of the reality data output on complex scenes. Fixed wing UAVs are optimal for terrain modeling, 2.5D models, and orthophoto production. These capabilities, along with real 3D reality modeling, are at the core of ContextCapture technology.

Multirotor Drones (e.g. Topcon Falcon 8, DJI Phantom 4 Pro)

Advanced 3D modeling projects require these drones because they can capture oblique photos. Their autonomy is not as good as that of a fixed wing UAV, but multirotor drones can capture the photos required for complex sites.

The quality of the camera mounted on the drone greatly impacts photogrammetric performance; good 3D modeling requires good photographs. Therefore, the choice of UAV will also be influenced by the payload it can hold. For professional use, we recommend a drone like the DJI Phantom 4 Pro, a similar model, or any drone capable of holding a good camera. Avoid drones that do not meet these requirements, such as the DJI Mavic, Spark, Phantom 1, Phantom 2, or Phantom 3.

Cameras

Hardware

The quality of the photogrammetry processing greatly depends on the quality of the camera mounted on the drone. Entry-price professional drones, such as the DJI Phantom 4, usually feature small cameras on a gimbal. However, these drones' payload is limited and does not allow for holding better cameras.

For greater photo quality, drones like the DJI Matrice and Topcon Falcon 8 can hold bigger cameras, such as the Sony Alpha 6000 (hybrid) or Sony A7R (DSLR). For the best results, consider a medium format camera like a Phase One.

Good photogrammetry also requires good optics. Avoid long focal lengths, as photogrammetry tends to be unstable with the narrow boresight angles between consecutive photos. We recommend prime lenses (fixed focal length) in the 15- to 25-millimeter range. Note that it is better to capture stills rather than video for photogrammetry.

Calibration

Beyond hardware, we recommend that you input accurate camera calibration values in ContextCapture. Even though camera calibration is part of aerotriangulation, we recommend pre-calibrating the camera on an easy project. Once the camera is robustly calibrated, the parameters can be reused on complex projects, where aerotriangulation based on a pre-calibrated camera is more efficient than starting without any initial parameters. Below are examples of aerotriangulation on a single dataset with (Figure 1) and without (Figure 2) initial calibration parameters.

Figure 1: Aerotriangulation 3D view on a calibrated camera
Figure 2: Aerotriangulation 3D view on a non-calibrated camera

The spherical effects seen above are more pronounced with nadir-only acquisitions. Prior camera calibration is required for vertical-view patterns and highly recommended for any type of acquisition. This one-time calibration takes approximately 10 minutes and can be reused for future complex projects. Here is the step-by-step procedure:

1. Choose a small, stationary asset that you can walk around and shoot from any angle, to run a robust calibration. The asset should be highly textured, such as a statue, to be perfectly suited for camera calibration (Figure 3).
2. Take the camera that you will use for your real projects and set it up in real conditions, with the same image format and the same focal length. We are only speaking about the camera here.

If you normally use your camera mounted under a drone, it is not mandatory to run a drone flight for the calibration. You can simply detach the camera from the drone, run the calibration, and reuse the camera parameters once it is back under the drone.
3. Walk 360 degrees around the object/statue and shoot around 30 images, equally spaced from each other (Figure 4).

Figure 3: Scene suited for camera calibration
Figure 4: 3D view of the camera calibration stage

4. Start ContextCapture, create a new project, and submit an aerotriangulation on the photos that you just captured, with the default settings. Once it completes, your camera is calibrated.
5. Save these camera parameters by going to the Photos tab and adding your calibrated camera to the camera database (Figure 5).

Figure 5: Add a custom camera to the camera database

6. Once added, the calibration values will be applied automatically every time you add new pictures from this camera, and you can use them for further aerotriangulations.
7. For further aerotriangulations starting from already calibrated values, set the radial distortion setting to "Keep" and make sure that your accurate calibration is properly used (Figure 6).

Figure 6: Force usage of robust camera parameters
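For background, the "radial distortion" that calibration estimates is commonly described by a Brown-style polynomial. The sketch below is a generic illustration of that model only, not ContextCapture's internal parameterization; the function name and coefficients (k1, k2, k3) are ours.

```python
def apply_radial_distortion(x, y, k1, k2, k3):
    """Map ideal (undistorted) normalized image coordinates to distorted
    ones with a Brown-style radial polynomial. k1..k3 are the radial
    coefficients a calibration estimates; this is a generic textbook
    model, not ContextCapture's internal parameterization."""
    r2 = x * x + y * y  # squared radius from the principal point
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

# Barrel distortion (negative k1) pulls an off-center point inward:
print(apply_radial_distortion(0.8, 0.6, k1=-0.1, k2=0.0, k3=0.0))  # (0.72, 0.54)
```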

Battery

A drone project can require capturing thousands of photos. Therefore, it is crucial to estimate the number of batteries needed for a project, as missing photographs will affect the quality of the final 3D model.

Ground Control Points

If accurate georeferencing is important for your photogrammetric project, ground control points (GCPs) are required. Depending on your image resolution, we recommend spacing ground control points about 20,000 pixels from each other.

Example: for a drone acquisition at 2 centimeters/pixel, 0.02 x 20,000 = 400 meters, so the recommended spacing between neighboring GCPs is around 400 meters (see the sketch below).

Ground control points are targets that are visible from the sky (Figures 7 and 8) and measured with survey equipment on the ground (e.g. a total station).
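As a quick sanity check, the 20,000-pixel rule of thumb is easy to script; a minimal sketch (the function name is ours):

```python
def gcp_spacing_m(gsd_m_per_px: float, rule_px: float = 20_000.0) -> float:
    """Recommended distance between neighboring GCPs: ground sample
    distance (meters/pixel) times the ~20,000-pixel rule of thumb."""
    return gsd_m_per_px * rule_px

# The guide's example: a 2 cm/pixel acquisition -> 400 m between GCPs
print(gcp_spacing_m(0.02))  # 400.0
```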

Figure 7: Chessboard ground control point
Figure 8: Aero propeller ground control point

Beyond georeferencing, ground control points also help ensure aerotriangulation robustness.

GPS and IMU Sensors

We also recommend embedding global positioning system (GPS) and inertial measurement unit (IMU) sensors on your drone. Initial GPS information helps with geo-registration and scaling. Combined with an IMU, you get full pose metadata that facilitates:

1. Aerotriangulation: an initial guess is available, so computation can be lighter and faster.
2. Ground control point registration: a reliable imagery selection and pointing suggestion is set up automatically.

However, not all sensors are equal. Depending on their quality, they help the computation in different ways. For GPS sensors, there are two groups of options: basic, and real-time kinematic (RTK) or post-processed kinematic (PPK). For IMU sensors, the options are basic or high-end. Below is a synthesis of the influence of GPS/IMU sensors, depending on their type (accuracy is given without GCPs, then with GCPs).

Configuration: No GPS, no IMU, optional GCPs
Geo-registration accuracy: not georeferenced, arbitrary scale; around 1 centimeter with GCPs
Benefits: none
Comments: if geo-registration is important, ground control points must be used.

Configuration: Basic GPS, no IMU, optional GCPs
Geo-registration accuracy: approximately 1-2 meters; around 1 centimeter with GCPs
Benefits: rough georeferencing; slight help at the aerotriangulation stage
Comments: recommended for small acquisitions where knowing the location of the site is important but scale and geo-registration accuracy are not a concern.

Configuration: Basic GPS, basic IMU, optional GCPs
Geo-registration accuracy: approximately 1-2 meters; around 1 centimeter with GCPs
Benefits: rough georeferencing; slight help at the aerotriangulation stage; important help for ground control point registration
Comments: recommended for small acquisitions where knowing the location of the site is important but scale and geo-registration accuracy are not a concern. The combination of GPS/IMU sensors helps with ground control point registration, even though computation won't go faster.

Configuration: RTK/PPK GPS, basic IMU, optional GCPs
Geo-registration accuracy: around 5 centimeters; around 1 centimeter with GCPs
Benefits: high-accuracy geo-registration; enables "Adjust on positions" mode; important help for ground control point registration
Comments: recommended for any acquisition where absolute accuracy is expected, especially if setting GCPs is challenging. The combination of GPS/IMU sensors helps with ground control point registration, even though computation won't go faster.

Configuration: RTK/PPK GPS, high-end IMU, optional GCPs
Geo-registration accuracy: around 5 centimeters; around 1 centimeter with GCPs
Benefits: high-accuracy geo-registration; enables "Adjust on positions" mode; enables adjustment on initial poses; important help for ground control point registration
Comments: recommended for any acquisition where absolute accuracy is expected, especially if setting GCPs is challenging. The combination of GPS/IMU sensors helps with ground control point registration, and computation will go faster.

Flight Planning

Data capture for photogrammetry requires a well-defined flight plan to collect good photos, with various camera angles, for 3D model processing. We do not recommend manual flights for photogrammetry projects. The flight planner must be able to execute complex flight plans, such as:

Orbital flights around points of interest.
Linear flights along a given axis.
Camera angle adjustments along a given axis.

The flight plan must be prepared in advance, considering the flight speed (not too fast) and the flight height (not too high) to avoid blurry images.
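To make "not too fast, not too high" concrete: under a simple pinhole model, ground sample distance (GSD) grows with flight height, and the blur smeared during one exposure is flight speed times exposure time, measured in pixels of GSD. A minimal sketch, with illustrative sensor values roughly in the range of a small drone camera (all names and numbers are ours):

```python
def gsd_m_per_px(height_m, focal_mm, pixel_pitch_um):
    """Ground sample distance of a nadir photo under a pinhole model."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

def motion_blur_px(speed_m_s, exposure_s, gsd):
    """Pixels smeared during one exposure; keep this well below 1."""
    return speed_m_s * exposure_s / gsd

# e.g. 60 m height, 8.8 mm lens, 2.4 um pixels, 5 m/s at 1/1000 s
gsd = gsd_m_per_px(60.0, 8.8, 2.4)
print(round(gsd, 4), round(motion_blur_px(5.0, 0.001, gsd), 2))  # ~0.0164 m, ~0.31 px
```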

Flight Patterns Best Practices

Limits of Vertical (Nadir) Grids

Nadir grids are often used to capture photos of a site. They are a quick and easy way to cover large areas while limiting the total number of acquired photos. For photogrammetry to process nadir grids, sufficient overlap is required: we recommend a 70 percent overlap between photos along a flight line, and a 60 percent overlap between photos from different flight lines (see the spacing sketch at the end of this subsection). However, the results obtained with such patterns are limited, for several reasons:

Poor Photo Resolution on Vertical Elements

With a vertical grid, all the photos look straight down. Resolution is high on horizontal surfaces, but pixels are stretched on vertical parts. This induces inaccuracies on all those elements, leading to a bad reconstruction and even holes in the 3D model.

Similar Successive Points of View

Since all the photos look at the scene from the same angle, the boresight angle difference between them is very small. This similarity creates a large uncertainty when the photos are used to extract 3D information, especially along the z-axis.

Many Masks

With no variation in point of view, areas masked by trees or overhanging structures remain hidden in every photo and cannot be reconstructed.
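The overlap figures above translate directly into a camera trigger distance and a flight-line spacing once the ground footprint of one photo is known; a minimal sketch (the names are ours):

```python
def nadir_grid_spacing(footprint_along_m, footprint_across_m,
                       forward_overlap=0.70, side_overlap=0.60):
    """Distance between successive photos on a flight line and distance
    between flight lines, for the recommended nadir-grid overlaps."""
    trigger_dist_m = footprint_along_m * (1.0 - forward_overlap)
    line_spacing_m = footprint_across_m * (1.0 - side_overlap)
    return trigger_dist_m, line_spacing_m

# e.g. a 60 m (along-track) x 90 m (across-track) photo footprint:
print(nadir_grid_spacing(60.0, 90.0))  # (18.0, 36.0): shoot every 18 m, lines 36 m apart
```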

Comparison of Nadir vs. Nadir + Oblique

Considering the concerns with nadir acquisition, it is important to stress the necessity of oblique captures. They increase both aerotriangulation robustness and the quality of the mesh. Below are comparisons of scenes captured in the two configurations (Figures 9, 10, and 11); each image shows nadir + oblique on the left side and nadir only on the right side.

Figure 9: Nadir + oblique vs. nadir only
Figure 10: Nadir + oblique vs. nadir only

Figure 11: Nadir + oblique vs. nadir only

Oblique Grid

Considering Figures 9, 10, and 11, as well as the inability of fixed wing UAVs to fly complex flight plans, the oblique-grid method can be a good compromise. This method consists of flying the drone in four directions, with a maximum oblique angle of 30 degrees, to create a grid of oblique photos. The setup consists of positioning the camera at an oblique angle (looking forward) and flying the drone back and forth along parallel flight lines on one axis. Then, you repeat the same process along the perpendicular axis. This practice generates obliques looking in four directions, creating a robust acquisition pattern.

Figure 12: Oblique grid (top view)

Note: Within the same flight, successive lines capture obliques looking in opposite directions. To ensure a good overlap for photogrammetry, the two flight lines capturing obliques in the same direction (every second line) should have an overlap of about 70 percent.

Overlapping Orbits

Overlapping orbits are a great technique to capture a complex site in full 3D. The pattern is simple to execute and ensures great robustness in the photogrammetric process. It consists of capturing orbits over the area of interest with the camera pointing towards the center of the orbit at a 45-degree oblique angle. The area is covered with orbits that overlap; we recommend a minimum 50 percent overlap between orbit diameters (Figures 13 and 14). We also recommend that successive photos have a maximum angle difference of 15 degrees, meaning that a complete orbit should be captured with at least 24 photos. More photos can be useful when capturing thin elements, especially on complex sites like plants or substations.

Figure 13: Overlapping orbit (top view)
Figure 14: Overlapping orbit (3D view)

Additional orbits at lower altitudes might be necessary to capture more detail on parts of the site. The orbit diameter and height can be calculated easily (see the sketch below), but it is preferable to use a flight planning application that can generate this pattern automatically, such as Drone Harmony.
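If your flight planner cannot generate the pattern, the geometry is simple enough to script. Below is a minimal waypoint sketch under simplifying assumptions (flat ground, gimbal pitched 45 degrees down toward the orbit center, a local x/y coordinate frame; all names are ours). Note that with a 45-degree oblique angle, the flying height above the target is roughly equal to the orbit radius.

```python
import math

def orbit_waypoints(center_x, center_y, radius_m, target_alt_m,
                    max_step_deg=15.0, oblique_deg=45.0):
    """Waypoints for one capture orbit: at most 15 degrees between
    successive photos (so at least 24 per orbit), camera yawed toward
    the orbit center and pitched down to the oblique angle."""
    n = max(24, math.ceil(360.0 / max_step_deg))
    # Height so the line of sight to the target hits at oblique_deg:
    height_m = target_alt_m + radius_m * math.tan(math.radians(oblique_deg))
    waypoints = []
    for i in range(n):
        a = 2.0 * math.pi * i / n
        x = center_x + radius_m * math.cos(a)
        y = center_y + radius_m * math.sin(a)
        yaw_deg = (math.degrees(a) + 180.0) % 360.0  # face the center
        waypoints.append((x, y, height_m, yaw_deg, -oblique_deg))
    return waypoints

# A 40 m radius orbit -> 24 photos, flown ~40 m above the target
print(len(orbit_waypoints(0.0, 0.0, 40.0, 0.0)))  # 24
```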

Processing a Large Dataset

In the case of a massive drone acquisition, we recommend splitting the global dataset into smaller parts to avoid memory overflows and to ensure robustness. After processing, the blocks are merged back together to create a seamless reconstruction. We recommend not exceeding 10,000 images per block. Here is the method to ensure seamless borders between sub-blocks (a splitting sketch follows below):

1. Run your acquisition, choosing the most suitable flight plan.
2. Capture ground control points (spaced about 20,000 pixels from each other).
3. Split your massive block into sub-blocks of at most 10,000 images. At this stage, it is very important to make sure that neighboring blocks share GCPs.
4. Register the GCPs in your images and run an aerotriangulation on each block.
5. Merge the aerotriangulated blocks.
6. Run a single reconstruction.

Figure 15: Ground control points and sub-block extraction
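A minimal sketch of the splitting step (step 3), assuming the photo list is ordered by flight line so consecutive photos are spatial neighbors. The overlap margin is our illustrative way of making neighboring blocks share imagery, so they can share GCPs; the actual GCP sharing still has to be verified against your GCP layout.

```python
def split_into_blocks(photos, max_per_block=10_000, shared_margin=200):
    """Split a flight-ordered photo list into sub-blocks of at most
    `max_per_block` images. Consecutive blocks share `shared_margin`
    photos so their aerotriangulations overlap (and can share GCPs),
    keeping the merged reconstruction seamless at block borders."""
    step = max_per_block - shared_margin
    blocks = []
    for start in range(0, len(photos), step):
        blocks.append(photos[start:start + max_per_block])
        if start + max_per_block >= len(photos):
            break
    return blocks

blocks = split_into_blocks([f"IMG_{i:05d}.jpg" for i in range(25_000)])
print([len(b) for b in blocks])  # [10000, 10000, 5400]
```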