Photogrammetry: A Modern Tool for Crash Scene Mapping


Background

A police accident investigator (AI) has many tasks on arriving at a crash scene. The officer's highest priority is public safety: every individual involved in the incident must be safe and out of the way of traffic. Where criminal charges are anticipated, or where the crash has resulted in fatalities, the AI will diagram the crash scene. This documentation is often referred to as a mapping of the scene, and it is nowadays carried out using 3D measurement tools such as photogrammetry, surveying total stations and terrestrial laser scanners. There is a compelling need to minimize traffic disruption and road closures during the on-scene mapping operation, without compromising the dimensional documentation of crash scene evidence.

For crash scene mapping, photogrammetry has the notable attribute of very fast data acquisition: the in-field time is essentially the time required to photograph the scene. Moreover, the technology is reliable, accurate and low-cost, which accounts for its adoption by hundreds of North American police departments over the past decade. The required network of overlapping images can be recorded with off-the-shelf cameras, hand-held in daylight conditions, tripod-mounted for night-time operations, or flown on a UAV/drone. A single suite of easy-to-use software can photogrammetrically process the imagery irrespective of camera platform. The processing of drone photography, especially, is highly automated, its main outputs being an array of 3D feature points of interest, a digital surface model (DSM), an orthoimage map and a 3D reality mesh.

Case studies

Presented here are two case studies of photogrammetrically documented highway traffic accidents involving single-vehicle roll-overs. Both employed hand-held photography acquired with consumer-grade cameras, along with the iwitness software system for photogrammetric data processing. Also summarized is a UAV-photogrammetry training project in which the drone imagery was automatically oriented with the iwitnesspro software to generate both a DSM and an orthoimage map of a simulated crash site. The two iwitness projects were conducted by the police departments of Granbury and Fairview, Texas, while the UAV-photogrammetry project was carried out by the Roanoke, TX police and fire departments. In all cases the 3D photogrammetric measurements were ingested into a CAD system to support the final crash scene diagramming.

Case #1

In this accident, which occurred along a straight stretch of road in Granbury, TX, the vehicle left the road, crossed a grassed bar ditch, hit several trees and overturned, coming to rest on its driver's side some 420 ft beyond where it left the pavement. Photogrammetric documentation of the accident scene, carried out by a single AI, commenced with the laying out of feature point markers. These simply highlight pieces of evidence and points of interest that are otherwise not easily identified in hand-held photography. A network of 21 overlapping images was then recorded, an operation requiring only 10 minutes. The total in-field time required to lay out the feature point markers, capture the imagery and retrieve the markers was 35 minutes.

The photogrammetric processing that followed was carried out using the iwitness software, the output of which was a 1-inch-accurate 3D mapping of the scene, as indicated in Figures 1 and 2. Figure 1 is a screen grab from iwitness showing the roadway; measured feature points, markers and natural features (indicated by green labels); and the bank of trees. The overturned vehicle is barely visible in the figure, but its final resting position can be seen in Figure 2, which shows a CAD diagramming of the vehicle's trajectory overlaid on a satellite image. The diagramming was conducted using the FARO Zone CAD software, which ingests a 3D point array file from iwitness in DXF format. The time required to conduct both the photogrammetric data processing and the CAD diagramming was approximately 2 hours.

Figure 1: Vehicle rollover accident scene viewed within iwitness. (Image courtesy of Granbury TX PD)

Figure 2: CAD mapping, highlighting the point at which the vehicle left the road and its final rest position. (Image courtesy of Granbury TX PD)
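Since the hand-off to CAD is via a DXF point file, a minimal sketch of such an export may be useful. This example uses the open-source ezdxf Python library; the point labels, coordinates and layer names are invented for illustration and do not reflect iwitness's actual export format.

```python
# Minimal sketch of exporting 3D feature points to DXF for a CAD
# diagramming package, using the open-source ezdxf library.
# Point labels and coordinates below are invented example values.
import ezdxf

points = [
    ("pavement_mark_1", (0.0, 0.0, 0.0)),
    ("gouge_start",     (12.3, -1.8, 0.1)),
    ("rest_position",   (420.0, -15.2, -2.4)),  # ~420 ft down range
]

doc = ezdxf.new(dxfversion="R2010")
msp = doc.modelspace()

for label, xyz in points:
    msp.add_point(xyz, dxfattribs={"layer": "EVIDENCE"})
    msp.add_text(label, dxfattribs={"layer": "LABELS"}).set_placement(xyz)

doc.saveas("crash_scene_points.dxf")
```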

Case #2

This fatal accident, caused by a drug-affected driver losing control of his vehicle, occurred on a service road leading to an on-ramp for Highway 75 in Fairview, TX. As can be visualized from the final CAD diagramming of the scene, shown in Figure 3, the driver lost control at a speed estimated at greater than 90 mph; the vehicle hit a concrete barrier, rode up over curbing (Figure 4) and traveled on into a grassy area, where it collided with a concrete support pillar for an elevated off-ramp. Still traveling at speed, the vehicle then rolled over several times before coming to rest at the base of a grassy slope. Figure 5 shows feature point locations along the final portion of the vehicle's roll-over path.

Figure 3: CAD mapping showing vehicle trajectory.

Figure 4: 3D feature point mapping in the area where the vehicle collided with the concrete barrier. (Image courtesy of Fairview TX PD)

As with Case #1, the images recorded at the scene with a Pentax consumer-grade DSLR camera were photogrammetrically processed using the iwitness software, again within a few hours, to an accuracy of close to 1 inch. The CAD diagramming, indicated by Figure 3, was again conducted using the FARO Zone software system.

Figure 5: 3D feature point mapping around the vehicle's final rest position. (Image courtesy of Fairview TX PD)

Case #3: UAV-photogrammetry

This third case involves the photogrammetric mapping of a simulated crash scene using imagery recorded from a UAV. The mapping project was carried out as part of an iwitnesspro training program conducted for the police and fire departments of Roanoke, TX. The training session covered the calibration of integrated DJI UAV cameras, flight planning for photogrammetry, manual and automated flight control and camera triggering, and automated photogrammetric data processing. As well as covering the operational aspects of the iwitness and iwitnesspro software systems, the training offered instruction in the use of both DJIFlightPlanner (www.djiflightplanner.com) and LITCHI (https://flylitchi.com) flight control software.

Camera calibration is a very important aspect of photogrammetry, since the metric quality of the camera is a significant factor influencing the accuracy and reliability of the measured 3D object points, DSMs, orthoimage maps and other photogrammetrically generated products. The iwitness software suite utilizes the concept of self-calibration, a technique that can be problematic within networks of overlapping UAV images because such networks invariably do not exhibit the required imaging geometry. The modeling of lens distortion is generally viable via self-calibration using images from a low-flying UAV, but the recovery of interior orientation parameters, notably the camera focal length, is not supported in the absence of external constraints, namely high-quality ground control points and precisely known UAV locations at the instant each image is recorded.

Comprehensive pre-calibration offers a viable alternative to on-the-job calibration, and for most UAVs the camera can be calibrated on the ground. The concept of self-calibration is still employed, but by hand-holding and hand-orienting the drone, the necessary geometric requirements for the imaging network can be met. These center upon achieving significant image scale variation within and between images, through convergent imaging, and orthogonally orienting the camera around its optical axis (i.e., rolling between portrait and landscape modes).
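As an aside on what such a calibration estimates, below is a minimal sketch of the Brown radial-plus-decentering lens distortion model widely used in close-range photogrammetry. The coefficient values are placeholders, not results from any actual calibration, and the source does not state that iwitness uses exactly this parameterization.

```python
# Sketch of the Brown (radial + decentering) lens distortion model that
# photogrammetric self-calibration typically solves for. All parameter
# values are placeholder examples, not real calibration results.

c = 3.6                               # principal distance in mm (example)
K1, K2, K3 = -4.0e-4, 1.0e-6, 0.0     # radial distortion coefficients
P1, P2 = 1.0e-5, -2.0e-5              # decentering distortion coefficients

def distortion(x, y):
    """Return the (dx, dy) image-coordinate corrections at point (x, y),
    measured in mm from the principal point."""
    r2 = x * x + y * y
    radial = K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3
    dx = x * radial + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y
    dy = y * radial + P2 * (r2 + 2 * y * y) + 2 * P1 * x * y
    return dx, dy

print(distortion(2.0, 1.5))   # correction at a point 2.5 mm off-axis
```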

With the iwitness approach, the self-calibration may employ targets or indeed natural features, the data processing being fully automatic once the images are recorded. Further details on calibration are given in the iwitness user manual. Suffice it to say here that the training session covered self-calibration using targets, the results of a 27-image self-calibration of the camera on a DJI Inspire 1 UAV being shown in Figure 6.

Figure 6: Example calibration values for a DJI Inspire 1 UAV camera.

Flight planning and flight control are the next important considerations, and in this case, as in most accident scene mapping projects employing UAVs, autonomous mission control was adopted, here via the DJIFlightPlanner (DJIFP) and LITCHI software. Both of these programs are straightforward to use, inexpensive, and together provide a turn-key system for completely automatic UAV flight and image recording, from take-off to landing. Once the desired imaging scale is determined, the flight planning can take place, the generally adopted configuration being overlapping strips of photography with image overlap of 60% or more, both along- and cross-strip. DJIFP incorporates two image-acquisition modes: a positional mode, in which the aircraft temporarily stops at each designated waypoint to capture an image, and a continuous mode, in which the strips are flown with the photography being time-sequenced. The latter is usually much faster and easier on the drone's battery.

The regular network of 98 nadir-looking UAV images is indicated in Figure 7, within the workspace of iwitnesspro. (The close-in ring of 27 oblique images indicated in the figure was acquired manually, to aid in the generation of 3D reality models of the vehicles themselves.) The flying height selected was 27.4 m (90 ft), which yielded an image scale of 1:6300, from which a 3D positioning accuracy of around 1 cm in planimetry and 2-3 cm in height could be anticipated. The actual flight control was via LITCHI, and the nadir flight mission took only a few minutes.
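The arithmetic linking flying height, image scale and anticipated accuracy can be sketched as follows. The focal length, pixel pitch and image width here are assumed example values, chosen only so that the numbers reproduce the 1:6300 scale quoted above; they are not published specifications for the Inspire 1 camera.

```python
# Back-of-envelope flight-planning arithmetic for nadir UAV photography.
# Camera parameters are assumed example values, not real specifications.
f_mm     = 4.35     # focal length (assumed)
pixel_um = 1.6      # sensor pixel pitch (assumed)
img_w_px = 4000     # image width in pixels (assumed)
H_m      = 27.4     # flying height above ground (90 ft, as in the text)
overlap  = 0.60     # 60% along- and cross-strip overlap

scale_number = H_m / (f_mm / 1000.0)             # ~6300, matching the text
gsd_m        = (pixel_um * 1e-6) * scale_number  # ground sample distance
footprint_m  = img_w_px * gsd_m                  # ground width of one image
spacing_m    = footprint_m * (1 - overlap)       # exposure/strip spacing

# Rough planimetric accuracy: ~1-pixel image measurement precision
# scaled to the ground; height accuracy is typically 2-3x worse.
sigma_xy_m = (pixel_um * 1e-6) * scale_number

print(f"scale 1:{scale_number:.0f}, GSD {gsd_m * 100:.1f} cm, "
      f"strip spacing {spacing_m:.1f} m, sigma_xy ~{sigma_xy_m * 100:.1f} cm")
```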

Figure 7: Network of 125 aerial images covering the staged accident scene, along with 19,000 orientation points, shown in iwitnesspro.

The DJI Inspire 1 images were imported into iwitnesspro, where there is a two-stage data reduction process: (1) fully automatic photogrammetric orientation and sparse point cloud generation, and (2) dense point cloud generation via dense image matching. In the network orientation stage, the camera station positions and orientation angles are determined via a feature-based matching process, with 3D coordinates being determined for the matched ground feature points. From the 3D point cloud generated in the dense matching, a surface reconstruction is achieved, the outputs being a DSM, an orthoimage map and a 3D reality mesh. The DSM and mesh are output in LAS and PLY formats, respectively.

Both iwitness and iwitnesspro allow the user to work in local Cartesian, projection (notably UTM) or geographic coordinates. For this training, a local Cartesian system was adopted using three targeted feature points and the iwitness 3-2-1 process. Absolute scale was determined via two measured ground-point-to-ground-point distances (scale can also be derived from the UAV GPS positional data). Figure 8 shows the three ground points used to define the XYZ coordinate reference system, and the red line in the figure indicates one of the distances measured to set the object space scale. Both iwitness and iwitnesspro allow the operator to quickly triangulate ground control points using a function termed GCP-Assist; traditional manual referencing of particular feature points of interest is also available to the user.

The outcome of the dense matching stage for the 125-image network, which took a few tens of minutes, was, in the first instance, a colorized point cloud of 9.4 million points; an oblique view of the area around the staged vehicle collision is shown in Figure 9. An orthoimage map, which can be used to support further dimensional analysis of the crash site, was also produced, along with a DSM and a 3D mesh.
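To illustrate the kind of computation a 3-2-1 datum definition involves, here is a minimal numpy sketch: one point fixes the origin, a second fixes the +X axis, a third fixes the XY plane, and a taped ground distance fixes scale. All coordinates and the taped distance are invented, and this is not iwitness's actual implementation.

```python
# Sketch of a 3-2-1-style local datum: point A -> origin, point B -> +X
# axis, point C -> XY plane; a taped ground distance sets absolute scale.
# All point coordinates and the 10.0 ft distance are invented examples.
import numpy as np

def three_two_one_frame(a, b, c):
    """Build a rotation matrix whose rows are the local X, Y, Z axes."""
    x = b - a
    x = x / np.linalg.norm(x)          # +X toward point B
    z = np.cross(x, c - a)
    z = z / np.linalg.norm(z)          # +Z normal to the plane ABC
    y = np.cross(z, x)                 # +Y completes a right-handed frame
    return np.vstack([x, y, z])

# Arbitrary-datum model coordinates from the photogrammetric adjustment.
A = np.array([1.2, 3.4, 0.9])
B = np.array([4.7, 3.9, 1.0])
C = np.array([2.0, 6.1, 0.8])

R = three_two_one_frame(A, B, C)

# Scale: ratio of the taped ground distance to the same span in model units.
taped_ft   = 10.0                      # measured on site (example value)
model_dist = np.linalg.norm(B - A)
scale      = taped_ft / model_dist

def to_local(p):
    """Map an arbitrary-datum model point into the scaled local frame."""
    return scale * (R @ (p - A))

print(to_local(C))   # C lands in the local XY plane: Z component ~ 0
```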

Figure 8: Three points used to set the local XYZ coordinate system, along with the measured distance for scale control.

Figure 9: Oblique view of a portion of the generated dense 3D point cloud.

Summary

Photogrammetry, both terrestrial and using images from UAVs/drones, offers law enforcement and emergency responders a faster and safer means to permanently record 3D scene evidence for diagramming purposes. A noteworthy advantage of photogrammetry is that the images can be measured at any time in the future, offering significant time savings in instances where police charges are not filed. Important factors to consider in any photogrammetry system for crash reconstruction are cost, ease of software use, and versatility for ground-based as well as aerial imaging, in both day and night applications, whether using hand-held digital cameras, UAV camera platforms, or a combination of both.

2019 All Rights Reserved DeChant Consulting Services DCS Inc. www.iwitnessphoto.com
iwitness & iwitnesspro are products of Photometrix, a Division of Geodetic Systems Inc.