DEVELOPING METHODOLOGY TO MAP TREE CANOPY IN URBAN AREAS FROM LOW COST COMMERCIAL UAVS

Niluka Munasinghe 1, U.I. Manatunga 1, H.M.R. Premasiri 1, N. Lakmal Deshapriya 2, S.L. Madawalagama 2 and Lal Samarakoon 2

1 Department of Earth Resources Engineering, University of Moratuwa, Sri Lanka
Email: mniluka@gmail.com, mana.udeshini919@gmail.com, hmranjith@yahoo.com

2 Geoinformatics Center, Asian Institute of Technology, Thailand
Email: lakmalnd@yahoo.com, madawalagama@gmail.com, lalsamarakoon@gmail.com

KEYWORDS: UAV, Urban Planning, Tree Canopy, DSM, Orthoimage

ABSTRACT:

Urban green spaces are essential for the well-being of urban society, and mapping and monitoring of the tree canopy in urban landscapes is an integral part of urban planning. The recent popularity and improvement of UAV technology enable planners to assess available urban green spaces at high resolution with greater accuracy. One of the most popular UAV series is the DJI Phantom. In this study we used a Phantom 3 Professional drone, which has a 4K camera, approximately 20 minutes of flight time, and a maximum control range of 5 km. However, most available commercial drones lack Near Infrared (NIR) observation, which is crucial for mapping and monitoring vegetation. In this paper we explore an efficient methodology, based on textural information from the RGB image and elevation information derived from the drone, to extract tree canopies in an urban area. A university environment containing a desirable mix of tree canopy and buildings was mapped to demonstrate the methodology. Finally, the results were compared to a digitized tree canopy layer for validation.

1. INTRODUCTION

Urban planning means dealing with constant change as communities, cities, and towns continue to evolve. In response, professional planning, aside from requiring expertise, understanding, and political savvy, requires the right technology to collect topographic data so that strategic visions can be translated into strategic action plans. However, data collection is a challenging aspect of urban planning: effective planning requires large amounts of accurate data, which are difficult to collect because many locations are hard to access and most traditional aerial survey technology is very expensive. An ideal methodology would be cost effective, highly accurate, and able to cover difficult locations. One such methodology is UAV photogrammetry.

UAV technology has revolutionized aerial photogrammetric mapping, and its applications are increasing rapidly. Compared with present-day conventional photogrammetric methods, UAV photogrammetry provides an improved, more accessible tool for photographic measurement. The major components of a UAV system include the unmanned aircraft, a transmitter, a communication link, an image sensor such as a digital or infrared camera, and mission planning; a UAV can also be integrated with a LiDAR system. UAV photogrammetry supports both autonomous and semi-autonomous remote control and can achieve accuracies of approximately 1 cm to 2 cm. Further, it creates new opportunities for close-range applications by combining terrestrial and aerial photogrammetry, but its major advantage is cost effectiveness: it enhances problem-solving applications and offers an alternative to traditional manned, high-cost aerial photogrammetry.
UAV photogrammetry is applicable at both small and large scales, with system costs that depend to an extent on the complexity of the system. This study uses a UAV system and its cost-effective applications to develop a methodology for mapping tree canopies in urban areas. Digital images can be captured directly with a digital camera or scanned from aerial photographs. The digital form allows the photographic information to be easily stored and managed in a computer as digital maps, digital orthoimages and DEMs; this method of processing digital images and aerial photography with a computer is called the digital photogrammetric method. A consumer-grade UAV system was used to acquire the aerial photographs required for the study. Specifically, the DJI Phantom 3 Professional with a Sony EXMOR camera was used for data acquisition over the study area.

2. STUDY AREA AND DATA USED

2.1 Study Area

A university environment (the Asian Institute of Technology, Thailand) was selected as the study area. It was chosen because its land cover consists of a combination of buildings and trees, which can be considered an eco-friendly urban setting.

Figure 1: Study Area (© WorldView-2 image 2012 DigitalGlobe)

2.2 Data from UAV System

The Phantom 3 Professional drone was selected as the platform to collect aerial images of the study area, which were processed into the orthoimage and DSM. DJI Phantom series drones are among the most common civilian drones, used mainly for photography and as a hobby. Recent studies have demonstrated the accurate mapping capability of the Phantom 3, and it was used here to acquire the aerial images; it was chosen for its affordability and mobility. According to DJI, the Phantom 3 Professional weighs 1,280 g and falls into the micro-UAV category. It is listed with a flight time of 23 minutes on a fully charged lithium polymer battery. The included remote control unit operates on the 2.4 GHz ISM band and can communicate with the aircraft at ranges of up to 5 km. The 12-megapixel Sony EXMOR camera is factory built and mounted on a 3-axis stabilization gimbal, which provides clear, stabilized images.

Flight planning was done with the Map Pilot for DJI software, which was used to execute fully autonomous flights. The advantage of modern flight planning software is that it requires minimal operator intervention, allowing the operator to focus on field operation rather than calculation. The app automatically computes the exposure locations and flight paths from the given flying altitude and image overlap. The flying altitude was set to 100 m above ground level, which gives an average resolution of 4.2 cm per pixel in the unprocessed photographs. Both side overlap and forward overlap were set to 80% to provide as many key points as possible for accurate photogrammetric processing. A total of 1,670 near-vertical photos were collected in 7 successive flights and processed together to obtain the DSM and orthoimage.
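For context, the reported 4.2 cm per pixel follows from standard ground sampling distance (GSD) arithmetic. The sketch below uses nominal Phantom 3 Professional camera values (a 1/2.3-inch sensor roughly 6.17 mm wide, 4000 x 3000 pixel images, 3.61 mm focal length); these figures are assumptions taken from published specifications, not values quoted in the study.

```python
# A minimal sketch of the ground-sampling-distance arithmetic behind the flight
# plan. Camera values are nominal Phantom 3 Professional specifications
# (assumed, not quoted in the study).

def gsd(altitude_m, sensor_width_m=6.17e-3, focal_length_m=3.61e-3, image_width_px=4000):
    """Ground sampling distance (metres per pixel) for a near-nadir photo."""
    return altitude_m * sensor_width_m / (focal_length_m * image_width_px)

altitude = 100.0                      # flying height above ground (m)
pixel_size = gsd(altitude)            # ~0.043 m, close to the 4.2 cm reported
footprint_w = pixel_size * 4000       # ~171 m across track
footprint_h = pixel_size * 3000       # ~128 m along track

# 80% forward overlap -> a new exposure every 20% of the along-track footprint
photo_base = 0.2 * footprint_h        # ~26 m between exposure stations

print(f"GSD: {pixel_size * 100:.1f} cm/px, photo base: {photo_base:.1f} m")
```

Under these assumed values, the photo footprint at 100 m is roughly 171 m by 128 m, so 80% forward overlap corresponds to an exposure roughly every 26 m along track.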

3. METHODOLOGY

3.1 DSM & Orthoimage Generation

The DSM and orthoimage generation was carried out with the Agisoft PhotoScan software. Figure 2 shows the basic processing steps of the software:

1. Feature matching across the photos
2. Solving for camera intrinsic and extrinsic orientation parameters
3. Dense surface reconstruction
4. Texture mapping

Figure 2: Processing steps of Agisoft PhotoScan

The first step is feature matching across the photos. PhotoScan detects points in the source photos that are stable under variations of viewpoint and lighting, and produces a descriptor for each point based on its local neighborhood. These descriptors are then used to find correspondences across the photos. The process is similar to the popular SIFT approach, but PhotoScan uses different algorithms to obtain an alignment of slightly higher quality. PhotoScan then applies a greedy algorithm to find approximate camera locations and refines them with a bundle adjustment algorithm.

Several algorithms are available for the subsequent dense surface reconstruction phase: the exact, height-field and smooth methods are based on pairwise depth map computation, while the fast method is based on a multi-view approach. In the final phase, the software builds a texture atlas by parameterizing the surface, possibly cutting it into smaller pieces, and blending the source photos. The resulting DSM and orthoimage are shown in Figure 3.

Figure 3: Left: DSM; Right: Orthoimage
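In this study the processing was run through the PhotoScan graphical interface. For readers who prefer scripting, the same pipeline can be expressed with PhotoScan Professional's Python API; the sketch below follows the 1.2.x API, and the method and enum names, parameter choices and file paths are assumptions that may differ in other versions (and in the later Metashape API).

```python
# A minimal sketch, assuming PhotoScan Professional 1.2.x Python scripting;
# names and defaults vary between versions, so treat this as illustrative only.
import glob
import PhotoScan

photo_paths = glob.glob("/path/to/photos/*.JPG")   # hypothetical photo directory

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(photo_paths)

# 1-2. Feature matching and camera alignment (interior/exterior orientation)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()

# 3. Dense surface reconstruction
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# 4. DSM and orthoimage products
chunk.buildDem(source=PhotoScan.DenseCloudData)
chunk.buildOrthomosaic(surface=PhotoScan.ElevationData)

chunk.exportDem("dsm.tif")
chunk.exportOrthomosaic("orthoimage.tif")
```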

3.2 Tree Canopy Mapping

A decision tree was developed to map the tree canopy (see Figure 4). Tree canopy mapping is a complex task without a near infrared band, so height information and object parameters were used to map the canopy. This task was carried out with the eCognition software package.

Figure 4: Decision Tree

The most elementary step in using eCognition for image analysis is segmentation: the separation of the scene depicted in an image into image objects. Segmentation at the initial stage therefore separates the image into isolated regions represented by simple, unclassified image objects known as image object primitives. To obtain realistic image objects, the necessary elevation information must be represented, and the objects should be neither too large nor too small for the analysis task.

The first step is creating the image objects. The multiresolution segmentation algorithm was used because it organizes the image into objects, that is, regions with similar pixel values: homogeneous areas produce larger objects and heterogeneous areas produce smaller ones. How homogeneous or heterogeneous the objects are allowed to become is controlled by the scale parameter.

It was assumed that buildings and trees are always elevated. Further, the terrain elevation across the study area varies little, so simply applying an elevation threshold is sufficient to classify elevated objects. In the second processing step, trees had to be separated from buildings, as both are elevated objects. Here the DSM information was used again: the elevation (DSM) texture of buildings is far smoother than that of trees, which can be captured by an object's standard deviation (trees: high elevation standard deviation; buildings: low elevation standard deviation). Thresholding on the standard deviation of the DSM was therefore used to separate trees from buildings.

Nonetheless, some trees were still classified as buildings even after refining the classification with the standard deviation of the DSM layer. Since the green band contains significant information about vegetation, the well-known green ratio, green/(red + green + blue), which measures the green component of the color, was used for further refinement. Some building areas remained unclassified, or were de-classified, because they satisfied one of the previous conditions. Many of these unclassified objects were largely surrounded by building objects; if an unclassified object shares a large common border with building objects, it ought to belong to the class Buildings, so the neighborhood relationships of objects were used to refine the classification. A few small misclassified objects still remained; here the compactness of the objects was used as the separating condition, based on the assumption that buildings have a regular shape, and the classification was refined accordingly. A simplified sketch of this decision logic is given below.
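To make the rule set concrete, the following sketch re-implements a simplified version of the decision logic in Python. It is not the eCognition rule set used in the study: SLIC superpixels stand in for multiresolution segmentation, the DSM and orthoimage are assumed to be co-registered GeoTIFFs of identical size, the threshold values are illustrative only, and the neighborhood and compactness refinements are omitted. The per-object statistics (mean elevation, DSM standard deviation, mean green ratio) mirror the object features described above.

```python
# Simplified per-object sketch of the decision-tree logic (not the study's
# eCognition rule set). Assumes co-registered "dsm.tif" and "orthoimage.tif"
# of the same dimensions and nearly flat terrain, as in the study area.
import numpy as np
import rasterio
from skimage.segmentation import slic

ELEV_T = 3.0    # mean DSM value above which an object counts as elevated (m, hypothetical)
STD_T = 1.5     # DSM std. dev. separating rough tree crowns from smooth roofs (m, hypothetical)
GREEN_T = 0.36  # green ratio G/(R+G+B) above which an object counts as vegetated (hypothetical)

with rasterio.open("orthoimage.tif") as src:
    rgb = src.read([1, 2, 3]).astype(float)       # shape (3, rows, cols)
with rasterio.open("dsm.tif") as src:
    dsm = src.read(1).astype(float)               # shape (rows, cols)

# 1. Segment the orthoimage into image objects (stand-in for multiresolution segmentation)
img = np.moveaxis(rgb, 0, -1)
segments = slic(img / img.max(), n_segments=5000, compactness=10)

green_ratio = rgb[1] / (rgb.sum(axis=0) + 1e-6)
tree_mask = np.zeros(dsm.shape, dtype=bool)

for seg_id in np.unique(segments):
    obj = segments == seg_id
    elevated = dsm[obj].mean() > ELEV_T           # 2. keep only elevated objects
    rough = dsm[obj].std() > STD_T                # 3. trees: rough DSM texture; roofs: smooth
    green = green_ratio[obj].mean() > GREEN_T     # 4. green-ratio refinement
    if elevated and rough and green:
        tree_mask[obj] = True
```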

4. RESULTS

The separation of the tree canopy from the buildings is shown in Figure 5. These results were compared against a manually digitized tree canopy layer of the study area, as shown in Table 1. A total accuracy of 64% was obtained with this methodology using a consumer-grade UAV.

Figure 5: Resulting tree canopy map

Table 1: Accuracy Assessment of the Classification (percentage of total area)

                                          Manual Digitizing
                                          Tree        Non-Tree
Classification Results   Tree            18 %        4 %
                         Non-Tree        32 %        46 %

Total Accuracy: 64% (the sum of the agreement cells, 18% + 46%)

5. CONCLUSION AND DISCUSSION

The total accuracy with respect to manual digitizing, together with visual inspection of the results, validates the approach used in this paper. Even without a Near Infrared band, which is not available on consumer-grade UAVs, this approach successfully maps the vegetation canopy. This accuracy is good enough for urban planners to assess the urban population's access to green spaces for sustainable planning of the urban environment. The lack of freely available software for object-oriented classification, the high cost of UAVs and the legal barriers associated with UAV operation are the main obstacles to this approach. The ongoing advancement of technology and the growing popularity of UAVs should help to overcome these obstacles in the future.

6. REFERENCES

Uzar, M. and Yastikli, N., 2013. Automatic building extraction using LiDAR and aerial photographs. Boletim de Ciências Geodésicas, 19(2), pp. 153-171.

Attarzadeh, R. and Momeni, M., 2012. Object-based building extraction from high resolution satellite imagery. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXIX-B4, pp. 57-60. doi: 10.5194/isprsarchives-XXXIX-B4-57-2012.

Baatz, M., Benz, U., Dehghani, S. and Heynen, M., 2004. eCognition User Guide 4. Definiens Imaging GmbH, München, Germany.

Baatz, M. and Schäpe, A., 2000. Multiresolution segmentation: an optimization approach for high quality multiscale image segmentation. In: Strobl, J. and Blaschke, T. (eds.), Angewandte Geographische Informationsverarbeitung XII, Wichmann, Heidelberg, pp. 12-23.

Agisoft, 2015. Algorithms used in PhotoScan. Retrieved 17 September 2015 from http://www.agisoft.com/forum/index.php?topic=89.0

Hofmann, A.D., Maas, H.-G. and Streilein, A., 2002. Knowledge-based building detection based on laser scanner data and topographic map information. IAPRS, Vol. 34, Part 3A+B, pp. 163-169.

Trimble Navigation Limited, 2010. Getting started example: simple building extraction. In: eCognition 8.0 Guided Tour, Level 1. United States, pp. 32-55.

Lee, D.S., Shan, J. and Bethel, J.S., 2003. Class-guided building extraction from IKONOS imagery. Photogrammetric Engineering and Remote Sensing, 69(2), pp. 143-150.

Jiang, N. and Zhang, J.X., 2008. Object-oriented building extraction by DSM and very high-resolution orthoimages. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B3b.

Song, W. and Haithcoat, T., 2005. Development of comprehensive accuracy assessment indexes for building footprint extraction. IEEE Transactions on Geoscience and Remote Sensing, 43(2), pp. 402-404.

Wei, Y., Zhao, Z. and Song, J., 2004. Urban building extraction from high-resolution satellite panchromatic image using clustering and edge detection. In: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 20-24 September 2004, Vol. 7, pp. 2008-2010.

Agisoft LLC, 2013. Agisoft PhotoScan User Manual: Professional Edition, Version 0.9.0. AgiSoft LLC, Calgary, CA.