Compressed Point Cloud Visualizations


Compressed Point Cloud Visualizations Christopher Miller October 2, 2007

Outline: Description of the Problem; Description of the Wedgelet Algorithm; Statistical Preprocessing; Intrinsic Point Cloud Simplification; Validation; Planned Languages and Hardware

Problem Description Description of Data Sets Survey aircraft equipped with a LIDAR range detection system produce maps of terrain consisting of millions of non-uniformly sampled points in R^3. This list of points is referred to as a point cloud: P = {(x_1, y_1, z_1), (x_2, y_2, z_2), ..., (x_n, y_n, z_n)}.
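Concretely, such a cloud can be held as an n x 3 array of coordinates. The sketch below is in Python for illustration (the project itself plans C++); the terrain function, point count, and sampling region are invented for the example:

```python
import numpy as np

# A point cloud P = {(x_i, y_i, z_i)} stored as an n x 3 array:
# column 0 = x, column 1 = y, column 2 = z (elevation).
rng = np.random.default_rng(0)
n = 1_000_000                                 # LIDAR surveys yield millions of points
xy = rng.uniform(0, 500, size=(n, 2))         # non-uniformly sampled ground positions
z = np.sin(xy[:, 0] / 50) + 0.01 * rng.standard_normal(n)  # synthetic terrain + noise
P = np.column_stack([xy, z])
assert P.shape == (n, 3)
```

Row i of the array is the point (x_i, y_i, z_i); no grid structure is assumed, matching the raw sensor output.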

Problem Description Problems With Current Point Cloud Representations Ideally this point cloud could be converted to an image and transmitted in real time. Currently, however, techniques for producing viewable images from point cloud data sets are neither time- nor memory-efficient. Regression techniques such as LOESS, which approximate the surface locally using low-degree polynomials, produce good visualizations but do not admit a compact representation. Techniques that approximate the function piecewise on arbitrary adaptive meshes also do not admit a compact representation, since the mesh is unknown to the receiver.

Problem Description Project Goal The goal of this project is to implement the wedgelet image transform on a LIDAR point cloud corresponding to urban terrain to create a highly compressed representation of the image. Additional preprocessing algorithms will be implemented to attain additional compression and improve image fidelity.

Description of The Wedgelet Algorithm The idea behind wedgelets was first described by Donoho in [1] as a way to encode images in the cartoon class: piecewise constant images whose discontinuities lie along C^2 boundaries. The goal of wedgelets is to capture directional features in an image that wavelets are unable to capture.

Description of The Wedgelet Algorithm Wedgelets work by inducing a dyadic partition of an image: a tiling of the image space using squares of not necessarily constant size. On each square a line is drawn separating the square into disjoint regions called wedges. Each wedge is then fitted with a constant value.
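The per-square step can be sketched as follows, in Python for illustration (the project plans C++). The function name, line parameterization (a*x + b*y = c), and use of the per-wedge mean as the constant fit are choices made for this sketch:

```python
import numpy as np

def wedge_fit(values, xs, ys, a, b, c):
    """Split the samples in one dyadic square by the line a*x + b*y = c
    and fit each side (wedge) with its mean value.  Returns the fitted
    values and the sum of squared errors of the fit."""
    values = np.asarray(values, float)
    side = a * np.asarray(xs) + b * np.asarray(ys) >= c   # which wedge each sample lies in
    fit = np.empty_like(values)
    for mask in (side, ~side):
        if mask.any():
            fit[mask] = values[mask].mean()               # constant value per wedge
    sse = float(((values - fit) ** 2).sum())
    return fit, sse

# Usage: a step edge along x = 0.5 is captured exactly by a single wedgelet.
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
vals = (xs >= 0.5).astype(float)                          # piecewise constant "cartoon" patch
fit, sse = wedge_fit(vals.ravel(), xs.ravel(), ys.ravel(), 1.0, 0.0, 0.5)
assert sse == 0.0
```

A full transform would search over candidate lines and over the dyadic quadtree to minimize a rate-distortion cost; this shows only the fit on one square with one line.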

Description of The Wedgelet Algorithm Figure: Original grey-scale image of a house [3]. Figure: Wedgelet transform using 1000 wedges [3].

Description of The Wedgelet Algorithm Wedgelets are provably quasi-optimal when used on images of a class similar to the class we are interested in [1], namely data with a large geometric component such as urban topography. The line through each dyadic square essentially serves as an edge detector. Once the two discontinuous regions are separated, they can be well approximated using a finite element space containing few degrees of freedom.

Description of The Wedgelet Algorithm Theorem (Demaret [3]): Let f be piecewise constant with C^2 boundary. Assume that the set L_j consists of all lines taking the angles {π/2 + 2^{-j} l π : 0 ≤ l ≤ 2^j}. Then for N ∈ ℕ there exists a wedgelet approximation (g, W) with |W| ≤ N and ‖f − g‖_2^2 ≤ C N^{-2}. [3]

Description of The Wedgelet Algorithm Advantages of Wedgelets Wedgelets allow a very compact representation by taking advantage of the quadtree structure induced by the dyadic partition and by using approximating functions containing few degrees of freedom.

Statistical Preprocessing One disadvantage of using raw point cloud data as opposed to gridded data is the increased presence of sensor noise. Most of this noise consists of incorrect extreme values and Gaussian white noise. Wedgelets have demonstrated some ability at eliminating white noise [2].

Statistical Preprocessing Examples of Noisy Images Figure: Salt-and-pepper noise, 10 percent distribution

Statistical Preprocessing Examples of Noisy Images Figure: Gaussian white noise, µ = 0, σ = 0.005

Statistical Preprocessing Several algorithms have been proposed for the elimination of statistical outliers from laser point clouds [5]. Any algorithm that is implemented must be able to act on the point cloud at multiple scales. It must also be able to distinguish true outliers from discontinuities inherent in the data.
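One common family of such filters flags points whose mean distance to their k nearest neighbours is far above the cloud-wide average. The Python sketch below (the project plans C++) is illustrative only and is not Sotoodeh's method [5]; the function name, parameters k and n_sigma, and the brute-force distance computation are all choices made for the example:

```python
import numpy as np

def knn_outlier_filter(P, k=5, n_sigma=3.0):
    """Flag points whose mean distance to their k nearest neighbours is
    more than n_sigma standard deviations above the cloud-wide mean.
    Brute-force O(n^2) distances; illustrative sketch only."""
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                       # ignore self-distance
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1) # mean distance to k nearest
    thresh = knn_mean.mean() + n_sigma * knn_mean.std()
    return knn_mean <= thresh                          # True = keep the point

# Usage: dense ground returns plus one spurious extreme-value return.
rng = np.random.default_rng(1)
ground = rng.uniform(0, 10, size=(200, 3))
cloud = np.vstack([ground, [[5.0, 5.0, 500.0]]])       # one incorrect extreme value
keep = knn_outlier_filter(cloud)
assert not keep[-1]                                    # the spurious return is rejected
```

A filter of this simple global kind illustrates the idea but would fail the multi-scale requirement above; a usable version would need locally adaptive thresholds to avoid discarding genuine discontinuities such as building edges.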

Intrinsic Point Cloud Simplification One disadvantage of the wedgelets transform is its computational complexity. For large data sets the algorithm will be required to perform tens of thousands of least squares regressions on data sets as large as ten million points.

Intrinsic Point Cloud Simplification Moenning and Dodgson[4] have proposed an algorithm that removes redundant points from a point cloud data set. The algorithm is based on the idea of farthest point sampling. A reduced data set makes the task of generating an image or compact representation from the point cloud much easier. This is particularly important given the complexity of the wedgelet transform.
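The core idea of farthest point sampling can be sketched in a few lines of Python (the project plans C++). This is only the greedy Euclidean skeleton of the idea; Moenning and Dodgson's actual algorithm [4] works with geodesic distances and a user-defined density guarantee, and the function name and seed parameter are choices made for this sketch:

```python
import numpy as np

def farthest_point_sampling(P, m, seed=0):
    """Greedy farthest point sampling: starting from one seed point,
    repeatedly add the point farthest from the set chosen so far.
    Returns the indices of the m selected points."""
    P = np.asarray(P, float)
    chosen = [int(seed)]
    dist = np.linalg.norm(P - P[chosen[0]], axis=1)    # distance to the chosen set
    while len(chosen) < m:
        nxt = int(dist.argmax())                       # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(P - P[nxt], axis=1))
    return np.array(chosen)

# Usage: 4 samples from the unit square's corners plus a clump of
# redundant points recover exactly the 4 corners.
corners = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
clutter = np.full((50, 3), 0.5)                        # redundant near-duplicate points
cloud = np.vstack([corners, clutter])
idx = farthest_point_sampling(cloud, 4, seed=0)
assert set(idx) == {0, 1, 2, 3}
```

The greedy rule naturally spends its sample budget on geometrically distinctive points first, which is why the redundant clump is thinned before the corners.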

Intrinsic Point Cloud Simplification This algorithm is robust in the sense that it can be applied locally without fear of over-deleting elements from the cloud. There is a built-in, user-defined minimal density guarantee that can be applied at all scales of the image [4]. This is important because if a particular section of the point cloud is overly thinned, then the existence of a unique linear least squares fit cannot be guaranteed on that section. This would make wedgelets useless on that section of the point cloud.

Intrinsic Point Cloud Simplification Figure: Example of Moenning and Dodgson's point cloud simplification technique: feature-sensitively distributed point sets generated by adaptive simplification using local curvature estimation, with renderings of the corresponding mesh reconstructions [4].

Validation Validation is to be accomplished in three steps: analytic proofs; implementation on a known image; comparison with images from the USGS archives.

Validation Analytic Proofs Donoho [1] and Führ [3] demonstrate strong error bounds for wedgelets under the assumption that the domain space is continuous. Moenning and Dodgson do not include any analytic error analysis with the description of their algorithm. We would like to provide similarly strong error bounds on the wedgelet algorithm assuming a discrete image space, and to perform a mathematical analysis of the error introduced by Moenning and Dodgson's point cloud simplification technique.

Validation Implementation On a Known Image In this stage of validation we will take a given gridded image and represent it as a point cloud. The compression algorithm will then process the point cloud. The resulting image should be similar to the image produced when wedgelets is run on the original gridded image.
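The grid-to-cloud conversion this step relies on is straightforward; a minimal Python sketch (the project plans C++), with the mapping convention pixel (i, j) with intensity z becoming the point (i, j, z) chosen for this example:

```python
import numpy as np

def image_to_point_cloud(img):
    """Represent a gridded image as a point cloud: each pixel at grid
    position (i, j) with intensity z becomes the 3D point (i, j, z)."""
    img = np.asarray(img, float)
    i, j = np.indices(img.shape)                       # row and column index grids
    return np.column_stack([i.ravel(), j.ravel(), img.ravel()])

# Usage: a 3x4 image yields 12 points, one per pixel.
img = np.arange(12.0).reshape(3, 4)
P = image_to_point_cloud(img)
assert P.shape == (12, 3)
assert P[5].tolist() == [1.0, 1.0, 5.0]                # pixel (1, 1) has value 5
```

Because the cloud is exactly the gridded image in disguise, any discrepancy between the two pipelines isolates the error introduced by the point-cloud stages rather than by the data.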

Validation Comparison With Images from the USGS Archives The algorithm will process a point cloud taken from the USGS archive. The resulting image will be compared to a DEM (Digital elevation model) of the same region.

Planned Languages and Hardware The primary language will be C++. Some MATLAB will be used for visualization. The recursive nature of the wedgelets algorithm makes it a prime candidate for parallelization. Currently the planned hardware consists of two Windows PCs with dual-core 2.33 GHz Intel Xeon processors.

References
[1] David L. Donoho, Wedgelets: Nearly Minimax Estimation of Edges, The Annals of Statistics, Vol. 27, No. 3 (June 1999), pp. 859-897.
[2] Laurent Demaret, Felix Friedrich, Hartmut Führ, Tomasz Szygowski, Multiscale Wedgelet Denoising Algorithms, Proceedings of SPIE, Wavelets XI, Vol. 5914, San Diego, August 2005.
[3] H. Führ, L. Demaret, F. Friedrich, Beyond Wavelets: New Image Representation Paradigms, in Document and Image Compression, M. Barni and F. Bartolini (eds.), May 2006.
[4] C. Moenning, N. A. Dodgson, A New Point Cloud Simplification Algorithm, Proceedings of the 3rd IASTED Conference on Visualization, Imaging and Image Processing, 2003.
[5] S. Sotoodeh, Outlier Detection in Laser Scanner Point Clouds, IAPRS Volume XXXVI, Part 5, Dresden, 25-27 September 2006.