Gradient-Free Boundary Tracking*

Zhong Hu
Faculty Advisor: Todd Wittman
August 2007
UCLA Department of Mathematics

* This research is supported by NGA NURI grant HM1582-06-BAA-004.

Section I. Introduction

In this paper, my main objective is to track objects in hyperspectral images, since hyperspectral images enable us to extract much more detailed information than standard RGB images. Classical segmentation methods, such as snakes and active contours, examine all the pixels in the image. For large multi-channel images, this is very expensive. Ideally, the number of pixels examined should be proportional to the length of the boundary. Our solution is simply to walk the boundary in a systematic manner. A tracking algorithm is developed based on the robotic path planning algorithm of Jin and Bertozzi [1]. The algorithm is first tested on simple binary images and then extended to tracking objects in hyperspectral images.

Section II. Algorithm

The basic tracking algorithm is outlined below.

    Input: angular velocity w, tracking velocity V
    Ask the user to input a point (x, y) in the image
    if (x, y) is inside the boundary
        set d = 1
    else
        set d = -1
    initialize θ = 0
    for a fixed number of iterations
        set θ = θ + d * w
        x = x + V * cos θ
        y = y + V * sin θ
        if (x, y) lies outside the image domain
            end the program
        if we have made a full circle
            V = 2 * V
        if we crossed the boundary
            store the boundary point
            d = -d
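As a concrete illustration, here is a minimal Python sketch of this loop. It assumes a binary NumPy image `img` whose nonzero pixels lie inside the object; the function and parameter names are ours, not the paper's implementation.

    import numpy as np

    def track_boundary(img, x, y, V=5.0, w=1.0, n_iters=100):
        """Walk the object boundary in a binary image; return boundary points."""
        height, width = img.shape

        def inside(px, py):
            # the binary signal d(x, y): +1 inside the boundary, -1 outside
            return 1 if img[int(py), int(px)] > 0 else -1

        d = inside(x, y)
        theta = 0.0
        boundary = []
        for _ in range(n_iters):
            theta += d * w                 # turn toward/away from the boundary
            x_prev, y_prev = x, y
            x += V * np.cos(theta)         # move V units along heading theta
            y += V * np.sin(theta)
            if not (0 <= x < width and 0 <= y < height):
                break                      # (x, y) left the image domain
            d_new = inside(x, y)
            if d_new != d:                 # sign change: we crossed the boundary
                # store the midpoint of the current and previous positions
                boundary.append(((x + x_prev) / 2, (y + y_prev) / 2))
                d = d_new
        return boundary

Flipping d by taking the new value of the inside test is equivalent to the `d = -d` step in the pseudocode; the midpoint storage follows the description below.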

The algorithm I have developed provides a method to track the boundary of an arbitrary image. To determine whether a point is inside or outside the boundary, we create a binary signal d(x, y) such that, at a certain threshold, d(x, y) = 1 if (x, y) is inside the boundary and d(x, y) = -1 if (x, y) is outside. Next, the user picks an arbitrary point in the image, which we call (x, y). The number of iterations can either be fixed or a stopping criterion can be supplied. At each iteration we move V units: if we are inside the boundary, the angle θ is increased by the angular velocity w; if outside, θ is decreased by w. The new location is x = x + V cos θ, y = y + V sin θ.

If the initial point is more than V units away from the boundary, the algorithm fails to find the boundary and instead repeatedly traces out a circle. To overcome this, whenever a full circle is detected, V is doubled until a boundary point is found, and then V is set back to its original value. Fig. 1 illustrates the importance of this correction.

Fig. 1  Left: without step-V correction. Right: with step-V correction.

When the value of d changes sign, we have crossed the boundary of the object. We store the boundary point as the midpoint of the current point (x, y) and the previous point. Fig. 2 illustrates the algorithm and shows an example of applying it to a binary image.

Fig. 2  Left: illustration of the algorithm. Right: example of applying the algorithm to a binary image.
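One way to realize the full-circle detection and step-V correction just described is to accumulate the heading change since the last boundary crossing, as in the sketch below; the `inside` indicator function and all names are our own assumptions, not the paper's code.

    import numpy as np

    def find_boundary(inside, x, y, V0=5.0, w=1.0, max_iters=1000):
        """Circle outward until a boundary crossing is found, then reset V."""
        V, theta, turn = V0, 0.0, 0.0
        d = inside(x, y)
        for _ in range(max_iters):
            theta += d * w
            turn += w                    # heading turned since the last crossing
            x += V * np.cos(theta)
            y += V * np.sin(theta)
            if inside(x, y) != d:        # boundary found
                return x, y, V0          # V is set back to its original value
            if turn >= 2 * np.pi:        # a full circle with no crossing
                V *= 2                   # double the step size and keep circling
                turn = 0.0
        return None

Because the walker turns at the fixed rate w, a full 2π of accumulated heading change without a crossing means it has closed a circle, which is the condition that triggers the doubling of V.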

Section III. Performance

The term Percent Pixels on Boundary (PPB) is defined by

    PPB = (# boundary points detected) / (total # pixels tested)

PPB is used to measure the performance of the algorithm. We want to detect as many boundary points as possible while testing as few pixels as possible, so a higher PPB means the boundary is traced more efficiently. Applying the algorithm to the same image with the same number of iterations and parameters, the PPB varies slightly depending on the initial point.

In Fig. 3 (left), the crossing angle θ is large, which indicates that the tracking is not efficient. To improve the tracking, we apply the angle correction proposed in [1]. After we cross the boundary, θ is updated by

    θ = θ + d (2 θ_ref - w (t2 - t1)) / 2

where t1 and t2 are the times of the last two boundary crossings and θ_ref is a parameter. In Fig. 3 (right), the angle θ is much flatter once it passes the boundary. The PPB with angle correction and θ_ref = 0.5 is higher than without angle correction, and the angle correction tracks more of the image boundary in the same number of iterations.

Fig. 3  Left: without angle correction (V = 5, w = 1, 100 iterations, PPB = 0.21). Right: with angle correction (V = 5, w = 1, 100 iterations, PPB = 0.42).

If substantial noise is present in the images, our tracking algorithm fails to track the boundary. To overcome this, we apply two independent cumulative sum (CUSUM) filters as in [1]:

    U(t) = 0                                                 if t = 0
    U(t) = min( U_max, max(0, U(t-1) + A(x, y) - B - c_u) )  if t > 0

    L(t) = 0                                                 if t = 0
    L(t) = max( L_min, min(0, L(t-1) + A(x, y) - B + c_l) )  if t > 0

where U_max and L_min are the accumulation thresholds, A is the image, A(x, y) is the intensity at location (x, y), B is the intensity threshold for the image A, and c_u and c_l are the dead-zone parameters.
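Both corrections are straightforward to express in code. The sketch below uses our own function names; the angle update follows the formula above, and the CUSUM defaults mirror the parameter values reported for Fig. 4.

    def angle_correction(theta, d, w, t1, t2, theta_ref=0.5):
        """Update theta at a boundary crossing so the crossing angle
        tends toward theta_ref; t1, t2 are the last two crossing times."""
        return theta + d * (2 * theta_ref - w * (t2 - t1)) / 2

    def cusum_update(U, L, a, B=100.0, c_u=2.0, c_l=2.0,
                     U_max=50.0, L_min=-50.0):
        """One step of the high-side (U) and low-side (L) CUSUM filters.

        a is the intensity A(x, y) at the current location; the defaults
        are the parameter values used for Fig. 4.
        """
        U = min(U_max, max(0.0, U + a - B - c_u))   # high-side filter U(t)
        L = max(L_min, min(0.0, L + a - B + c_l))   # low-side filter L(t)
        # a crossing is declared once a filter saturates at its threshold
        return U, L, (U >= U_max) or (L <= L_min)

The dead-zone parameters c_u and c_l make the filters ignore small fluctuations around B, so only a sustained intensity offset accumulates toward a crossing decision.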

CUSUM filters deal with noise by minimizing the detection delay while keeping the probability of a false detection low. U(t) is called the high-side filter and L(t) the low-side filter. If we move inside the boundary and the intensity rises above B, U(t) grows as the measurements accumulate; when U(t) reaches U_max, we have detected a boundary crossing. The same idea holds for L(t). In Fig. 4, the left image shows the original algorithm failing to trace the boundary under Gaussian noise of variance 0.1. The right image shows the result of applying the CUSUM filters with U_max = 50, L_min = -50, c_u = 2, c_l = 2, and B = 100 to the same noisy image. Fig. 5 plots U(t) and L(t) over time.

Fig. 4  Left: without CUSUM filters. Right: with CUSUM filters.

Fig. 5  The plot of U(t) (red) and L(t) (blue) over time.

Observe that CUSUM performs best on noisy images and helps us track the boundary more smoothly; its advantage is plausibly due to the noise. To verify this, we test the tracking algorithm both with and without the CUSUM filters on a noise-free image, and the result is consistent with the observations above (see Fig. 6).

Fig. 6  Left: with CUSUM filters (V = 3, 150 iterations, PPB = 0.297). Right: without CUSUM filters (V = 3, 150 iterations, PPB = 0.43).

Section IV. Boundary tracking on hyperspectral images

In order to track the water in a hyperspectral image, we need a water detector. Instead of a dedicated water detector, we use a human activity detector that differentiates natural from man-made regions: the Normalized Difference Vegetation Index (NDVI),

    NDVI = (ρ_nir - ρ_red) / (ρ_nir + ρ_red)

where ρ_nir and ρ_red are the reflectance values in the near-infrared and red bands. At a certain threshold, NDVI does a good job of differentiating land from water (see Fig. 7).

Fig. 7  Left: a near-infrared band of San Francisco Bay. Right: NDVI of San Francisco Bay.
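A minimal sketch of the NDVI computation, assuming the near-infrared and red bands are given as NumPy arrays; the threshold default is illustrative, since the paper does not state the value used.

    import numpy as np

    def ndvi_mask(nir, red, threshold=0.0):
        """Binary land/water indicator from NDVI = (nir - red) / (nir + red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        ndvi = (nir - red) / (nir + red + 1e-12)   # guard against division by zero
        return ndvi > threshold                    # True on the land/vegetation side

The resulting binary map plays the role of the binary signal d(x, y) from Section II, so the tracking algorithm can be run on it unchanged.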

Fig. 8 shows the results of using NDVI to trace the boundary on the green band of the hyperspectral image of San Francisco Bay. Both runs use the same w, the same initial point, and the same number of iterations, but different V. The smaller V traces the boundary more accurately, but it takes longer to run.

Fig. 8  Boundary tracking on a hyperspectral image of San Francisco Bay. (a) V = 25, runtime = 38.8 sec. (b) V = 15, runtime = 20.2 sec.

The hyperspectral image of San Francisco Bay is large, about 4050 by 3000 pixels, so it is reasonable to use a large V to track the boundary. To see the details of the trace, let's run the algorithm on a small portion of the image with a small V. It is also a good idea to use the CUSUM filters, since there may be some noise in the NDVI image. In Fig. 9, the CUSUM filters with the right parameters appear to trace the boundary much better than the method without them. However, the PPB with CUSUM filters is smaller than without, which suggests that PPB may not be an ideal measure of how well the boundary is traced. The CUSUM filters also act like the angle correction method in that the boundary appears smoother, indicating that the tracking is more efficient.

Fig. 9  Effect of CUSUM filters on boundary tracking of a 500x600 portion of the San Francisco Bay image. Left: 800 iterations with CUSUM, V = 3, PPB = 0.33. Right: 800 iterations without CUSUM, V = 3, PPB = 0.42.

References

[1] Z. Jin and A. L. Bertozzi, "Environmental Boundary Tracking and Estimation Using Multiple Autonomous Vehicles," UCLA CAM preprint, 2007.

[2] C. Unsalan and K. L. Boyer, "A System to Detect Houses and Residential Street Networks in Multispectral Satellite Images," Proc. ICPR 2004, Dec. 2004.