Local Image Registration: An Adaptive Filtering Framework

Gulcin Caner, A. Murat Tekalp, Gaurav Sharma, and Wendi Heinzelman
Electrical and Computer Engineering Dept., University of Rochester, Rochester, NY 14627
A. M. Tekalp is also with the College of Engineering, Koc University, Istanbul, Turkey
{caner, tekalp, gsharma, wheinzel}@ece.rochester.edu

ABSTRACT

We present a novel local image registration method based on adaptive filtering techniques. The proposed method uses an adaptive filter to track smooth, locally varying changes in the motion field between the images. Image pixels are traversed in a scanning order established by Hilbert curves, which preserves contiguity in the 2-D image plane. We have performed experiments using both simulated images and real images captured by a digital camera. Experimental results show that the proposed adaptive filtering framework gives superior performance compared to global 2-D parametric registration and the Lucas-Kanade optical flow technique when the image motion is mostly translational. The simulation experiments show that the proposed image registration technique can also handle small amounts of rotation, scale, and perspective change in the motion field.

Keywords: Local image registration, adaptive filtering, Hilbert curves

1. INTRODUCTION

Image registration is a fundamental step required for more complex tasks in both image processing and computer vision. Due to its wide range of application areas, a large number of techniques have been proposed to solve this problem in different scenarios. Extensive surveys of image registration techniques can be found in Brown 1 and Zitova and Flusser. 2 Image registration techniques can be classified into two groups: i) methods that rely only on the image data and do not take into account the underlying camera or scene geometry, and ii) methods that incorporate assumptions about the image acquisition process (i.e., scene geometry and camera motion). Methods in the first group are mostly designed for image processing applications, e.g., the block-matching method used for video compression. 3 Predominant methods in the second group come from computer vision, e.g., the plane+parallax 4 method for computing 3-D scene structure.

In this paper, we propose a new local image registration technique, belonging to the first class, based on adaptive filtering. Adaptive filters have been used successfully for system identification in 1-D. This motivates us to formulate the image registration problem as a system identification problem and to build an adaptive filtering framework to solve it. Adaptive filtering has been shown to track smoothly varying changes in a system response. To obtain the same effect in the image registration problem, image pixels are traversed in a scanning order established by Hilbert curves, which preserves contiguity in the 2-D image plane and hence lets the filter track a smoothly varying motion field. In the proposed framework, the adaptive filter must satisfy a number of conditions in order to guarantee that it models the true motion field. The proposed image registration technique is also computationally simpler than other local image registration methods, such as pyramid-based registration techniques, and uses only local information in the images.
Both of these features make the proposed technique well suited to practical applications (e.g., imaging sensor networks) where computational and memory resources are scarce.

Send correspondence to G. Caner, email: caner@ece.rochester.edu. This work is partly supported by the National Science Foundation under grant number ECS-42817.

2. THEORY OF THE ADAPTIVE FILTERING FRAMEWORK

The image registration problem can be regarded as a system identification problem through the following scenario. Pixel values in one image, I_1(x, y), can be expressed in terms of the pixel values in the other image, I_2(x, y), assuming that the two images contain overlapping views of the same scene (e.g., multiple camera views, or successive frames of a video). We can express the relation between I_1(x, y) and I_2(x, y) through a spatially varying system response, h_o(x, y; x_o, y_o):

I_2(x_o, y_o) = \sum_{(x,y)} h_o(x, y; x_o, y_o) I_1(x, y) + e(x_o, y_o)    (1)

where e(x_o, y_o) denotes the deviation from the system response. Using this relation, we formulate image registration as a system identification problem, in which the system response h_o(·) is estimated within an adaptive filtering framework. Although computing the system response h_o(·) provides a prediction of one image from the other, it does not guarantee estimation of the true motion field between the images. In the following subsections, we discuss the conditions that the adaptive filter should satisfy in order for the proposed framework to be used for image registration.

2.1. Review of 1-D Adaptive Filtering

Before explaining how 2-D adaptive filtering is used for image registration, we review 1-D adaptive filtering. 1-D adaptive filtering is typically a two-step process, as shown in Fig. 1: i) a filtering process, in which the filter coefficients \hat{h}(t, t_o) are convolved with the input signal v_1(t) to produce an estimate of the desired response v_2(t_o), and ii) an adaptation process, in which the filter coefficients are adjusted using the resulting estimation error e(t_o). For the commonly used least-mean-square (LMS) adaptation algorithm, the adaptive filtering process is given by:

\hat{v}_2(t_o) = \sum_{t \in U} \hat{h}(t, t_o) v_1(t)    (2)

e(t_o) = v_2(t_o) - \hat{v}_2(t_o)    (3)

\hat{h}(t+1, t_o) = \hat{h}(t, t_o) + \beta e(t_o) v_1(t),  t \in U    (4)

where \beta and U denote the adaptation step-size and the support of the 1-D filter, respectively. Consider the scenario where the desired response v_2(t_o) is related to the input signal v_1(t) through a system model h(t, t_o) as v_2(t_o) = \sum_t h(t, t_o) v_1(t). Under appropriate conditions, the adaptive filter coefficients \hat{h}(t, t_o) closely approximate and track slow changes in h(t, t_o). The adaptation step-size \beta determines the speed of convergence, the tracking capability, and the closeness of the approximation.

Figure 1. Least-mean-square adaptation.
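As an illustration of the update equations (2)-(4), the following minimal NumPy sketch runs an LMS system-identification loop over a 1-D signal; the function name lms_identify, the filter length, and the step size are illustrative assumptions rather than values from the paper.

import numpy as np

def lms_identify(v1, v2, num_taps=7, beta=0.01):
    # Sketch of 1-D LMS system identification (Eqs. (2)-(4)).
    # v1: input signal, v2: desired response; returns the final filter and the error signal.
    h_hat = np.zeros(num_taps)                     # adaptive filter coefficients
    err = np.zeros(len(v1))
    for t_o in range(num_taps - 1, len(v1)):
        u = v1[t_o - num_taps + 1:t_o + 1][::-1]   # samples in the filter support U
        v2_hat = np.dot(h_hat, u)                  # Eq. (2): filter output
        err[t_o] = v2[t_o] - v2_hat                # Eq. (3): estimation error
        h_hat = h_hat + beta * err[t_o] * u        # Eq. (4): coefficient update
    return h_hat, err

# Usage: identify a short FIR system from its input/output signals.
rng = np.random.default_rng(0)
v1 = rng.standard_normal(5000)
h_true = np.array([0.1, 0.4, 0.8, 0.4, 0.1])
v2 = np.convolve(v1, h_true)[:len(v1)]
h_est, e = lms_identify(v1, v2)

With a sufficiently small step size, h_est approaches h_true and the error signal decays, which is the behaviour the registration framework relies on.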

2.2. 2-D Adaptive Filtering for Image Registration

The adaptive filtering technique inherently operates in the 1-D domain. In order to use it for image registration (i.e., to estimate h_o(x, y; x_o, y_o)), it must be extended properly to the 2-D domain. For two-dimensional images, the adaptive filter takes the form of a 2-D finite impulse response (FIR) filter, h(x, y). Figure 2 shows the support of the filter, R, on the reference image, I_1(x, y). Using 2-D notation, the 2-D LMS adaptation algorithm can be written as:

1) Filter output (prediction phase):

\hat{I}_2(x_o, y_o) = \sum_{(x,y) \in R} \hat{h}_b(x, y; x_o, y_o) I_1(x, y)    (5)

2) Estimation error:

e(x_o, y_o) = I_2(x_o, y_o) - \hat{I}_2(x_o, y_o)    (6)

3) Filter adaptation (update phase):

\hat{h}_a(x, y; x_o, y_o) = \hat{h}_b(x, y; x_o, y_o) + \beta e(x_o, y_o) I_1(x, y),  (x, y) \in R    (7)

4) Initializing the filter for the next pixel, (x_n, y_n):

\hat{h}_b(x, y; x_n, y_n) = \hat{h}_a(x, y; x_o, y_o)    (8)

where (x, y) \in R and \beta is the adaptation step-size. The subscripts b and a denote before and after adaptation, respectively. Under appropriate conditions (i.e., the right step-size and the right filter size), \hat{h}(x, y; x_o, y_o) converges to the system model h_o(x, y; x_o, y_o), which maps the reference image to the current image.

Figure 2. 2-D adaptive filtering for images: the filter \hat{h}(x, y; x_o, y_o) on a 3x3 support R in the reference image I_1(x, y) predicts the pixel I_2(x_o, y_o) in the current image.

2.3. Motion Modeling Through the Adaptive Filter

Although the estimated system response h_o(·) can provide a prediction of I_2(x, y) from I_1(x, y), it models the true motion field between I_1(x, y) and I_2(x, y) only if the following conditions are satisfied: i) the system response should have a single peak, and ii) its elements should sum to 1.0. Figures 3 and 4 show two system responses estimated for two different image pairs, respectively the image pair shown in Figure 5 and the image pair shown in Figure 6. As observed in Figure 3, the system response has a single peak and hence models the true motion field between the images shown in Figure 5. The system response shown in Figure 4 has more than one peak, because the input images are two different, uncorrelated images. Figure 7 shows the absolute registration error image after the proposed 2-D adaptive filtering method is applied to these uncorrelated images. From Figures 4 and 7, we conclude that the system response h_o(·) does not model the true motion field between the uncorrelated images, even though it can still be used to predict one image from the other.
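As a concrete rendering of Eqs. (5)-(8), the following minimal NumPy sketch performs the prediction, error, and update steps for a single pixel; the function name lms2d_step and the border handling are illustrative assumptions, not the authors' implementation.

import numpy as np

def lms2d_step(I1, I2, h_b, xo, yo, beta):
    # One pixel of the 2-D LMS recursion (Eqs. (5)-(8)).
    # I1, I2: reference and current images (float arrays); h_b: filter carried over
    # from the previous pixel on the scan (Eq. (8)); (xo, yo) assumed away from the border.
    r = h_b.shape[0] // 2                                  # half-width of the square support R
    patch = I1[xo - r:xo + r + 1, yo - r:yo + r + 1]       # I1 over the support R
    I2_hat = np.sum(h_b * patch)                           # Eq. (5): prediction of I2(xo, yo)
    err = I2[xo, yo] - I2_hat                              # Eq. (6): estimation error
    h_a = h_b + beta * err * patch                         # Eq. (7): coefficient update
    return h_a, err                                        # h_a initializes the next pixel, Eq. (8)

The returned filter h_a is simply carried over to the next pixel on the scan order, which is what Eq. (8) expresses.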

Figure 3. 2-D adaptive filter estimated for the flower images.

Figure 4. 2-D adaptive filter estimated for two uncorrelated images.

3. IMPLEMENTATION

In this section, we discuss issues relating to the implementation of the proposed adaptive filtering framework for image registration.

In 1-D adaptive filtering, time provides a natural ordering of the signal samples. For 2-D images, however, a scanning order must be established before the proposed 2-D adaptive filtering technique can be applied. This order can be established through the use of space-filling curves, which map multi-dimensional data to 1-D space. In the proposed framework, in order for the adaptive filter to converge and track spatially varying changes in the motion field, the space-filling curve should have the following properties: i) adjacent pixels in 1-D space should be adjacent in the 2-D image plane, and ii) consecutive pixels in 1-D space should lie within the same neighborhood in the 2-D image plane. Hilbert curves 6 satisfy these conditions and are therefore used to map the 2-D image plane to 1-D. Figure 8 shows a Hilbert curve for a 2-D region of size 16x16.
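For reference, a scan order with the two locality properties listed above can be generated with the standard index-to-coordinate conversion for Hilbert curves; the sketch below follows the well-known bit-manipulation formulation, assumes the side length n is a power of two, and uses illustrative function names.

def d2xy(n, d):
    # Map position d along the Hilbert curve to (x, y) coordinates on an n x n grid.
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant when necessary
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                   # swap x and y
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan order for a 16 x 16 region, as in Figure 8.
scan_order = [d2xy(16, d) for d in range(16 * 16)]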

Figure. Flower images Figure 6. Two uncorrelated images than standard LMS. In this technique, a uniform motion constraint is applied over a block of pixels around the current pixel, for which the adaptive filter is estimated. Large displacements among the images may require a very large adaptive filter. However, filter size cannot be simply increased in the adaptation process because it will affect the convergence characteristic of the filter and increase the computational burden. In order to be able to use a filter with a constant size in the adaptation process, and still keep tracking large locally varying changes in the mis-registration, adaptive filter is shifted to its center of mass after each update, and integer pixel shifts are saved along with the filter coefficients. Another important variable in the proposed adaptive filtering framework is the adaptation step-size, β. This variable determines the rate at which the adaptive filter coefficients h(x, y; x o,y o ) are adapted. However, there is a trade-off between the convergence characteristic and tracking capability of the filter. A large β can provide a better tracking of the system response, when there is a significant variation in the system model, h o (x, y; x o,y o ), though it may result in large gradient estimation noise, and prevent the filter to converge. In order to provide fast tracking capability and also prevent gradient noise amplification, we apply normalized version of the LMS adaptation algorithm. The proposed image registration technique is designed to solve local mis-registration among the images. Therefore, it should follow an initial global alignment phase in order to handle large displacements between the images and coarsely align them. In the experiments, initial global alignment was performed following two approaches: a) 2-D parametric registration, 7 and b) fitting a parametric motion model into a number of correspondent image features in the images. SPIE-IS&T/ Vol. 674 163

Figure 7. Registration error after the proposed registration method.

Figure 8. Hilbert curve on a 16x16 square.

4. EXPERIMENTAL RESULTS

We applied the proposed image registration technique to both real and simulated images. Figure 9 shows two input images captured by a moving digital camera. The scene consists of multiple planes, which invalidates the planar scene assumption. The spatially varying mis-registration between the two images results from the camera motion and from differences in lighting due to deviations from the Lambertian assumption. Before applying the proposed local image registration method, the input images are aligned globally with respect to the book on the left side of the 3-D scene. This is achieved by manually selecting a number of corresponding feature points on the book object and fitting an eight-parameter motion model to the corresponding points. After warping one input image towards the other with the estimated motion parameters, the absolute registration error image shown in Figure 10 is computed.

The estimated motion parameters are then used to initialize the adaptive filter for the first pixel in the scan order of the current input image. The adaptive filter is then updated following the scan order established by Hilbert curves, using the proposed 2-D adaptive filtering technique. The absolute registration error image shown in Figure 11 is computed using the following set of parameters in the adaptive filtering framework: 1) the adaptation step size is set to .2; 2) a constant-size adaptive filter (13x13) is used in the adaptation process; 3) the block size for the block LMS is set to 3x3; and 4) the number of iterations per image is set to a fixed value. Both Figure 10 and Figure 11 are contrast-enhanced (i.e., multiplied by 2) for better presentation. The PSNR values of the error images after the initial global alignment and after the proposed registration method are 17.93 dB and 31.24 dB, respectively. The proposed technique is also compared with the hierarchical Lucas-Kanade technique, 8 using the 9x9 window size that yields the highest PSNR; the corresponding PSNR of the absolute registration error image is 23.3 dB (the number of pyramid levels is 3).
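Two helpers used in this experiment can be sketched as follows. The direct linear transform (DLT) fit below is one standard way to estimate an eight-parameter projective model from manually selected correspondences (the paper does not spell out its fitting procedure), and the PSNR helper assumes 8-bit images; the names are illustrative.

import numpy as np

def fit_projective(src, dst):
    # DLT fit of an eight-parameter (projective) motion model to point correspondences.
    # src, dst: (N, 2) arrays of corresponding points, N >= 4.
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)            # null-space solution of the homogeneous system
    return H / H[2, 2]                  # normalize so the model has eight free parameters

def psnr(error_image, peak=255.0):
    # PSNR of an absolute registration error image (8-bit intensity scale assumed).
    mse = np.mean(np.asarray(error_image, dtype=float) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)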

In order to confirm that the estimated adaptive filter models the true motion, we check the filter coefficients (i.e., they should have a single peak and they should sum to 1.0). Figure 12 shows the converged filter at an arbitrary location. Figure 13 shows a 2-D map of the integer pixel shifts. The camera motion that produces the image motion between the two images can be observed from this figure. The pixel locations where the integer pixel shifts are not uniform indicate occluded regions, where the motion field cannot be estimated. Some overlaps are observed in Figure 13 because computing a Hilbert curve requires the 2-D region to be a square: in the implementation, the input images are divided into overlapping squares, a Hilbert curve is computed for each square, and the proposed 2-D adaptive filtering technique is then applied to each square iteratively.

Figure 9. Left: reference input image; right: current input image for the multiple planar object.

Figure 10. Registration error after global alignment.

The proposed adaptive filtering method is also applied to simulated images in order to evaluate its performance when the input images have large rotation/scale differences. Figure 14 shows two such images, where the left input image is generated from the right input image. Figure 15 shows the absolute difference image between the two images, with a PSNR of 17.96 dB. The proposed 2-D adaptive filtering technique is applied to these images with the same parameters used in the first experiment. Figure 16 shows the resulting absolute registration error image, with a PSNR of 31. dB. Both Figure 15 and Figure 16 are contrast-enhanced (i.e., multiplied by 2) for better presentation.
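The checks on the converged filter and the displacement read-out can be sketched as follows; the thresholds peak_ratio and sum_tol are illustrative assumptions, not values from the paper.

import numpy as np

def models_true_motion(h, peak_ratio=0.5, sum_tol=0.1):
    # Heuristic check that a converged filter models the true motion field:
    # a single dominant peak and coefficients that sum to approximately 1.0.
    flat = np.sort(np.abs(h).ravel())
    single_peak = flat[-2] < peak_ratio * flat[-1]      # second-largest tap well below the largest
    sums_to_one = abs(float(h.sum()) - 1.0) < sum_tol
    return single_peak and sums_to_one

def displacement_estimate(h, int_shift):
    # Local displacement: peak location relative to the filter center,
    # plus the integer pixel shift accumulated during re-centering.
    px, py = np.unravel_index(np.argmax(np.abs(h)), h.shape)
    cx, cy = h.shape[0] // 2, h.shape[1] // 2
    return (px - cx + int_shift[0], py - cy + int_shift[1])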

Figure 11. Registration error after the proposed registration method.

Figure 12. 2-D adaptive filter for the multiple planar object.

5. CONCLUSION

In this paper, we presented a new local image registration technique based on 2-D adaptive filtering. We set conditions on the adaptive filter (e.g., it should have a single peak) in order to estimate the true motion field. Using Hilbert curves to traverse the input images, we guaranteed a slowly varying system response and achieved filter convergence together with improved tracking capability. We performed extensive experiments using both real and simulated images. The experimental results indicate that the proposed technique can register images with locally varying motion even when the input images have small amounts of rotation, scale, and perspective differences. The proposed registration method also provides higher PSNRs than 2-D parametric registration and the Lucas-Kanade local image registration technique.

REFERENCES

1. L. G. Brown, A survey of image registration techniques, ACM Computing Surveys 24(4), pp. 325-376, Dec. 1992.
2. B. Zitova and J. Flusser, Image registration methods: A survey, Image and Vision Computing 21(11), pp. 977-1000, 2003.
3. A. M. Tekalp, ed., Digital Video Processing, Prentice Hall, Upper Saddle River, NJ, 1995.

Figure 13. 2-D map of integer pixel shifts.

Figure 14. Left: reference input image; right: current input image for the simulation experiment.

4. H. Sawhney, Simplifying motion and structure analysis using planar parallax and image warping, International Conference on Pattern Recognition, pp. A403-A408, 1994.
5. S. Haykin, Adaptive Filter Theory, Prentice Hall, NJ, fourth ed., 2002.
6. H. Sagan, Space-Filling Curves, Springer, Berlin, 1994.
7. J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani, Hierarchical model-based motion estimation, European Conference on Computer Vision, pp. 237-252, May 1992.
8. B. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, Proc. DARPA Image Understanding Workshop, 1981.

Figure 15. Simulation experiment: difference between the input images.

Figure 16. Simulation experiment: registration error after the proposed registration method.