A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM


N. A. Tsoligkas, D. Xu, I. French and Y. Luo
School of Science and Technology, University of Teesside, Middlesbrough, TS1 3BA, UK
E-mails: tsoligas@teihal.gr, d.xu@tees.ac.uk, I.french@tees.ac.uk, b2060811@tees.ac.uk

ABSTRACT

Video sequences often suffer from jerky movements between successive frames. In this paper we present a stabilisation method that extracts motion information from successive frames and corrects the undesirable effects. In the method, the optical flow is computed, and the motion vectors estimated with the Horn-Schunck algorithm are passed to a model-fitting unit that stabilises and smooths the video sequence. A synchronisation module supervises the whole system. Fidelity measures (PSNR) are given to demonstrate the stabilisation quality and the efficiency of the system.

KEYWORDS: Motion Estimation, Motion Model, Video Stabilisation, Optical Flow, Horn-Schunck Technique, Image Sequence Smoothing, System Fidelity, Global Transformation Fidelity (GTF).

1. INTRODUCTION

Image stabilisation is a key preprocessing step in video analysis and processing. In general, stabilisation is a kind of warping of video sequences in which the image motion due to camera rotation or vibration is totally or partially removed. Most proposed algorithms compensate for all motions [1,2,3,4,5,6], so the resultant background remains motionless. The motion models in [1,2] combine a pyramid structure for evaluating the motion vectors with an affine motion model representing rotational and translational camera motion, while others [7] use a probabilistic model with a Kalman filter to reduce motion noise and obtain stabilised camera motion. Chang et al. [8] use the optical flow between consecutive frames to estimate the camera motion by fitting a simplified affine motion model.
Hansen et al. [5] describe an image stabilisation system which uses a multi-resolution, iterative process to estimate the affine motion parameters between levels of Laplacian pyramid images; the parameters are refined iteratively until the desired precision is achieved. This paper describes a simplified stabilisation algorithm based on an iterative, coarse-to-fine process. The motion vectors are first estimated between two successive fields using block matching, and the dense motion field is then computed from these vectors using the Horn-Schunck algorithm. By fitting an affine motion model, the motion parameters are obtained, and the currently stabilised i-th video frame is built from the previously stabilised frame or from the original i-th frame. The ambiguity between image motion caused by 3D rotation and that caused by 3D translation (per frame) is resolved by analysing the direction of the motion vectors [9,10] and their standard deviation.

The organisation of this paper is as follows. Section 2 gives a brief overview of the video stabilisation algorithm. The proposed motion model based stabilisation method is described in detail in Section 3. Section 4 presents the simulation results of the method. Finally, conclusions are drawn in Section 5.

2. OVERVIEW OF VIDEO STABILISATION ALGORITHM

An electronic image stabilisation system produces stable images from an unstable incoming video sequence or an unstable imaging sensor. In such a system, a stabilisation algorithm is usually adopted to estimate and correct affine deformations between successive frames as well

as to align images into the stabilised video ready for display. The first frame of a video sequence is often used to define the reference coordinate system. By applying an appropriate motion model, such as an affine model, each subsequent frame is warped to align with the reference frame and then properly displayed. A stabilisation system usually includes three major components: a motion estimation module, a motion compensation module and an image synchronisation unit. In the next section, we present the implementation of the motion estimation and motion compensation modules in our system.

3. MOTION MODEL BASED STABILISATION METHOD

3.1 Motion Estimation

The accuracy of the stabilisation system mainly depends on the motion vectors produced during the interframe motion estimation. We use a coarse-to-fine method, in which we perform block correlation at a coarse scale, interpolate the resulting estimates, and pass them through 15 iterations of Horn and Schunck's algorithm [11]. The smoothness parameter is set to 30, the block size is 8, and the search range is 7 pixels in both directions. Block correlation is measured with the MSE criterion. Under user control, this stabilisation method can compensate for translation, rotation, zooming, etc., through analysis of the motion vectors [9]. In the case of scaling and/or rotation, with N motion vectors between two successive frames, the affine motion parameters can be estimated by solving the following over-constrained linear system $BC = A$:

$$
\underbrace{\begin{bmatrix}
x_1 & -y_1 & 1 & 0 \\
y_1 & x_1 & 0 & 1 \\
x_2 & -y_2 & 1 & 0 \\
y_2 & x_2 & 0 & 1 \\
\vdots & \vdots & \vdots & \vdots \\
x_N & -y_N & 1 & 0 \\
y_N & x_N & 0 & 1
\end{bmatrix}}_{B}
\underbrace{\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}}_{C}
=
\underbrace{\begin{bmatrix}
x_1 + u_1^{Horn} \\
y_1 + v_1^{Horn} \\
\vdots \\
x_N + u_N^{Horn} \\
y_N + v_N^{Horn}
\end{bmatrix}}_{A}
$$

The vector C is then computed as

$$ C = (B^{T} B)^{-1} B^{T} A. $$

In the case of small rotations, i.e., $\cos\theta \approx 1$ and $\sin\theta \approx \theta$, the system has three unknowns $(\theta, c, d)$, which are solved by the least-squares method (the scaling factor has to be computed first).
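As an illustrative sketch (not the authors' code), the over-constrained system $BC = A$ above can be solved in a few lines. The point list and the Horn-Schunck flow components are assumed inputs, and numpy's least-squares routine stands in for the normal-equation solution $(B^T B)^{-1} B^T A$:

```python
import numpy as np

def fit_similarity_model(points, u, v):
    """Least-squares fit of the four-parameter model C = [a, b, c, d]
    from N points (x_i, y_i) and their Horn-Schunck flow (u_i, v_i).
    Each point contributes two rows of B; the model maps
    x' = a*x - b*y + c,  y' = b*x + a*y + d."""
    n = len(points)
    B = np.zeros((2 * n, 4))
    A = np.zeros(2 * n)
    for i, (x, y) in enumerate(points):
        B[2 * i] = [x, -y, 1.0, 0.0]       # row for the x-equation
        B[2 * i + 1] = [y, x, 0.0, 1.0]    # row for the y-equation
        A[2 * i] = x + u[i]                # displaced x position
        A[2 * i + 1] = y + v[i]            # displaced y position
    # C = (B^T B)^{-1} B^T A, computed stably via lstsq
    C, *_ = np.linalg.lstsq(B, A, rcond=None)
    return C
```

For a pure translation of (2, 3) pixels, for instance, the fit recovers a = 1, b = 0, c = 2, d = 3.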
The above method of producing the motion vectors is very sensitive to outliers and misplaced data. Therefore, motion vectors above a certain magnitude are classified as outliers and substituted by the median value. Geometric mean, harmonic mean, standard deviation, median and trimmed-mean techniques have all been applied and tested; the last two are the most robust to outliers.

3.2 Motion Correction and Compensation

To obtain stabilised output video sequences, different types of filters have been applied and tested to smooth the video sequences. Recursive Kalman filtering is used to remove camera vibrations. The moving-average filter smooths the data by replacing each data point with the average of the neighbouring points within a span, which is set to seven. The locally weighted scatter-plot smoother uses weighted linear regression to smooth the data, and the Savitzky-Golay filter [12] is used as a generalised moving-average filter. In our system, a memory stores the motion vectors (or motion parameters) of the last five frames (for real-time operation). The flowchart of the proposed video stabilisation process is shown in Fig. 1.
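The outlier substitution and moving-average smoothing steps described above can be sketched as follows. This is a hedged illustration: the outlier threshold is an assumed tuning parameter, and the span of seven matches the value stated in the text:

```python
import numpy as np

def suppress_outliers(vectors, threshold):
    """Replace motion-vector components whose magnitude exceeds
    `threshold` (an assumed tuning parameter) with the median of the
    set, as described for the outlier filtering step."""
    vectors = np.asarray(vectors, dtype=float)
    med = np.median(vectors)
    return np.where(np.abs(vectors) > threshold, med, vectors)

def moving_average(params, span=7):
    """Smooth a 1-D sequence of motion parameters by replacing each
    value with the mean of its neighbours within the span.
    Edges are implicitly zero-padded, a simplification."""
    kernel = np.ones(span) / span
    return np.convolve(params, kernel, mode="same")
```

For example, `suppress_outliers([1, 2, 100, 3], threshold=10)` replaces the spike of 100 with the median 2.5 and leaves the other values untouched.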

Figure 1. Flowchart of the proposed video stabilisation process (image acquisition and logging, colour-space and video-format conversion; block matching and Horn-Schunck optical flow in the motion estimation unit; outlier filtering, motion-vector analysis and motion model fit producing (θ, Χ, Υ, S); memory, smoothing, synchronisation and image compensation in the motion compensation unit, yielding the stabilised video sequence)

The motion compensation warps the frame using the smoothed motion vector (u, v) or motion parameters (θ, Χ, Υ, S), as shown in Fig. 1. The original frame and the previously stabilised frame are used to perform the motion compensation frame by frame. The problem with this approach is that errors introduced at the earlier stages of the video stabilisation propagate to subsequent frames. Hence, a comparator (supervisor) is used to compare the motion parameters (or motion vectors) produced from the original video sequence with those produced from the stabilised video sequence. A synchronisation frame is then transmitted to prevent stabilisation failure. Fig. 2 shows the compensation process.

Figure 2. Compensation process to stabilise video sequences (each original frame i is model-fitted together with frames i-1 and i-2 and the corresponding stabilised frames, under the control of the synchronisation unit)
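The per-frame warping performed by the motion compensation unit can be illustrated with a minimal nearest-neighbour warp under the fitted four-parameter model. This is a sketch of the idea, not the authors' implementation; the mapping direction and zero-fill for uncovered pixels are assumptions:

```python
import numpy as np

def compensate_frame(frame, a, b, c, d):
    """Resample `frame` through the fitted model
    x' = a*x - b*y + c,  y' = b*x + a*y + d,
    so that each output (stabilised) pixel is taken from the
    model-mapped location in the input frame. Nearest-neighbour
    sampling; pixels mapped outside the frame are set to zero."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = a * xs - b * ys + c   # source column for each output pixel
    src_y = b * xs + a * ys + d   # source row for each output pixel
    sx = np.rint(src_x).astype(int)
    sy = np.rint(src_y).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(frame)
    out[valid] = frame[sy[valid], sx[valid]]
    return out
```

With the identity parameters (a = 1, b = 0, c = 0, d = 0) the frame is returned unchanged; a horizontal offset c shifts the content accordingly.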

4. SIMULATION RESULTS

To analyse the performance of the proposed motion model based method, simulations have been carried out on QCIF-format (176 pixels by 144 lines) video sequences (200 frames each), uploaded to a PC in .avi format. We converted the RGB (24-bit) colour space to the YCbCr colour space and worked on the Y plane. The PC used is a Pentium IV 2.8 GHz with 2 GB RAM. The block size is chosen as 8×8 pixels and the search range is -7 to +7 pixels in both the horizontal and vertical directions. The MSE has been used as the search criterion.

Fig. 3 shows an example of a stabilised frame from the video sequence "my clock", and the rotational motion vectors estimated before the optical flow are shown in Fig. 4. The random vectors detected in the image frame are due to the zooming in/out effect or small translational motion produced during the video recording. Since dynamic processes such as stabilisation cannot be shown with still images, we display in Fig. 5 only the variations of the three parameters (θ, c, d). In the figure, the initial parameters are compared with the smoothed ones, and the experiment shows good results. The PSNR, or Interframe Transformation Fidelity (ITF), is given in Fig. 6; its high values indicate that the fidelity of the system is high. Fig. 7 shows that the GTF drops from frame to frame, since each new frame has less overlap with the reference frame; after the 30th frame the sequence no longer overlaps with the reference.

Figure 3. "My clock" video sequence
Figure 4. The dense optical field
Figure 5. Original and smoothed motion parameters
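The ITF reported in Fig. 6 is commonly computed as the mean PSNR over consecutive frames of the stabilised sequence; a minimal sketch, assuming 8-bit grayscale frames:

```python
import numpy as np

def psnr(f1, f2, peak=255.0):
    """PSNR between two frames; higher means more similar."""
    mse = np.mean((f1.astype(float) - f2.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def interframe_transformation_fidelity(frames):
    """ITF: average PSNR over all consecutive frame pairs of the
    stabilised sequence; a higher ITF indicates less residual
    interframe jitter."""
    return np.mean([psnr(a, b) for a, b in zip(frames[:-1], frames[1:])])
```

Because the PSNR depends on frame content, the ITF is meaningful mainly for comparing different stabilisation results of the same sequence, as the conclusions below also note.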

Figure 6. Measure of the Interframe Transformation Fidelity
Figure 7. Measure of the Global Transformation Fidelity

5. CONCLUSIONS

A video stabilisation method using the Horn-Schunck motion estimation technique has been presented in this paper. In the method, the video sequence is first recorded and uploaded to a PC. Secondly, the motion vectors are examined on an image-by-image basis and fitted with the appropriate motion model. Thirdly, synchronisation is applied to prevent system failure. These three characteristics make the stabilisation system applicable to real-time applications when the camera is connected to a PC. The advantages of the presented technique are its simplicity, robustness and the stability of each computational step. However, several aspects of our method can be improved for better performance; for example, de-noising and the detection of blurred images at the initial stages of the stabilisation are desirable. The fidelity measurement (the PSNR in our case) used to evaluate the performance of the system is not absolute, since it depends on the video sequence being stabilised and on the motion model used, but it is useful when comparing different stabilisation systems.

6. REFERENCES

[1] C. Morimoto and R. Chellappa, "Automatic digital image stabilization," Proceedings of the IEEE International Conference on Pattern Recognition, 1997, pp. 660-665.
[2] C. Morimoto and R. Chellappa, "Fast electronic digital image stabilization for off-road navigation," Proceedings of the 13th International Conference on Pattern Recognition, Vol. 3, August 1996, pp. 284-288.
[3] P. Burt and P. Anandan, "Image stabilization by registration to a reference mosaic," Proceedings of the DARPA Image Understanding Workshop, Monterey, CA, 1994, pp. 425-434.
[4] L. S. Davis, R. Bajcsy, R. Nelson and M. Herman, "RSTA on the move," Proceedings of the DARPA Image Understanding Workshop, Monterey, CA, 1994, pp. 435-456.
[5] M. Hansen, P. Anandan, K. Dana, G. van der Wal and P. J. Burt, "Real-time scene stabilization and mosaic construction," Proceedings of the DARPA Image Understanding Workshop, Monterey, CA, 1994, pp. 457-465.
[6] M. Irani, B. Rousso and S. Peleg, "Recovery of ego-motion using image stabilization," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, 1994, pp. 454-460.
[7] A. Litvin, J. Konrad and W. C. Karl, "Probabilistic video stabilization using Kalman filtering and mosaicking," Proceedings of the IS&T/SPIE Symposium on Electronic Imaging, Image and Video Communications, 2003, pp. 20-24.
[8] H. C. Chang et al., "A robust and efficient video stabilization algorithm," Proceedings of IEEE ICME, 2004, pp. 29-32.

[9] I. Koprinska and S. Carrato, "Temporal video segmentation: A survey," Signal Processing: Image Communication, Vol. 16, 2001, pp. 477-500.
[10] J. C. Tucker and A. De Sam Lazaro, "Image stabilization for a camera on a moving platform," Intelligent Systems and Robotics Laboratory, Department of Mechanical and Materials Engineering, Washington State University, Pullman, WA 99164-2920.
[11] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, Vol. 17, 1981, pp. 185-203.
[12] A. Savitzky and M. J. E. Golay, "Smoothing and differentiation of data by simplified least squares procedures," Analytical Chemistry, Vol. 35, No. 8, 1964, pp. 1627-1639.