Video Stabilization Using SIFT-ME Features and Fuzzy Clustering

2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 25-30, San Francisco, CA, USA

Video Stabilization Using SIFT-ME Features and Fuzzy Clustering

Kevin L. Veon, Student Member, IEEE, Mohammad H. Mahoor, Member, IEEE, Richard M. Voyles, Senior Member, IEEE
Department of Electrical and Computer Engineering, University of Denver, Denver, CO

Abstract: We propose a digital video stabilization process using information that the scale-invariant feature transform (SIFT) provides for each frame. We use a fuzzy clustering scheme to separate the SIFT features representing global motion from those representing local motion. We then calculate the global orientation change and translation between the current frame and the previous frame. Each frame's translation and orientation is added to an accumulated total, and a Kalman filter is applied to estimate the desired motion. We provide experimental results from five video sequences using peak signal-to-noise ratio (PSNR) and qualitative analysis.

I. INTRODUCTION

Video stabilization is the process of removing unwanted motion, or jitter, from a video while maintaining the desired motion. The human perceptual system is sensitive to high-frequency motions, which cause the discomfort many people feel while watching shaky video. The visual quality and viewability of a video can be improved by smoothing these high-frequency motions. Viewability is important for applications such as surveillance and teleoperation of unmanned vehicles. Motion due to vibration is usually treated as zero-mean Gaussian noise, so it is safe to remove these motions without negatively affecting the task's accuracy. This is similar to how the human spatial awareness system works: the vestibular ocular reflex causes the human eye to move in opposition to the motion of the head, which effectively stabilizes the visual field when slight motions such as vibrations occur.

The three most common approaches to digital video stabilization are block matching, KLT feature tracking, and SIFT feature tracking. Block matching is most often used due to its relatively low computational complexity compared to finding robust features [1], [2], [3], [4], [5]. Feature tracking is often more accurate and more robust with respect to noise and rotation. Many implementations use KLT feature tracking because it is feasible to run in real time [6], [7], [8]. SIFT feature tracking has also been used because of its robustness and high accuracy, though it is currently unlikely to run in real time [9], [10], [11].

Two types of motion exist in videos: local motion and global motion. Local motion is the motion of dynamic objects in the scene, such as cars, animals, or people. These objects have their own inherent motion combined with the camera motion. Unless a model is known, or assumed, a priori for each object's motion, it is counterproductive to use them in the estimation of camera motion. Global motion is the motion of the scene induced by the motion of the camera, which consists of both desired motions (panning, zooming) and undesired motions (jitter). It represents the motion of fixed objects in the scene such as buildings, trees, or streets. These static objects are good candidates for estimating camera motion. Desired motion can be separated from the global motion by looking at the accumulation of error or by using a method such as Kalman filtering. The separation of local and global motion is a key concept in digital video stabilization.
Several techniques make use of the consistency or inconsistency of accelerations and velocities. Delta flow [6] is one such technique that has successfully been used to separate these two categories of motion. Using illumination-based optical flow [12], [13], delta flow assumes that local motion has acceleration and velocity that are inconsistent with respect to time, while global motion has acceleration and velocity that are consistent with respect to time. Although the moving objects themselves do not actually have inconsistent accelerations or velocities, the illumination-based optical flow at each pixel will be inconsistent when an object moves over the area. Alternatively, feature-tracking methods of optical flow [14], [15] have successfully been used. The method of [16] separates local and global motion by treating each feature as an individual object with linear motion. Because static objects should share consistent velocities, and because there are normally more static objects than moving objects, a majority vote is taken to estimate the global motion.

Fuzzy logic has been used previously for video stabilization in [17]. The authors use a least-squares estimation of the global motion and then create two vectors to find the estimation error. The first vector is the difference between the current frame's feature and the previous frame's matched feature. The second vector is calculated by using the estimated motion to predict where the matched feature should be in the current frame based on its location in the previous frame. The difference between these two vectors is the estimation error, which is then used in a fuzzy logic system to determine good candidates for a refined least-squares estimation.

We propose to use SIFT feature tracking for our video stabilization method. A key difference between our method

and the previous SIFT-based video stabilization processes is that we utilize the orientation given by the SIFT feature. As far as we know, the orientation of SIFT features has not been used directly in video stabilization, though it has been used for human activity recognition [18], where the augmented SIFT feature tracker was named SIFT-ME. We separate local and global motion by introducing a trust value for each SIFT-ME feature and using a fuzzy clustering technique to find the most trusted group of features. We then use Kalman filtering to determine desired motion, which allows for situations where the camera is meant to move.

The remainder of the paper is organized as follows. Section 2 contains a short discussion of SIFT-ME, fuzzy set theory, and Kalman filtering. Section 3 provides a detailed description of our proposed video stabilization technique. Section 4 describes the experimental procedure and results. Section 5 introduces possible future work and concludes the paper.

II. BACKGROUND

This section contains relevant theory and discussion of the three primary tools used in our video stabilization approach: SIFT motion estimation (SIFT-ME), fuzzy set theory, and Kalman filtering.

A. SIFT Motion Estimation

Derived from SIFT [14], SIFT-ME is a feature that describes both the translation and in-plane rotation of tracked SIFT features [18]. The SIFT-ME feature is a vector containing three values: the rotation β about the center of the feature, the magnitude ρ of the translation, and the direction α of the translation. The vector <β, ρ, α> is sufficient to describe both the translation of a feature and its in-plane rotation:

\begin{bmatrix} x_t \\ y_t \end{bmatrix} =
\begin{bmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{bmatrix}
\begin{bmatrix} x_{t-1} \\ y_{t-1} \end{bmatrix} +
\begin{bmatrix} \rho\cos\alpha \\ \rho\sin\alpha \end{bmatrix}   (1)

The orientation of SIFT features is often ignored. This information is unavailable with features such as KLT or Harris corners, but is readily available with SIFT. The rotation of the tracked feature was used in [18] to determine the motion of body parts for human activity recognition. Using both the rotational and translational components of SIFT-ME outperformed using only the translational component for activity recognition.
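
To make Equation (1) concrete, the following minimal sketch (our illustration, not the authors' code; the function name apply_sift_me is hypothetical) applies a SIFT-ME vector <β, ρ, α> to a feature position:

```python
import numpy as np

def apply_sift_me(p_prev, beta, rho, alpha):
    """Move a feature position p_prev = (x, y) by a SIFT-ME vector
    <beta, rho, alpha>, per Equation (1): an in-plane rotation by beta
    followed by a translation of magnitude rho in direction alpha."""
    R = np.array([[np.cos(beta), -np.sin(beta)],
                  [np.sin(beta),  np.cos(beta)]])
    t = rho * np.array([np.cos(alpha), np.sin(alpha)])
    return R @ np.asarray(p_prev, dtype=float) + t

# Example: rotate 90 degrees, then translate 2 units along +x.
print(apply_sift_me((1.0, 0.0), beta=np.pi / 2, rho=2.0, alpha=0.0))
# -> [2. 1.]  (the point (1, 0) rotates to (0, 1), then shifts by (2, 0))
```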
B. Fuzzy Clustering and Fuzzy Sets

Classical set theory has binary membership: either an element is a member of a set, or it is not [19]. This distinction is absolute. It follows classical logic, which similarly defines propositions to be definitely true or definitely false. Classical set theory is valid when the boundaries for membership can be clearly and precisely defined.

Fuzzy set theory is an extension of classical set theory that is not as restrictive about membership. It allows for partial membership, which can be any value in the range [0, 1]. Unlike classical logic, fuzzy logic allows propositions to have a degree of truth. This is particularly useful when data is imprecise, inaccurate, or not clearly defined. The classic example of fuzzy set theory's usefulness is the case of height. The set "tall" is not clearly defined; different people will have different opinions about what constitutes being tall. If a precise boundary is set at a height of 2.0 m, then someone whose height is 1.99 m will not be considered tall. This does not make sense, as we do not consider such a small difference significant enough to change our opinion of whether or not someone is tall. Fuzzy set theory allows us to define a degree of membership to a set at any given value. A membership function is defined that describes the level of membership over the domain. In the example of the set "tall," we might define a ramp function that rises from 0 to 1 starting at 1.6 m and ending at 2.0 m, maintaining a value of 1 for all values greater than 2.0 m. This membership function follows the intuition that as a person's height increases, it is more likely that people will consider him tall.

C. Kalman Filtering

The Kalman filter is an adaptive least-square-error filter that estimates the state of a system subject to Gaussian noise. It iteratively takes imprecise linear measurements and predicts the most likely current state of the system. The state update equation is

x_t = A x_{t-1} + B u_t + w_{t-1}   (2)

where A is the state model, x is the state vector of the system, B is the control input model, u is the control input vector, and w ~ N(0, Q) is the Gaussian state model noise. If Q has a lower value, the state model effectively has a higher weight. The output update equation is

y_t = C x_t + n_t   (3)

where C is the selection mask for which state variables are visible and n ~ N(0, R) is the measurement noise.
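
As a concrete illustration of Equations (2) and (3), here is one predict/correct iteration (a generic textbook sketch, not the paper's implementation; it assumes no control input, i.e. B u_t = 0):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One Kalman iteration for the model of Equations (2)-(3):
    x_t = A x_{t-1} + w,  w ~ N(0, Q);   y_t = C x_t + n,  n ~ N(0, R).
    Returns the updated state estimate and covariance given measurement z."""
    # Predict with the state model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correct with the measurement.
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Example: smooth noisy 1-D offsets with a constant-position model (A = I).
# A small Q relative to R gives the state model a higher weight.
A = C = np.eye(1)
Q, R = np.eye(1) * 1e-3, np.eye(1) * 1.0
x, P = np.zeros(1), np.eye(1)
for z in [0.9, 1.2, 0.8, 1.1]:
    x, P = kalman_step(x, P, np.array([z]), A, C, Q, R)
```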

III. VIDEO STABILIZATION PROCESS

This section contains a description of each step in our proposed video stabilization process. The three major steps are the separation of local and global motion, the estimation of motion, and finally, motion compensation. A block diagram describing the steps is shown in Figure 1.

Fig. 1. Block diagram displaying our stabilization process for video sequences, separated into macro-blocks representing SIFT, fuzzy clustering, motion estimation, and motion compensation.

A. Separation of Local and Global Motion

We extract SIFT features from the previous and current frames. Each SIFT feature is assigned a trust value which represents how often it has been used to estimate global motion. New features are assigned a nominal value. Any feature that is used to calculate the global motion has its trust value incremented so it is more likely to be picked in the next iteration. This method relies on the assumption that background features are chosen instead of foreground features. To satisfy this assumption, the first few frames of the video should not include any local motion, so the trust values of background features are guaranteed to be higher than those of any other feature.

To approximate the separation of local and global motion, we employ fuzzy clustering. Instead of classic fuzzy c-means clustering [20], [21], we perform clustering in two steps. First, the k-means clustering algorithm is used to define the k cluster centroids. We then use the Euclidean distance of the rotational or translational components of the SIFT-ME features from each centroid to assign a membership value:

M(x_i, c_j) = \frac{1}{\| x_i - c_j \|^2} \quad \forall i, j.   (4)

The random nature of k-means clustering, as well as the uncertainty in the value of k, are the primary reasons that we use fuzzy clustering. Ideally, we would prefer to use every background feature in the calculation of global motion in order to always increment their trust values. Using classical set theory, if the relationship between local and global motion is ambiguous, a large k value would be necessary to improve the quality of local and global motion separation; however, if the value of k is too large, it is likely that very few features will belong to each set. Choosing any of these sets increases the trust of only a small number of features, and if these few features become occluded or unusable later, there is less potential for differentiating between local and global motion. Fuzzy set theory alleviates this issue. Even if k is chosen to be a large value, many more features can have relatively high membership values in multiple clusters. As long as a feature has a high enough membership in a cluster, it can be considered part of that cluster. This removes the need for a feature to be a member of a single distinct cluster and allows a larger number of features to have their trust values incremented during each iteration.

Given the fuzzy clusters and the trust value for each feature, the cluster with the highest weighted sum is chosen as the best cluster c_best:

c_{\text{best}} = \arg\max_{c} \sum_{i=1}^{n} M(x_i, c) \, T_{x_i}   (5)

where x_i is the i-th member of the feature set x, M(x_i, c) is the membership of x_i in cluster c, and T_{x_i} is the trust value of x_i. Any member of the best cluster with membership above a threshold value t is chosen as a representative member, which will then be used to calculate the global motion:

C = \{ x_i \mid x_i \in x, \ M(x_i, c_{\text{best}}) \geq t \}.   (6)
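
A compact sketch of this clustering step, following our reading of Equations (4)-(6) (the function name trusted_members and the use of SciPy's k-means are our choices, not the authors'):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def trusted_members(values, trust, k=3, t=0.5):
    """Pick representative features per Equations (4)-(6).
    values: (n,) motion components (e.g. each feature's rotation beta);
    trust:  (n,) trust values T_xi.  Returns indices of the chosen set C."""
    data = np.asarray(values, dtype=float).reshape(-1, 1)
    centroids, _ = kmeans2(data, k, minit='points')
    # Eq. (4): membership is the inverse squared distance to each centroid.
    d2 = (data - centroids.T) ** 2            # (n, k) squared distances
    M = 1.0 / np.maximum(d2, 1e-12)           # guard against zero distance
    # Eq. (5): the best cluster maximizes the trust-weighted membership sum.
    best = np.argmax(M.T @ np.asarray(trust, dtype=float))
    # Eq. (6): keep members whose membership in c_best exceeds threshold t.
    return np.flatnonzero(M[:, best] >= t)
```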
B. Motion Estimation

We use a motion model that is equivalent to the calculation of SIFT-ME features as shown in Equation (1):

\begin{bmatrix} x_t \\ y_t \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x_{t-1} \\ y_{t-1} \end{bmatrix} +
\begin{bmatrix} dx \\ dy \end{bmatrix}   (7)

This model consists of an in-plane rotation θ followed by a translation <dx, dy> in the x and y directions, respectively. It does not consider affine motion (perspective changes); we assume that perspective changes are not significant enough to negatively impact the visual quality of the stabilized video when the camera is fixed or the reference frame is updated dynamically. The most important type of motion in this model is the rotation: if the rotation is not accurately estimated, the translation cannot be estimated effectively.

As such, the first step of motion estimation in our approach is to estimate the rotation between successive frames. This is done by first performing fuzzy clustering, as described in Section III-A, on the orientation component β of matched SIFT-ME features. The mean value of the best cluster of orientation changes is chosen as the estimated global rotation. Clearly, this is not sufficient to separate all local and global motion unless all local motion has some inherent rotation. It is not important yet that the motion be separated entirely: as long as the local rotations chosen in this round of fuzzy clustering are consistent with the global rotations, the process will continue unhindered. By clustering the values, it is ensured that the chosen global and local β values will be similar. The representative members of the best cluster are then used to estimate the translation of each feature.
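
Building on the trusted_members sketch above, the rotation estimate might look as follows (again illustrative, not the authors' code; the trust increment reflects the scheme described in Section III-A):

```python
import numpy as np

def estimate_rotation(betas, trust):
    """Estimate the global in-plane rotation: fuzzy-cluster the per-feature
    rotation components and average the best cluster's representatives.
    betas: (n,) rotation components; trust: (n,) float array, modified in
    place so chosen features are more likely to be picked next frame."""
    idx = trusted_members(betas, trust)   # from the previous sketch
    trust[idx] += 1                       # reward features used for global motion
    return float(np.mean(np.asarray(betas, dtype=float)[idx])), idx
```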

The translation of each feature is found as the difference between two vectors, v = v1 - v2. The first vector v1 consists of the x and y values of the current frame's feature. The second vector v2 is created by rotating the x and y values of the previous frame's feature by the estimated orientation using the rotation matrix in Equation (7):

\begin{bmatrix} d\hat{x} \\ d\hat{y} \end{bmatrix} =
\begin{bmatrix} x_t \\ y_t \end{bmatrix} -
\begin{bmatrix} \cos\hat{\theta} & -\sin\hat{\theta} \\ \sin\hat{\theta} & \cos\hat{\theta} \end{bmatrix}
\begin{bmatrix} x_{t-1} \\ y_{t-1} \end{bmatrix}   (8)

This results in a combination of local and global translations, depending on which features were chosen in the orientation clustering. To finally separate the local and global motion, fuzzy clustering is performed on the estimated translations of each feature. If local translations are consistent with global translations, they may be chosen as representative members of the best cluster of translations. We do not consider this a significant problem because local motion features are generally not as stable as global motion features, so they will likely disappear after a short sequence of frames. The mean of the chosen translations is taken as the estimated global translation.

C. Motion Compensation

The estimated motion from Section III-B is added to an accumulated total and treated as sensor feedback for a Kalman filter. The state of the system is updated using this information and the state model chosen for the video in question. The current estimated state represents the desired motion, which is then subtracted from the accumulated global motion to achieve a more stable motion. Once the motion is estimated using the Kalman filter for the current frame, it is a straightforward process to place the new frame back in line with the estimated motion. By solving Equation (7) for <x_{t-1}, y_{t-1}>, the equation for motion compensation is found to be

\begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} =
\begin{bmatrix} \cos\hat{\theta}_{kt} & \sin\hat{\theta}_{kt} \\ -\sin\hat{\theta}_{kt} & \cos\hat{\theta}_{kt} \end{bmatrix}
\begin{bmatrix} x_t - d\hat{x}_{kt} \\ y_t - d\hat{y}_{kt} \end{bmatrix}   (9)

where the subscript kt denotes the difference between the accumulated motion and the Kalman filter estimate of the motion at time t, and hats denote estimated values.
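
A sketch of Equations (8) and (9) (our illustration; rot2d is a helper we define here, and the quantities theta_kt and d_kt are assumed to be computed elsewhere from the accumulated and Kalman-filtered motion):

```python
import numpy as np

def rot2d(theta):
    """2-D rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def translation_residuals(p_curr, p_prev, theta_hat):
    """Eq. (8): per-feature translation estimates <dx^, dy^> once the
    global rotation theta_hat is known: v1 - v2 = p_t - R(theta^) p_{t-1}.
    p_curr, p_prev: (n, 2) arrays of matched feature positions."""
    return p_curr - p_prev @ rot2d(theta_hat).T

def compensate(points, theta_kt, d_kt):
    """Eq. (9): map frame coordinates back in line with the reference.
    Inverting Eq. (7) gives R(-theta_kt) applied to the de-translated
    points, since R(-theta) = R(theta)^{-1}."""
    return (np.asarray(points, dtype=float) - d_kt) @ rot2d(-theta_kt).T
```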
IV. EXPERIMENTS

We implemented our video stabilization process in MATLAB, using the VLFeat library [22] to calculate the SIFT features for each frame. Each video sequence was saved to a file beforehand, and the stabilization process was run offline.

A. Experimental Data

We tested our stabilization process on several videos. Sample frames from four of these videos can be seen in Figure 2.

Fig. 2. Original and stabilized video side-by-side comparison of selected frames. (a) Original LAB video, (b) stabilized LAB video, (c) original ONDESK video, (d) stabilized ONDESK video, (e) original STREET video, (f) stabilized STREET video, (g) original ONROAD video, (h) stabilized ONROAD video.

The first is a video taken in our lab on a handheld camcorder at a resolution of 640x480. The purpose of this video is to test the translation component of the stabilization process with little change in orientation. The next three videos are the ONDESK, STREET, and ONROAD video sequences from [11], which all have a resolution of 160x120. This difference in resolution demonstrates that our stabilization process is robust enough to work at low resolutions, where fewer features are available for motion estimation. The ONDESK video is similar to our lab video; however, it contains faster motion and more significant rotations. The STREET video is similar to the ONDESK video with the exception that a car drives through the scene, allowing the separation of local and global motion to be tested. The ONROAD video is taken by a person walking forward, with significant vibrations due to the walking motion. This shows that our stabilization technique is not limited to stationary cameras.

These four videos are meant to have no desired camera motion. The state model used in the Kalman filter is a matrix of zeros, and the covariance of the model is given a very low value to ensure that the desired motion is always zero.

The final video is from the YouTube video community and shows the perspective of a base jumper during a jump [23]. This video has significant desired vertical and horizontal motion and no desired rotational motion. It is used to test the dynamic reference frame of the Kalman filtering approach for estimating desired and undesired motion. A simple state model using x and y positions and velocities is used; we do not include θ position or velocity since no rotation is desired. We chose this video due to its similarity to aerial surveillance video.
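
As a sketch of the two state models just described (our interpretation; the paper does not give the exact matrices, so the constant-velocity form for the base jump video is an assumption):

```python
import numpy as np

# Fixed-camera videos: desired motion is always zero, so the state model
# maps every state to zero and its covariance Q is made very small.
A_static = np.zeros((3, 3))          # state: desired (x, y, theta) offsets
Q_static = np.eye(3) * 1e-6

# Base jump video: constant-velocity model on x and y (no theta state,
# since no rotation is desired).  State: (x, vx, y, vy), frame period dt.
dt = 1.0
A_jump = np.array([[1, dt, 0,  0],
                   [0,  1, 0,  0],
                   [0,  0, 1, dt],
                   [0,  0, 0,  1]], dtype=float)
C_jump = np.array([[1, 0, 0, 0],     # only the x and y offsets are measured
                   [0, 0, 1, 0]], dtype=float)
```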

B. Experimental Results

The peak signal-to-noise ratio (PSNR) is often used to measure the similarity between two images. It is primarily used for measuring the quality of image compression, but it has been adopted as a measure of video stabilization quality, primarily for stationary cameras with static backgrounds [9], [24], [25]. The ONROAD and base jump videos are not considered for PSNR calculations because of their dynamic backgrounds. Higher PSNR values represent better stabilization quality. The PSNR value for each frame of the original ONDESK video and our stabilized version are shown in the graph in Figure 3. The PSNR value for our stabilized video never drops below that of the original video. Results for the other video sequences are similar.

Fig. 3. Graph of the peak signal-to-noise ratio of the original ONDESK video and the stabilized ONDESK video.

The interframe transformation fidelity (ITF) measurement was introduced in [9]. It is essentially the average of the PSNR over the video from the second frame forward, and it gives a rough estimate of the overall quality of the stabilized video in a single value. Like PSNR itself, higher ITF values represent higher-quality video stabilization. Table I contains the ITF values for the video sequences tested. In all cases, the ITF of our stabilized videos is higher than the ITF of the original videos; the ITF increases by 5-7 dB, which is fairly good.

TABLE I. ITF values (in dB) of the original and stabilized LAB, ONDESK, and STREET videos (columns: Original ITF, Stabilized ITF, Increase in ITF).
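
For reference, PSNR and ITF can be computed as follows (a standard formulation consistent with [9], not the authors' code; 8-bit grayscale frames are assumed):

```python
import numpy as np

def psnr(frame_a, frame_b, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized frames."""
    mse = np.mean((frame_a.astype(float) - frame_b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def itf(frames):
    """Interframe transformation fidelity [9]: mean PSNR between each
    pair of consecutive frames, from the second frame forward."""
    return float(np.mean([psnr(a, b) for a, b in zip(frames[:-1], frames[1:])]))
```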
PSNR does not always directly translate into perceived video quality. Because of this, it is important to look at the visual quality of the stabilized video in a subjective manner, without consideration of numerical errors. When looking at a stabilized video, a human will generally look to objects to judge the quality of stabilization. If an object is translated a single pixel in any direction, the numerical error will likely increase dramatically while the visual quality to humans is left unchanged. We have included a side-by-side comparison of the first four video sequences and our stabilized video sequences in Figure 2.

Comparing the original and stabilized lab videos, there is an apparent perspective change when looking at the chair in the bottom right of the frame. The chair is very close to the camera, so the horizontal translation of the camera changed the perspective significantly. On the other hand, both the computer and the printer appear to have relatively little change in location. Overall, this is the least visually appealing stabilization, though the ITF value in Table I would suggest otherwise.

The ONDESK video shows much better results, likely due to the lack of significant depth differences that would cause perspective-change errors when the camera moves. The notebook in the stabilized video appears not to move at all. The only noticeable difference throughout the video while looking at the notebook is the change in light reflection as the camera moves, but its location remains fixed. No portion of the scene appears to move from its original location, even when significant rotation is introduced. This result demonstrates that the orientation component of SIFT features is just as accurate as the translational components, validating its direct usage.

The STREET video shows similar results to the ONDESK video. Significant jitter in orientation occurs throughout the video, but our stabilization process handles it gracefully. The building on the left side of the frame does not move, showing that the stabilization process has had a significant impact: there is no change in the location of the building, and very little rotation in the stabilized video. This video also demonstrates the separation of local and global motion. A car drives through the scene a few seconds into the video, as can be seen in Figure 2. Throughout the time the car is driving through the scene, the building in the background clearly remains fixed. The fuzzy clustering method we use successfully performs the desired separation.

The stabilized ONROAD video shows a drastic decrease in jitter from the original, even though the movement of the camera is of high frequency and relatively large magnitude. The road in the scene remains in a constant position in each frame, making it appear that the camera is moving forward very smoothly. This type of motion with large vibrations occurs fairly often when driving ground robots forward over rough terrain.

The stabilized base jump video has improved viewability over the original. Figure 4 shows the graphs of the measured horizontal, vertical, and angular offsets, as well as the estimated desired motion from Kalman filtering. Both the vertical and horizontal offsets of the stabilized video are smoother than the original, as can be seen in the figure. Peaks are less pronounced, but the motion tracks the sensed data closely. Because no angular motion is desired, the filtered curve is a constant zero. Without an appropriate state model for the Kalman filter, the video could move beyond the viewing frame, leaving the video unviewable.

Fig. 4. Graphs of the measured offsets and the filtered (desired) horizontal, vertical, and angular motions of the base jump video.

V. CONCLUSION AND FUTURE WORK

We presented our video stabilization process, which uses only information provided by SIFT. Local and global motion are separated using fuzzy clustering and by assigning trust values to features to represent how often they are chosen for global motion estimation. We showed that our stabilization method improves the PSNR of an unstable video from a stationary camera, and that the video is visually significantly more stable. We have also shown that our stabilization method works well when the camera is moving, using Kalman filtering to estimate the desired motion.

The speed of SIFT feature extraction is the bottleneck in our process that determines whether or not it can run in real time. An implementation of SIFT on a GPU would significantly speed up the calculation of SIFT features, potentially allowing our process to run in real time on a robotic system. Instead of digitally moving the frames in line with a reference, our stabilization process could also be used as input to a visual servoing controller that controls a pan/tilt system to physically align the camera. This could be combined with an IMU-based pan/tilt controller, which could work at a higher frequency than the frame rate of our stabilization process.

VI. ACKNOWLEDGMENTS

This research was supported by grant IIP from the National Science Foundation.

REFERENCES

[1] A. Batur and B. Flinchbaugh, "Video stabilization with optimized motion estimation resolution," in 2006 IEEE International Conference on Image Processing, Oct. 2006.
[2] K. Liu, J. Qian, and R. Yang, "Block matching algorithm based on RANSAC algorithm," in 2010 International Conference on Image Analysis and Signal Processing, Apr. 2010.
[3] M. Ondrej, Z. Frantisek, and D. Martin, "Software video stabilization in a fixed point arithmetic," in First International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2008), Aug. 2008.
[4] H. Shen, Q. Pan, Y. Cheng, and Y. Yu, "Fast video stabilization algorithm for UAV," in IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS 2009), vol. 4, Nov. 2009.
[5] P. Shi, Y. Zhu, and S. Tong, "Video stabilization in visual prosthetics," in IEEE/ICME International Conference on Complex Medical Engineering (CME 2007), May 2007.
[6] J. Cai and R. Walker, "Robust video stabilisation algorithm using feature point selection and delta optical flow," IET Computer Vision, vol. 3, no. 4.
[7] Q. Luo and T. Khoshgoftaar, "An empirical study on estimating motions in video stabilization," in IEEE International Conference on Information Reuse and Integration (IRI 2007), Aug. 2007.
[8] Y. Matsushita, E. Ofek, W. Ge, X. Tang, and H.-Y. Shum, "Full-frame video stabilization with motion inpainting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, July 2006.
[9] S. Battiato, G. Gallo, G. Puglisi, and S. Scellato, "SIFT features tracking for video stabilization," in 14th International Conference on Image Analysis and Processing (ICIAP 2007), Sept. 2007.
[10] Y. Shen, P. Guturu, T. Damarla, B. Buckles, and K. Namuduri, "Video stabilization using principal component analysis and scale invariant feature transform in particle filter framework," IEEE Transactions on Consumer Electronics, vol. 55, no. 3, Aug. 2009.
[11] J. Yang, D. Schonfeld, C. Chen, and M. Mohamed, "Online video stabilization based on particle filters," in 2006 IEEE International Conference on Image Processing, Oct. 2006.
[12] B. K. Horn and B. G. Schunck, "Determining optical flow," Massachusetts Institute of Technology, Cambridge, MA, USA, Tech. Rep.
[13] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence, vol. 2, 1981.
[14] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004.
[15] J. Shi and C. Tomasi, "Good features to track," Tech. Rep.
[16] M. Han and T. Kanade, "Multiple motion scene reconstruction with uncalibrated cameras," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 7, July 2003.
[17] S. Battiato, G. Gallo, G. Puglisi, and S. Scellato, "Fuzzy-based motion estimation for video stabilization using SIFT interest points," in Proc. SPIE, vol. 7250.
[18] G. Wu, M. H. Mahoor, S. Althloothi, and R. M. Voyles, "SIFT-motion estimation (SIFT-ME): A new feature for human activity recognition," in IPCV, 2010.
[19] G. J. Klir, U. St. Clair, and B. Yuan, Fuzzy Set Theory: Foundations and Applications. Upper Saddle River, NJ, USA: Prentice-Hall.
[20] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms. Norwell, MA, USA: Kluwer Academic Publishers, 1981.
[21] J. C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters," Journal of Cybernetics, vol. 3, 1973.
[22] A. Vedaldi and B. Fulkerson, VLFeat: An open and portable library of computer vision algorithms, 2008.
[23] "Amazing base jump." [Online]. Available: http:// [Accessed: Mar. 2011].
[24] L. Marcenaro, G. Vernazza, and C. Regazzoni, "Image stabilization algorithms for video-surveillance applications," in 2001 International Conference on Image Processing, vol. 1, 2001.
[25] M. Niskanen, O. Silven, and M. Tico, "Video stabilization performance assessment," in 2006 IEEE International Conference on Multimedia and Expo, July 2006.


Optical flow and tracking EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

CS 4495 Computer Vision Motion and Optic Flow

CS 4495 Computer Vision Motion and Optic Flow CS 4495 Computer Vision Aaron Bobick School of Interactive Computing Administrivia PS4 is out, due Sunday Oct 27 th. All relevant lectures posted Details about Problem Set: You may *not* use built in Harris

More information

Outline. Data Association Scenarios. Data Association Scenarios. Data Association Scenarios

Outline. Data Association Scenarios. Data Association Scenarios. Data Association Scenarios Outline Data Association Scenarios Track Filtering and Gating Global Nearest Neighbor (GNN) Review: Linear Assignment Problem Murthy s k-best Assignments Algorithm Probabilistic Data Association (PDAF)

More information

Occlusion Robust Multi-Camera Face Tracking

Occlusion Robust Multi-Camera Face Tracking Occlusion Robust Multi-Camera Face Tracking Josh Harguess, Changbo Hu, J. K. Aggarwal Computer & Vision Research Center / Department of ECE The University of Texas at Austin harguess@utexas.edu, changbo.hu@gmail.com,

More information

16720 Computer Vision: Homework 3 Template Tracking and Layered Motion.

16720 Computer Vision: Homework 3 Template Tracking and Layered Motion. 16720 Computer Vision: Homework 3 Template Tracking and Layered Motion. Instructor: Martial Hebert TAs: Varun Ramakrishna and Tomas Simon Due Date: October 24 th, 2011. 1 Instructions You should submit

More information

Motion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi

Motion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi Motion and Optical Flow Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Motion Estimation for Video Coding Standards

Motion Estimation for Video Coding Standards Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression

More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion 007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,

More information

Finally: Motion and tracking. Motion 4/20/2011. CS 376 Lecture 24 Motion 1. Video. Uses of motion. Motion parallax. Motion field

Finally: Motion and tracking. Motion 4/20/2011. CS 376 Lecture 24 Motion 1. Video. Uses of motion. Motion parallax. Motion field Finally: Motion and tracking Tracking objects, video analysis, low level motion Motion Wed, April 20 Kristen Grauman UT-Austin Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys, and S. Lazebnik

More information

Image processing and features

Image processing and features Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry

More information

VISION-BASED UAV FLIGHT CONTROL AND OBSTACLE AVOIDANCE. Zhihai He, Ram Venkataraman Iyer, and Phillip R. Chandler

VISION-BASED UAV FLIGHT CONTROL AND OBSTACLE AVOIDANCE. Zhihai He, Ram Venkataraman Iyer, and Phillip R. Chandler VISION-BASED UAV FLIGHT CONTROL AND OBSTACLE AVOIDANCE Zhihai He, Ram Venkataraman Iyer, and Phillip R Chandler ABSTRACT In this work, we explore various ideas and approaches to deal with the inherent

More information

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING By Michael Lowney Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Minh Do May 2015

More information

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera 3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,

More information