CSE 237B Project Report: Implementing the KLT Algorithm on the iPAQ


Contents

    Problem Statement
    Introduction
    Project Plan
    KLT Algorithm Description
        KLT Feature Selection
        KLT Feature Tracking
    KLT Algorithm Implementation
        Design Considerations
        Implementation Decision
    Performance Metrics
        Impact of Image Resolution on Feature Selection and Feature Tracking Time
        Feature Tracking Time with Respect to the Number of Features to Track
        Code Optimization
    Summary
    Future Work
    References
    Appendix

Problem Statement

The aim of this project is to accomplish the following tasks:

- Implement the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm on the iPAQ.
- Integrate the iPAQ with an Axis network camera that provides streaming image data to the iPAQ, and use this data to track features.
- Measure the performance of the KLT algorithm and optimize it.

Introduction

The KLT algorithm is used in the computer vision community to track features in an image. The KLT tracker is based on the early work of Lucas and Kanade []. There are basically two steps which have to take place during feature tracking:

- Selecting good features to track in an image.
- Tracking the features through an image sequence.

Good features are located by examining the minimum eigenvalue of each 2 by 2 gradient matrix, and features are tracked using a Newton-Raphson method of minimizing the difference between the two windows. Multiresolution tracking allows for even large displacements between images. When features are lost, the algorithm replaces them by finding new features.

Project Plan

1) Study of the KLT algorithm.
2) Implementation of the KLT (Kanade-Lucas-Tomasi) algorithm for feature selection and tracking on the iPAQ.
3) Integration of the Axis web camera with the iPAQ, and providing feedback to the camera so that it moves in the direction of the movement in the image.
4) Measuring performance benchmarks of the KLT algorithm.
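Before going into the details, the two steps described in the Introduction can be stated in symbols; both relations are derived in full in the KLT Algorithm Description section.

    % Feature selection: a window W is accepted when the 2 x 2 gradient matrix Z
    % has two large eigenvalues.
    \[
      Z = \int_W g\, g^{T} \, w \, dA,
      \qquad
      \min(\lambda_1, \lambda_2) > \lambda
    \]

    % Feature tracking: the displacement d of a window is chosen to minimize the
    % sum of squared intensity differences, which after linearization reduces to
    % a small linear system solved for each feature.
    \[
      \epsilon(d) = \int_W \bigl[ I(x - d,\, t) - I(x,\, t + \tau) \bigr]^{2} \, w \, dx
      \;\;\Longrightarrow\;\; Z\, d = e
    \]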

KLT Algorithm Description

KLT Feature Selection

Introduction to feature selection

The most important step before tracking is the selection of trackable features. How do we select features which are trackable? The keyword during feature selection is the texture of the image. Areas with a varying texture pattern are usually unique in an image, while uniform or linear intensity areas are common and not unique. The aim of the KLT algorithm is to find the coordinates in the image which have a varying texture.

Finding a good feature

A texture pattern can only exist if we look at multiple pixels in an area. The features that we are tracking are therefore more accurately described as feature windows containing texture information. The area of such a feature window can vary, depending on the number of features that need to be located. Consider a feature window W (a small square of pixels). The varying texture pattern within W can be captured by the image gradient, defined as

    g = (g_x, g_y)^T

For example, when the image below is used to find the selectable features, the image gradients along the X and Y axes are as shown in the two figures that follow it.

Figure: The original image for which the selectable features are to be calculated.

Figure: Image gradient along the X axis.

Figure: Image gradient along the Y axis.

Now consider the product

    g g^T = \begin{pmatrix} g_x^2 & g_x g_y \\ g_x g_y & g_y^2 \end{pmatrix}

If we integrate this matrix over the area W, using a weighting function w, we get

    Z = \int_W g g^T \, w \, dx = \int_W \begin{pmatrix} g_x^2 & g_x g_y \\ g_x g_y & g_y^2 \end{pmatrix} w \, dx

The 2 x 2 matrix Z contains pure texture information, and by analyzing its eigenvalues we can classify the texture in the region W. Two small eigenvalues represent a roughly constant intensity pattern within W. One small and one large eigenvalue means that a linear (unidirectional) pattern was detected. If Z has two large eigenvalues, then the area W contains a feature which can be tracked. Such a feature can be a corner, a so-called salt-and-pepper texture, or any texture pattern with a prominent enough change in intensity in more than one direction. The equation for Z forms an integral part of the Kanade-Lucas-Tomasi tracking algorithm. It is necessary to establish a minimum threshold for the eigenvalues: if the two eigenvalues of Z are \lambda_1 and \lambda_2, we accept a window if

    \min(\lambda_1, \lambda_2) > \lambda

where \lambda is a predefined threshold. The figure below shows the features selected by the algorithm for the first image of the sequence.

Figure: Features selected for feature tracking.
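To make the selection criterion concrete, the following is a small, self-contained C sketch (illustrative only, not the project's actual code) that accumulates the matrix Z over a square window of an 8-bit grayscale image, with a uniform weighting function, and tests its smaller eigenvalue against a threshold:

    #include <math.h>

    /* Central-difference image gradients of an 8-bit grayscale image of width w.
     * The caller must keep (x, y) at least one pixel away from the image border. */
    static float grad_x(const unsigned char *img, int w, int x, int y)
    {
        return 0.5f * ((float)img[y * w + x + 1] - (float)img[y * w + x - 1]);
    }

    static float grad_y(const unsigned char *img, int w, int x, int y)
    {
        return 0.5f * ((float)img[(y + 1) * w + x] - (float)img[(y - 1) * w + x]);
    }

    /* Smaller eigenvalue of Z (the sum of g g^T over the window) for the
     * window of half-size half_win centred at (cx, cy). */
    float min_eigenvalue(const unsigned char *img, int w,
                         int cx, int cy, int half_win)
    {
        float zxx = 0.0f, zxy = 0.0f, zyy = 0.0f;
        int x, y;

        for (y = cy - half_win; y <= cy + half_win; y++) {
            for (x = cx - half_win; x <= cx + half_win; x++) {
                float gx = grad_x(img, w, x, y);
                float gy = grad_y(img, w, x, y);
                zxx += gx * gx;      /* accumulate g g^T over the window W */
                zxy += gx * gy;
                zyy += gy * gy;
            }
        }
        /* Smaller root of the characteristic polynomial of the 2 x 2 matrix Z. */
        return 0.5f * (zxx + zyy -
                       sqrtf((zxx - zyy) * (zxx - zyy) + 4.0f * zxy * zxy));
    }

    /* A window is accepted as trackable if min(lambda1, lambda2) > threshold. */
    int is_trackable(const unsigned char *img, int w,
                     int cx, int cy, int half_win, float threshold)
    {
        return min_eigenvalue(img, w, cx, cy, half_win) > threshold;
    }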

KLT Feature Tracking

Introduction to the method

In a nutshell, the technique can be described as follows. First of all, features have to be selected which can be tracked from image to image in a video stream. The selection of the features is based on texture (intensity information). A number of fixed-size feature windows are selected on the first image of a sequence. These feature windows are then tracked from one image to the next using the KLT method []. The method calculates the sum of squared intensity differences between a feature window in the previous image and candidate windows in the current image; the displacement of the feature is defined as the displacement that minimizes this sum of differences. This is done continuously between sequential images so that all the features can be tracked.

Tracking features

The fundamentals of tracking can be explained by looking at two images in an image sequence. Let us assume that the first image was captured at time t and the second image at time t + \tau. It is important to keep in mind that the incremental time \tau depends on the frame rate (capture rate) of the video camera and should be as small as possible; a higher frame rate allows for better tracking. A grayscale image is a pattern of intensities, where the intensity values range from 0 to 255. An image can be represented as a function of the variables x and y, and we add the variable t for the time the image was captured. Thus, any section of an image can be defined by an intensity function I(x, y, t). If we now define a window in an image taken at time t + \tau as I(x, y, t + \tau), the basic assumption of the KLT tracking algorithm is

    I(x, y, t + \tau) = I(x - \Delta x, y - \Delta y, t)        (1)

From (1) it is clear that every point in the second window can be obtained by shifting every point in the first window by an amount (\Delta x, \Delta y). This amount can be defined as the displacement d = (\Delta x, \Delta y), and the main goal of tracking is to calculate d.

Calculating feature displacement

The basic information has now been established to solve for the displacement d of a feature from one window to the next. For simplicity, we redefine the second window as B(x) = I(x, y, t + \tau) and the first window as A(x - d) = I(x - \Delta x, y - \Delta y, t), where x = (x, y). The relationship between the two windows is given by

    B(x) = A(x - d) + \eta(x)        (2)

where \eta(x) is a noise function caused by interference. An error function which has to be minimized in order to minimize the effect of noise is

    \epsilon = \int_W [A(x - d) - B(x)]^2 \, w \, dx        (3)

As we can see, \epsilon is a quadratic function of d. To minimize \epsilon we differentiate it with respect to d and set the result to zero. Approximating A(x - d) by its first-order Taylor expansion A(x) - g^T d, where g is the image gradient defined earlier, this gives

    \frac{\partial \epsilon}{\partial d} = \int_W [A(x) - B(x) - g^T d] \, g \, w \, dA = 0        (4)

where dA refers to the area element of the window W. If the terms of (4) are rearranged, and if we use the fact that (g^T d) g = (g g^T) d, it follows that

    \left( \int_W g g^T \, w \, dA \right) d = \int_W [A(x) - B(x)] \, g \, w \, dA        (5)

Equation (5) is known as the Kanade-Lucas-Tomasi tracking equation and can be rewritten as

    Z d = e        (6)

where Z is a 2 x 2 matrix and e is a two-dimensional vector. The displacement is now simply the solution of equation (6).

KLT Algorithm Implementation

The algorithm is implemented on the iPAQ. The iPAQ connects to a Linux host over a USB / serial connection to get the images captured by the Axis network camera. The layout of the integration is shown in the figure below; the detailed configuration of the iPAQ, the Linux host, and the network camera is given in the Appendix.

Figure: Component overview of the project implementation. The Axis network camera streams compressed images over HTTP to the Linux host, which converts the compressed image formats to PGM; the iPAQ retrieves the converted images over USB / serial and runs the KLT algorithm under Familiar Linux.
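For reference, the PGM files exchanged between the host and the iPAQ have a very simple structure: a binary (P5) PGM file is just a short text header giving the width, height, and maximum gray value, followed by raw 8-bit pixel data. A minimal writer, shown only to illustrate the format (it is not the converter actually used on the host), looks like this:

    #include <stdio.h>

    /* Write an 8-bit grayscale buffer as a binary (P5) PGM file.
     * Returns 0 on success, -1 on error. */
    int write_pgm(const char *path, const unsigned char *pixels,
                  int width, int height)
    {
        size_t npixels = (size_t)width * (size_t)height;
        FILE *fp = fopen(path, "wb");

        if (fp == NULL)
            return -1;

        /* Header: magic number, image dimensions, maximum gray value. */
        fprintf(fp, "P5\n%d %d\n255\n", width, height);

        /* Raw pixel data, one byte per pixel, row by row. */
        if (fwrite(pixels, 1, npixels, fp) != npixels) {
            fclose(fp);
            return -1;
        }
        return fclose(fp) == 0 ? 0 : -1;
    }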

Design Considerations

- The KLT algorithm uses images in the PGM (portable graymap) format, while the Axis network camera delivers images in compressed formats (JPEG / Motion JPEG).
- The Axis network camera does not support serial or USB connectivity.
- The iPAQ used for the implementation has neither Ethernet nor Bluetooth connectivity.

Implementation Decision

Considering the challenges mentioned above, a Linux/Windows host interfaces between the network camera and the iPAQ. This solves the challenges in the following ways:

- The network camera streams its data to the Linux host, which converts the JPEG images to PGM format. The iPAQ therefore does not waste any CPU cycles converting JPEG to PGM. This step may become unnecessary if the network camera firmware can be modified to output PGM directly; see the Future Work section.
- The iPAQ connects to the Linux host over USB and downloads the PGM images at the rate at which it is capable of processing them.
- The Linux host downloads the MJPEG stream over HTTP and converts it into a stream of PGM images.
- The iPAQ gets the images from the Linux host and runs the KLT algorithm to track features in the downloaded images. The algorithm replaces features lost from one frame to the next with a new set of features, as seen in the figures below.

Figure: Set of images downloaded from the server; the scene is moving to the left.

Figure: The KLT algorithm successfully tracks the features through the subsequent images; if any features are lost, they are replaced with new features to be tracked in the next image of the sequence.

As can be seen from the figures above, the KLT tracker successfully follows the feature that is moving to the left. The KLT algorithm records the x, y coordinates of each tracked feature for every frame in the image sequence. Using this data, the implementation calculates the displacement of the tracked features and hence the displacement of the actual image.

The original idea of the project was to track the features and move the camera in the direction of the image movement. Since the network camera model used in the implementation does not support the PTZ (pan-tilt-zoom) control commands needed to move the camera, this part could not be implemented. The current implementation simply prints the displacement coordinates to standard output.
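In outline, the per-frame processing on the iPAQ follows the pattern below. This is a condensed sketch written against the public KLT library interface (klt.h) on which the implementation is based; the frame dimensions and the frame-I/O helper read_next_pgm_frame() are placeholders, and the real code additionally computes the displacement of the tracked features between frames and prints it.

    #include <stdio.h>
    #include <string.h>
    #include "klt.h"

    #define NCOLS     320   /* image width  (placeholder value) */
    #define NROWS     240   /* image height (placeholder value) */
    #define NFEATURES 100   /* number of features to track (placeholder value) */

    /* Placeholder: fills buf with the next PGM frame received from the Linux
     * host; returns 0 on success and non-zero when the sequence ends. */
    extern int read_next_pgm_frame(unsigned char *buf);

    int main(void)
    {
        KLT_TrackingContext tc = KLTCreateTrackingContext();
        KLT_FeatureList     fl = KLTCreateFeatureList(NFEATURES);
        static unsigned char prev[NCOLS * NROWS], curr[NCOLS * NROWS];
        int i;

        tc->sequentialMode = 1;   /* reuse internal pyramids between frames */

        /* Select good features on the first frame of the sequence. */
        read_next_pgm_frame(prev);
        KLTSelectGoodFeatures(tc, prev, NCOLS, NROWS, fl);

        /* Track, and replace lost features, on every subsequent frame. */
        while (read_next_pgm_frame(curr) == 0) {
            KLTTrackFeatures(tc, prev, curr, NCOLS, NROWS, fl);
            KLTReplaceLostFeatures(tc, curr, NCOLS, NROWS, fl);

            for (i = 0; i < fl->nFeatures; i++)
                printf("feature %d: x=%.2f y=%.2f val=%d\n", i,
                       (double)fl->feature[i]->x, (double)fl->feature[i]->y,
                       fl->feature[i]->val);

            memcpy(prev, curr, sizeof(prev));
        }

        KLTFreeFeatureList(fl);
        KLTFreeTrackingContext(tc);
        return 0;
    }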

Performance Metrics

The following performance metrics are analyzed for the system:

1. Impact of image resolution on feature selection and feature tracking time.
2. Feature selection time vs. feature tracking time.
3. Feature tracking time with respect to the number of features to track.
4. Algorithm parameter optimizations.

The POSIX call clock_gettime(clock_id, timespec) is used to measure the performance metrics of the system; see the Appendix for more information. The configuration of the Linux host, the iPAQ, and the network camera is also given in the Appendix.
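The timings were taken by bracketing the call of interest with clock_gettime() and differencing the two timestamps. A minimal sketch of the measurement pattern (illustrative; the actual instrumentation is spread through the implementation) is:

    #include <stdio.h>
    #include <time.h>

    /* Elapsed time between two timespec values, in milliseconds. */
    static double elapsed_ms(const struct timespec *start,
                             const struct timespec *end)
    {
        return (double)(end->tv_sec - start->tv_sec) * 1000.0 +
               (double)(end->tv_nsec - start->tv_nsec) / 1.0e6;
    }

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_REALTIME, &t0);   /* the only clock supported on the ARM port */
        /* ... call KLTSelectGoodFeatures() or KLTTrackFeatures() here ... */
        clock_gettime(CLOCK_REALTIME, &t1);

        printf("elapsed: %.3f ms\n", elapsed_ms(&t0, &t1));
        return 0;
    }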

Impact of Image Resolution on Feature Selection and Feature Tracking Time

Table 1 shows the performance metrics collected for different image resolutions. For each resolution it lists the feature selection time and the feature tracking time, in milliseconds, measured both on the ARM (iPAQ) and on the x86 Linux host.

Figure: Graphical view of the performance metrics for the ARM processor, as given in Table 1 (feature selection time and feature tracking time in milliseconds versus image resolution).

Figure: Graphical view of the performance metrics on the x86 system, as given in Table 1. This is included only to compare the performance of the ARM processor against an x86 Linux host.

Analysis

It is clear from the graphs that increasing the image resolution directly impacts the running time of the feature tracking algorithm, and that the feature tracking part is more severely affected than the feature selection part. In general, tracking the features takes more time than selecting them.

Explanation and conclusion

The KLT algorithm divides the image into many small windows, calculates the intensity gradient matrix for each window, and compares its eigenvalues with a threshold. Increasing the resolution of the image therefore directly increases the computation, since intensity gradients have to be calculated for more windows. The x86 results look very promising as the image resolution decreases, but the performance on the ARM remains poor. The reason is that the image gradient computation is very compute intensive and uses floating point arithmetic; the x86 host has a floating point unit, whereas the iPAQ does not.

It is also evident from the KLT algorithm that tracking is more compute and memory intensive than feature selection. The reason tracking takes more time is that, for tracking, the image is subsampled repeatedly until a certain pyramid level is reached; the KLT implementation sets the subsampling level to at least 2, so regardless of the pyramid level the image is subsampled at least twice when tracking the features. From this graph it can be concluded that the maximum frame rate the iPAQ can handle, even at the smallest resolution in Table 1, is only a few frames per second.

Feature Tracking Time with Respect to the Number of Features to Track

Figure: Feature selection time vs. feature tracking time on the x86 system at a fixed image resolution, plotted against the number of features to select/track.

Figure: Feature selection time vs. feature tracking time on the ARM at a fixed image resolution, plotted against the number of features to select/track.

Analysis

It seems evident from these measurements that the number of features to select and track does not noticeably affect the running time.

Explanation and conclusion

As explained earlier, the KLT algorithm divides the image into many small windows and calculates the image intensity gradient matrix (and its eigenvalues) for each window. It then sorts the candidate features by decreasing eigenvalue, and the n selected features are simply the top n entries of this sorted list. So regardless of the number of features to be selected, the intensity gradients always have to be computed for the whole image. It is still sensible to choose a practical number of features: if KLT is configured to replace features lost from one image to the next, then the more features are tracked, the higher the probability that some of them are lost. This in turn is more expensive, because the replace-lost-features step, which has the same complexity as the initial feature selection, is invoked more often.

Code Optimization

The KLT implementation takes various input parameters, and the values of these parameters affect performance. The parameters which gave significant performance benefits are described below; a configuration sketch follows the list.

- TrackingContext->MinDistance: the minimum distance between selected features, in pixels. Used by KLTSelectGoodFeatures() and KLTReplaceLostFeatures(). If this value is large in high-resolution images, selecting and replacing features is faster. The value has to be modified carefully, as it may increase the number of lost features.
- TrackingContext->window_width, window_height: the size of the feature window, in pixels. The KLT algorithm divides the image into many small windows of size window_width x window_height and calculates the image intensity gradient (eigenvalues) for each one. Larger values give better results, but the value has to be modified carefully, as it may increase the number of lost features.
- sequentialMode: if TRUE, the previous image is saved and reused later. Used by KLTTrackFeatures() and KLTReplaceLostFeatures() to speed up the computation when tracking through an image sequence.
- writeInternalImages: if TRUE, the internal images used for feature selection and tracking (that is, the smoothed and differentiated versions of the original images) are written to files. For performance this value should be set to FALSE.

- nSkippedPixels: the number of pixels skipped between each pair of candidate features. Used to speed up the computation in KLTSelectGoodFeatures().
- nPyramidLevels: the number of pyramid levels. Lower values reduce the tracking time.
- Debugging statements: removing debugging statements helped performance.
- Compiler optimizations: the compiler's -O optimization levels improve the performance of the system.
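As a concrete illustration, a tracking context tuned along the lines just described could be set up as follows. This is a sketch against the public KLT library headers; the struct member names follow that library (what the report calls MinDistance corresponds to the mindist field), and the numeric values are illustrative rather than the ones used for the measurements.

    #include "klt.h"

    /* Create a KLT tracking context with the performance-relevant parameters
     * discussed above.  All numeric values are illustrative. */
    KLT_TrackingContext make_tuned_context(void)
    {
        KLT_TrackingContext tc = KLTCreateTrackingContext();

        tc->mindist             = 10;  /* minimum distance between selected features (pixels) */
        tc->window_width        = 7;   /* feature window size (pixels) */
        tc->window_height       = 7;
        tc->sequentialMode      = 1;   /* reuse the previous image when tracking a sequence */
        tc->writeInternalImages = 0;   /* never write smoothed/derivative images to disk */
        tc->nSkippedPixels      = 0;   /* pixels skipped between candidate features */
        tc->nPyramidLevels      = 2;   /* fewer pyramid levels reduce the tracking time */

        return tc;
    }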

Summary

The KLT algorithm has been successfully implemented on the iPAQ (StrongARM). However, its performance on the iPAQ is poor: the implementation can only track images at the smallest tested resolution, and only at a rate of a few frames per second.

Future Work

1) It would be worth trying the SourceForge project that implements a library for execution in fixed point arithmetic, allowing fast execution on systems with no floating point processor. To overcome the problem of integer overflow, the library calculates a position of the decimal point after training and guarantees that integer overflow cannot occur with that decimal point. The following anecdote by the library's author is interesting, since he appears to have faced the same performance problem we are facing:

   "In [Nissen et al.] I participated in building and programming of an autonomous robot, based on a Compaq iPAQ with a camera attached to it. In [Nissen et al.] I participated in rebuilding this robot and adding artificial neural networks (ANN) for use in the image processing. Unfortunately the ANN library that we used [Heller] was too slow and the image processing on the iPAQ was not efficient enough. The iPAQ does not have a floating point processor and for this reason we had written a lot of the image processing using fixed point arithmetic. From this experience I have learned that rewriting code to fixed point arithmetic makes a huge difference on the performance of programs running on the iPAQ."

2) Implementation of the KLT algorithm on a board with a floating point processor.

3) Implementation of the KLT algorithm on ETRAX. The Axis camera uses an ETRAX chip running embedded Linux, so the whole algorithm could be implemented in the network camera itself. From the specifications this looks very promising.

4) Some effort could be spent on reducing the complexity of the KLT algorithm itself.

References

[1] Stan Birchfield, "Derivation of Kanade-Lucas-Tomasi Tracking Equation", unpublished notes, May.
[2] Carlo Tomasi and Takeo Kanade, "Detection and Tracking of Point Features", Carnegie Mellon University Technical Report CMU-CS-91-132, April 1991.

Appendix

1) Clock details

clock_gettime(clock_id, timespec)
- Gets the current value of the specified clock.
- Clock IDs: CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID.

clock_getres(clock_id, timespec)
- Gets the resolution of the specified clock.
- The resolution is 10 msecs for CLOCK_REALTIME.

- The Linux implementation on the ARM supports only CLOCK_REALTIME.

2) iPAQ configuration details

The iPAQ is an HP iPAQ H-series handheld built around a StrongARM processor (CPU features: swp, half, fastmult; no floating point unit), running Familiar Linux. The /proc/cpuinfo and /proc/meminfo listings captured from the device record the processor revision, the BogoMIPS rating, and the total, free, buffered, cached, and swap memory figures.

3) Linux host configuration details

The Linux host (ajitc-linux) is an i686 workstation with an Intel Pentium-class processor (GenuineIntel; CPU flags include fpu, mmx, fxsr, sse, and ht) running a Red Hat Enterprise Linux (EL) kernel. The /proc/cpuinfo and /proc/meminfo listings captured from the host record the clock speed, cache size, BogoMIPS rating, and the total, free, buffered, cached, and swap memory figures.
