University of Cambridge Engineering Part IIB Module 4F12 - Computer Vision and Robotics Mobile Computer Vision
Mobile Computer Vision
Matthew Johnson, November 2010

[Figure: system overview — a web server holds the master database and a user interface for uploading images + labels; a phone client extracts image features and finds the closest match.]
Applied Computer Vision

Computer Vision is not only an exciting field of research but also one of the key new technologies driving innovation in business and industry today. Just as human vision is a fundamental and incredibly rich channel of information that we use for a huge array of activities, advances in computer vision are creating new opportunities for innovation in the way we interact with our world through digital devices. In this lecture, we will focus on how the concepts and techniques you have seen so far in this course are being used today for image recognition and search on embedded devices.
Impaired Vision

The first thing that becomes clear as one begins to work on a ubiquitous computing device like a mobile phone is that one can no longer take anything for granted. For example, most mobile phones in use today have no hardware support for division: every division operation is translated in assembly into a long series of other instructions, making it incredibly costly. Here is a partial list of the challenges posed by mobile devices:

- Limited processor hardware (slow floating point operations, lack of functionality)
- Small cache (i.e. limited stack space for procedures)
- Slow access to a small supply of Random Access Memory (RAM)
- Costly disk access
- Arcane versions of known programming languages

How do we overcome these challenges?
Client/Server Model

[Figure: client/server system — the web server extracts features from uploaded images + labels into a master database; the thin phone client sends each image to the server, which finds the closest match and returns a report.]

The most common and, at first, only way of overcoming these challenges on mobile devices was to offload any significant computation to a remote server. The system on the phone becomes a thin client (meaning that it does no real computation) that sends the image to the server, which computes feature vectors, matches them against a database of known images, and returns information on the match. To add new images to this system, a second system interacts with the server to upload images and labels.
Client/Server Model, cont.

This arrangement solves the problem of the mobile device's limited capabilities, but has several problems of its own:

- If the file server stops functioning, the entire system fails
- If communication to the file server is lost, the entire system fails
- The server(s) have to perform massive amounts of parallel computation and database indexing
- Large quantities of data (i.e. images) require lots of time and bandwidth to transfer
- There is a significant delay in response to user actions
Smart Clients

[Figure: smart client system — the phone client extracts features and finds the closest match in a local copy of the database; the web server maintains the master database and accepts new images + labels through a user interface.]

An alternative approach is to embrace and overcome the difficulties of performing vision on mobile devices, through research into modifications of existing algorithms and into entirely new algorithms that can solve computer vision problems with the tools available and within the limits imposed. This is a smart client model, because the client does some or all of the computation. In this model, the client performs feature vector extraction and matching locally, which means that it must maintain a full or partial copy of the master database on the device.
Smart Clients, cont.

A smart client system, while it has to overcome the challenges of running in a limited environment, also has many benefits:

- If the server malfunctions, clients can still operate with their current database
- Similarly, if communication to the server is lost, the system still operates
- All computation is done locally by the phones, greatly reducing server-side computation
- All communication with the server consists of feature vectors and metadata, which are much smaller than images
- Users get an immediate response to their actions

Clearly there are great gains to be had, but first we must overcome the limitations imposed by mobile platforms.
Smart Client: System Design

If we are going to achieve Computer Vision on a mobile phone, we must first come to terms with reality. That reality dictates that we have no large constructs in memory, no division, and no floating point numbers. Most of the standard Computer Vision algorithms are simply not available for use. Instead, we will have to rely on more efficient algorithms and, where those are unavailable, on approximations. A crucial requirement for this task is knowing the literature: we must understand the basic underlying principles of a technique so that we know what can be jettisoned without reducing performance.

We are going to design a logo recognition system that can run on a mobile phone, based on applying a distance transform to the output of an edge detector in order to match logo contours. The system will contain 3 parts:

- Edge Detector
- Distance Transform
- Contour Fragment Matching
Logo Recognition

Logo recognition is a key task for many mobile vision applications. Restaurants, shops and other businesses take great care to have distinctive logos that are prominently displayed, making them ideal targets for user localization and value-added experiences.

[Figure: recognition pipeline — edge detection, then a distance transform, then convolution with the logo template.]

We will use Canny edge detection, the Euclidean distance transform, and a simple threshold-based contour recognizer. In the course of developing this system, we will examine the main techniques by which computer vision algorithms can be adapted to work on an embedded device like a mobile phone.
Problems

Our chosen techniques pose some very difficult problems. The Canny edge detection algorithm poses the biggest challenge. It works by following the peaks of the first derivative of a Gaussian convolved over the image to find continuous edge contours. In practice, it is implemented using the following steps, almost all of which are intractable in their original forms:

- Convolution with a Gaussian: floating point arithmetic
- Computing derivatives in the horizontal and vertical directions: floating point arithmetic
- Computing edge magnitude: square root computation
- Non-maximum suppression: floating point arithmetic, arctangent computation
- Hysteresis: recursion, redundant memory access

The Euclidean distance transform consists of finding, at each pixel in the image, the Euclidean distance to the nearest edge. A naïve implementation is computation intensive, but we will examine a technique that makes it very efficient.
Gaussian Approximation

When convolving an image with a Gaussian, one must perform 2k floating point multiplications per pixel, where k is the size of the Gaussian kernel, typically 6σ. The standard way of optimizing this is to convert the floating point values to integer approximations, with an integer division at the end. Unfortunately, this method is still too computationally intensive, requiring 2k integer multiplications and 2 integer divisions per pixel.

The tool we want to use is the box filter approximation, as discussed by Rau and McClellan in [4]. A box filter simply sets a pixel to the mean of its neighboring pixels. One interesting property of a box filter in one dimension is that multiple applications approximate a Gaussian.

[Figure: repeated box filtering (i = 0, 1, 2, 3 passes) converging to a Gaussian profile.]
Gaussian Approximation, cont.

Another property is that the box filter is separable (computing it in the horizontal and vertical directions separately is equivalent to computing it with a two-dimensional square filter), and so it can be used to filter in two dimensions while losing none of its efficiency. Even more interesting is that, by using a rolling sum, each pass can be computed using one addition, one subtraction and one division per pixel. As it takes about 3 passes to get a good approximation, this results in 6 additions, 6 subtractions and 6 divisions per pixel to convolve in the horizontal and vertical directions, regardless of the size of the kernel. 6 divisions sounds like a limitation, but if the size of the kernel is a power of 2, then we can use bit shifting instead of division, leaving a very efficient Gaussian approximation whose outputs are integers.

Because the outputs are integers, we have removed the floating point arithmetic from the derivative computation. Similarly, if we store the edge magnitude as the magnitude squared (i.e. gx^2 + gy^2) then we avoid the square root computation, as all that matters is the ordering of edges by magnitude, not the actual magnitude values.
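As a rough illustration, one rolling-sum box-filter pass can be sketched in C; the function name, kernel width and border handling here are illustrative, not taken from the course system:

```c
/* One horizontal box-filter pass over a row, using a rolling sum.
 * The kernel width W is a power of two, so dividing by W becomes a
 * right shift: no divisions and no floating point. */
#include <assert.h>

#define W     8   /* kernel width (power of two), illustrative */
#define SHIFT 3   /* log2(W) */

/* out[x] = mean of in[x .. x+W-1]; writes n-W+1 outputs. */
void box_pass(const int *in, int *out, int n)
{
    int sum = 0;
    for (int x = 0; x < W; ++x)            /* prime the rolling sum */
        sum += in[x];
    out[0] = sum >> SHIFT;
    for (int x = 1; x + W <= n; ++x) {
        sum += in[x + W - 1] - in[x - 1];  /* one add, one subtract */
        out[x] = sum >> SHIFT;             /* divide by W as a shift */
    }
}
```

Each output pixel costs one addition, one subtraction and one shift; repeating the pass three times in each direction gives the Gaussian approximation described above.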
Non-Maximum Suppression

To suppress non-maximum edge signals, we must compare every pixel to those perpendicular to its edge direction. Some simple trigonometry allows us to avoid a full arctangent computation and use gx and gy to perform the interpolation directly. While this is exact, it still requires floating point computation, which is too expensive. Therefore, we will use another approximation trick: the lookup table.

[Figure: the 4 possible neighbour comparison sets for gradient directions.]

In our system so far, gx and gy are integers, so we can use them to index a two-dimensional array. There are only 4 possible comparison sets, so each array entry is an integer from 0 to 3 indicating which set to compare against. We fill this array during an initialization stage and thus avoid all floating point operations at runtime.
Hysteresis

The biggest challenge is hysteresis. During hysteresis, we connect contours together using low and high magnitude thresholds. Each contour consists of edgels (single edges which have passed the non-maximum suppression stage) with a magnitude greater than the lower threshold, joined together using 8-way connectivity, such that at least one edgel in the contour has a magnitude greater than the higher threshold. The standard implementation starts at an edgel above the higher threshold and recursively determines which lower-threshold pixels are connected to it. This recursion can require many iterations and, given the limitations of a mobile phone, is not possible. Thus, we need a different way of doing this. Thankfully, there is a tool in computer vision which will solve this problem for us very efficiently.
Connected Components

Connected components analysis takes a binary image (pixels are either 1 or 0), finds which "on" pixels are connected together, and labels each group of connected pixels (i.e. each component) with a unique label. When implemented correctly the technique is very fast, and thus will provide what we need with a considerable speedup over the recursive method.

[Figure: a binary image input and its labeled components.]
Connected Components, cont.

If we create a binary image from the edgels with magnitudes greater than the lower threshold, then as described by Liu and Haralick [3] we can use connected components to find potential contours within the image by identifying all such pixels which are connected. Then, we simply exclude components which do not contain an edgel whose magnitude is greater than the higher threshold. This achieves the same effect as the recursive method, while being far more efficient in computation and requiring no recursion whatsoever.

[Figure: thresholded maximum edges (L = above the lower threshold, H = above the higher threshold), their connected components, and the extracted contours — only components containing an H edgel survive.]
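One way to realize this hysteresis-by-components scheme without recursion is union-find labeling. The sketch below assumes a small fixed-size image; the image dimensions, thresholds and names are illustrative:

```c
/* Hysteresis without recursion: label the above-low-threshold edgels
 * with union-find connected components (8-way), then keep only the
 * components containing at least one above-high-threshold edgel. */
#include <assert.h>

#define IMW 8
#define IMH 8
#define NPIX (IMW * IMH)

static int parent[NPIX];

static int find(int i)                  /* root with path halving */
{
    while (parent[i] != i)
        i = parent[i] = parent[parent[i]];
    return i;
}

static void unite(int a, int b) { parent[find(a)] = find(b); }

/* mag: edge magnitudes after non-maximum suppression (0 = no edgel).
 * out[i] = 1 if edgel i survives hysteresis, else 0. */
void hysteresis_cc(const int *mag, int lo, int hi, int *out)
{
    static int keep[NPIX];
    for (int i = 0; i < NPIX; ++i) { parent[i] = i; keep[i] = 0; }

    /* In a single raster scan it suffices to union each pixel with
     * its four already-visited neighbours. */
    for (int y = 0; y < IMH; ++y)
        for (int x = 0; x < IMW; ++x) {
            int i = y * IMW + x;
            if (mag[i] < lo) continue;
            if (x > 0 && mag[i - 1] >= lo) unite(i, i - 1);
            if (y > 0) {
                if (mag[i - IMW] >= lo) unite(i, i - IMW);
                if (x > 0 && mag[i - IMW - 1] >= lo) unite(i, i - IMW - 1);
                if (x < IMW - 1 && mag[i - IMW + 1] >= lo) unite(i, i - IMW + 1);
            }
        }

    for (int i = 0; i < NPIX; ++i)      /* flag components with an H edgel */
        if (mag[i] >= hi) keep[find(i)] = 1;
    for (int i = 0; i < NPIX; ++i)
        out[i] = (mag[i] >= lo) && keep[find(i)];
}
```

No call stack is consumed beyond a constant amount, which is what makes the approach viable on a phone.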
Euclidean Distance Transform

As mentioned previously, a naïve Euclidean distance transform is too computationally expensive for our needs. We will instead use a technique developed by Bailey in [1] which achieves the same accuracy using only integers and a minimum of computation. For a moment, let us reduce the distance transform to a one-dimensional problem. In one dimension, the distance from any index to the nearest edge index can be computed very quickly and efficiently in two passes: a forward pass that propagates distances rightward from each edge, and a backward pass that propagates them leftward, keeping the minimum.
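The one-dimensional two-pass computation can be sketched as follows (a minimal sketch; names are illustrative):

```c
/* 1-D distance to the nearest edge in two passes: the forward pass
 * propagates distances rightward from each edge, the backward pass
 * propagates them leftward and keeps the minimum. */
#include <assert.h>

#define INF 0x3fffffff

/* edge[i] != 0 marks an edge pixel; d[i] receives the distance. */
void dt1d(const int *edge, int *d, int n)
{
    int run = INF;
    for (int i = 0; i < n; ++i) {          /* forward pass */
        run = edge[i] ? 0 : (run < INF ? run + 1 : INF);
        d[i] = run;
    }
    run = INF;
    for (int i = n - 1; i >= 0; --i) {     /* backward pass */
        run = edge[i] ? 0 : (run < INF ? run + 1 : INF);
        if (run < d[i]) d[i] = run;
    }
}
```

Only integer additions and comparisons are used, two per pixel.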
Efficient Euclidean DT

Thus, for each row in an image we can find the distance from each pixel to the nearest edge in that row very efficiently. This is, naturally, insufficient. Think now of storing the squared distance, instead of the distance, in each row pixel. This traces out a slice of a paraboloid of revolution whose center is at the edge pixel. The paraboloid centered at each edge extends across the entire image, intersecting with those of other edges at various points. Each paraboloid determines the area of influence of its edge. By evaluating all the paraboloids at a pixel, we can determine which edge is nearest to that pixel.
Efficient Euclidean DT, cont.

If we now examine the columns of our image after storing the squared distance in each row pixel, each column holds a record of all the edges which can affect it. By calculating the value of each paraboloid at each index, we can see which has the least value and thus find the true squared distance to an edge at each row of the column. At first, this seems like an O(N^2) operation, but by scanning the column in order and keeping track of where the area of influence changes from one paraboloid to the next, it becomes O(N). Thus, the total complexity of the transform remains linear in the number of pixels.
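Bailey's integer-only formulation [1] is not reproduced here; the column pass below instead follows the closely related lower-envelope scan of Felzenszwalb and Huttenlocher, which implements the same influence-tracking idea. For clarity this sketch uses floating point for the intersection points, which an integer-only port would replace:

```c
/* Column pass of the squared EDT: f[i] is the squared row-distance
 * at row i, and d[i] receives min over j of ( f[j] + (i-j)^2 ), the
 * lower envelope of the paraboloids.  The scan keeps a stack of the
 * parabolas currently on the envelope and the boundaries z[] where
 * influence changes, so the pass is O(n). */
#include <assert.h>

#define MAXN 256   /* maximum column height handled by this sketch */

void dt1d_sq(const int *f, int *d, int n)
{
    int v[MAXN];          /* rows whose parabolas form the envelope */
    double z[MAXN + 1];   /* boundaries between areas of influence */
    int k = 0;
    v[0] = 0;
    z[0] = -1e20;
    z[1] = 1e20;

    for (int q = 1; q < n; ++q) {
        double s;
        for (;;) {        /* pop parabolas no longer on the envelope */
            int p = v[k];
            s = ((f[q] + q * q) - (f[p] + p * p)) / (2.0 * (q - p));
            if (s > z[k]) break;
            --k;
        }
        ++k;
        v[k] = q;
        z[k] = s;
        z[k + 1] = 1e20;
    }

    k = 0;
    for (int q = 0; q < n; ++q) {          /* read off the envelope */
        while (z[k + 1] < q) ++k;
        int dq = q - v[k];
        d[q] = f[v[k]] + dq * dq;
    }
}
```

Running this pass down every column of the row-transformed image yields the exact squared Euclidean distance at every pixel.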
Thin Client: Descriptor Compression

While we have discounted server-based methods to a certain extent, there have been several advances in descriptor compression in the past few years which make a server-based model increasingly feasible for large-scale deployment. Descriptor compression is an area of research that concentrates on encoding an image descriptor (e.g. a gradient histogram, SURF, SIFT) at a lower bitrate. The techniques used fall into two categories: quantization and entropy coding. We are going to look briefly at a generalized method for descriptor compression proposed by Johnson in [2]. In this method, any image descriptor meeting simple requirements can be compressed for optimized matching, resulting in significant savings in both storage and computational requirements while improving performance.

[Figure: an image patch is passed through a descriptor algorithm F, and the resulting descriptor w is compressed into a short code c.]
Huffman Tree Encoding

The technique used in [2] for compression is Huffman tree encoding. Huffman trees are specialized binary trees in which each node has a value equal to the sum of its children. In this context, they can be used as a lossy method of encoding normalized distributions (like intensity or orientation histograms). First, the values in the distribution are used to construct a Huffman tree. Each value is then approximated by a negative power of 2 determined by its depth in the tree. In this way, the distribution still sums to 1, but can be represented using N log(N) bits, where N is the length of the distribution.

[Figure: (a) the original distribution, (b) the Huffman tree built from it, (c) the resulting bit codes, (d) the power-of-2 approximation of the distribution.]
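A sketch of the depth computation, assuming a short normalized histogram; the O(N^2) minimum scan and all names are illustrative, not taken from [2]:

```c
/* Huffman-depth quantization of a normalized histogram: build the
 * Huffman tree over the bin weights, then approximate each bin by
 * 2^-depth.  Because a Huffman tree is a full binary tree, the
 * approximations sum to exactly 1 (the Kraft equality). */
#include <assert.h>

#define N     4             /* histogram length, illustrative */
#define NODES (2 * N - 1)

/* hist: input weights; depth[i]: Huffman depth of bin i on return. */
void huffman_depths(const double *hist, int *depth)
{
    double w[NODES];
    int parent[NODES], alive[NODES];
    for (int i = 0; i < N; ++i) { w[i] = hist[i]; alive[i] = 1; }

    for (int next = N; next < NODES; ++next) {
        int a = -1, b = -1;             /* two smallest live nodes */
        for (int i = 0; i < next; ++i) {
            if (!alive[i]) continue;
            if (a < 0 || w[i] < w[a]) { b = a; a = i; }
            else if (b < 0 || w[i] < w[b]) b = i;
        }
        parent[a] = parent[b] = next;   /* merge them */
        alive[a] = alive[b] = 0;
        w[next] = w[a] + w[b];
        alive[next] = 1;
    }

    for (int i = 0; i < N; ++i) {       /* depth = hops to the root */
        int dpt = 0;
        for (int j = i; j != NODES - 1; j = parent[j]) ++dpt;
        depth[i] = dpt;
    }
}
```

The N^2 scan for the two smallest weights is fine for short descriptor histograms; a heap would be used for long ones.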
Distance Lookup Matrix

One advantage of using Huffman tree encoding is that it limits the possible values at each index of the distribution. As most distance metrics (e.g. Minkowski norms, Kullback-Leibler divergence) can be separated on a per-index basis, this means that we can pre-compute all possible per-index distance scores in a lookup table. As the majority of computation time in an image retrieval application is spent computing distances between descriptors, this optimization results in sizeable savings in computational requirements.

[Figure: two distributions x and y are compressed into codes c_x and c_y, which are compared via the distance lookup matrix D.]
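A sketch of the lookup idea, assuming the Huffman codes index a small set of power-of-two values held in fixed point; the code/level scheme and names are illustrative:

```c
/* Per-index distance lookup: after Huffman quantization each bin
 * holds one of a handful of values (negative powers of two), so all
 * possible per-bin squared differences fit in a small table computed
 * once.  Values are kept in fixed point (scale 256) to stay
 * integer-only. */
#include <assert.h>

#define LEVELS 8      /* code c stands for the value 2^-(c+1) */
#define SCALE  256

static int D[LEVELS][LEVELS];   /* D[a][b] = (v_a - v_b)^2, fixed point */

void init_distance_table(void)
{
    for (int a = 0; a < LEVELS; ++a)
        for (int b = 0; b < LEVELS; ++b) {
            int va = SCALE >> (a + 1);   /* 2^-(a+1) at scale 256 */
            int vb = SCALE >> (b + 1);
            D[a][b] = (va - vb) * (va - vb);
        }
}

/* Squared L2 distance between two compressed descriptors of length n:
 * n table lookups and additions, no multiplications at match time. */
int descriptor_dist(const unsigned char *cx, const unsigned char *cy, int n)
{
    int d = 0;
    for (int i = 0; i < n; ++i)
        d += D[cx[i]][cy[i]];
    return d;
}
```

Matching degenerates to a handful of adds per descriptor pair, which is what makes large database scans tractable.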
Further Reading

[1] D. Bailey. An efficient Euclidean distance transform. In Proc. of IWCIA04.
[2] M. Johnson. Generalized descriptor compression for storage and matching. In Proc. of British Machine Vision Conf., Aberystwyth, Wales, August.
[3] G. Liu and R. M. Haralick. Two practical issues in Canny's edge detector implementation. In Proc. of Int. Conf. on Pattern Recognition, volume 3.
[4] R. Rau and J. McClellan. Efficient approximation of Gaussian filters. IEEE Trans. on Signal Processing, 45, February 1997.
More informationDigital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection
Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Recall: Edge Detection Image processing
More informationWhat is an edge? Paint. Depth discontinuity. Material change. Texture boundary
EDGES AND TEXTURES The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationA Comparison and Matching Point Extraction of SIFT and ISIFT
A Comparison and Matching Point Extraction of SIFT and ISIFT A. Swapna A. Geetha Devi M.Tech Scholar, PVPSIT, Vijayawada Associate Professor, PVPSIT, Vijayawada bswapna.naveen@gmail.com geetha.agd@gmail.com
More informationAnnouncements. Edge Detection. An Isotropic Gaussian. Filters are templates. Assignment 2 on tracking due this Friday Midterm: Tuesday, May 3.
Announcements Edge Detection Introduction to Computer Vision CSE 152 Lecture 9 Assignment 2 on tracking due this Friday Midterm: Tuesday, May 3. Reading from textbook An Isotropic Gaussian The picture
More informationProf. Feng Liu. Winter /15/2019
Prof. Feng Liu Winter 2019 http://www.cs.pdx.edu/~fliu/courses/cs410/ 01/15/2019 Last Time Filter 2 Today More on Filter Feature Detection 3 Filter Re-cap noisy image naïve denoising Gaussian blur better
More informationPreviously. Edge detection. Today. Thresholding. Gradients -> edges 2/1/2011. Edges and Binary Image Analysis
2//20 Previously Edges and Binary Image Analysis Mon, Jan 3 Prof. Kristen Grauman UT-Austin Filters allow local image neighborhood to influence our description and features Smoothing to reduce noise Derivatives
More informationEdge Detection CSC 767
Edge Detection CSC 767 Edge detection Goal: Identify sudden changes (discontinuities) in an image Most semantic and shape information from the image can be encoded in the edges More compact than pixels
More informationEdge Detection. CS664 Computer Vision. 3. Edges. Several Causes of Edges. Detecting Edges. Finite Differences. The Gradient
Edge Detection CS664 Computer Vision. Edges Convert a gray or color image into set of curves Represented as binary image Capture properties of shapes Dan Huttenlocher Several Causes of Edges Sudden changes
More informationWhat Are Edges? Lecture 5: Gradients and Edge Detection. Boundaries of objects. Boundaries of Lighting. Types of Edges (1D Profiles)
What Are Edges? Simple answer: discontinuities in intensity. Lecture 5: Gradients and Edge Detection Reading: T&V Section 4.1 and 4. Boundaries of objects Boundaries of Material Properties D.Jacobs, U.Maryland
More informationCS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know
More informationFeature Descriptors. CS 510 Lecture #21 April 29 th, 2013
Feature Descriptors CS 510 Lecture #21 April 29 th, 2013 Programming Assignment #4 Due two weeks from today Any questions? How is it going? Where are we? We have two umbrella schemes for object recognition
More informationLine, edge, blob and corner detection
Line, edge, blob and corner detection Dmitri Melnikov MTAT.03.260 Pattern Recognition and Image Analysis April 5, 2011 1 / 33 Outline 1 Introduction 2 Line detection 3 Edge detection 4 Blob detection 5
More informationIntroduction to Medical Imaging (5XSA0)
1 Introduction to Medical Imaging (5XSA0) Visual feature extraction Color and texture analysis Sveta Zinger ( s.zinger@tue.nl ) Introduction (1) Features What are features? Feature a piece of information
More informationLecture 6: Edge Detection
#1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform
More informationTexture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image.
Texture Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Structural approach: a set of texels in some regular or repeated pattern
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based
More informationSolution: filter the image, then subsample F 1 F 2. subsample blur subsample. blur
Pyramids Gaussian pre-filtering Solution: filter the image, then subsample blur F 0 subsample blur subsample * F 0 H F 1 F 1 * H F 2 { Gaussian pyramid blur F 0 subsample blur subsample * F 0 H F 1 F 1
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationComputer Vision for HCI. Topics of This Lecture
Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi
More informationEdge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels
Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface
More informationImplementing the Scale Invariant Feature Transform(SIFT) Method
Implementing the Scale Invariant Feature Transform(SIFT) Method YU MENG and Dr. Bernard Tiddeman(supervisor) Department of Computer Science University of St. Andrews yumeng@dcs.st-and.ac.uk Abstract The
More informationChapter 2 Basic Structure of High-Dimensional Spaces
Chapter 2 Basic Structure of High-Dimensional Spaces Data is naturally represented geometrically by associating each record with a point in the space spanned by the attributes. This idea, although simple,
More information2D Image Processing Feature Descriptors
2D Image Processing Feature Descriptors Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Overview
More informationAutomatic Image Alignment (feature-based)
Automatic Image Alignment (feature-based) Mike Nese with a lot of slides stolen from Steve Seitz and Rick Szeliski 15-463: Computational Photography Alexei Efros, CMU, Fall 2006 Today s lecture Feature
More information3D Object Recognition using Multiclass SVM-KNN
3D Object Recognition using Multiclass SVM-KNN R. Muralidharan, C. Chandradekar April 29, 2014 Presented by: Tasadduk Chowdhury Problem We address the problem of recognizing 3D objects based on various
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely Lecture 4: Harris corner detection Szeliski: 4.1 Reading Announcements Project 1 (Hybrid Images) code due next Wednesday, Feb 14, by 11:59pm Artifacts due Friday, Feb
More informationEdge Detection. EE/CSE 576 Linda Shapiro
Edge Detection EE/CSE 576 Linda Shapiro Edge Attneave's Cat (1954) 2 Origin of edges surface normal discontinuity depth discontinuity surface color discontinuity illumination discontinuity Edges are caused
More informationImplementation of Canny Edge Detection Algorithm on FPGA and displaying Image through VGA Interface
Implementation of Canny Edge Detection Algorithm on FPGA and displaying Image through VGA Interface Azra Tabassum 1, Harshitha P 2, Sunitha R 3 1-2 8 th sem Student, Dept of ECE, RRCE, Bangalore, Karnataka,
More informationSegmentation I: Edges and Lines
Segmentation I: Edges and Lines Prof. Eric Miller elmiller@ece.tufts.edu Fall 2007 EN 74-ECE Image Processing Lecture 8-1 Segmentation Problem of breaking an image up into regions are are interesting as
More informationFeature Detection and Matching
and Matching CS4243 Computer Vision and Pattern Recognition Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (CS4243) Camera Models 1 /
More information[ ] Review. Edges and Binary Images. Edge detection. Derivative of Gaussian filter. Image gradient. Tuesday, Sept 16
Review Edges and Binary Images Tuesday, Sept 6 Thought question: how could we compute a temporal gradient from video data? What filter is likely to have produced this image output? original filtered output
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely Lecture 2: Edge detection From Sandlot Science Announcements Project 1 (Hybrid Images) is now on the course webpage (see Projects link) Due Wednesday, Feb 15, by 11:59pm
More informationA Robust Method for Circle / Ellipse Extraction Based Canny Edge Detection
International Journal of Research Studies in Science, Engineering and Technology Volume 2, Issue 5, May 2015, PP 49-57 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) A Robust Method for Circle / Ellipse
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationTopic 4 Image Segmentation
Topic 4 Image Segmentation What is Segmentation? Why? Segmentation important contributing factor to the success of an automated image analysis process What is Image Analysis: Processing images to derive
More information5. Feature Extraction from Images
5. Feature Extraction from Images Aim of this Chapter: Learn the Basic Feature Extraction Methods for Images Main features: Color Texture Edges Wie funktioniert ein Mustererkennungssystem Test Data x i
More informationEdge Detection Using Streaming SIMD Extensions On Low Cost Robotic Platforms
Edge Detection Using Streaming SIMD Extensions On Low Cost Robotic Platforms Matthias Hofmann, Fabian Rensen, Ingmar Schwarz and Oliver Urbann Abstract Edge detection is a popular technique for extracting
More informationFeature Matching and Robust Fitting
Feature Matching and Robust Fitting Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Project 2 questions? This
More informationRapid Natural Scene Text Segmentation
Rapid Natural Scene Text Segmentation Ben Newhouse, Stanford University December 10, 2009 1 Abstract A new algorithm was developed to segment text from an image by classifying images according to the gradient
More informationComparison of Feature Detection and Matching Approaches: SIFT and SURF
GRD Journals- Global Research and Development Journal for Engineering Volume 2 Issue 4 March 2017 ISSN: 2455-5703 Comparison of Detection and Matching Approaches: SIFT and SURF Darshana Mistry PhD student
More informationMotivation. Intensity Levels
Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding
More information