ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW
1 ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW
2 Review of Motion Modelling and Estimation
3 Introduction to Motion Modelling & Estimation Forward Motion Backward Motion
4 Block Motion Estimation: Find the displacement d[n] that minimizes the displaced frame difference (DFD); note this is over the whole frame. p = 1 means minimizing the MAD (mean absolute difference); p = 2 means minimizing the MSE (mean squared error).
5 Block Motion Estimation: Given frames f_1 and f_2, divide frame f_2 into blocks, e.g. 16x16 blocks. For each block B_j assign a single motion vector d_j, and minimise the DFD over the block.
6 Motion Estimation: Objective: find the set of vectors d_j that minimize the DFD; this is an independent optimization problem for each block. Place some limit on the motion vectors d = (d_1, d_2)^t, and a limit on the granularity or precision of the motion vectors relative to the image pixel grid.
7 Motion Estimation: The full-search Block Matching Algorithm (BMA) is computationally complex. Alternative search algorithms reduce complexity: telescopic searches first search for the best match on a sparse motion vector grid, then search a local neighbourhood with a finer grid.
8 Motion Estimation: First, search on a coarse grid G. Second, find the best match. Third, refine the search on a finer grid g < G.
9 Motion Estimation: First, search on a coarse grid (G). Second, find the best match on the coarse grid. Third, refine the search on a finer grid g < G and find the best match on the finer grid (g). Repeat, progressing to finer grids at each iteration.
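The full-search baseline that these telescopic searches accelerate can be sketched as follows. This is an illustrative implementation, not the course's code: grayscale numpy frames are assumed, and the p = 1 (SAD) criterion is used.

```python
import numpy as np

def block_match(f1, f2, block=8, search=4):
    """Full-search block matching: for each block of f2, find the
    displacement into f1 minimizing the sum of absolute differences
    (the p=1 / MAD criterion)."""
    H, W = f2.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            tgt = f2[by:by + block, bx:bx + block].astype(float)
            best, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block would leave the frame
                    sad = np.abs(f1[y:y + block, x:x + block] - tgt).sum()
                    if best is None or sad < best:
                        best, best_d = sad, (dy, dx)
            vectors[(by, bx)] = best_d
    return vectors
```

The nested candidate loop is what makes full search expensive; the coarse-to-fine variants above replace it with a small number of progressively refined searches.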
10 Optical Flow: f(s, t_0) = f(s + d(s, t), t). The intensity of each point in the reference frame should remain constant as it is traced along its motion trajectory into all other time instants. Differentiating both sides with respect to time and evaluating at the reference instant t_0 yields the optical flow equation.
11 Optical Flow Equation: (grad f) . v + df/dt = 0, the inner product of the instantaneous velocity field v with the two-dimensional spatial gradient of the function f, plus the temporal derivative. The direction of the spatial gradient is perpendicular to object boundaries.
12 Optical Flow Equation: A differential method; it can be understood as performing local Taylor series approximations of the image across space and time, using partial derivatives with respect to the spatial (s) and temporal (t) coordinates.
13 Optical Flow: The optical flow equation provides insight into motion estimation. Decompose the velocity field v(s, t) into vector fields which are orthogonal and parallel to the local image gradient. The optical flow equation places no constraint on the component of velocity which is perpendicular to the image gradient.
14 Optical Flow: The optical flow equation provides insight into motion estimation. Decompose the velocity field v(s, t) into vector fields which are orthogonal and parallel to the local image gradient. The optical flow equation is insufficient by itself to estimate the motion field. One solution: the Horn & Schunck method.
15 Horn & Schunck Method: Introduce regularization into the optimization objective: the squared error (deviation from the ideal optical flow equation) plus a smoothness term, weighted by a trade-off factor (the regularization weight); the smoothness penalty is implemented with a high-pass filter.
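A minimal numerical sketch of the Horn & Schunck iteration, under stated assumptions: this is an illustrative implementation, not the course's exact formulation; `alpha` plays the role of the smoothness trade-off factor, and a simple 4-neighbour average stands in for the smoothness filtering.

```python
import numpy as np

def horn_schunck(f1, f2, alpha=1.0, iters=100):
    """Illustrative Horn-Schunck iteration: data term from the optical
    flow equation, smoothness enforced via local averaging of (u, v)."""
    fx = np.gradient(f1, axis=1)          # spatial derivatives of f1
    fy = np.gradient(f1, axis=0)
    ft = f2 - f1                          # temporal derivative estimate
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)

    def local_avg(w):                     # 4-neighbour average (smoothness)
        return 0.25 * (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                       np.roll(w, 1, 1) + np.roll(w, -1, 1))

    for _ in range(iters):
        ub, vb = local_avg(u), local_avg(v)
        t = (fx * ub + fy * vb + ft) / (alpha ** 2 + fx ** 2 + fy ** 2)
        u = ub - fx * t
        v = vb - fy * t
    return u, v
```

Larger `alpha` favours a smoother field over exact satisfaction of the flow constraint, which is exactly the regularization trade-off described on the slide.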
16 Global Motion: Start by examining global translational motion. How do we estimate a global translation? Example: stationary scene with camera displacement/translation. Block matching method: a direct search; minimize the average DFD energy (squared error), or minimize the sum of absolute values; care is needed to handle frame boundaries. Correlation method: efficient implementation using the FFT; phase correlation.
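The FFT-based phase correlation idea can be sketched as follows, assuming two grayscale numpy frames related by a global (circular) translation; the function name and the small stabilising epsilon are illustrative choices.

```python
import numpy as np

def phase_correlation(f1, f2):
    """Estimate a global translation via phase correlation: normalize
    the cross-power spectrum to keep phase only, invert with the FFT,
    and locate the resulting peak (efficient thanks to the FFT)."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    R = F2 * np.conj(F1)
    R /= np.abs(R) + 1e-12                 # keep phase information only
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    H, W = f1.shape
    if dy > H // 2:                        # unwrap shifts beyond half frame
        dy -= H
    if dx > W // 2:
        dx -= W
    return int(dy), int(dx)
```

Because the spectrum is normalized to unit magnitude, the inverse transform ideally concentrates into a single peak at the displacement, which makes the method robust to global intensity changes.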
17 Global Translational Motion: For the example video frames, a correlation-based search requires approximately 240 multiplications per pixel. A full-search block matching algorithm requires the same number of operations to search for motion vectors with a search range of +/-8 pixels in each direction. For larger search ranges the correlation-based method rapidly becomes more efficient.
18 Affine and Perspective Motion Models Projection onto the focal plane of a pinhole camera Assume that the scene objects all lie in a plane
19 Affine and Perspective Motion Models Perspective Image model Orthographic projection
20 Affine and Perspective Motion Models: Perspective image model; orthographic projection. Make the reasonable assumption that all scene objects lie in a plane which does not pass through the camera's pinhole (i.e. the (0,0,0) point).
21 Affine and Perspective Motion Models: Perspective image model; orthographic projection. Make the reasonable assumption that all scene objects lie in a plane which does not pass through the camera's pinhole (i.e. the (0,0,0) point). This assumption allows for a global motion model with a small number of parameters.
22 Affine and Perspective Motion Models: Consider planar scene motion, where the plane is rotated about the coordinate origin by an arbitrary angle and translated by some arbitrary amount: rigid body motion, a combination of rotation & translation. When the scene undergoes rigid body motion, how can we model the motion of the 2D frames? Global motion under perspective projection; global motion under orthographic projection. We will focus on orthographic projection.
23 Planar Scene: Rigid Body Motion: rotation and translation of the planar scene. Each point x in the scene is mapped to a new point x':
24 Planar Scene: Rigid Body Motion: For orthographic projection, the 2D global motion is parametrized by a 6-parameter global model, the affine model: a matrix operator A and a translational offset b. The affine model describes global 2D frame motion.
25 Estimating Global Motion: Global motion assuming rigid body motion. Example: camera motion with a stationary scene. Planar scene example: scene objects lie significantly closer to a common plane than to the camera. Under orthographic projection, how do we estimate the global affine model parameters?
26 Estimating Global Affine Motion from Local Motion: Start by finding, for every pixel location, the motion that minimizes the DFD for some appropriate norm. The motion field can be found using a variety of methods, for example optical flow or block matching techniques. Then use the derived motion field to estimate a global affine model.
27 Estimating Global Affine Motion from Local Motion: Start by finding the motion d[n] for every pixel location, then use the derived motion field to estimate a global affine model. The motion field itself satisfies an affine model, so we are finding a slightly different set of affine parameters.
28 Estimating Global Affine Motion: The goal is to find the affine parameters A and b which minimize sum_n || d[n] - A n - b ||_2^2. This is a linear least squares problem, so the affine parameters can be readily determined (a linear solution). The initial motion field can be determined by block matching. We can selectively remove motion vectors which fit the affine model poorly and then re-estimate A and b from the remaining motion vectors: an iterative and robust solution.
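The linear least-squares step can be sketched as follows; the function name and the (N, 2) data layout are illustrative assumptions.

```python
import numpy as np

def fit_affine(coords, vectors):
    """Least-squares fit of the 6-parameter affine model d[n] ~ A n + b.
    coords:  (N, 2) array of pixel locations n.
    vectors: (N, 2) array of observed motion vectors d[n]."""
    N = coords.shape[0]
    X = np.hstack([coords, np.ones((N, 1))])   # rows [n1, n2, 1]
    # Solve X @ P ~= vectors in the least-squares sense,
    # where P stacks A^T on top of b.
    P, *_ = np.linalg.lstsq(X, vectors, rcond=None)
    A = P[:2].T
    b = P[2]
    return A, b
```

The robust variant on the slide simply alternates this fit with the removal of motion vectors whose residual under (A, b) is large, then refits on the survivors.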
29 Estimating Global Affine Motion: Use optical flow directly to determine global affine motion. Substitute the affine model into the optical flow equation and find the affine model parameters which minimize the squared error. This is another linear least squares problem: the model is linear in the parameters, the 2x2 matrix elements of A and the 2-element vector b.
30 Estimating Global Affine Motion: Remember that the optical flow equation is valid only for small displacements. So how do we handle large motion? Use iterative steps: start with an initial motion model, create a motion-compensated frame, re-estimate the affine parameters for the remaining small displacements/offsets, and refine the model.
31 Estimating Global Affine Motion
32 Hierarchical Motion Estimation: A tool for reducing the computational complexity and increasing the robustness of motion estimation: the multi-resolution pyramid, or Gaussian pyramid. Multi-resolution pyramids have been used for both local and global motion estimation tasks.
33 Hierarchical Motion Estimation
34 Hierarchical Motion Estimation
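A minimal Gaussian pyramid construction can be sketched as follows; the 5-tap binomial filter is an illustrative stand-in for the ideal sinc reduction filter, and a factor-of-2 change between levels is assumed as on the earlier slides.

```python
import numpy as np

def gaussian_pyramid(img, levels=3):
    """Build a multi-resolution (Gaussian) pyramid by repeatedly
    low-pass filtering and decimating by 2, the usual starting point
    for hierarchical (coarse-to-fine) motion estimation."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # binomial low-pass
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        x = pyr[-1]
        # separable filtering with edge replication at the borders
        x = np.apply_along_axis(
            lambda r: np.convolve(np.pad(r, 2, mode='edge'), kernel, 'valid'), 1, x)
        x = np.apply_along_axis(
            lambda c: np.convolve(np.pad(c, 2, mode='edge'), kernel, 'valid'), 0, x)
        pyr.append(x[::2, ::2])                           # decimate by 2
    return pyr
```

Motion estimated at the coarsest level is then scaled up and used to initialise the search at the next finer level, which is what gives the hierarchy its complexity and robustness advantages.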
35 Segmentation and Texture Analysis
36 Segmentation and Texture Analysis: Segment out objects from an image or video, using cues or features from colour, texture and/or motion. Applications: object tracking, scene understanding. Texture analysis for segmentation is useful for discriminating objects on the basis of texture even if there are no obvious changes in average intensity or colour.
37 Reference: R. C. Gonzalez & R. E. Woods, Digital Image Processing, 3rd ed., Chapter 10: Segmentation.
38 Image Segmentation: Segmentation partitions an image into groups of pixels (segments) which belong to the same physical object in the original scene. Connectivity is a key concept: each segment should form a connected region. A region is said to be connected if we can find a continuous path from any point in the region to any other point in the region which never leaves the region.
39 Image Segmentation For digital images (discretised images) 4-connected regions or 8-connected regions
40 Image Segmentation: For digital (discretised) images: 4-connected regions: every point in the region is connected to every other point by a path which involves only the moves up, down, left and right. 8-connected regions: every point is connected to every other point by a path which may use eight possible moves, i.e. the eight immediate neighbours.
41 Image Segmentation: Example: an image whose pixels take one of two values, white or grey. Under the 8-connectivity rule, the image is segmented into two regions, the white and grey regions. Under the 4-connectivity rule, there are 5 segments, labelled A, B, X, Y, Z.
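The effect of the connectivity rule can be checked with a small labelling sketch; this BFS-based implementation is illustrative, not taken from the course.

```python
import numpy as np
from collections import deque

def count_regions(mask, connectivity=4):
    """Count connected regions of a binary mask under a 4- or
    8-connectivity rule, using breadth-first search flood fill."""
    if connectivity == 4:
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity: all eight immediate neighbours
        moves = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    regions = 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and labels[sy, sx] == 0:
                regions += 1
                q = deque([(sy, sx)])
                labels[sy, sx] = regions
                while q:
                    y, x = q.popleft()
                    for dy, dx in moves:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W and
                                mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = regions
                            q.append((ny, nx))
    return regions
```

A diagonal pair of pixels illustrates the slide's point: it is one region under 8-connectivity but two regions under 4-connectivity.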
42 Region Growing: An analogy with growing crystals from seeds. The user selects a starting pixel as a seed for a new region. The algorithm then examines all of the neighbours of the seed; any neighbour which is sufficiently similar to the seed is added to the region. The neighbours of all newly added pixels are then checked using a similarity measure to determine if they should be included in the region. The process continues until no more pixel locations can be added to the region, based upon the similarity measure.
43 Region Growing: Can employ either 4-connectivity or 8-connectivity, i.e. consider 4 or 8 neighbours at each location. The start location is the seed. Accept new pixels into the region if they are similar to the current (central) pixel value. But what does similar mean?
44 Simple Similarity Measures: If n_new is the location of a newly added region pixel and {k_i} is the set of neighbour displacements, consider each neighbouring pixel x[n_new + k_i], choosing to include it in the region if it is within a threshold T of x[n_new]. What should the threshold value T be? There are problems with T being too large or too small, and the result is sensitive to the choice of the seed; this motivates alternative similarity measures.
45 Simple Similarity Measures: Adaptive similarity measure. Let R_j denote the set of pixel locations belonging to the region at step j, with R_0 containing only the seed. At each step j > 0, evaluate mu_j by averaging all samples in R_j. Visit each candidate location n and add it to the region if |x[n] - mu_j| <= T.
46 Simple Similarity Measures: The threshold T can itself be adapted: base T on the variance of the samples already in the region, scaled by a constant in the range 2-4.
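Region growing with the running-mean (adaptive) similarity test can be sketched as follows; the fixed threshold T, the 4-connectivity choice, and the function name are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, T=10.0):
    """Grow a region from a seed pixel: a 4-connected neighbour joins
    the region when its intensity is within T of the running region
    mean (the mu_j of the adaptive similarity measure)."""
    H, W = img.shape
    in_region = np.zeros((H, W), dtype=bool)
    in_region[seed] = True
    total, count = float(img[seed]), 1     # running sum/count for the mean
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not in_region[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= T:
                    in_region[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    q.append((ny, nx))
    return in_region
```

The adaptive refinement on the slide would replace the fixed T with one derived from the variance of the samples already accepted.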
47 Multi-Dimensional Similarity Measures and Feature Vectors: So far we have focused on the intensity x[n] as the key measure of similarity. This description can be enhanced by associating multiple values with a single location, for example colour (R, G, B): from a scalar to a vector descriptor, a feature vector. Extending the prior notation to feature vectors involves determining an average and a covariance matrix, with a Gaussian distribution for the probability of inclusion.
48 Multi-Dimensional Similarity Measures and Feature Vectors
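For vector-valued features, the scalar threshold test generalises to a distance under the region's mean and covariance; a sketch using the Mahalanobis distance, where `kappa` and the small regularising term are illustrative choices.

```python
import numpy as np

def mahalanobis_ok(x, region_feats, kappa=3.0):
    """Vector-valued similarity test: include feature vector x when its
    Mahalanobis distance to the region mean (under the region's sample
    covariance) is at most kappa, the multi-dimensional analogue of
    'within kappa standard deviations'."""
    mu = region_feats.mean(axis=0)
    C = np.cov(region_feats, rowvar=False)
    C = C + 1e-9 * np.eye(region_feats.shape[1])  # keep C invertible
    d2 = (x - mu) @ np.linalg.solve(C, x - mu)
    return float(np.sqrt(d2)) <= kappa
```

Under a Gaussian model for the region's features, thresholding this distance is equivalent to thresholding the probability of inclusion mentioned on the slide.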
49 Segmentation using Splitting and Merging: The opposite of region growing: start with the whole image and split it into pieces based on similarity, within a quad-tree framework. Start with the whole image and split it into four quadrants (quads) if its pixels are not all similar. Example test: are all pixels in a quad within kappa standard deviations of the quad's mean? NO: split into quadrants and repeat. YES: do not split (a leaf is reached). Splitting is applied recursively, producing finer blocks where required.
50 Segmentation using Splitting and Merging Opposite to region growing Start with the whole image and split into pieces based on similarity Quad-tree framework
51 Segmentation using Splitting and Merging
52 Segmentation using Splitting and Merging: Merging step: once the quad-tree splitting operation is complete, merge individual quad-tree segments together into larger regions. Example: merge any two regions whose means and variances satisfy a closeness test. This can be imposed only for connected regions, under 4-connectivity or 8-connectivity. The merge process is iterative: reapply it to the merged regions.
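The splitting half of the procedure can be sketched as follows; a square, power-of-two image is assumed, and `kappa` and `min_size` are illustrative parameters.

```python
import numpy as np

def quad_split(img, y=0, x=0, size=None, kappa=1.0, min_size=2):
    """Quad-tree splitting sketch: keep a block whole when all its
    pixels lie within kappa standard deviations of the block mean,
    otherwise split it into four quadrants and recurse."""
    if size is None:
        size = img.shape[0]              # assumes a square image
    blk = img[y:y + size, x:x + size].astype(float)
    mu, sigma = blk.mean(), blk.std()
    if size <= min_size or np.all(np.abs(blk - mu) <= kappa * max(sigma, 1e-9)):
        return [(y, x, size)]            # homogeneous leaf block
    h = size // 2
    return (quad_split(img, y,     x,     h, kappa, min_size) +
            quad_split(img, y,     x + h, h, kappa, min_size) +
            quad_split(img, y + h, x,     h, kappa, min_size) +
            quad_split(img, y + h, x + h, h, kappa, min_size))
```

The merging step would then scan the returned leaf blocks and fuse connected neighbours whose means and variances are close, repeating until no further merges occur.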
53 Texture: Many objects are distinguished by their internal texture. The human visual system is able to see boundaries between different texture regions even when average intensity provides no distinction. It is therefore useful to determine low-dimensional feature vectors which can distinguish different types of texture. Texture does not formally have a value at a single point; it is a property created by the interaction of local fine structure, so texture must always be evaluated within a window with significant spatial support.
54 Texture
55 Texture Associate the centre of the window with the texture of the windowed region. By sliding the window around, we may build up a texture map.
56 Texture Texture itself may sometimes be the only useful segmentation cue
57 Histogram Moments: A method for measuring the way in which image intensity varies within a window: form the histogram of the intensity values within the window, and summarise its features through successively higher moments.
58 Histogram Moments
59 Histogram Moments: The human visual system has been found to be largely insensitive to histogram moments higher than M_2 (i.e., the variance). However, that does not mean that higher moments cannot be useful for segmentation or classification.
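Since the histogram moments of a window are exactly the sample moments of its pixel values, they can be computed without explicitly building the histogram; a small illustrative sketch.

```python
import numpy as np

def histogram_moments(window, order=4):
    """Texture features from a window's intensity distribution: the
    mean, then the central moments M2 (variance), M3, M4, ... up to
    the requested order."""
    x = window.astype(float).ravel()
    mu = x.mean()
    return [mu] + [np.mean((x - mu) ** k) for k in range(2, order + 1)]
```

Sliding the window across the image and evaluating this at each position builds the texture map described on the earlier slide.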
60 Spectral Analysis: A drawback of histogram moments: the histogram contains no information regarding the spatial configuration of different intensity patterns. An alternative, general-purpose technique is spectral analysis, for example power spectrum estimation. Assuming an NxN window, the power spectrum is crudely estimated by taking the DFT over this window and squaring its magnitudes: the periodogram estimate.
61 Spectral Analysis Squaring the magnitude of the DFT Applying the Hanning window
62 Spectral Analysis: The PSD describes how the power of the image signal is distributed over frequency. A common approach to calculating the PSD is to compute the N-point DFT of the signal and square its magnitude, known as the periodogram (covered in Chapter 4, section 5). Extracted feature example: average the 2D DFT magnitude in circular frequency bands of various radii and bandwidths.
63 Spectral Analysis: The periodogram is an unreliable measure of the power spectrum. This is partly because setting the signal x[n] to zero everywhere outside the window creates hard boundaries at the window edges, producing high-frequency content which is absent from the true image. To minimise the creation of artificial high-frequency content, use a smooth weighting function which tapers to 0 at the window boundaries.
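A sketch of the tapered periodogram estimate; the Hanning window is one choice of smooth taper (the one named on the earlier slide), and the normalisation is illustrative.

```python
import numpy as np

def periodogram(window):
    """Periodogram estimate of the power spectrum over an NxN window,
    with a separable Hanning taper applied first to suppress the
    artificial high frequencies created by hard window edges."""
    N = window.shape[0]
    taper = np.hanning(N)
    w2d = np.outer(taper, taper)        # separable 2D taper
    X = np.fft.fft2(window * w2d)
    return (np.abs(X) ** 2) / (N * N)   # squared DFT magnitudes
```

Averaging this estimate over circular frequency bands of various radii then yields the compact spectral texture features mentioned above.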
64 Multi-Resolution Transforms: [Diagram] Gaussian pyramid images x_0[n], x_1[n], x_2[n] at levels d = 0, 1, 2; the corresponding detail images y_0[n], y_1[n], y_2[n] form the Laplacian pyramid.
65 Multi-Resolution Transforms: Synthesis operation (increase sampling by 2): ideally sinc interpolation. Reduction operation (decrease sampling by 2): ideally sinc bandlimited down-sampling. A factor-of-2 change between levels is assumed.
66 Multi-Resolution Transforms: y_d[n] contains only the high-frequency details from the image at level d.
67 Multi-Resolution Transforms: Feature vectors for texture analysis: at corresponding coordinates at each level/resolution, compute the local energy over a centred A x A window. The resulting feature vector may be used by a segmentation algorithm based on region growing or the split-and-merge technique.
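Assembling such a feature vector from the detail bands can be sketched as follows; the band layout (a list of y_d arrays whose dimensions halve with each level), the coordinate-halving rule, and the function name are illustrative assumptions.

```python
import numpy as np

def texture_features(bands, y, x, A=4):
    """Texture feature vector sketch: the local energy of each detail
    band y_d[n] in an A x A window centred at the coordinates that
    correspond to (y, x) at each level (coordinates halve per level)."""
    feats = []
    for d, band in enumerate(bands):
        yy, xx = y >> d, x >> d          # corresponding location at level d
        h = A // 2
        win = band[max(yy - h, 0):yy + h, max(xx - h, 0):xx + h]
        feats.append(float(np.mean(win.astype(float) ** 2)))
    return np.array(feats)
```

Each entry measures how much energy the texture has at one scale, so a region-growing or split-and-merge segmenter can compare these vectors instead of raw intensities.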
More informationImage Processing. Image Features
Image Processing Image Features Preliminaries 2 What are Image Features? Anything. What they are used for? Some statements about image fragments (patches) recognition Search for similar patches matching
More informationImage Enhancement: To improve the quality of images
Image Enhancement: To improve the quality of images Examples: Noise reduction (to improve SNR or subjective quality) Change contrast, brightness, color etc. Image smoothing Image sharpening Modify image
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical
More informationHOUGH TRANSFORM CS 6350 C V
HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges
More informationMidterm Examination CS 534: Computational Photography
Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)
More informationMulti-stable Perception. Necker Cube
Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix
More informationBasic Algorithms for Digital Image Analysis: a course
Institute of Informatics Eötvös Loránd University Budapest, Hungary Basic Algorithms for Digital Image Analysis: a course Dmitrij Csetverikov with help of Attila Lerch, Judit Verestóy, Zoltán Megyesi,
More informationComparison Between The Optical Flow Computational Techniques
Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More informationImage representation. 1. Introduction
Image representation Introduction Representation schemes Chain codes Polygonal approximations The skeleton of a region Boundary descriptors Some simple descriptors Shape numbers Fourier descriptors Moments
More informationA Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation
, pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,
More informationAssignment 3: Edge Detection
Assignment 3: Edge Detection - EE Affiliate I. INTRODUCTION This assignment looks at different techniques of detecting edges in an image. Edge detection is a fundamental tool in computer vision to analyse
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear
More informationAdaptive Multi-Stage 2D Image Motion Field Estimation
Adaptive Multi-Stage 2D Image Motion Field Estimation Ulrich Neumann and Suya You Computer Science Department Integrated Media Systems Center University of Southern California, CA 90089-0781 ABSRAC his
More information82 REGISTRATION OF RETINOGRAPHIES
82 REGISTRATION OF RETINOGRAPHIES 3.3 Our method Our method resembles the human approach to image matching in the sense that we also employ as guidelines features common to both images. It seems natural
More informationImage Segmentation. Schedule. Jesus J Caban 11/2/10. Monday: Today: Image Segmentation Topic : Matting ( P. Bindu ) Assignment #3 distributed
Image Segmentation Jesus J Caban Today: Schedule Image Segmentation Topic : Matting ( P. Bindu ) Assignment #3 distributed Monday: Revised proposal due Topic: Image Warping ( K. Martinez ) Topic: Image
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationRobotics Programming Laboratory
Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car
More informationMotion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures
Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of
More informationMarcel Worring Intelligent Sensory Information Systems
Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video
More informationFinal Exam Study Guide
Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.
More informationLocal Features: Detection, Description & Matching
Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British
More informationDigital Image Processing Chapter 11: Image Description and Representation
Digital Image Processing Chapter 11: Image Description and Representation Image Representation and Description? Objective: To represent and describe information embedded in an image in other forms that
More informationMotion Estimation and Optical Flow Tracking
Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction
More informationFeature descriptors and matching
Feature descriptors and matching Detections at multiple scales Invariance of MOPS Intensity Scale Rotation Color and Lighting Out-of-plane rotation Out-of-plane rotation Better representation than color:
More information1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra)
Mierm Exam CS223b Stanford CS223b Computer Vision, Winter 2004 Feb. 18, 2004 Full Name: Email: This exam has 7 pages. Make sure your exam is not missing any sheets, and write your name on every page. The
More informationFeature extraction. Bi-Histogram Binarization Entropy. What is texture Texture primitives. Filter banks 2D Fourier Transform Wavlet maxima points
Feature extraction Bi-Histogram Binarization Entropy What is texture Texture primitives Filter banks 2D Fourier Transform Wavlet maxima points Edge detection Image gradient Mask operators Feature space
More informationNon-Rigid Image Registration III
Non-Rigid Image Registration III CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (CS6240) Non-Rigid Image Registration
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationLocal Image preprocessing (cont d)
Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge
More informationREGION BASED SEGEMENTATION
REGION BASED SEGEMENTATION The objective of Segmentation is to partition an image into regions. The region-based segmentation techniques find the regions directly. Extract those regions in the image whose
More informationECG782: Multidimensional Digital Signal Processing
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/
More informationLecture 6: Multimedia Information Retrieval Dr. Jian Zhang
Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang NICTA & CSE UNSW COMP9314 Advanced Database S1 2007 jzhang@cse.unsw.edu.au Reference Papers and Resources Papers: Colour spaces-perceptual, historical
More informationImage Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus
Image Processing BITS Pilani Dubai Campus Dr Jagadish Nayak Image Segmentation BITS Pilani Dubai Campus Fundamentals Let R be the entire spatial region occupied by an image Process that partitions R into
More informationECE Digital Image Processing and Introduction to Computer Vision
ECE592-064 Digital Image Processing and Introduction to Computer Vision Depart. of ECE, NC State University Instructor: Tianfu (Matt) Wu Spring 2017 Recap, SIFT Motion Tracking Change Detection Feature
More information