CSCI 1290: Comp Photo
1 CSCI 1290: Comp Photo, Fall, Brown University, James Tompkin. Many slides thanks to James Hays' old CS 129 course, along with all its acknowledgements.
2 CS 4495 Computer Vision A. Bobick Motion and Optic Flow Motion and perceptual organization Even impoverished motion data can evoke a strong percept
4 Video. A video is a sequence of frames captured over time: a function of space (x, y) and time (t).
5 Motion applications: background subtraction, shot boundary detection, motion segmentation (segment the video into multiple coherently moving objects).
6 Computer Vision: Motion and Optical Flow. Many slides adapted from J. Hays, S. Seitz, R. Szeliski, M. Pollefeys, K. Grauman and others.
7 Motion estimation: optical flow. Optic flow is the apparent motion of objects or surfaces. We will start by estimating the motion of each pixel separately, then consider the motion of the entire image.
8 Problem definition: optical flow. How do we estimate pixel motion from image I(x, y, t) to I(x, y, t+1)? Solve the pixel correspondence problem: given a pixel in I(x, y, t), look for nearby pixels of the same color in I(x, y, t+1). Key assumptions: small motion (points do not move very far) and color constancy (a point in I(x, y, t) looks the same in I(x, y, t+1); for grayscale images, this is brightness constancy).
9 Optical flow constraints (grayscale images). Let's look at these constraints more closely. Brightness constancy constraint: I(x, y, t) = I(x + u, y + v, t + 1). Small motion: u and v are less than 1 pixel, or smoothly varying. Taylor series expansion of I: I(x + u, y + v) = I(x, y) + (∂I/∂x)u + (∂I/∂y)v + [higher order terms] ≈ I(x, y) + (∂I/∂x)u + (∂I/∂y)v.
10 Optical flow equation. Combining these two equations: 0 = I(x + u, y + v, t + 1) − I(x, y, t) ≈ I(x, y, t + 1) + I_x u + I_y v − I(x, y, t). (Shorthand: I_x = ∂I/∂x, for t or t+1.)
11 Continuing: = [I(x, y, t + 1) − I(x, y, t)] + I_x u + I_y v = I_t + I_x u + I_y v = I_t + ∇I · (u, v).
12 In the limit as u and v go to zero, this becomes exact: 0 = I_t + ∇I · (u, v). This is the brightness constancy constraint equation: I_x u + I_y v + I_t = 0.
13 How does this make sense? Brightness constancy constraint equation: I_x u + I_y v + I_t = 0. What do the static image gradients have to do with motion estimation?
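The constraint can be verified numerically. A minimal sketch (NumPy; the translating pattern and all names are illustrative, not from the slides): build I(·, t) and I(·, t+1) by shifting a smooth pattern by a known sub-pixel (u, v), estimate I_x, I_y, I_t with finite differences, and check that I_x u + I_y v + I_t ≈ 0.

```python
import numpy as np

# Smooth synthetic image: brightness constancy holds exactly under translation.
H, W = 64, 64
y, x = np.mgrid[0:H, 0:W].astype(float)
def pattern(px, py):
    return np.sin(0.2 * px) + np.cos(0.15 * py)

u, v = 0.3, -0.2            # known sub-pixel motion
I0 = pattern(x, y)
I1 = pattern(x - u, y - v)  # content moves by (u, v) between frames

# Finite-difference gradients (np.gradient returns d/drow, d/dcol).
Iy, Ix = np.gradient(I0)
It = I1 - I0

# Brightness constancy residual: Ix*u + Iy*v + It should be ~0 for small motion.
residual = Ix * u + Iy * v + It
print(np.abs(residual[2:-2, 2:-2]).max())
```

The residual is small but nonzero: it is exactly the higher-order Taylor terms that the small-motion assumption discards.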
14 The brightness constancy constraint. Can we use this equation to recover image motion (u, v) at each pixel? 0 = I_t + ∇I · (u, v), or I_x u + I_y v + I_t = 0. How many equations and unknowns per pixel? One equation (this is a scalar equation!), two unknowns (u, v). The component of the motion perpendicular to the gradient (i.e., parallel to the edge) cannot be measured: if (u, v) satisfies the equation, so does (u + u′, v + v′) whenever ∇I · (u′, v′)ᵀ = 0.
15 Aperture problem
18 The barber pole illusion
20 Solving the ambiguity. B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision", Proceedings of the International Joint Conference on Artificial Intelligence, 1981. How do we get more equations for a pixel? Spatial coherence constraint: assume the pixel's neighbors have the same (u, v). If we use a 5x5 window, that gives us 25 equations per pixel.
21 Solving the ambiguity: a least squares problem.
22 Matching patches across images. This is an overconstrained linear system; the least squares solution for d is given by the normal equations, where the summations are over all pixels in the K x K window.
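The window-based least-squares solve can be sketched directly (NumPy; the synthetic pattern and function names are illustrative): stack one constraint row [I_x, I_y] per pixel of a 5x5 window and solve A d = −I_t in the least-squares sense.

```python
import numpy as np

def lucas_kanade_patch(Ix, Iy, It, cx, cy, r=2):
    """Estimate (u, v) for the (2r+1)x(2r+1) window centred at (cx, cy)."""
    win = np.s_[cy - r:cy + r + 1, cx - r:cx + r + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # 25 equations, 2 unknowns
    b = -It[win].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares (u, v)
    return d

# Synthetic test: textured pattern translated by a known sub-pixel motion.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
f = lambda px, py: np.sin(0.3 * px) * np.cos(0.25 * py)
u_true, v_true = 0.4, -0.3
I0, I1 = f(xx, yy), f(xx - u_true, yy - v_true)
Iy, Ix = np.gradient(I0)
u, v = lucas_kanade_patch(Ix, Iy, I1 - I0, 32, 32)
```

The recovered (u, v) lands close to the true motion because this patch is well textured; the next slides make precise when that holds.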
23 Conditions for solvability. The optimal (u, v) satisfies the Lucas-Kanade equation. When is this solvable? What are good points to track? AᵀA should be invertible, and should not be too small due to noise: the eigenvalues λ1 and λ2 of AᵀA should not be too small. AᵀA should also be well-conditioned: λ1/λ2 should not be too large (λ1 = larger eigenvalue). Does this remind you of anything? These are the criteria for the Harris corner detector.
24 Low-texture region: gradients have small magnitude; small λ1, small λ2.
25 Edge: large gradients, all in the same direction; large λ1, small λ2.
26 High-texture region: gradients are different, with large magnitudes; large λ1, large λ2.
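These three cases can be reproduced with the structure tensor AᵀA of small synthetic patches (a NumPy sketch; the patch contents are illustrative):

```python
import numpy as np

def eigs_of_AtA(patch):
    """Eigenvalues of the structure tensor A^T A for one image patch."""
    Iy, Ix = np.gradient(patch)
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.sort(np.linalg.eigvalsh(M))[::-1]  # lambda1 >= lambda2

yy, xx = np.mgrid[0:15, 0:15].astype(float)
flat    = np.zeros((15, 15))                # low texture
edge    = (xx > 7).astype(float)            # vertical edge
texture = np.sin(xx) * np.cos(1.3 * yy)     # corner-like texture

for name, p in [("flat", flat), ("edge", edge), ("texture", texture)]:
    l1, l2 = eigs_of_AtA(p)
    print(f"{name:8s} lambda1={l1:8.2f} lambda2={l2:8.2f}")
```

The flat patch gives two near-zero eigenvalues, the edge gives one large and one near-zero, and only the textured patch gives two large eigenvalues: exactly the Harris-style trackability criterion.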
27 The aperture problem resolved Actual motion
28 The aperture problem resolved Perceived motion
29 Errors in assumptions. A point does not move like its neighbors: motion segmentation. Brightness constancy does not hold: do an exhaustive neighborhood search with normalized correlation, or track features (e.g., SIFT; more later). The motion is large (larger than a pixel): 1. not linear: iterative refinement; 2. local minima: coarse-to-fine estimation.
30 Revisiting the small motion assumption. Is this motion small enough? Probably not: it's much larger than one pixel. How might we solve this problem?
31 Optical Flow: Aliasing. Temporal aliasing causes ambiguities in optical flow because images can have many pixels with the same intensity, i.e., how do we know which correspondence is correct? Without aliasing, the nearest match is correct; with aliasing, the nearest match (the estimated shift) is incorrect. To overcome aliasing: coarse-to-fine estimation.
32 Reduce the resolution!
33 Coarse-to-fine optical flow estimation. [Figure: Gaussian pyramids of images 1 and 2, with estimated motion at successive levels: u = 1.25, 2.5, 5, and 10 pixels.]
34 Coarse-to-fine optical flow estimation: run iterative L-K at the coarsest level of the Gaussian pyramids of images 1 and 2, then warp & upsample, run iterative L-K again, and repeat down to full resolution.
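The coarse-to-fine loop can be sketched for the simplest case of a single global translation (NumPy; the downsampling, warping, and all names are illustrative simplifications of the pyramid scheme on the slide):

```python
import numpy as np

def bilinear_shift(I, u, v):
    """Sample I at (x - u, y - v) with bilinear interpolation (edges clamped)."""
    H, W = I.shape
    y, x = np.mgrid[0:H, 0:W].astype(float)
    xs = np.clip(x - u, 0, W - 1.001)
    ys = np.clip(y - v, 0, H - 1.001)
    x0, y0 = xs.astype(int), ys.astype(int)
    fx, fy = xs - x0, ys - y0
    return ((1 - fx) * (1 - fy) * I[y0, x0] + fx * (1 - fy) * I[y0, x0 + 1]
            + (1 - fx) * fy * I[y0 + 1, x0] + fx * fy * I[y0 + 1, x0 + 1])

def lk_step(I0, I1, m=4):
    """One Lucas-Kanade least-squares step for a single global (u, v)."""
    Iy, Ix = np.gradient(I0)
    It = I1 - I0
    c = np.s_[m:-m, m:-m]  # ignore borders distorted by clamped warping
    A = np.stack([Ix[c].ravel(), Iy[c].ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It[c].ravel(), rcond=None)
    return d

def coarse_to_fine(I0, I1, levels=3, iters=3):
    """Estimate a global translation coarse-to-fine, refining at each level."""
    u = v = 0.0
    if levels > 1 and min(I0.shape) >= 32:
        # Estimate on 2x-downsampled images, then double the flow.
        u, v = 2 * coarse_to_fine(I0[::2, ::2], I1[::2, ::2], levels - 1, iters)
    for _ in range(iters):
        I1w = bilinear_shift(I1, -u, -v)  # undo the current estimate
        du, dv = lk_step(I0, I1w)         # residual motion
        u, v = u + du, v + dv
    return np.array([u, v])

# Motion of 3 pixels: too large for one linearized step, fine with a pyramid.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
f = lambda px, py: np.sin(0.15 * px) + np.cos(0.1 * py)
I0, I1 = f(xx, yy), f(xx - 3.0, yy - 1.5)
u, v = coarse_to_fine(I0, I1)
```

At the coarsest level the 3-pixel shift shrinks to 0.75 pixels, small enough for the linearized constraint; each finer level only refines a near-correct estimate.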
35 Optical Flow Results (from Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003)
36 Optical Flow Results (from Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003)
37 State-of-the-art optical flow, 2009: Large Displacement Optical Flow, Brox et al., CVPR 2009. Start with something similar to Lucas-Kanade, + gradient constancy, + energy minimization with a smoothing term, + region matching, + keypoint matching (long-range). Region-based + pixel-based + keypoint-based.
38 State-of-the-art optical flow, 2015: a CNN takes a pair of input frames, and the estimated flow is upsampled back to input resolution. Near state-of-the-art in terms of end-point error. [Fischer et al., 2015]
39 State-of-the-art optical flow, 2015: synthetic training data. [Fischer et al., 2015]
40 State-of-the-art optical flow, 2015: results on Sintel. [Fischer et al., 2015]
41 Optical flow Definition: the apparent motion of brightness patterns in the image Ideally, the same as the projected motion field Take care: apparent motion can be caused by lighting changes without any actual motion Imagine a uniform rotating sphere under fixed lighting vs. a stationary sphere under moving illumination.
42 Can we do more? Scene flow: combine spatial stereo & temporal constraints to recover the 3D vectors of world motion. [Figure: stereo views 1 and 2 at times t−1 and t yield a 3D world motion vector per pixel.]
43 Scene flow example for human motion Estimating 3D Scene Flow from Multiple 2D Optical Flows, Ruttle et al., 2009
44 Scene Flow [Estimation of Dense Depth Maps and 3D Scene Flow from Stereo Sequences, M. Jaimez et al., TU München]
45 CVMP 2011. Understanding that motion segmentation is hard: Automatic Cinemagraphs. Fabrizio Pece, Kartic Subr, Jan Kautz, UCL
46 Professional, by hand examples
47 Overview
48 Algorithm
49 Motion segmentation
50 Stabilization
51 Our automatic examples
52 What's interesting: a good example of something that seems simple but rapidly becomes complicated: registration with parallax (hand-held camera); motion segmentation in general scenes; fast and high-quality loop finding / frame interpolation. Try to do this on non-trivial examples. Try to do this on a mobile phone in a few seconds. (Recently released software examples cannot cope.)
53 Video Textures Arno Schödl Richard Szeliski David Salesin Irfan Essa Microsoft Research, Georgia Tech [SIGGRAPH 2000]
54 Still photos
55 Video clips
56 Video textures
57 Related work Image-based rendering View Morphing [Seitz 96] Lightfields / Lumigraph [Levoy 96, Gortler 96] Façade [Debevec 96] Virtualized Reality [Kanade 97] many more Novel still views from still images
58 Related work Texture synthesis [Heeger 95] [de Bonet 97] [Efros 99] Infinite extension of 2D pattern
59 Related work Temporal texture synthesis Statistical Learning of Multidimensional Textures [Bar-Joseph 99] [Wei 00] Video as a spatio-temporal texture
60 Related work Video Rewrite [Bregler 97] Assemble video snippets to generate lip motion
61 Related work Periodic motion analysis [Niyogi 94] [Seitz 97] [Polana 97] [Cutler 00] Uses similar analysis tools
62 Problem statement: video clip → video texture
63 Our approach How do we find good transitions?
64 Finding good transitions. Compute the L2 distance D_{i,j} between all pairs of frames i and j. Similar frames make good transitions.
65 Markov chain representation Similar frames make good transitions
66 Transition costs. Transition from i to j if the successor of i is like j. Cost function: C_{i→j} = D_{i+1, j}.
67 Transition probabilities. The probability of a transition, P_{i→j}, is inversely related to cost: P_{i→j} ∝ exp(−C_{i→j} / σ²). A high σ admits more (and worse) transitions; a low σ keeps only the best.
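With frame distances in hand, the cost and probability matrices can be sketched (NumPy; a toy 1-D "video" stands in for real frames, and all names are illustrative):

```python
import numpy as np

# Toy "video": each frame is a vector; a periodic sequence so loops exist.
T = 20
frames = np.array([[np.sin(2 * np.pi * t / 10), np.cos(2 * np.pi * t / 10)]
                   for t in range(T)])

# D[i, j] = L2 distance between frame i and frame j.
D = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=2)

# Transition cost: jumping from i to j is cheap when frame j resembles
# the natural successor of i, i.e. C[i, j] = D[i+1, j].
C = D[1:, :]                 # rows i = 0 .. T-2

# Transition probabilities P[i, j] ~ exp(-C[i, j] / sigma^2), rows normalised.
sigma = 0.2
P = np.exp(-C / sigma**2)
P /= P.sum(axis=1, keepdims=True)
```

Because the toy sequence repeats with period 10, frame 9's natural successor matches frame 0 exactly, so the loop-closing transition 9 → 0 gets (near-)maximal probability.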
68 Preserving dynamics
70 Preserving dynamics. Cost for transition i→j: C′_{i→j} = Σ_{k=−N}^{N−1} w_k D_{i+k+1, j+k}, i.e., compare short subsequences around the transition rather than single frames.
71 Preserving dynamics: effect.
72 Dead ends No good transition at the end of sequence
73 Future cost Propagate future transition costs backward Iteratively compute new cost F i j = C i j + min k F j k
74 Future cost Propagate future transition costs backward Iteratively compute new cost F i j = C i j + min k F j k
75 Future cost Propagate future transition costs backward Iteratively compute new cost F i j = C i j + min k F j k
76 Future cost Propagate future transition costs backward Iteratively compute new cost F i j = C i j + min k F j k
77 Future cost Propagate future transition costs backward Iteratively compute new cost F i j = C i j + min k F j k Q-learning
78 Future cost effect
79 Finding good loops. An alternative to random transitions: precompute a set of loops up front.
80 Philippe Levieux, James Tompkin, Jan Kautz UCL Interactive Viewpoint Video Textures
81 Interactive Viewpoint Video Textures Philippe Levieux, James Tompkin, Jan Kautz CVMP December 2018
82 Situation. Realistic objects take a long time to create and render with computer graphics. For many real-world objects, we can use image-based rendering. Very appropriate for some applications.
83 Motivation. Product display: change viewpoint; convincing depiction; fast capture. Problem: only allows static examples.
84 Our goal Introduce motion! Smooth spatial and temporal interpolation. Single input camera. Real-time exploration. Limit to repetitive or stochastic dynamics.
85 Video Textures Schödl, Arno et al., 'Video Textures', in: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, (2000), SIGGRAPH '00.
86 Light Field / Lumigraph Buehler, Chris et al., 'Unstructured lumigraph rendering', in: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, (2001), SIGGRAPH '01.
87 Existing methods.
Video Textures: single spatial position; smooth temporal dynamics; single camera.
Our Method: many spatial positions; smooth temporal dynamics; single camera; only repetitive motions.
Light Fields / Lumigraph: many spatial positions; difficult dynamics; multiple cameras.
88 Our Method
89 Method - Capture
90 Method - Preprocess
91 Method - Preprocess
92 Method - Preprocess
93 Method - Rendering. [Figure: over time, rendering moves within matrix 1, across matrices 1->2, then within matrix 2 to produce each final frame.]
94 Method - Rendering. The virtual viewpoint is rendered with optical flow. The distance between views is estimated from the camera distance measure (slide 23). Flow vectors are multiplied by this distance to advect frames. We use the Eisemann 2008 approximation (Floating Textures project) of Brox 2004 flow.
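The advection step can be sketched as a backward warp by a fraction t of the flow (NumPy, nearest-neighbour sampling for brevity; the real system uses GPU flow and blends both views, and all names here are illustrative):

```python
import numpy as np

def advect(A, flow, t):
    """Backward-warp frame A by a fraction t of the flow field.

    flow[..., 0] and flow[..., 1] hold the (x, y) displacement from A
    towards the other view; t in [0, 1] comes from the camera distance.
    """
    H, W = A.shape
    y, x = np.mgrid[0:H, 0:W]
    xs = np.clip(np.round(x - t * flow[..., 0]), 0, W - 1).astype(int)
    ys = np.clip(np.round(y - t * flow[..., 1]), 0, H - 1).astype(int)
    return A[ys, xs]

# A constant flow of (4, 0) px: advecting half-way shifts content by 2 px.
A = np.zeros((8, 8)); A[:, 2] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 0] = 4.0
mid = advect(A, flow, 0.5)
```

Scaling the same flow field by different t values yields a continuum of virtual viewpoints between the two captured views.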
95 Interactive application
96 Results
97 Scene split in regions
98 Estimating the Camera Distance
99 Varying the number of frames played at each virtual viewpoint
100 Varying the number of virtual viewpoints
101 Timings and Data Costs Capture: 30 minutes Preprocess: 2 to 9 hours n-to-n frame matching for each video and across videos is just expensive. n-to-n is required for good video textures.
102 Timings and Data Costs. Rendering is real-time, interpolating frames at 30-60 Hz. Additional data costs are small: one n-sided square matrix per video; one n-by-m video between video pairs.
103 Problems
104 How much data can you throw away? Rotations: a 14° baseline is OK; 360° covered from 24 spatial locations. With a short cycle (< 30 s), capture would take 30 minutes with no setup or calibration.
105 How much data can you throw away? Dolly shots: there is a focus of expansion (FOE) in dolly shots. Content near the FOE is artefact-free even at large baselines; content far from the FOE has artefacts even at small baselines. Baseline distances (2 cm, 4 cm, 7 cm) are small for this scene because the camera is close to the object.
106 Altering scene content
107 Altering scene content
108 Summary. Image-based renderings are often static. Interactive viewpoint video textures! Repetitive / stochastic / pseudo-random motions. Single camera, no calibration. Real-time view + dynamics + content interpolation. Product display / advertising applications.
109 THANK YOU
110 Eurographics 2012, Cagliari, Italy Interactive Multi-perspective Imagery from Photos and Videos Henrik Lieng 1,2 James Tompkin 1 Jan Kautz 1 University College London 1 University of Cambridge 2
111 Multi-perspective Imagery - Picasso
112 Multi-perspective Imagery - Hockney
113 Multi-perspective Imagery - Escher
114 Multi-perspective Imagery - Gonsalves
115 Multi-perspective Imagery - Head
116 Single-perspective images may not reveal enough information
117 Solved by multi-perspective imagery
118 Solved by multi-perspective imagery
119 Existing work focuses on specific aspects: [Brown and Lowe, 2007] [Collomosse and Hall, 2003] [Agarwala et al., 2006] [Nomura et al., 2007]
125 We support any kind of perspective transformation
126 Recursive multi-perspective imagery
127 Our System
128 Overview. Code and data available online.
129 Calibration (Structure from Motion). Bundler for photographs [Snavely et al., 2006]; Voodoo camera tracker for videos [Thormählen, 2006]. Can also use Photosynth (Microsoft) or VisualSFM [Wu, 2011]. Outputs a 3D point cloud and camera parameters.
130 Prototype Interface
131 Portal propagation for image i:
  project the 3D points from SfM into image i
  for each point in the user-defined portal p:
    find the n closest projected points to p
    select the one closest in 3D to the camera
This ensures that we use a visible 3D point. Refine poor propagations.
132 Warp and composite
133 Warp and composite. For each point (x, y) in a propagated portal, move it to (x', y'): x' = a0 + a1*x + a2*y + a3*xy; y' = b0 + b1*x + b2*y + b3*xy. Looking at our quadratic polynomial: lines map to lines, but parallelism is not preserved. Use α-blending when compositing.
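The warp above can be sketched by fitting the eight coefficients from four point correspondences via least squares (NumPy; the example quadrilateral is illustrative):

```python
import numpy as np

def fit_warp(src, dst):
    """Fit x' = a0 + a1*x + a2*y + a3*xy (and likewise y' with b0..b3)
    from four or more point correspondences, via least squares."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    G = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1],
                         src[:, 0] * src[:, 1]])
    a, *_ = np.linalg.lstsq(G, dst[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(G, dst[:, 1], rcond=None)
    return a, b

def apply_warp(a, b, pts):
    pts = np.asarray(pts, float)
    G = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                         pts[:, 0] * pts[:, 1]])
    return np.column_stack([G @ a, G @ b])

# Map the unit square onto a skewed quadrilateral.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0, 0), (2, 0.2), (0.1, 1.5), (2.5, 2.0)]
a, b = fit_warp(src, dst)
```

With exactly four correspondences the 4x4 system is square and the fit is exact; extra correspondences are absorbed in the least-squares sense.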
134 Warp interpolation (optical flow): with vs. without interpolation.
135 Warp interpolation (optical flow). SIFT flow [Liu et al., 2011]: robust to illumination changes, but slow (1 minute for 1 MP portals). Use real-time GPU optical flow algorithms for more interactivity, e.g., [Pauwels and Van Hulle, 2008].
136 Colour matching. Use [Reinhard et al., 2001].
137 Object Removal
138 System recap: code and data available online.
139 Evaluation. Interviews with professional artists (2D/3D): positive towards the novelty of the results and the speed improvement; popping artefacts were not distracting; nonplussed about automatic colour matching, though it is essential for video. Compared to the artists' current systems, ours produces as-good-or-better results and is easier and faster to use.
140 Multi-perspective imagery from videos
Visual Tracking Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 11 giugno 2015 What is visual tracking? estimation
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationLocal Feature Detectors
Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections
More informationSpatial track: motion modeling
Spatial track: motion modeling Virginio Cantoni Computer Vision and Multimedia Lab Università di Pavia Via A. Ferrata 1, 27100 Pavia virginio.cantoni@unipv.it http://vision.unipv.it/va 1 Comparison between
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationSpatial track: motion modeling
Spatial track: motion modeling Virginio Cantoni Computer Vision and Multimedia Lab Università di Pavia Via A. Ferrata 1, 27100 Pavia virginio.cantoni@unipv.it http://vision.unipv.it/va 1 Comparison between
More informationImage-Based Rendering
Image-Based Rendering COS 526, Fall 2016 Thomas Funkhouser Acknowledgments: Dan Aliaga, Marc Levoy, Szymon Rusinkiewicz What is Image-Based Rendering? Definition 1: the use of photographic imagery to overcome
More informationBSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy
BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationComputer Vision Lecture 18
Course Outline Computer Vision Lecture 8 Motion and Optical Flow.0.009 Bastian Leibe RWTH Aachen http://www.umic.rwth-aachen.de/multimedia leibe@umic.rwth-aachen.de Man slides adapted from K. Grauman,
More informationLecture 13: Tracking mo3on features op3cal flow
Lecture 13: Tracking mo3on features op3cal flow Professor Fei- Fei Li Stanford Vision Lab Lecture 13-1! What we will learn today? Introduc3on Op3cal flow Feature tracking Applica3ons (Problem Set 3 (Q1))
More informationCS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow
CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement
More informationAugmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit
Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection
More informationSURVEY OF LOCAL AND GLOBAL OPTICAL FLOW WITH COARSE TO FINE METHOD
SURVEY OF LOCAL AND GLOBAL OPTICAL FLOW WITH COARSE TO FINE METHOD M.E-II, Department of Computer Engineering, PICT, Pune ABSTRACT: Optical flow as an image processing technique finds its applications
More informationData-driven methods: Video & Texture. A.A. Efros
Data-driven methods: Video & Texture A.A. Efros 15-463: Computational Photography Alexei Efros, CMU, Fall 2010 Michel Gondry train video http://youtube.com/watch?v=ques1bwvxga Weather Forecasting for Dummies
More informationCS201: Computer Vision Introduction to Tracking
CS201: Computer Vision Introduction to Tracking John Magee 18 November 2014 Slides courtesy of: Diane H. Theriault Question of the Day How can we represent and use motion in images? 1 What is Motion? Change
More informationWikipedia - Mysid
Wikipedia - Mysid Erik Brynjolfsson, MIT Filtering Edges Corners Feature points Also called interest points, key points, etc. Often described as local features. Szeliski 4.1 Slides from Rick Szeliski,
More informationOptic Flow and Basics Towards Horn-Schunck 1
Optic Flow and Basics Towards Horn-Schunck 1 Lecture 7 See Section 4.1 and Beginning of 4.2 in Reinhard Klette: Concise Computer Vision Springer-Verlag, London, 2014 1 See last slide for copyright information.
More informationMotion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation
Motion Sohaib A Khan 1 Introduction So far, we have dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequence of images taken at different time intervals. Motion
More informationStep-by-Step Model Buidling
Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure
More informationVisual Tracking (1) Tracking of Feature Points and Planar Rigid Objects
Intelligent Control Systems Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationImage Processing Techniques and Smart Image Manipulation : Texture Synthesis
CS294-13: Special Topics Lecture #15 Advanced Computer Graphics University of California, Berkeley Monday, 26 October 2009 Image Processing Techniques and Smart Image Manipulation : Texture Synthesis Lecture
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely Lecture 4: Harris corner detection Szeliski: 4.1 Reading Announcements Project 1 (Hybrid Images) code due next Wednesday, Feb 14, by 11:59pm Artifacts due Friday, Feb
More informationBut, vision technology falls short. and so does graphics. Image Based Rendering. Ray. Constant radiance. time is fixed. 3D position 2D direction
Computer Graphics -based rendering Output Michael F. Cohen Microsoft Research Synthetic Camera Model Computer Vision Combined Output Output Model Real Scene Synthetic Camera Model Real Cameras Real Scene
More informationOptic Flow and Motion Detection
Optic Flow and Motion Detection Computer Vision and Imaging Martin Jagersand Readings: Szeliski Ch 8 Ma, Kosecka, Sastry Ch 4 Image motion Somehow quantify the frame-to to-frame differences in image sequences.
More informationVideo Textures. Arno Schödl Richard Szeliski David H. Salesin Irfan Essa. presented by Marco Meyer. Video Textures
Arno Schödl Richard Szeliski David H. Salesin Irfan Essa presented by Marco Meyer Motivation Images lack of dynamics Videos finite duration lack of timeless quality of image Motivation Image Video Texture
More information3D Vision. Viktor Larsson. Spring 2019
3D Vision Viktor Larsson Spring 2019 Schedule Feb 18 Feb 25 Mar 4 Mar 11 Mar 18 Mar 25 Apr 1 Apr 8 Apr 15 Apr 22 Apr 29 May 6 May 13 May 20 May 27 Introduction Geometry, Camera Model, Calibration Features,
More informationData-driven methods: Video & Texture. A.A. Efros
Data-driven methods: Video & Texture A.A. Efros CS194: Image Manipulation & Computational Photography Alexei Efros, UC Berkeley, Fall 2014 Michel Gondry train video http://www.youtube.com/watch?v=0s43iwbf0um
More informationVisual Tracking (1) Feature Point Tracking and Block Matching
Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationWhat have we leaned so far?
What have we leaned so far? Camera structure Eye structure Project 1: High Dynamic Range Imaging What have we learned so far? Image Filtering Image Warping Camera Projection Model Project 2: Panoramic
More informationLocal Image Features
Local Image Features Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial This section: correspondence and alignment
More informationFeature Based Registration - Image Alignment
Feature Based Registration - Image Alignment Image Registration Image registration is the process of estimating an optimal transformation between two or more images. Many slides from Alexei Efros http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html
More informationMosaics. Today s Readings
Mosaics VR Seattle: http://www.vrseattle.com/ Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html Today s Readings Szeliski and Shum paper (sections
More informationLocal Image Features
Local Image Features Computer Vision Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Flashed Face Distortion 2nd Place in the 8th Annual Best
More informationCPSC 425: Computer Vision
1 / 49 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 49 Menu March 10, 2016 Topics: Motion
More informationUsing temporal seeding to constrain the disparity search range in stereo matching
Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department
More informationVideo Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin
Literature Survey Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This literature survey compares various methods
More informationVisual Tracking (1) Pixel-intensity-based methods
Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationImage warping and stitching
Image warping and stitching May 4 th, 2017 Yong Jae Lee UC Davis Last time Interactive segmentation Feature-based alignment 2D transformations Affine fit RANSAC 2 Alignment problem In alignment, we will
More informationFeature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1
Feature Detection Raul Queiroz Feitosa 3/30/2017 Feature Detection 1 Objetive This chapter discusses the correspondence problem and presents approaches to solve it. 3/30/2017 Feature Detection 2 Outline
More informationImage Stitching. Slides from Rick Szeliski, Steve Seitz, Derek Hoiem, Ira Kemelmacher, Ali Farhadi
Image Stitching Slides from Rick Szeliski, Steve Seitz, Derek Hoiem, Ira Kemelmacher, Ali Farhadi Combine two or more overlapping images to make one larger image Add example Slide credit: Vaibhav Vaish
More informationComputer Vision I - Filtering and Feature detection
Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image
More informationWhy is computer vision difficult?
Why is computer vision difficult? Viewpoint variation Illumination Scale Why is computer vision difficult? Intra-class variation Motion (Source: S. Lazebnik) Background clutter Occlusion Challenges: local
More informationFi#ng & Matching Region Representa3on Image Alignment, Op3cal Flow
Fi#ng & Matching Region Representa3on Image Alignment, Op3cal Flow Lectures 5 & 6 Prof. Fergus Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros. Facebook 360 photos Panoramas How do we build
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline
1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation
More information