
1 Motion: Introduction

We are interested in the visual information that can be extracted from the spatial and temporal changes occurring in an image sequence. An image sequence consists of a series of images (frames) acquired at consecutive discrete time instants, from the same or different viewpoints.

2 Motion: Introduction (cont'd)

Image content at different frames varies because of the relative motion between the camera and the scene. We are dealing with either a static scene and a moving camera, a stationary camera and a dynamic scene, or both. The goal of motion analysis is to characterize the relative motion and use it as a visual cue for object detection, scene segmentation, or 3D structure reconstruction. Motion is important since it represents spatial changes over time, which is crucial for understanding the dynamic world.

3 Motion: Introduction (cont'd)

Motion analysis can answer the following questions:
- How many moving objects are there?
- In which directions are they moving?
- How fast are they moving?
- What are the structures of the moving objects?

4 Tasks in Motion Analysis

- Point correspondences between two neighboring frames.
- 3D motion and 3D structure estimation from the matched points.
- Motion-based segmentation: divide a scene into regions, each of which may be characterized by different motion characteristics.

The first two tasks of motion analysis are very similar to the stereo problem. They are different, however.

5 Motion Analysis vs. Stereo

The main differences between the two are:
- Much smaller disparities for motion, due to small spatial differences between consecutive frames.
- The relative 3D movement between the camera and the scene may be caused by multiple 3D rigid transformations, since the scene cannot be modeled as a single rigid entity. It may contain multiple rigid objects with different motion characteristics. See Fig. 8.3, where the foreground and the background move in different directions: the foreground (toys) moves toward the camera while the background moves

away from the camera.

7 Motion Analysis vs. Stereo (cont'd)

Motion analysis can take advantage of the small temporal and spatial changes between two consecutive frames. Specifically, the past history of the features' motion and appearance (intensity distribution pattern) may be used to predict the current motion. This is referred to as tracking. But the reconstruction is often not accurate due to the small baseline distance between two consecutive frames. Like stereo, the main bottleneck for motion analysis is to determine the correspondences. There are two methods: differential methods based on time derivatives (image flow) and matching methods based on tracking. The former leads to dense correspondences at each pixel while

the latter leads to sparse correspondences.

9 Basics in Motion Analysis

For the subsequent discussion, we assume there is only one rigid relative motion between the camera and the object. The motion field is defined as the 2D vector field of velocities of the image points, induced by the relative 3D motion between the viewing camera and the observed scene. It can also be interpreted as the projection of the 3D velocity field on the image plane.

10 [Figure: the motion field. A 3D point P moving with velocity V is imaged by camera C on image plane I at point p, which moves with image velocity v.]

11 Let P = [X Y Z]^t be a 3D point relative to the camera frame and p = [x y f]^t be the projection of P in the image frame. Hence

p = (f / Z) P

Let's say the camera moves with some translational movement T and a rotational movement ω = (ω_x, ω_y, ω_z)^t; the relative motion between the camera and P can then be characterized as

V = −T − ω × P    (1)

12 Expanding equation (1) componentwise:

V = [ −T_x + ω_z Y − ω_y Z,
      −T_y + ω_x Z − ω_z X,
      −T_z + ω_y X − ω_x Y ]^t    (2)

The motion field in the image results from projecting V onto the image plane. The motion of image point p is characterized as follows. Given p = (f / Z) P, we have

v = dp/dt = f (Z V − V_z P) / Z²

13 Let ω = (ω_x, ω_y, ω_z)^t, v = (v_x, v_y)^t, and T = (T_x, T_y, T_z)^t. We have

v_x = (T_z x − T_x f) / Z − ω_y f + ω_z y + ω_x x y / f − ω_y x² / f
v_y = (T_z y − T_y f) / Z + ω_x f − ω_z x − ω_y x y / f + ω_x y² / f    (3)

Note the motion field is the sum of two components, one for translation only and the other for rotation only, i.e.,

v^T_x = (T_z x − T_x f) / Z

14 v^T_y = (T_z y − T_y f) / Z    (4)

The translational motion component is inversely proportional to the depth Z.

v^ω_x = −ω_y f + ω_z y + ω_x x y / f − ω_y x² / f    (5)
v^ω_y = ω_x f − ω_z x − ω_y x y / f + ω_x y² / f    (6)

The angular motion component does not carry information about object depth (no Z in the equations). Structures are recovered based on the translational motion.
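To make equations (3)-(6) concrete, here is a minimal numpy sketch (not from the text; the values of f, T, ω, and Z are illustrative) that evaluates the translational and rotational components of the motion field at an image point:

```python
import numpy as np

def motion_field(x, y, f, T, omega, Z):
    """Motion field at image point (x, y), per equations (3)-(6)."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational component: inversely proportional to depth Z.
    vxT = (Tz * x - Tx * f) / Z
    vyT = (Tz * y - Ty * f) / Z
    # Rotational component: carries no depth information (no Z).
    vxW = -wy * f + wz * y + wx * x * y / f - wy * x**2 / f
    vyW = wx * f - wz * x - wy * x * y / f + wx * y**2 / f
    return vxT + vxW, vyT + vyW

# Illustrative values: pure translation along the optical axis.
print(motion_field(10.0, 5.0, f=500.0, T=(0.0, 0.0, 1.0),
                   omega=(0.0, 0.0, 0.0), Z=100.0))
```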

15 Motion Field: Pure Translation

Under a pure relative translation movement between the camera and the scene, i.e., ω = 0, we have

v_x = (T_z x − T_x f) / Z
v_y = (T_z y − T_y f) / Z    (7)

Assume T_z ≠ 0 and define p_0 = (x_0, y_0)^t such that

x_0 = f T_x / T_z

16 y_0 = f T_y / T_z

Equation 7 can be rewritten as

v_x = (x − x_0) T_z / Z
v_y = (y − y_0) T_z / Z    (8)

From equation 8, we can conclude:
- the motion field vectors for pure 3D translation are radial, all going through the point p_0;
- the motion field magnitude is inversely proportional

17 to depth Z but proportional to the distance to p_0;
- the motion field radiates from a common origin p_0: if T_z > 0 (moving away from the camera), it radiates towards p_0, and away from p_0 otherwise (T_z < 0, moving toward the camera);
- when T_z = 0, i.e., the movement is limited to the x and y directions, the motion field vectors are parallel to each other, since v_x = −f T_x / Z and v_y = −f T_y / Z.

18 The above conclusions allow us to infer 3D motion from the 2D motion field, given some knowledge about the 3D motion (such as it being translational).
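As one illustration of such inference, under pure translation the point p_0 can be recovered from the flow field by least squares, since equation 8 makes every flow vector collinear with (x − x_0, y − y_0). A sketch (the synthetic test field below is hypothetical):

```python
import numpy as np

def estimate_foe(pts, flows):
    """Least-squares estimate of the radial point p0 = (x0, y0).

    Each flow (vx, vy) at (x, y) satisfies vy*(x - x0) - vx*(y - y0) = 0,
    i.e., vy*x0 - vx*y0 = vy*x - vx*y; stack and solve for (x0, y0).
    """
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * pts[:, 0] - flows[:, 0] * pts[:, 1]
    p0, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p0

# Synthetic radial field about p0 = (40, 60); T_z/Z sets the scale/sign.
pts = np.random.rand(100, 2) * 128
flows = 0.05 * (pts - np.array([40.0, 60.0]))
print(estimate_foe(pts, flows))   # approximately [40, 60]
```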

19 Motion Field: Planar Motion

The relative planar motion between the camera and the scene induces a motion field that is a quadratic polynomial of the image coordinates. The same motion field can be produced by two different planar surfaces undergoing different 3D motions, due to a special symmetry of the polynomial coefficients. Therefore, 3D motion and structure recovery cannot be based on coplanar points.

20 Optical Flow

Optical flow is a vector field in the image that represents an approximation of the image motion field. Optical flow cannot be computed for motion fields orthogonal to the spatial image gradients.

21 [Figure: motion field vs. optical flow. A 3D point P with velocity V projects to image point p with motion field vector v; n is the intensity gradient direction and the optical flow v_o is the projection of v onto n.]

The image brightness constancy equation:

22 Let I_t(x, y) be the intensity of pixel (x, y) at time t. We assume the intensity of pixel (x, y) at time t+1, i.e., I_{t+1}(x, y), remains the same, i.e.,

dI/dt = 0

This constraint is completely satisfied only when 1) the motion is translational, or 2) the illumination direction is parallel to the angular velocity, for a Lambertian surface. This can be verified by assuming the Lambertian surface model

I = n^t L

23 Since the surface normal rotates with the object, dn/dt = ω × n. Hence

dI/dt = (dn/dt)^t L = (ω × n)^t L

which is 0 for all surface orientations n only when either ω = 0 or ω and L are parallel to each other.

We know I is a function of (x, y), which, in turn, are functions of time t: I(x(t), y(t), t). Hence

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) + ∂I/∂t = 0    (9)

24 ∂I/∂x and ∂I/∂y represent the spatial intensity gradient and ∂I/∂t represents the temporal intensity gradient. Hence we come to the image brightness constancy equation

(∇I)^t v + I_t = 0    (10)

∇I is called the image intensity gradient. This equation may be used to estimate the motion field v. Note this equation places no constraint on v when v is orthogonal to ∇I. So equation 10 can only determine the motion flow component in the direction of the intensity gradient, i.e., the projection of the motion field v in the gradient direction

(due to the dot product). This special motion field is called optical flow. So, optical flow is always parallel to the image gradient.
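A small sketch of the normal-flow computation implied by equation 10: given the spatial and temporal derivatives at a pixel, the only recoverable component is −I_t ∇I / ‖∇I‖² (how the derivatives are estimated is discussed later and assumed here):

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-9):
    """Flow component along the gradient, from (grad I)^t v + It = 0.

    Returns v_n = -It * grad(I) / |grad(I)|^2; the component orthogonal
    to the gradient is unconstrained (the aperture problem).
    """
    g2 = Ix * Ix + Iy * Iy
    if g2 < eps:                     # no gradient: no constraint on v
        return np.zeros(2)
    return -It * np.array([Ix, Iy]) / g2
```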

26 Aperture Problem Example

In this example, the motion is horizontal while the gradients are vertical, except at the vertical edges at the two ends. Optical flow is not detected except at the vertical edges. This may be used to do motion-based edge detection.


30 Optical Flow Estimation

To estimate the optical flow v, we need an additional constraint since equation 10 only provides one equation for 2 unknowns. For each image point p and an N × N neighborhood R, where p is the center, assume every point in the neighborhood has the same optical flow v (note this is a smoothness constraint and may not hold near the edges of the moving objects):

∇^t I(x, y) v(x, y) + I_t(x, y) = 0,   (x, y) ∈ R

31 v(x, y) can be estimated by minimizing

ε² = Σ_{(x,y) ∈ R} (∇^t I(x, y) v(x, y) + I_t(x, y))²

The least-squares solution for v(x, y) is

v(x, y) = (A^t A)^{-1} A^t b

where

A = [ ∇^t I(x_1, y_1)
      ∇^t I(x_2, y_2)
      ...
      ∇^t I(x_N, y_N) ]

b = −[I_t(x_1, y_1), I_t(x_2, y_2), ..., I_t(x_N, y_N)]^t

32 See the CONSTANT FLOW algorithm in Trucco's book. Note this technique is often called the Lucas-Kanade method. Its advantages include simplicity of implementation and needing only first-order image derivatives. The spatial and temporal image derivatives can be computed using a gradient operator (e.g., Sobel), often preceded by a Gaussian smoothing operation. Note the algorithm only applies to rigid motion. For non-rigid motion, there is a paper by Irani 99 that extends the method to non-rigid motion estimation. Note the algorithm can be improved by incorporating a weight with each point in the region R such that points

closer to the center receive more weight than points farther away from the center.
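A minimal numpy sketch of this CONSTANT FLOW / Lucas-Kanade solution over an N × N window, including the center-weighting suggested above (window size, Gaussian weights, and the singularity threshold are illustrative choices):

```python
import numpy as np

def lucas_kanade_at(Ix, Iy, It, r, c, n=5, sigma=1.0):
    """Solve the weighted least squares (A^t W A) v = A^t W b over an
    n x n window centered at pixel (r, c); returns v = (vx, vy)."""
    h = n // 2
    win = np.s_[r - h:r + h + 1, c - h:c + h + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    # Gaussian weights: points closer to the window center count more.
    yy, xx = np.mgrid[-h:h + 1, -h:h + 1]
    w = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)).ravel()
    AtWA = A.T @ (w[:, None] * A)
    if np.linalg.det(AtWA) < 1e-6:   # aperture problem: flow undetermined
        return np.zeros(2)
    return np.linalg.solve(AtWA, A.T @ (w * b))
```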

34 Additional Optical Flow Constraints

Besides assuming brightness constancy while objects are in motion, we can assume a smoothness constraint on the motion field, i.e., the motion field projections in x, y, and t remain the same over a small neighborhood. Mathematically, these constraints can be formulated as follows: d²I/(dt dx) = 0, d²I/(dt dy) = 0, d²I/(dt dt) = 0. Applying them to equation 9 yields three additional optical flow constraints:

v_x I_xx + v_y I_yx + I_tx = 0
v_x I_xy + v_y I_yy + I_ty = 0
v_x I_xt + v_y I_yt + I_tt = 0    (11)

35 The four optical flow constraints are

v_x I_x + v_y I_y + I_t = 0
v_x I_xx + v_y I_yx + I_tx = 0
v_x I_xy + v_y I_yy + I_ty = 0
v_x I_xt + v_y I_yt + I_tt = 0    (12)

This yields four equations for two unknowns v = (v_x, v_y)^t. They can therefore be solved using a linear least-squares method by minimizing

‖Av − b‖²    (13)

36 where

A = [ I_x   I_y
      I_xx  I_xy
      I_yx  I_yy
      I_tx  I_ty ]

b = −[I_t, I_xt, I_yt, I_tt]^t

v = (A^t A)^{-1} A^t b    (14)

37 Computing Image Derivatives

The traditional approach to computing intensity derivatives involves numerical approximation of continuous differentiation (see appendix A.2). We propose to compute the image derivatives analytically using a cubic facet model, i.e., an analytical and continuous function that approximates the image intensity surface at (x, y, t). This yields more robust and accurate image derivative estimation due to noise suppression via the smoothing effected by function approximation.

38 Cubic Facet Model

Assume the gray level pattern of each small block in an image sequence is ideally a canonical 3D cubic polynomial of x, y, t:

I(x, y, t) = a_1 + a_2 x + a_3 y + a_4 t + a_5 x² + a_6 xy + a_7 y² + a_8 yt + a_9 t² + a_10 xt + a_11 x³ + a_12 x²y + a_13 xy² + a_14 y³ + a_15 y²t + a_16 yt² + a_17 t³ + a_18 x²t + a_19 xt² + a_20 xyt,   (x, y, t) ∈ R    (15)

The solution for the coefficients a = (a_1, a_2, ..., a_20)^t in the least-squares sense minimizes ‖Da − J‖² and is

39 expressed by

a = (D^t D)^{-1} D^t J    (16)

where each row of D contains the 20 monomials (1, x_i, y_j, t_k, ..., x_i y_j t_k) of equation 15 evaluated at one voxel of the neighborhood, and J = [I_1, I_2, ..., I_N]^t, where I_n is the intensity value at (x_i, y_j, t_k). Note that while performing the surface fitting, the surface should be centered at the pixel (voxel) being considered and use

40 a local coordinate system, with the center as its origin. So, for a 3×3×3 neighborhood, the coordinates for x, y, and t are each {−1, 0, 1}.

41 Cubic Facet Model (cont'd)

Image derivatives are readily available from the cubic facet model. Substituting the a_i's into Eq. (13) yields the OFCE's we actually use:

A = [ a_2    a_3
      2a_5   a_6
      a_6    2a_7
      a_10   a_8 ]

b = −[a_4, a_10, a_8, 2a_9]^t    (17)

42 Optical Flow Estimation Algorithm

The input is a sequence of N frames (N = 5 is typical). Let Q be a square region of L × L (typically L = 5). The steps for estimating optical flow using the facet method can be summarized as follows.
- Select an image as the central frame (normally the 3rd frame if 5 frames are used).
- For each pixel (excluding the boundary pixels) in the central frame:
  - Perform a cubic facet model fit using equation 15 and obtain the 20 coefficients using equation 16.
  - Derive the image derivatives using the coefficients and

43 construct the A matrix and b vector using equation 17.
  - Compute the image flow using equation 14.
  - Mark each point with an arrow indicating its flow if its flow magnitude is larger than a threshold.
- Set the optical flow vectors to zero for locations where the matrix A^t A is singular (or has a small determinant).
A numpy sketch of this pipeline at a single pixel follows.
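This sketch assumes a small (t, y, x) intensity block around the pixel is given: it builds D from the 20 monomials of equation 15 over local coordinates centered at the origin, solves equation 16, assembles A and b per equation 17, and solves equation 14 (block size and the determinant threshold are illustrative):

```python
import numpy as np

def facet_flow(block):
    """Cubic-facet optical flow at the center of a (t, y, x) block,
    e.g. shape (5, 5, 5) with local coordinates -2..2 per dimension."""
    nt, ny, nx = block.shape
    t, y, x = np.meshgrid(np.arange(nt) - nt // 2,
                          np.arange(ny) - ny // 2,
                          np.arange(nx) - nx // 2, indexing='ij')
    x, y, t = x.ravel(), y.ravel(), t.ravel()
    # The 20 monomials of equation (15), in the order a1..a20.
    D = np.stack([np.ones_like(x), x, y, t, x*x, x*y, y*y, y*t, t*t, x*t,
                  x**3, x*x*y, x*y*y, y**3, y*y*t, y*t*t, t**3,
                  x*x*t, x*t*t, x*y*t], axis=1).astype(float)
    a, *_ = np.linalg.lstsq(D, block.astype(float).ravel(), rcond=None)
    a = np.concatenate([[0.0], a])          # shift to 1-based indexing
    # Equation (17): derivatives at the origin from the coefficients.
    A = np.array([[a[2],     a[3]],
                  [2 * a[5], a[6]],
                  [a[6],     2 * a[7]],
                  [a[10],    a[8]]])
    b = -np.array([a[4], a[10], a[8], 2 * a[9]])
    AtA = A.T @ A
    if np.linalg.det(AtA) < 1e-8:           # singular: report zero flow
        return np.zeros(2)
    return np.linalg.solve(AtA, A.T @ b)    # equation (14)
```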

44 Optical Flow Estimation Examples

An example of translational movement:


46 An example of rotational movement:


48 Motion Analysis from Two Frames

If we are limited to two frames for motion analysis, we can perform motion analysis via a method that combines optical flow estimation with point matching techniques. The procedure consists of the following steps. For each small region R in the first image:
- Estimate the optical flow using equation 8.25.
- Produce a new region R' by warping R based on the estimated optical flow.
- Compute the correlation between R' and the corresponding region in the second image. If the correlation coefficient is large enough, the estimated optical flow

49 is correct. We then move on to the next location in the image. The result is an optical flow vector for each feature point. Refer to the feature point matching algorithm in the textbook.
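A hedged sketch of the verification step: displace the region by the estimated (rounded) flow, which simplifies the warp to a pure shift, and score it against the second frame with normalized cross-correlation (region size and acceptance threshold are illustrative; the caller must keep the windows inside both images):

```python
import numpy as np

def verify_flow(img1, img2, r, c, v, n=11):
    """Normalized cross-correlation between region R of img1, centered at
    (r, c), and the region of img2 displaced by the flow v = (vx, vy)."""
    h = n // 2
    R = img1[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    dr, dc = int(round(v[1])), int(round(v[0]))   # shift approximates the warp
    Rp = img2[r + dr - h:r + dr + h + 1,
              c + dc - h:c + dc + h + 1].astype(float)
    R, Rp = R - R.mean(), Rp - Rp.mean()
    denom = np.sqrt((R * R).sum() * (Rp * Rp).sum())
    return (R * Rp).sum() / denom if denom > 0 else 0.0

# Accept the flow estimate if, e.g., verify_flow(...) > 0.9,
# then move on to the next location in the image.
```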

50 Motion Analysis from Multiple Frames

Motion analysis over multiple image frames is conducted via tracking. Tracking is a process that matches feature points from frame to frame.

51 Kalman Filtering

A popular technique for feature tracking is called Kalman filtering. It is a recursive procedure that estimates the position of a point in the next frame, as well as its uncertainty, based on the estimates at the previous time. Kalman filtering assumes: 1) a linear state model; 2) Gaussian uncertainty.

52 Kalman Filtering (cont'd)

Let's look at tracking a point p_t = (x_t, y_t)^t, where t represents the time instant. Let the velocity be v_t = (v_{x,t}, v_{y,t})^t. Let the state at t be represented by the point's location and velocity, i.e., s_t = [x_t, y_t, v_{x,t}, v_{y,t}]^t. The goal here is to compute the state vector from frame to frame. More specifically, given s_t, estimate s_{t+1}.

53 Kalman Filtering (cont'd)

According to the theory of Kalman filtering, s_{t+1}, the state vector at the next time frame t+1, linearly relates to the current state s_t by the system model as follows:

s_{t+1} = Φ s_t + w_t    (18)

where Φ is the state transition matrix and w_t represents the system perturbation, normally distributed as w_t ~ N(0, Q). The state model describes the temporal part of the system. If we assume the feature movement between two

54 consecutive frames is small enough to consider the motion of feature positions from frame to frame uniform, the state transition matrix can be parameterized as

Φ = [ 1 0 1 0
      0 1 0 1
      0 0 1 0
      0 0 0 1 ]

The measurement model in the form needed by the Kalman filter is

z_t = H s_t + v_t    (19)

55 where the matrix H relates the current state to the current measurement, and v_t represents the measurement uncertainty, normally distributed as v_t ~ N(0, R). The measurement model describes the spatial features of the system. For tracking, the measurement is obtained via a feature detection process. For simplicity, and since z_t only involves position, H can be represented as

H = [ 1 0 0 0
      0 1 0 0 ]

56 Kalman Filtering (cont'd)

Kalman filtering consists of state prediction and state updating. State prediction is performed using the state model, while state updating is performed using the measurement model.

57 [Figure: Step 1, predicting. From the feature detected at (x_t, y_t) at time t, the filter predicts the feature position (x⁻_{t+1}, y⁻_{t+1}) and the search area Σ_{t+1} at time t+1.]

58 Kalman Filtering (cont'd)

Prediction. Given the current state s_t and its covariance matrix Σ_t, state prediction involves two steps: state projection (s⁻_{t+1}) and error covariance estimation (Σ⁻_{t+1}), as summarized in equations 20 and 21:

s⁻_{t+1} = Φ s_t    (20)
Σ⁻_{t+1} = Φ Σ_t Φ^t + Q    (21)

Updating.

59 The first step is to compute the Kalman gain K_{t+1}:

K_{t+1} = Σ⁻_{t+1} H^t (H Σ⁻_{t+1} H^t + R)^{-1}    (22)

The gain matrix K is a weighting factor that determines the contributions of the measurement z_{t+1} and the prediction H s⁻_{t+1} to the posterior state estimate s_{t+1}. The second step is to actually measure the process to obtain z_{t+1}, and then to generate a posteriori state estimate s_{t+1} by incorporating the measurement into equation 18. The feature detector (e.g., thresholding or correlation) searches the region determined by the covariance matrix Σ⁻_{t+1} to find the feature point at time t+1.

60 During implementation, the search region, centered at the predicted location, may be a square region of size 3σ_x × 3σ_y, where σ_x and σ_y are the two eigenvalues of the first 2 × 2 submatrix of Σ⁻_{t+1}.
- Correlation can be used to search the region to identify the location in the region that best matches the feature detected in the previous time frame.
- The detected point is then combined with the prediction to produce the final estimate.
- The search region automatically changes based on Σ⁻_{t+1}.
The third step is to combine s⁻_{t+1} with z_{t+1} to obtain the final state estimate s_{t+1}:

61 s_{t+1} = s⁻_{t+1} + K_{t+1} (z_{t+1} − H s⁻_{t+1})    (23)

The final step is to obtain the posteriori error covariance estimate. It is computed as follows:

Σ_{t+1} = (I − K_{t+1} H) Σ⁻_{t+1}    (24)

[Figure: Step 2, measurement and updating. The position z_{t+1} detected inside the search region is combined with the predicted position (x⁻_{t+1}, y⁻_{t+1}) to produce the final position estimate and its uncertainty.]

62 After each time and measurement update pair, the Kalman filter recursively conditions the current estimate on all of the past measurements, and the process is repeated with the previous posterior estimates used to project, or predict, the new a priori estimate. The trace of the state covariance matrix is often used to indicate the uncertainty of the estimated position.

63 Kalman Filtering Initialization

In order for the Kalman filter to work, it needs to be initialized. The Kalman filter is activated after the feature has been detected in two frames i and i+1. The initial state vector s_0 can be specified as

x_0 = x_{i+1}
y_0 = y_{i+1}
v_{x,0} = x_{i+1} − x_i
v_{y,0} = y_{i+1} − y_i

The initial covariance matrix Σ_0 is specified as follows.

64 Σ_0 is usually initialized to very large values. It should decrease and reach a stable state after a few iterations. We also need to initialize the system and measurement error covariance matrices Q and R. We take the standard deviation of the positional system error to be 4 pixels for both the x and y directions. We further assume the standard deviation of the velocity error to be 2 pixels/frame. Therefore, the state covariance matrix can be quantified

65 as

Q = diag(4², 4², 2², 2²) = diag(16, 16, 4, 4)

Similarly, we can also assume the error for the measurement model to be 2 pixels for both the x and y directions. Thus,

R = diag(2², 2²) = diag(4, 4)

Both Q and R are assumed to be stationary (constant).
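Putting the pieces together, a minimal constant-velocity tracker using the Φ, H, Q, and R above (the feature detector producing each measurement z is assumed given):

```python
import numpy as np

Phi = np.array([[1, 0, 1, 0],        # uniform-motion state transition
                [0, 1, 0, 1],
                [0, 0, 1, 0],
                [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],          # measurement: position only
              [0, 1, 0, 0]], float)
Q = np.diag([16.0, 16.0, 4.0, 4.0])  # 4 px position, 2 px/frame velocity std
R = np.diag([4.0, 4.0])              # 2 px measurement std

def kalman_step(s, Sigma, z):
    """One predict/update cycle, equations (20)-(24)."""
    s_pred = Phi @ s                                        # (20)
    S_pred = Phi @ Sigma @ Phi.T + Q                        # (21)
    K = S_pred @ H.T @ np.linalg.inv(H @ S_pred @ H.T + R)  # (22)
    s_new = s_pred + K @ (z - H @ s_pred)                   # (23)
    S_new = (np.eye(4) - K @ H) @ S_pred                    # (24)
    return s_new, S_new

# Per frame: z = detected feature position (from the detector), then
# s, Sigma = kalman_step(s, Sigma, z)
```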

66 Limitations of Kalman Filtering

- It assumes the state dynamics (state transition) can be modeled as linear.
- It assumes the state vector has a uni-modal Gaussian distribution. It therefore cannot track multiple feature points and requires multiple Kalman filters to track multiple feature points. It cannot track non-Gaussian distributed features.

To overcome these limitations, conventional Kalman filtering has been extended to Extended Kalman Filtering and Unscented Kalman Filtering. For details, see the link to Kalman filtering on the course website.

67 A newer method based on sampling, called Particle Filtering, has been used successfully for tracking multi-modal and non-Gaussian distributed objects.

68 Bayesian Network and Markov Chain Interpretation for Kalman Filtering

[Figure: chain S(t−1) → S(t) → S(t+1) with observations Z(t−1), Z(t), Z(t+1): the Bayesian network and Markov chain interpretation of Kalman filtering.]

Bayesian network: causal relationships between the current state and the previous state, and between the current state and the current observation. Markov process: given S(t), S(t+1) is independent of S(t−1), i.e., S(t+1) is only related to S(t).

69 3D Motion and Structure from the Motion Field

Given the motion field (optical flow) estimated from an image sequence, the goal is to recover the 3D shape of the 3D objects and their 3D motion relative to the viewing camera.

70 3D Motion and Structure from a Sparse Motion Field

We want to reconstruct the 3D structure and motion from the motion field generated by a sparse set of feature points. Among many methods, we discuss the factorization method.

71 Factorization Method

Assumptions: 1) the camera is orthographic; 2) M non-coplanar 3D points and N images (N ≥ 3).

72 Factorization Method

Let p_ij = (c_ij, r_ij) denote the jth image point on the ith image frame. Let c̄_i and r̄_i be the centroids of the image points on the ith image frame. Let P_j = (x_j, y_j, z_j) be the jth 3D point relative to the object frame and let P̄ be the centroid of the 3D points. Let

c'_ij = c_ij − c̄_i
r'_ij = r_ij − r̄_i
P'_j = P_j − P̄

Due to the orthographic projection assumption, we have

73 [ c'_ij ] = [ r_i1 ] P'_j
   [ r'_ij ]   [ r_i2 ]

where r_i1 and r_i2 are the first two rows of the rotation matrix between camera frame i and the object frame. By stacking rows and columns, the equations can be written compactly in the form

W = R S

where R is a 2N × 3 matrix and S is 3 × M, and

74 R = [ r_11
          r_12
          ...
          r_N1
          r_N2 ]

S = [P'_1  P'_2  ...  P'_M]

R gives the relative orientation of each frame to the object frame, while S contains the 3D coordinates of the feature points.

75 W_{2N×M} = [ c_11  c_12  ...  c_1M
               r_11  r_12  ...  r_1M
               ...
               c_N1  c_N2  ...  c_NM
               r_N1  r_N2  ...  r_NM ]

According to the rank theorem, the matrix W (often called the registered measurement matrix) in the ideal case has a maximum rank of 3. This is evident since the columns of

W are linearly dependent on each other, due to the orthographic projection assumption and the same rotation matrix applying to all image points in the same frame.

77 Factorization Method (cont'd)

In reality, due to image noise, W may have a rank of more than 3. To impose the rank theorem, we can perform an SVD on W:

W = U D V^t

and change the diagonal elements of D to zero except for the 3 largest ones:
- Remove the rows and columns of D that do not contain the three largest singular values, yielding D' of size 3 × 3. D' is usually the first 3 × 3 submatrix of D.
- Keep the three columns of U that correspond to the

78 three largest singular values of D (they are usually the first three columns) and remove the remaining columns, yielding U', which has a dimension of 2N × 3.
- Keep the three columns of V that correspond to the three largest singular values of D (they are usually the first three columns) and remove the remaining columns, yielding V', which has a dimension of M × 3.

Then

W' = U' D' V'^t

W' is closest to W, yet still satisfies the rank theorem.

79 Factorization Method (cont'd)

Given W', we can perform a decomposition to obtain estimates for S and R. Based on the SVD W' = U' D' V'^t, we have

R̂ = U' (D')^{1/2}
Ŝ = (D')^{1/2} V'^t

and it is apparent that W' = R̂ Ŝ. This solution is, however, determined only up to an affine transformation, since for any invertible 3 × 3 matrix Q, R = R̂ Q and S = Q^{-1} Ŝ also satisfy the equation. We can find the matrix Q using the constraint that, starting from the first row, every successive two rows of R are orthonormal (see

80 eq. 8.39), i.e.,

r^t_{i1} Q Q^t r_{i1} = 1
r^t_{i2} Q Q^t r_{i2} = 1
r^t_{i1} Q Q^t r_{i2} = 0,   i = 1, 2, ..., N

Given N frames and using these equations, we can linearly solve for A = Q Q^t subject to the constraint that A is symmetric (i.e., only 6 unknowns). Given A, Q can be obtained via Cholesky factorization. Given Q, the final motion estimate is R = R̂ Q and the final structure estimate is S = Q^{-1} Ŝ.

81 Recent work by Kanade and Morris has extended this to other camera models, such as the affine and weak perspective projection models. The latest work by Oliensis (Dec. 2002, PAMI) introduced a new approach for SFM from only two images.

82 Factorization Method (cont'd)

The steps for structure from motion (SFM) using the factorization method can be summarized as follows:
- Given the image coordinates of the feature points at the different frames, construct the W matrix.
- Compute W' from W using SVD.
- Compute R̂ and Ŝ from W' using SVD.
- Solve for the matrix Q linearly.
- R = R̂ Q and S = Q^{-1} Ŝ.

See the algorithm on page 208 of the textbook.
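A numpy sketch of the affine part of these steps (registration, rank-3 truncation, and the R̂, Ŝ decomposition); the metric upgrade that solves for Q from the orthonormality constraints and Cholesky factorization is omitted:

```python
import numpy as np

def factorization_affine(W):
    """Affine SFM from a 2N x M measurement matrix W.

    Rows of W hold the image coordinates (c_ij over all frames, then
    r_ij); see slides 72-75. Returns R_hat (2N x 3) and S_hat (3 x M),
    defined only up to an invertible 3x3 matrix Q, which the omitted
    metric upgrade would fix.
    """
    # Register: subtract each row's mean (the per-frame centroids).
    W = W - W.mean(axis=1, keepdims=True)
    U, d, Vt = np.linalg.svd(W, full_matrices=False)
    # Rank theorem: keep only the three largest singular values
    # (this realizes W' = U' D' V'^t).
    sqrtD = np.diag(np.sqrt(d[:3]))
    R_hat = U[:, :3] @ sqrtD        # motion: frame orientations, up to Q
    S_hat = sqrtD @ Vt[:3, :]       # structure: 3D points, up to Q^-1
    return R_hat, S_hat
```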

83 Factorization Method (cont'd)

Due to the orthographic projection assumption, the translational motion is determined as follows. The component of the translation parallel to the image plane is proportional to the frame-to-frame motion of the centroid of the data points on the image plane. The translation component parallel to the camera optical axis cannot be determined.

84 3D Motion and Structure from a Dense Motion Field

Given an optical flow field and the intrinsic parameters of the viewing camera, recover the 3D motion and structure of the observed scene with respect to the camera reference frame.

85 3D Motion Parallax

The relative motion field of two instantaneously coincident points does not depend on the rotational component of the motion. Let two 3D points P = [X Y Z]^t and P' = [X' Y' Z']^t be projected onto the image points p and p'. The corresponding motion vector for each point may be expressed as

v_x = v^T_x + v^ω_x
v_y = v^T_y + v^ω_y

86 v'_x = v'^T_x + v'^ω_x
v'_y = v'^T_y + v'^ω_y

If, at some time instant, p and p' are coincident, i.e., p = p' = [x y]^t, then the relative motion between them can be expressed as

Δv_x = v^T_x − v'^T_x = (x − f T_x / T_z)(T_z / Z − T_z / Z')
Δv_y = v^T_y − v'^T_y = (y − f T_y / T_z)(T_z / Z − T_z / Z')

It is clear that 1) the relative motion vector (Δv_x, Δv_y)

87 does not depend on the rotational component of the motion; 2) the relative motion vector points in the direction of p − p_0, where p_0 = (x_0, y_0) = (f T_x / T_z, f T_y / T_z) (fig. 8.5); 3) Δv_x / Δv_y = (x − x_0) / (y − y_0); 4) (v_x, v_y)^t · ((y − y_0), −(x − x_0))^t = v^ω_x (y − y_0) − v^ω_y (x − x_0), where ((y − y_0), −(x − x_0)) is orthogonal to (Δv_x, Δv_y).

88 Translation Direction Determination

Given two nearby image points p and p', the relative motion field is

Δv_x = v^T_x − v'^T_x = (x − f T_x / T_z)(T_z / Z − T_z / Z') + T_z (x − x') / Z'
Δv_y = v^T_y − v'^T_y = (y − f T_y / T_z)(T_z / Z − T_z / Z') + T_z (y − y') / Z'

The second terms on the right side of the above equations are negligible if the two points are very close, producing the motion parallax.

89 Translation Direction Determination (cont'd)

Given a point p and all its close neighbors, we can compute the relative motion field between p and each of its neighbors. For each relative motion, we have

Δv_x / Δv_y = (x_i − x_0) / (y_i − y_0)

where (x_i, y_i) is the ith neighbor of p. A least-squares framework can be set up to solve for p_0 = (x_0, y_0), which also leads to the solution for the direction of the translational motion (T_x, T_y, T_z).

90 Rotation and Depth Determination

Forming the pointwise dot product between the optical flow at each point p_i = (x_i, y_i) and the vector [y_i − y_0, −(x_i − x_0)]^t yields

v_⊥ = v^ω_x (y_i − y_0) − v^ω_y (x_i − x_0)

where v^ω_x and v^ω_y are the rotational components of the 3D motion. They are functions of (ω_x, ω_y, ω_z), as shown in equation 8.7 of the textbook. Given a set of points, we have several such equations, which allow us to solve for (ω_x, ω_y, ω_z) in a linear least-squares framework. Given (ω_x, ω_y, ω_z) and (T_x, T_y, T_z), the depth Z can be recovered using equation 8.7 in the textbook.

91 3D Motion-based Segmentation

Given a sequence of images taken by a fixed camera, find the regions of the image corresponding to the different moving objects.

92 3D Motion-based Segmentation (cont'd)

Techniques for motion-based segmentation use:
- optical flow
- image differencing

Optical flow allows us to detect the different moving objects as well as to infer the objects' moving directions. The optical flow estimates may not be accurate near the boundaries between moving objects. Image differencing is simple. It, however, cannot infer the 3D motion of the objects.

93 Particle Filtering for Tracking

Let s_t represent the state vector and z_t the measurement (resulting from a feature detector) at time t. Let Z_t = (z_t, z_{t−1}, ..., z_0) be the measurement history and S_t = (s_t, s_{t−1}, ..., s_0) be the state history. The goal is to determine the posterior probability distribution P(s_t | Z_t), from which we will be able to determine a particular (or a set of) s_t where P(s_t | Z_t) is locally maximum.

94 Particle Filtering for Tracking (cont'd)

P(s_t | Z_t) = P(s_t | z_t, z_{t−1}, ..., z_0) = k P(z_t | s_t) P(s_t | Z_{t−1})

where k is a normalizing constant to ensure the distribution integrates to one. We assume that, given s_t, z_t is independent of z_j, where j = t−1, t−2, ..., 0. P(z_t | s_t) represents the likelihood of the state s_t, and P(s_t | Z_{t−1}) is referred to as the temporal prior.

P(s_t | Z_{t−1}) = ∫ P(s_t | s_{t−1}, Z_{t−1}) p(s_{t−1} | Z_{t−1}) ds_{t−1}

95 = ∫ P(s_t | s_{t−1}) p(s_{t−1} | Z_{t−1}) ds_{t−1}

where we assume Z_{t−1} is independent of s_t given s_{t−1}. So P(s_t | Z_{t−1}) consists of two components: P(s_t | s_{t−1}), the temporal dynamics or state transition, and p(s_{t−1} | Z_{t−1}), the posterior state distribution at the previous time instant. This shows how the temporal dynamics and the posterior state distribution at the previous time instant propagate to the current time.
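A schematic particle-filter recursion implementing this propagation by sampling: push particles through the temporal dynamics P(s_t | s_{t−1}), reweight by the likelihood P(z_t | s_t), and resample. The Gaussian dynamics and likelihood below are placeholders for the actual models:

```python
import numpy as np

def particle_filter_step(particles, weights, z, dyn_std=2.0, meas_std=4.0):
    """One recursion of P(s_t | Z_t) = k P(z_t | s_t) P(s_t | Z_{t-1}).

    particles: (N, d) samples of the previous posterior; z: measurement.
    """
    N = len(particles)
    # 1. Propagate through the temporal dynamics P(s_t | s_{t-1}).
    particles = particles + np.random.normal(0.0, dyn_std, particles.shape)
    # 2. Reweight by the likelihood P(z_t | s_t); Gaussian placeholder.
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()        # k: the normalizing constant
    # 3. Resample to concentrate particles on high-probability states;
    #    this is what allows multi-modal, non-Gaussian posteriors.
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)
```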


Chapter 7: Computation of the Camera Matrix P Chapter 7: Computation of the Camera Matrix P Arco Nederveen Eagle Vision March 18, 2008 Arco Nederveen (Eagle Vision) The Camera Matrix P March 18, 2008 1 / 25 1 Chapter 7: Computation of the camera Matrix

More information

Epipolar Geometry and Stereo Vision

Epipolar Geometry and Stereo Vision Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x

More information

Unit 3 Multiple View Geometry

Unit 3 Multiple View Geometry Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover

More information

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou an edge image, nd line or curve segments present Given the image. in Line and Curves Detection 1 Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They

More information

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29,

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, 1209-1217. CS 4495 Computer Vision A. Bobick Sparse to Dense Correspodence Building Rome in

More information

Epipolar geometry contd.

Epipolar geometry contd. Epipolar geometry contd. Estimating F 8-point algorithm The fundamental matrix F is defined by x' T Fx = 0 for any pair of matches x and x in two images. Let x=(u,v,1) T and x =(u,v,1) T, each match gives

More information

Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world

More information

Local qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet:

Local qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet: Local qualitative shape from stereo without detailed correspondence Extended Abstract Shimon Edelman Center for Biological Information Processing MIT E25-201, Cambridge MA 02139 Internet: edelman@ai.mit.edu

More information

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry 55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Basilio Bona DAUIN Politecnico di Torino

Basilio Bona DAUIN Politecnico di Torino ROBOTICA 03CFIOR DAUIN Politecnico di Torino Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Robert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method

Robert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method Intro to Template Matching and the Lucas-Kanade Method Appearance-Based Tracking current frame + previous location likelihood over object location current location appearance model (e.g. image template,

More information