Specular Flow and the Recovery of Surface Structure

Stefan Roth and Michael J. Black
Department of Computer Science, Brown University, Providence, RI, USA

Abstract

In scenes containing specular objects, the image motion observed by a moving camera may be a mixture of optical flow resulting from diffuse reflectance (diffuse flow) and specular reflection (specular flow). Here, with few assumptions, we formalize the notion of specular flow, show how it relates to the 3D structure of the world, and develop an algorithm for estimating scene structure from 2D image motion. Unlike previous work on isolated specular highlights, we use two image frames and estimate the semi-dense flow arising from the specular reflections of textured scenes. We parametrically model the image motion of a quadratic surface patch viewed from a moving camera. The flow is modeled as a probabilistic mixture of diffuse and specular components, and the 3D shape is recovered using an Expectation-Maximization algorithm. Rather than treating specular reflections as noise to be removed or ignored, we show that the specular flow provides additional constraints on scene geometry that improve the estimation of 3D structure when compared with reconstruction from diffuse flow alone. We demonstrate this for a set of synthetic and real sequences of mixed specular-diffuse objects.

1. Introduction

We address the problem of recovering the image motion and 3D shape of a specular surface viewed by a moving camera. Previous work on this topic has focused on the shape or motion of isolated specular features or specular highlights. While local shape information can be recovered from both static [12] and moving highlights [19], these represent a subset of the general class of specular reflections. More generally, a specular surface may reflect a distorted view of the illumination field surrounding it. In this case the specular reflections may be dense or semi-dense in that they appear over a relatively large area. When the camera or scene is in motion, these specular reflections give rise to 2D image motion which we call specular flow [19, 30]. In this paper we formalize the notion of specular flow, relate it mathematically to the 3D structure of the scene, and combine diffuse and specular flow to estimate surface curvature.

Figure 1. Moving surface with diffuse markings and specular reflections from an unknown textured scene. Our method relates the 3D surface shape and camera motion to the 2D image motion of diffuse and specular pixels. The algorithm automatically recovers the surface shape and flow and probabilistically classifies pixels as diffuse or specular. (a) One image from a sequence. (b) Recovered diffuse flow (magnified) corresponding to the motion of the surface texture. (c) Recovered specular flow (magnified) corresponding to the motion of the specular texture.

Consider the image in Figure 1 of a reflective, textured surface viewed from a moving camera. The motion of the surface texture gives rise to the diffuse flow, the mathematics of which is well understood. The reflected scene, on the other hand, gives rise to specular image motion whose relationship to 3D structure is significantly more complex. Our key insight is that the spatial pattern of specular flow from a textured scene provides rich information that can constrain estimates of surface shape. Consequently, rather than treating specular reflections as noise to be removed, we treat them as a rich signal that provides information about 3D shape.
It is worth emphasizing that, in contrast to prior work, we are not focusing on the motion of specular highlights but rather on the spatially extended flow fields arising from the reflection of textured scenes. The paper makes a number of contributions that go beyond the state of the art. (1) We relate specular flow to 3D shape and camera motion. Solving for the specular image motion given shape and camera motion is a difficult problem, and we exploit an approximation that makes the estimation of specular flow tractable. (2) We exploit this formulation to recover optical flow using a direct method. This is an extension of layered parametric motion estimation methods [24], and in that spirit we exploit a probabilistic mixture model formulation and solve for the motion using an Expectation-Maximization (EM) algorithm.

(3) In solving for the image motion, the mixture model provides a classification of the scene pixels as specular or diffuse. (4) The EM algorithm is automatically initialized using an estimate of 3D shape obtained from a dense optical flow field that contains both diffuse and specular flow. To do so, we exploit the epipolar constraint to identify flow vectors consistent with the diffuse flow and use these to obtain an initial estimate of 3D structure. (5) We show that 2-frame specular flow constrains the 3D shape, and we provide the first formal characterization of these constraints. These constraints are, in general, different from the diffuse motion constraints; hence combining diffuse and specular flow may improve the accuracy of shape estimation.

Our technique makes weak assumptions compared with previous work. In particular, we do not need to know the illumination field or the illumination direction. Like previous methods, we assume known camera motion. Furthermore, frame-to-frame camera motions are assumed to be small. We assume surfaces have distinct regions of specular and diffuse reflectance and that their shape can be approximated by a smooth implicit function or by piecewise quadratic functions [3]. The resulting theory of specular flow extends the prior art to spatially extended, unknown light sources and leads to the first complete algorithm to exploit specular flow for shape recovery.

2. Previous work

The use of specular reflections in estimating scene structure has a long history in computer vision. Many techniques recover information about surface shape by assuming knowledge of the light source shape, its position, and/or its orientation [12, 13, 16, 22, 23]; the illumination is treated as a form of structured light. In general, however, we may not know the structure of the illumination field, or even the direction of illumination. In this case a single image provides little information about the surface shape. Alternatively, many techniques for estimating structure treat specular reflections as violations of Lambertian reflectance, detect them, and remove them from consideration [14, 15, 28]. Treating specular reflections as noise, however, fails to exploit the relationship between specular motion and surface shape. Methods that do exploit specular motion have typically relied on specular highlights (points or localized regions of specular reflection) and their motion over many frames [19, 27].

Two frames and a known light source. Early specular reconstruction techniques assumed the illuminant was known and used two frames with known camera motion (i.e., specular stereo). They showed that the motion of a specular highlight can be combined with the known depth of a nearby point to obtain constraints on local surface curvature [5, 6, 31]. Furthermore, the convexity/concavity of the surface can be determined using the relative motion between a specular highlight and a nearby surface point [31].

Extended temporal constraints. It is possible to recover parts of the surface geometry by tracking a single specular feature over a long series of frames with known camera motion [19]. But in order to uniquely recover the surface profile along the trajectory, the feature needs to be tracked to and from occlusion boundaries. This only provides the surface geometry along the path of the feature.

Optical flow and specular reflections.
Many methods have considered the estimation of optical flow with reflective or transparent surfaces, where the traditional brightness constancy assumption is violated [2, 4, 11, 21, 25, 29]. These methods, however, have focused on the 2D motion problem and have not used specular motion to estimate 3D shape. Waldon and Dyer [30] introduced the notion of the reflection flow field (what we call specular flow) in the case of a stationary camera and a moving specular object. They extend the brightness constancy assumption to include two terms: one for diffuse reflectance and one for specular reflectance. They show that the specular flow can dominate the diffuse flow depending on the surface curvature, and point out that in such cases the specular flow can be recovered using standard gradient-based optical flow techniques, but they did not couple it to the shape of the surface.

In contrast to previous work, we formulate a parametric model of specular flow that directly relates image motion to 3D shape. Rather than focusing on specular highlights over many frames, we use the spatially extended specular flow of a textured scene between two frames to constrain surface shape. Finally, our only assumption about the illumination field is that it is far from the reflecting surface.

3. Parametric models of diffuse and specular motion

We derive a parametric model of diffuse and specular image motion between two image frames as a function of camera motion and the shape of the viewed object. We assume that the surface is given by a known, twice differentiable implicit function $g(x \in \mathbb{R}^3; \theta) = 0$, where $\theta$ are the parameters of the implicit representation. Furthermore, we assume that both the internal and the external camera parameters are known. We denote the fixed internal parameters of the camera as $K \in \mathbb{R}^{3 \times 3}$; the external parameters for frame $i$ are denoted as $(R_i, t_i)$, where $R_i \in \mathbb{R}^{3 \times 3}$ is the camera rotation and $t_i \in \mathbb{R}^3$ is the camera translation. The resulting projection matrix for frame $i$ is then written as $P_i = K (R_i, t_i)$.
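To fix conventions, here is a minimal numpy sketch of the pinhole model just described; the function names are ours, and the sketch assumes points in front of the camera project with positive $y_3$:

```python
import numpy as np

def projection_matrix(K, R, t):
    """P_i = K (R_i, t_i): maps a homogeneous 3D point to a homogeneous pixel."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, x):
    """Pinhole projection: u = (y1/y3, y2/y3) with y = P (x, 1)^T."""
    y = P @ np.append(x, 1.0)
    return y[:2] / y[2]
```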

Figure 2. A specular object reflects a feature from an illumination field I at infinity in direction $s$. The object is viewed from two different viewpoints $p$ and $p'$. The feature is reflected from the surface at points $x$ and $x'$ in the two frames, with corresponding surface normals $n$ and $n'$. The vectors $d$ and $d'$ specify the direction from the reflection point to the respective camera.

We make the important assumption that the features in the illumination field (e.g., traditional light sources as well as illuminated objects in the surroundings) are sufficiently far away that the lighting direction inducing a particular specular feature remains approximately constant across frames. This is illustrated in Figure 2. The assumption of distant illumination allows us to model the specular image motion without making additional assumptions about the scene surrounding the viewed object.

3.1. Diffuse motion

Given the parametric surface as well as the camera parameters, we compute the parametric motion field that corresponds to the motion of diffuse surface features, which we call diffuse flow. For every pixel $u = (u_1, u_2)$ in frame $i$, we identify a ray along which the corresponding diffuse surface feature must lie. This ray is given as

$$r(\lambda) = p + \lambda d, \quad (1)$$

where $p = -R_i^T t_i$ is the camera center and $d = R_i^T K^{-1} (u_1, u_2, 1)^T$ is the ray direction from the surface toward the viewpoint. To find the surface point $x$ in 3D that projects to $u$, we determine the intersection of the ray with the implicit surface, so that $x = r(\lambda)$ and $g(x; \theta) = 0$. In case of multiple intersections, the surface feature corresponds to the intersection that is closest to the camera. Given the surface point $x$, it can be projected into another view (assuming that it is visible); e.g., for frame $i+1$ the image location is given by $u' = (y_1/y_3, y_2/y_3)$ where $y = P_{i+1} x$. The diffuse flow from frame $i$ to frame $i+1$ is finally given as the difference in image coordinates: $w_D(u) = u' - u$.
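To make the diffuse-flow recipe concrete for the spherical surfaces used later (eq. 12), a minimal numpy sketch follows. The ray convention here (direction pointing from the camera into the scene, intersected at positive $\lambda$) and all names are our own choices:

```python
import numpy as np

def intersect_sphere(p, d, c, r):
    """Nearest intersection of the ray p + lam*d (lam > 0) with ||x - c|| = r."""
    d = d / np.linalg.norm(d)
    pc = p - c
    b = 2.0 * (d @ pc)
    disc = b * b - 4.0 * (pc @ pc - r * r)
    if disc < 0:
        return None                               # ray misses the sphere
    lam = (-b - np.sqrt(disc)) / 2.0              # smaller root = closest to camera
    return p + lam * d if lam > 0 else None

def diffuse_flow(u, K, R_i, t_i, P_next, c, r):
    """w_D(u) = u' - u for a sphere (c, r), following eq. (1)."""
    p = -R_i.T @ t_i                              # camera center
    d = R_i.T @ np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])  # viewing ray
    x = intersect_sphere(p, d, c, r)              # surface point seen at pixel u
    if x is None:
        return None
    y = P_next @ np.append(x, 1.0)                # project into frame i+1
    return y[:2] / y[2] - np.asarray(u, float)
```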
3.2. Specular motion

Computing the specular motion is unfortunately not as straightforward as the diffuse case, even with the assumption that the illumination field is at infinity. To compute the specular motion for a pixel $u$, we first identify the 3D surface point $x$ that reflects a specular feature. As for diffuse motion, we find $x$ by intersecting the viewing ray with the implicit shape. In the case of specular reflection, the observed feature comes from the unknown, distant illumination field and not directly from the surface of the object. This means that to compute the image motion of the feature, we first have to find the surface point $x'$ that reflects the same specular feature in frame $i+1$, in which the camera is centered at $p'$. This is illustrated in Figure 2.

To find $x'$ it is helpful to define the following: The surface normal at point $x$ is given as the gradient of the implicit surface description, i.e., $n(x) = \nabla g(x)$. We write the unit normal as $\hat{n} = n / \|n\|$ and the unit ray direction (from above) as $\hat{d} = d / \|d\|$. From the law of reflection, the direction $s$ of the specular feature on the illumination field is

$$s = 2 (\hat{d}^T \hat{n}) \hat{n} - \hat{d}. \quad (2)$$

Based on our assumption of distant illumination, the same illumination direction gives rise to the specular feature in frame $i+1$. For the purpose of finding $x'$, it is easier to restate the law of reflection using the fact that the normal direction bisects the viewing and the lighting directions:

$$\hat{d}' + s = \tilde{n} = \nu n', \quad (3)$$

where $\nu$ is a scaling factor. Since $x'$ is unknown at this point, we rewrite this as

$$\frac{p' - x'}{\|p' - x'\|} + s = \nu \nabla g(x'). \quad (4)$$

Determining the reflection point amounts to finding an $x'$ such that (4) holds for some $\nu$ under the constraint that $x'$ is on the surface, i.e., $g(x'; \theta) = 0$. Even for simple implicit surfaces such as a sphere this is difficult to do analytically. For finite-distance light sources this problem is known as Alhazen's billiard problem, a classical problem in mathematics first posed by Ptolemy [18, 26]. Even for a circle in 2D the problem is hard: given a camera at a point $a$ and a light source at $b$, find the points on the circle that reflect a ray from $a$ into $b$. There are in general four such points; at most two actually correspond to reflection rather than refraction [26]. For objects more complex than a sphere, no closed-form solution exists [10].

However, Chen and Arvo [10] propose a local perturbation method that describes the differential behavior of specularities given either camera or light source motion. This has been exploited in computer graphics to render specular reflections efficiently; we use their method to approximate the specular flow. The method is based on the path that the reflection point takes on the surface as the camera moves from $p$ to $p'$.

This path can be described using a smooth path function $f_\theta: \mathbb{R}^3 \rightarrow \mathbb{R}^3$ that assigns a surface point to each camera location (given the surface parameters $\theta$). It holds that $f_\theta(p) = x$ and $f_\theta(p') = x'$. While evaluating $f_\theta(p')$ exactly amounts to the same difficult problem as above, here we perform a Taylor approximation of the path function around $p$, based on the assumption that $p$ and $p'$ are close:

$$x' = f_\theta(p') \approx f_\theta(p) + \frac{\partial f_\theta(p)}{\partial p} (p' - p) = x + \frac{\partial f_\theta(p)}{\partial p} (p' - p), \quad (5-7)$$

where $\partial f_\theta(p) / \partial p$ is called the path Jacobian, which, in contrast to $f_\theta(p')$ itself, can be expressed in closed form. We omit a detailed derivation of the path Jacobian, as it is almost identical to the derivation in [10]. The main difference is that we assume the illumination field to be infinitely far away, which actually simplifies the derivation and makes it possible to compute the path Jacobian without knowing the position of the illumination. This sets the proposed method apart from previous work that often made more restrictive assumptions about the illumination. It is important to note that the only other requirement for computing the Jacobian is that we can evaluate the gradient and the Hessian of the implicit surface equation. This holds for the quadratic surfaces considered later, but the method extends to more flexible surface representations, such as the ones used in [27].

Summarizing the parametric model of specular flow: Given a pixel $u$ in frame $i$, find the associated surface point $x$. Then approximately find the nearby surface point $x'$ that causes the same specular feature to appear in frame $i+1$, using the path perturbation method described above. The surface point $x'$ can be projected into the view at frame $i+1$ to give $u' = (y_1/y_3, y_2/y_3)$, where $y = P_{i+1} x'$. The specular flow for distant illumination from frame $i$ to frame $i+1$ with small camera motion is finally given as the difference in image coordinates: $w_S(u) = u' - u$.
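Since the closed-form path Jacobian is omitted above, here is a numerical stand-in for a sphere, for illustration only: a fixed-point iteration on eq. (3) that finds the reflection point $x'$ for the new camera center, from which the specular flow follows. This is our own approximation, not the path-perturbation method of [10]:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def reflect_direction(d_hat, n_hat):
    """Law of reflection, eq. (2): s = 2 (d^T n) n - d."""
    return 2.0 * (d_hat @ n_hat) * n_hat - d_hat

def reflection_point_sphere(p_new, s, c, r, x0, n_iter=100):
    """Find x' on the sphere (c, r) whose normal bisects the viewing direction
    from p_new and the fixed illumination direction s (eqs. 3-4)."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = c + r * unit(unit(p_new - x) + s)   # normal must align with bisector
    return x

def specular_flow(u, x, p, p_new, P_next, c, r):
    """w_S(u) = u' - u: track the specular feature seen at x from p."""
    s = reflect_direction(unit(p - x), unit(x - c))  # illumination direction
    x_new = reflection_point_sphere(p_new, s, c, r, x0=x)
    y = P_next @ np.append(x_new, 1.0)
    return y[:2] / y[2] - np.asarray(u, float)
```

The plain iteration can oscillate near grazing angles, where the bisector in eq. (3) nearly vanishes; a damped update that mixes the new and old estimates of $x$ helps in such configurations.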
4. Properties of specular flow

We have described how specular flow relates to the 3D geometry of the scene and to the camera motion under the assumption of distant illumination and small camera motion. Now we show that the observed specular flow can be used to constrain the geometry of an unknown object and that it provides additional constraints on surface shape not present in the diffuse flow. For clarity and brevity of exposition we present the insights in an informal way; the formal expression of these is straightforward.

Definition 1. A specular feature viewed from two different viewpoints is said to fulfill the weak specular constraint if the object causing the specular feature is infinitely far away from the specular surface. In this case the reflected feature appears under the same illumination direction at any point on the surface.

Observation 1. Assume that a single specular feature is observed from two different viewing directions. In general the feature will be seen reflected from two surface points. Then the weak specular constraint constrains two of the four parameters of the associated surface normals.

Proof. From the law of reflection we know that for each of the two viewpoints the viewing direction, the surface normal, and the lighting direction all lie in a plane. Because the viewpoint and the viewing direction lie in this plane, we can parametrize the plane for each viewpoint using a single parameter, i.e., the rotation angle around the viewing ray. By virtue of the weak specular constraint, the lighting direction has to be identical in both cases, so the only admissible lighting direction is given as the intersection of the two planes. The planes intersect because we assumed the viewing directions to differ. Once the lighting direction is known, the two surface normals are given as the vectors that bisect the lighting direction and each viewing direction.

If we interpret the motion of a specular feature as arising from the motion of a diffuse surface marking on a rigid object, then this virtual feature will generally appear at an incorrect depth. The distance of the feature from the actual object surface depends on the surface curvature as well as the viewing angle [5, 19, 28]. Specular reflections from convex surfaces appear behind the surface; for concave surfaces they lie in front. Consider a simple case where light is reflected off a spherical mirror so that it appears to come from the center of the sphere. As the camera motion goes to zero, the location of the virtual feature converges to the point half-way between the surface and the center of the sphere [19, 28].

To understand how diffuse and specular motion constrain the surface geometry, we perform a simple experiment relating the depth and curvature of a spherical surface patch to the camera motion. For known camera motion and known parameters of the surface, we can compute the parametric flow as above. We now vary the distance of the patch to the camera as well as the curvature of the patch and compute the average angular error [1] between the associated image motion and the true image motion. Figure 3 shows the results for diffuse and specular flow. The left plot shows that, for small camera motions, diffuse flow constrains the depth of the patch much better than the curvature. The specular flow, on the other hand, provides a different and complementary constraint that comes from the fact that the virtual depth of a specular feature is a function of both the depth and the curvature of the surface. This motivates using the combination of diffuse and specular constraints for recovering the surface geometry of mixed diffuse/specular surfaces.
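As a quick numerical check of the half-way claim above (our experiment, not the paper's), the sketch below reflects a distant feature off a unit mirror sphere using the same fixed-point idea as the earlier snippet, triangulates it from two nearby viewpoints as if it were a diffuse marking, and recovers a virtual point near the midpoint between the surface and the sphere center:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def reflect_point(p_cam, s, c, r, x0, n_iter=200):
    """Reflection point on the sphere (c, r) for camera p_cam and distant
    feature direction s: fixed point of the bisector condition, eq. (3)."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = c + r * unit(unit(p_cam - x) + s)
    return x

def triangulate(p1, a1, p2, a2):
    """Midpoint of closest approach of the rays p1 + t1*a1 and p2 + t2*a2."""
    A = np.stack([a1, -a2], axis=1)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return 0.5 * (p1 + t[0] * a1 + p2 + t[1] * a2)

# Unit mirror sphere at the origin; distant feature along +z; tiny baseline.
c, r, s = np.zeros(3), 1.0, np.array([0.0, 0.0, 1.0])
p1, p2 = np.array([0.0, 0.0, 5.0]), np.array([0.02, 0.0, 5.0])
x1 = reflect_point(p1, s, c, r, x0=[0.0, 0.0, 1.0])
x2 = reflect_point(p2, s, c, r, x0=x1)
print(triangulate(p1, unit(x1 - p1), p2, unit(x2 - p2)))
# approx. (0, 0, 0.5): half-way between the surface (z = 1) and the center (z = 0)
```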

Figure 3. Geometric constraints of diffuse and specular flow: average angular error of parametric flow for a spherical surface patch compared to the true parametric flow (depth = 6, curvature = 0.25). (left) Diffuse flow. (right) Specular flow.

5. Mixture model: diffuse and specular flow

There are a number of ways to exploit specular flow to recover surface shape. One might first compute dense optical flow and then classify the flow vectors as diffuse or specular using the epipolar constraint [28]. While diffuse motion (of rigid objects) obeys the epipolar geometry, the image motion of specular features does not always obey the epipolar constraint. Using the classified flow vectors, we could find the surface parameters by matching the parametric motions derived in Section 3 with the measured diffuse and specular flow. This method has the disadvantage that the classification of flow vectors into specular and diffuse is not always reliable. As [28] notes, the amount of epipolar deviation depends on the curvature of the surface. Additionally, the dense estimation of optical flow in the case of mixed diffuse and specular reflection is challenging.

We follow a different approach here and develop a direct method for recovering the surface geometry. In particular, we propose a mixture model of parametric motions, where each pixel can be diffuse, specular, or belong to an outlier class. The mixture model extends previous work on multiple parametric motions [24]. In the following we assume that the external camera parameters have been determined ahead of time through a separate process.

5.1. Model formulation

We formulate the problem of recovering surface shape from an image pair as one of probabilistic inference. In particular, we find the parameters of the surface shape $\theta$ that maximize the likelihood of the image $O_i$ given the subsequent frame $O_{i+1}$:

$$\theta^* = \arg\max_\theta \, p(O_i \mid O_{i+1}, \theta). \quad (8)$$

As in [24] we assume conditional independence of the pixels in frame $i$, so that we can rewrite (8) using

$$p(O_i \mid O_{i+1}, \theta) = \prod_u p(O_i(u) \mid O_{i+1}, \theta), \quad (9)$$

where $O_i(u)$ is the grey value of image $O_i$ at pixel $u$ and the product is taken over all pixels $u$. Furthermore, we assume that every pixel is either diffuse or specular, or that it is an outlier, which we formulate using the per-pixel mixture model

$$p(O_i(u) \mid O_{i+1}, \theta) = m_D \, p_D(O_i(u) \mid O_{i+1}, \theta) + m_S \, p_S(O_i(u) \mid O_{i+1}, \theta) + m_O \, p_O. \quad (10)$$

$m_D$, $m_S$, and $m_O$ are the a priori mixture weights of the diffuse, specular, and outlier components; $p_D$ and $p_S$ are the component likelihoods for diffuse and specular pixels, and $p_O$ is a fixed outlier probability. In order to specify the component likelihoods, we denote the parametric diffuse flow for a surface with parameters $\theta$ at pixel $u$ as $w_D(u)$ and the specular flow as $w_S(u)$. Assuming Gaussian noise, the component likelihood for specular flow is written as

$$p_S(O_i(u) \mid O_{i+1}, \theta) = \mathcal{N}(O_i(u); \, O_{i+1}(u + w_S(u)), \, \sigma); \quad (11)$$

the diffuse component likelihood is defined analogously. Note that, unlike previous mixture model approaches, we do not assume a linearized brightness constancy assumption. Instead we adopt a warping approach (see, e.g., [9]), which assumes $O_i(u) = O_{i+1}(u + w_S(u))$. This is more appropriate for specular motions, where the image motion may be large and a linearization of the brightness constancy assumption would be inappropriate.
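As an illustration of eqs. (10) and (11), here is a compact per-pixel likelihood under the warping approach; the bilinear interpolation, array conventions, and the particular outlier density value are our assumptions, not the paper's:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mixture_likelihood(O_i, O_next, w_D, w_S, m=(1/3, 1/3, 1/3),
                       sigma=10.0, p_out=1.0/256):
    """Eq. (10): m_D p_D + m_S p_S + m_O p_O, evaluated at every pixel.
    O_i, O_next: grey-value images (H, W); w_D, w_S: flow fields (H, W, 2),
    with w[..., 0] horizontal (u1) and w[..., 1] vertical (u2)."""
    H, W = O_i.shape
    rows, cols = np.mgrid[0:H, 0:W].astype(float)

    def component(w):
        # warping form of brightness constancy: O_i(u) ~ O_{i+1}(u + w(u))
        warped = map_coordinates(O_next, [rows + w[..., 1], cols + w[..., 0]],
                                 order=1, mode='nearest')
        resid = O_i - warped
        return np.exp(-0.5 * (resid / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

    p_D, p_S = component(w_D), component(w_S)
    return m[0] * p_D + m[1] * p_S + m[2] * p_out, p_D, p_S
```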
5.2. Inference algorithm

Inference of the model parameters is performed using the Expectation-Maximization (EM) algorithm [17], which alternates between probabilistically assigning pixels to causes (specular flow, diffuse flow, or outlier) and maximizing the expected log-likelihood of the model parameters given the assignments. We omit the detailed derivation here, as our model is similar to other mixture models of parametric motion [24]. It is interesting to note, though, that the E-step yields ownership weights that segment pixels as being consistent with either diffuse or specular motion. If we denote the components as $c \in C = \{S, D, O\}$, the ownership weights for a pixel $u$ are given as

$$\pi(c, u; O_i, O_{i+1}, \theta) = \frac{m_c \, p_c(O_i(u) \mid O_{i+1}, \theta)}{\sum_{c' \in C} m_{c'} \, p_{c'}(O_i(u) \mid O_{i+1}, \theta)}.$$

To simplify inference, we keep the prior probabilities over the mixture components $m_c$ as well as the component variance $\sigma$ fixed ($m_c = 1/|C|$ and $\sigma = 10$). We should also point out that maximizing the expected likelihood in the M-step is difficult in the developed model, because the specular motion depends on the surface parameters in highly nonlinear ways. With no closed-form solution for the model parameters $\theta$, we perform a local optimization of the expected likelihood.
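The E-step itself is then a small normalization; a sketch that turns the weighted component likelihoods from the previous snippet into ownership weights:

```python
import numpy as np

def ownership_weights(p_D, p_S, p_out, m=(1/3, 1/3, 1/3)):
    """E-step: posterior responsibility of each component at every pixel."""
    num = np.stack([m[0] * p_D,
                    m[1] * p_S,
                    m[2] * np.full_like(p_D, p_out)])
    return num / num.sum(axis=0)   # pi(D), pi(S), pi(O); sums to 1 per pixel
```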

Figure 4. Reconstruction from a synthetic specular/diffuse sequence. (a) Example frame from a pair. (b) Ownership weights for reconstructed diffuse motion. (c) Ownership weights for reconstructed specular motion. (d) Outlier ownership weights corresponding to the boundary between specular and diffuse regions. (e) Diffuse flow (magnified). (f) Specular flow (magnified).

6. Experimental evaluation

We evaluate the performance of surface reconstruction using specular flow with synthetic and real image sequences for which we know the ground truth. In these experiments we restrict the surface geometry to spherical patches, but use both convex and concave shapes. The implicit surface description in this case is given as

$$g(x; \theta) = \|x - c\|^2 - r^2 = 0, \quad (12)$$

where the reconstruction needs to determine the center and radius of the sphere, $\theta = \{c, r\}$. It is worth noting that the method can be applied in local image patches and does not exploit the outline of the object being viewed.

6.1. Experiments with synthetic data

We rendered 20 image sequences ( pixels) of spherical surface patches with various positions and radii, 10 of them convex and 10 concave. The camera motion is a random rotation around a point near the sphere with a uniformly random rotation axis and a small, random angle. The surface patches exhibit contiguous regions of either diffuse reflectance or specular reflection, which are determined by a random binary surface texture. Both diffuse and specular parts are textured with natural textures. Figure 4 shows such a synthetic frame. To remove a potential bias from the outline of the sphere, the patches are chosen so that the images show only a part of the full sphere. Furthermore, the motion in a boundary of 5 pixels around the edges of the image is ignored to avoid effects from occlusion and disocclusion. Boundary pixels are forced to be explained by the outlier layer and thus do not affect the estimation. Nevertheless, occlusion and disocclusion may still occur between the various specular and diffuse regions, since they each have different apparent motions; this is automatically dealt with by the outlier component (Fig. 4 (d)).

The EM algorithm is initialized by perturbing the ground truth with random noise. The surface shape is then reconstructed from diffuse and specular motion using 25 iterations of EM as described above. Figure 4 shows the ownership weights (b-d) and parametric flow fields (e-f) associated with one of the reconstructed shapes. For the ownership weights, white/black indicates high/low ownership, respectively. Note that gray ownership weights correspond to homogeneous regions where both specular and diffuse motion explain the image data.

To determine whether the specular flow helps constrain surface reconstruction, we compared against a similar mixture model with only two classes: diffuse motion and an outlier class that can account for parts of the image that are not consistent with the diffuse motion. To measure the reconstruction accuracy, we computed the error in locating the sphere center and in determining the radius. The median error over all 20 shapes using the combined specular/diffuse model was units for the center and units for the radius; for the purely diffuse model, the error was 1.03 and 0.90 units, respectively. The difference of the medians is statistically significant according to a rank sum test; the p-values are for the center and for the radius. The diffuse model was sometimes severely affected by the specular motion and converged to solutions far from the true solution. We conclude that for scenes containing significant specular and diffuse motions, the combined model significantly improves the accuracy of surface recovery.
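Since all experiments use the spherical surface of eq. (12), its implicit-surface quantities (the value, plus the gradient and Hessian that the path-Jacobian machinery requires) are worth writing down; a minimal helper, with names of our choosing:

```python
import numpy as np

class Sphere:
    """Implicit surface of eq. (12): g(x; theta) = ||x - c||^2 - r^2."""
    def __init__(self, c, r):
        self.c, self.r = np.asarray(c, float), float(r)

    def g(self, x):
        return np.sum((np.asarray(x) - self.c) ** 2) - self.r ** 2

    def gradient(self, x):
        return 2.0 * (np.asarray(x) - self.c)   # n(x) = grad g(x), unnormalized normal

    def hessian(self, x):
        return 2.0 * np.eye(3)                  # constant for a sphere
```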
6.2. Reconstruction of a shiny ball

To demonstrate the applicability to real imagery, we captured a shiny ball from 6 different views and reconstructed its position and radius from two of the views. The ball, as shown in Figure 1, is highly reflective but also exhibits diffuse reflectance corresponding to surface paint. We captured the ball in a textured environment using a standard digital camera, which was calibrated using the software from [8]. A calibration grid was captured alongside the shiny ball in order to estimate the external parameters of the camera, which was done using the same software. We manually determined the center of the ball in the 6 views and used this to estimate the ground-truth position of the ball's center in 3D. The radius of the ball was measured by hand (45mm). To avoid a potential bias from the outline of the sphere, we reconstructed the object only from a masked portion of the image, as shown in Figure 5. Pixels outside of the masked region are again forced to be explained by the outlier layer.

Figure 5. Specular reconstruction of a real, reflective object. (top) Two views of a specular ball viewed through a circular aperture. Note how the surface paint and the reflections move differently. (bottom) Ownership weights of the diffuse flow (left) and the specular flow (right) computed from the reconstructed shape.

To automatically initialize EM with approximate surface geometry, we first compute dense optical flow between the image pair using a standard method [4]. Given the known camera geometry, we classify the flow vectors into specular or diffuse based on the epipolar deviation [28] and reject the putative specular flow vectors with large epipolar deviation from consideration during initialization. From the camera motion and the diffuse flow we then estimate the depth of the diffuse surface points. Finally, we fit the spherical surface that minimizes the distance to the estimated depth values.

Reconstruction from the initialization was performed as for the synthetic experiments. The reconstruction error of the initialization alone was 15.1mm for the center of the sphere and 5.1mm for the radius. Running the EM algorithm based on the parametric mixture of specular and diffuse flow led to substantially improved results: the errors for the center and radius were 2.8mm and 0.4mm, respectively. Figure 5 shows the ownership weights as determined by the EM algorithm, and Figure 1 shows the parametric diffuse and specular flow fields for the reconstructed shape. We can clearly see that the diffuse layer picks up diffuse features such as the outline of the surface paint, and that the specular flow brings out reflected structures such as the checkerboard, ceiling lamps, etc. Parts that do not exhibit any significant spatial structure are assigned equal probability of being specular or diffuse.

Finally, we compared this against a mixture model with only diffuse flow and an outlier layer. The reconstruction error increased to 20.0mm for the center and 8.9mm for the radius. Again this suggests the benefit of the weak specular flow constraint when reconstructing objects with mixed specular-diffuse appearance.
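As an illustration of the initialization just described, here is a sketch of the two pieces we can make concrete: per-vector epipolar deviation for the diffuse/specular split, and a linear least-squares sphere fit to the triangulated diffuse depths. The fundamental matrix F and the rejection threshold are assumed given; this is our reading of the procedure, not the authors' exact code:

```python
import numpy as np

def epipolar_deviation(u, w, F):
    """Distance of u' = u + w from the epipolar line F u; diffuse flow on a
    rigid surface should give (u')^T F u ~ 0, specular flow need not [28]."""
    u1 = np.array([u[0], u[1], 1.0])
    line = F @ u1
    u2 = np.array([u[0] + w[0], u[1] + w[1], 1.0])
    return abs(u2 @ line) / np.hypot(line[0], line[1])

def fit_sphere(points):
    """Least-squares sphere: ||x||^2 = 2 c^T x + (r^2 - ||c||^2) is linear in c."""
    X = np.asarray(points, float)
    A = np.hstack([2.0 * X, np.ones((len(X), 1))])
    b = np.sum(X ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, k = sol[:3], sol[3]
    return c, np.sqrt(k + c @ c)
```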
7. Discussion and future work

We have formalized the notion of specular flow, i.e., the image motion that results from a camera moving around a reflective object. Unlike previous work that dealt with single specular highlights, specular flow represents the dense (or semi-dense) flow field that arises from reflective surfaces in textured environments. We showed how, under the assumption of distant illumination fields, specular flow can be related to the 3D geometry of reflective objects defined by parametric implicit functions, and we furthermore presented a method for computing the specular flow approximately under the assumption of small camera motion. Moreover, we analyzed how the assumption of distant illumination, which we termed the weak specular constraint, can be used to constrain the geometry of an unknown reflective object.

Many reflective objects in the world show both diffuse reflectance and specular reflection, and thus exhibit a mixture of diffuse flow (the image motion of diffuse surface markings) and specular flow. We explored whether specular flow provides constraints on the geometry of such objects that go beyond those provided by diffuse flow, and found that diffuse and specular flow provide complementary constraints. This motivates combining diffuse and specular cues for surface reconstruction.

We proposed a parametric mixture model for surface recovery that is based on parametric models of diffuse and specular flow. Approximate inference is performed using the EM algorithm, which also provides ownership weights that segment the image into diffuse and specular pixels. We applied the model to synthetic image sequences as well as a real sequence and found that using both specular and diffuse flow leads to better reconstruction accuracy than reconstruction from the diffuse flow alone.

Given the complexity of the specular flow field in relation to 3D surface geometry, it is interesting to ask whether humans use such a cue in judgments of surface shape. Blake and Bülthoff [7] studied the use of specularities in static stereo perception and hypothesized that there is a straightforward connection between specular stereo and specular motion perception. To explore whether humans use specular motion in judging surface shape, we performed a series of experiments [20] on the perception of specular flow with random dot displays that remove the dependence on any cues besides diffuse and specular motion. Random dot displays corresponding to diffuse and specular flow were not perceived as coming from a shiny object. Despite this, we found that the combination of diffuse and specular flow improved discrimination between convex and concave shapes.

Further studies are needed to better understand how well humans are capable of perceiving geometry from specular flow.

For machine vision there are a number of promising directions for future work. The proposed model assumes that the diffuse and specular motions occur in distinct image regions. Future work should consider more general situations related to transparency, where regions can contain both diffuse and specular components at the same time. Furthermore, we have only applied our model to simple parametric surfaces. It will be interesting to study more flexible surface representations, for example in a variational framework such as [27]. The method here may also be useful for classifying material properties of surfaces from video. Finally, the reconstruction method assumes that the external camera parameters are recovered through an independent process. Future work should address the joint recovery of camera motion and surface geometry.

Acknowledgments. We thank Tomer Moscovich for assisting with the image capture for the experiments, and Fulvio Domini for helpful discussions. This work was supported by NSF IIS.

References

[1] J. L. Barron, D. J. Fleet, and S. S. Beauchemin. Performance of optical flow techniques. IJCV, 12(1):43-77, 1994.
[2] J. R. Bergen, P. J. Burt, R. Hingorani, and S. Peleg. A three-frame algorithm for estimating two-component image motion. PAMI, 14(9), 1992.
[3] P. J. Besl and R. C. Jain. Segmentation through variable-order surface fitting. PAMI, 10(2), 1988.
[4] M. J. Black and P. Anandan. The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. CVIU, 63(1):75-104, 1996.
[5] A. Blake. Specular stereo. IJCAI 1985.
[6] A. Blake and G. Brelstaff. Geometry from specularities. ICCV 1988.
[7] A. Blake and H. Bülthoff. Does the brain know the physics of specular reflection? Nature, 343:165-168, 1990.
[8] J.-Y. Bouguet. Camera calibration toolbox for Matlab.
[9] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. ECCV 2004, v. 4.
[10] M. Chen and J. Arvo. Theory and application of specular path perturbation. ACM Transactions on Graphics, 19(4), 2000.
[11] N. Cornelius and T. Kanade. Adapting optical flow to measure object motion in reflectance and X-ray image sequences. ACM Siggraph/Sigart Interdisciplinary Workshop on Motion: Representation and Perception, 1983.
[12] G. Healey and T. O. Binford. Local shape from specularity. Computer Vision, Graphics, and Image Processing, 42:62-86, 1988.
[13] K. Ikeuchi. Determining surface orientations of specular surfaces by using the photometric stereo method. PAMI, 3(6), 1981.
[14] H. Jin, A. Yezzi, and S. Soatto. Variational multiframe stereo in the presence of specular reflections. 3DPVT 2002.
[15] S. Lin, Y. Li, S. B. Kang, X. Tong, and H.-Y. Shum. Diffuse-specular separation and depth recovery from image sequences. ECCV 2002, v. 3.
[16] S. K. Nayar, K. Ikeuchi, and T. Kanade. Determining shape and reflectance of Lambertian, specular, and hybrid surfaces using extended sources. International Workshop on Industrial Applications of Machine Intelligence and Vision.
[17] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[18] P. M. Neumann. Reflections on reflection in a spherical mirror. Am. Math. Monthly, 105(6), 1998.
[19] M. Oren and S. K. Nayar. A theory of specular surface geometry. IJCV, 24(2).
[20] S. Roth, F. Domini, and M. J. Black. Specular flow and the perception of surface reflectance. Journal of Vision, 3(9):413a, 2003.
[21] B. Sarel and M. Irani. Separating transparent layers through layer information exchange. ECCV 2004, v. 4.
[22] S. Savarese and P. Perona. Local analysis for 3D reconstruction of specular surfaces. CVPR 2001, v. 2.
[23] S. Savarese and P. Perona. Local analysis for 3D reconstruction of specular surfaces - Part II. ECCV 2002, v. 2.
[24] H. S. Sawhney and S. Ayer. Compact representation of videos through dominant and multiple motion estimation. PAMI, 18(8), 1996.
[25] M. Shizawa and K. Mase. Principle of superposition: A common computational framework for analysis of multiple motion. IEEE Workshop on Visual Motion, 1991.
[26] J. D. Smith. The remarkable Ibn al-Haytham. Mathematical Gazette, 76(475), 1992.
[27] J. E. Solem, H. Aanæs, and A. Heyden. A variational analysis of shape from specularities using sparse data. 3DPVT 2004.
[28] R. Swaminathan, S. B. Kang, R. Szeliski, A. Criminisi, and S. K. Nayar. On the motion and appearance of specularities in image sequences. ECCV 2002, v. 1.
[29] R. Szeliski, S. Avidan, and P. Anandan. Layer extraction from multiple images containing reflections and transparency. CVPR 2000, v. 1.
[30] S. Waldon and C. R. Dyer. Dynamic shading, motion parallax and qualitative shape. IEEE Workshop on Qualitative Vision, 1993.
[31] A. Zisserman, P. Giblin, and A. Blake. The information available to a moving observer from specularities. Image and Vision Computing, 7(1):38-42, 1989.


Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical

More information

Motion and Tracking. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)

Motion and Tracking. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE) Motion and Tracking Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Motion Segmentation Segment the video into multiple coherently moving objects Motion and Perceptual Organization

More information

A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India

A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India Keshav Mahavidyalaya, University of Delhi, Delhi, India Abstract

More information

Chapter 26 Geometrical Optics

Chapter 26 Geometrical Optics Chapter 26 Geometrical Optics 26.1 The Reflection of Light 26.2 Forming Images With a Plane Mirror 26.3 Spherical Mirrors 26.4 Ray Tracing and the Mirror Equation 26.5 The Refraction of Light 26.6 Ray

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Online Video Registration of Dynamic Scenes using Frame Prediction

Online Video Registration of Dynamic Scenes using Frame Prediction Online Video Registration of Dynamic Scenes using Frame Prediction Alex Rav-Acha Yael Pritch Shmuel Peleg School of Computer Science and Engineering The Hebrew University of Jerusalem 91904 Jerusalem,

More information

Optical Flow Estimation

Optical Flow Estimation Optical Flow Estimation Goal: Introduction to image motion and 2D optical flow estimation. Motivation: Motion is a rich source of information about the world: segmentation surface structure from parallax

More information

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a 96 Chapter 7 Model-Based Stereo 7.1 Motivation The modeling system described in Chapter 5 allows the user to create a basic model of a scene, but in general the scene will have additional geometric detail

More information

CS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow

CS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement

More information

Comparison between Motion Analysis and Stereo

Comparison between Motion Analysis and Stereo MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis

More information

Fundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F

Fundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental

More information

Reflection & Mirrors

Reflection & Mirrors Reflection & Mirrors Geometric Optics Using a Ray Approximation Light travels in a straight-line path in a homogeneous medium until it encounters a boundary between two different media A ray of light is

More information

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29,

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, 1209-1217. CS 4495 Computer Vision A. Bobick Sparse to Dense Correspodence Building Rome in

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Part II: Modeling Aspects

Part II: Modeling Aspects Yosemite test sequence Illumination changes Motion discontinuities Variational Optical Flow Estimation Part II: Modeling Aspects Discontinuity Di ti it preserving i smoothness th tterms Robust data terms

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

Chaplin, Modern Times, 1936

Chaplin, Modern Times, 1936 Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections

More information

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Douglas R. Heisterkamp University of South Alabama Mobile, AL 6688-0002, USA dheister@jaguar1.usouthal.edu Prabir Bhattacharya

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem

More information

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra)

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra) Mierm Exam CS223b Stanford CS223b Computer Vision, Winter 2004 Feb. 18, 2004 Full Name: Email: This exam has 7 pages. Make sure your exam is not missing any sheets, and write your name on every page. The

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V

More information

Constructing a 3D Object Model from Multiple Visual Features

Constructing a 3D Object Model from Multiple Visual Features Constructing a 3D Object Model from Multiple Visual Features Jiang Yu Zheng Faculty of Computer Science and Systems Engineering Kyushu Institute of Technology Iizuka, Fukuoka 820, Japan Abstract This work

More information

What have we leaned so far?

What have we leaned so far? What have we leaned so far? Camera structure Eye structure Project 1: High Dynamic Range Imaging What have we learned so far? Image Filtering Image Warping Camera Projection Model Project 2: Panoramic

More information

34.2: Two Types of Image

34.2: Two Types of Image Chapter 34 Images 34.2: Two Types of Image For you to see an object, your eye intercepts some of the light rays spreading from the object and then redirect them onto the retina at the rear of the eye.

More information

Optics II. Reflection and Mirrors

Optics II. Reflection and Mirrors Optics II Reflection and Mirrors Geometric Optics Using a Ray Approximation Light travels in a straight-line path in a homogeneous medium until it encounters a boundary between two different media The

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

3D and Appearance Modeling from Images

3D and Appearance Modeling from Images 3D and Appearance Modeling from Images Peter Sturm 1,Amaël Delaunoy 1, Pau Gargallo 2, Emmanuel Prados 1, and Kuk-Jin Yoon 3 1 INRIA and Laboratoire Jean Kuntzmann, Grenoble, France 2 Barcelona Media,

More information

Stereo Matching.

Stereo Matching. Stereo Matching Stereo Vision [1] Reduction of Searching by Epipolar Constraint [1] Photometric Constraint [1] Same world point has same intensity in both images. True for Lambertian surfaces A Lambertian

More information

Stereo: Disparity and Matching

Stereo: Disparity and Matching CS 4495 Computer Vision Aaron Bobick School of Interactive Computing Administrivia PS2 is out. But I was late. So we pushed the due date to Wed Sept 24 th, 11:55pm. There is still *no* grace period. To

More information

3D Motion Segmentation Using Intensity Trajectory

3D Motion Segmentation Using Intensity Trajectory 3D Motion Segmentation Using Intensity Trajectory Paper ID: 323 Abstract. Motion segmentation is a fundamental aspect of tracking in a scene with multiple moving objects. In this paper we present a novel

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE COMPUTER VISION 2017-2018 > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE OUTLINE Optical flow Lucas-Kanade Horn-Schunck Applications of optical flow Optical flow tracking Histograms of oriented flow Assignment

More information