SURFACES WITH OCCLUSIONS FROM LAYERED STEREO


SURFACES WITH OCCLUSIONS FROM LAYERED STEREO

A dissertation submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Michael H. Lin
December 2002

© Copyright by Michael H. Lin 2003
All Rights Reserved

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Carlo Tomasi (Principal Advisor)

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Christoph Bregler

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Dwight Nishimura (Electrical Engineering)

Approved for the University Committee on Graduate Studies:

Abstract

Stereo, or the determination of 3D structure from multiple 2D images of a scene, is one of the fundamental problems of computer vision. Although steady progress has been made in recent algorithms, producing accurate results in the neighborhood of depth discontinuities remains a challenge. Moreover, among the techniques that best localize depth discontinuities, it is common to work only with a discrete set of disparity values, hindering the modeling of smooth, non-fronto-parallel surfaces. This dissertation proposes a three-axis categorization of binocular stereo algorithms according to their modeling of smooth surfaces, depth discontinuities, and occlusion regions, and describes a new algorithm that simultaneously lies in the most accurate category along each axis. To the author's knowledge, it is the first such algorithm for binocular stereo. The proposed method estimates scene structure as a collection of smooth surface patches. The disparities within each patch are modeled by a continuous-valued spline, while the extent of each patch is represented via a labeled, pixelwise segmentation of the source images. Disparities and extents are alternately estimated by surface fitting and graph cuts, respectively, in an iterative, energy minimization framework. Input images are treated symmetrically, and occlusions are addressed explicitly. Boundary localization is aided by image gradients. Qualitative and quantitative experimental results are presented, which demonstrate that, for scenes consisting of smooth surfaces, the proposed algorithm significantly improves upon the state of the art, more accurately localizing both the depth of surface interiors and the position of surface boundaries. Finally, limitations of the proposed method are discussed, and directions for future research are suggested.

Acknowledgements

I would like to thank my advisor, Professor Carlo Tomasi, for bestowing upon me his unwavering support, guidance, generosity, patience, and wisdom, even during unfortunate circumstances of his own. I would also like to thank the members of my reading and orals committees, Professors Chris Bregler, Dwight Nishimura, Amnon Shashua, Robert Gray, and Steve Rock, for their thought-provoking questions and encouraging comments on my work. I am indebted to Stan Birchfield, whose research and whose prophetic words led me to the subject of this dissertation. Thanks also to Burak Göktürk, Héctor González-Baños, and Mark Ruzon for their assistance in the preparation of my thesis defense, and to the other members of the Stanford Vision Lab and Robotics Lab. I am grateful to Daniel Scharstein and Olga Veksler, whose constructive critiquing of preliminary versions of this manuscript helped to shape its final form. Finally, I would like to thank my family and friends, and especially Lily Kao, who have made the process of completing this work much more pleasant.

Contents

Abstract
Acknowledgements

1 Introduction
  1.1 Foundations of Stereo
  1.2 Solving the Correspondence Problem
  1.3 Surface Interiors vs. Boundaries
  1.4 A Categorization of Stereo Algorithms
    1.4.1 Continuity
    1.4.2 Discontinuity
    1.4.3 Uniqueness
  1.5 A Brief Survey of Stereo Methods
    1.5.1 Pointwise Color Matching
    1.5.2 Windowed Correlation
    1.5.3 Regularization
    1.5.4 Cooperative Methods
    1.5.5 Dynamic Programming
    1.5.6 Graph-Based Methods
    1.5.7 Layered Methods
  1.6 Our Proposed Approach
  1.7 Outline of Dissertation

2 Preliminaries
  2.1 Design Principles
  2.2 Mathematical Abstraction
  2.3 Desired Properties
    2.3.1 Consistency
    2.3.2 Smoothness
    2.3.3 Non-triviality
  2.4 Energy Minimization

3 Surface Fitting
  Defining Surface Smoothness
  Surfaces as 2D Splines
  Surface Non-triviality
  Surface Smoothness
  Surface Consistency
  Surface Optimization

4 Segmentation
  Segmentation by Graph Cuts
  Segmentation Non-triviality
  Segmentation Smoothness
  Segmentation Consistency
  Segmentation Optimization

5 Integration
  Segmentation Consistency, Revisited
  Overall Optimization
  Iterative Descent
  Merging Surfaces
  Initialization
  Post-Processing

6 Experimental Results
  Quantitative Evaluation Metric
  Quantitative Results
    Map
    Venus
    Sawtooth
    Tsukuba
  Qualitative Results
    Cheerios
    Clorox
    Umbrella

7 Discussion and Future Work
  Efficiency
  Theory vs. Practicality
  Generality

A Image Interpretation
  A.1 Interpolation
  A.2 Certainty

Bibliography

List of Tables

2.1 Contributions to energy
    Our overall optimization algorithm
    Our post-processing algorithm
    Layout for figures of complete results

List of Figures

1.1 The geometry of an ideal pinhole camera
1.2 The geometry of triangulation
1.3 The geometry of the epipolar constraint
1.4 An example of a smooth surface
1.5 An example of discontinuities with occlusion
    Map results
    Map error distributions
    Venus results
    Venus error distributions
    Sawtooth results
    Sawtooth error distributions
    Tsukuba results
    Tsukuba error distributions
    Cheerios results
    Clorox results
    Umbrella results

Chapter 1

Introduction

Ever since antiquity, people have wondered: How does vision work? How is it that we see a three-dimensional world? For nearly two millennia, the generally-accepted theory (proposed by many, including Euclid [ca. 300 BC] and Ptolemy [ca. AD 150]) was that people's eyes send out probing rays which feel the world. This notion persisted until the early 17th century, when, in 1604, Kepler published the first theoretical explanation of the optics of the eye [45], and, in 1625, Scheiner observed experimentally the existence of images formed at the rear of the eyeball. Those discoveries emphasized a more focussed question: How does depth perception work? How is it that, from the two-dimensional images projected on the retina, we perceive not two, but three dimensions in the world?

With the advent of computers in the mid-20th century, a broader question arose: how can depth perception be accomplished in general, whether by human physiology or otherwise? Aside from possibly elucidating the mechanism of human depth perception, a successful implementation of machine depth perception could have many practical applications, from terrain mapping and industrial automation to autonomous navigation and real-time human-computer interaction. In a sense, Euclid and Ptolemy had the right idea: methods using active illumination (including sonar, radar, and laser rangefinding) can produce extremely accurate depth information and are relatively easy to implement. Unfortunately, such techniques are invasive and have limited range, and thus have a restricted application domain; purely passive methods would be much more generally applicable.

So how can passive depth perception be accomplished? In particular, how can images with only two spatial dimensions yield information about a third? Static monocular depth cues (including occlusion, relative size, and vertical placement within the field of view, but excluding binocular disparity and motion parallax) are very powerful and often more than sufficient; after all, we are typically able to perceive depth even when limited to using only one eye from only one viewpoint. Inspired by that human ability, much research has investigated how depth information can be inferred from a single flat image (e.g., Roberts [59] in 1963). However, most such monocular approaches depend heavily upon strong assumptions about the scene, and for general scenes, such knowledge has proven to be very difficult to instill in a computer. Julesz [42, 43] demonstrated in 1960 that humans can perform binocular depth perception in the absence of any monocular cues. This discovery led to a proliferation of research on the extraction of 3D depth information from two 2D images of the same static scene taken from different viewpoints. In this dissertation, we address this problem of computational binocular stereopsis, or stereo: recovering three-dimensional structure from two color (or intensity) images of a scene.

1.1 Foundations of Stereo

The foundations for reconstructing a 3D model from multiple 2D images are correspondence and triangulation. To understand how this works, let us first take a look at the reverse process: the formation of 2D images from a 3D scene. Suppose we have an ideal pinhole camera (Figure 1.1), with center of projection P and image plane Π. Then the image p of a world point X is located at the intersection of Π with the line segment through P and X.

[Figure 1.1: The geometry of an ideal pinhole camera: the ray from center of projection P through world point X intersects image plane Π at image point p.]

Conversely, given the center of projection P, if we are told that p is the image of some world point X, we know that X is located somewhere along the ray from P through p. Although this alone is not enough to determine the 3D location of X, if there is a second camera with center of projection Q, whose image of the same world point X is q, then we additionally know that X is located somewhere along the ray from Q through q (Figure 1.2).

That is, barring degenerate geometry, the position of X is precisely the intersection of the rays Pp and Qq. This well-known process of triangulation, or position estimation by the intersection of two back-projected rays, is what enables precise 3D localization, and thus forms the theoretical basis for stereo. However, note that the preceding description of triangulation requires the positions of two image points p and q which are known to be images of the same world point X. Moreover, reconstructing the entire scene would require knowing the positions of all such pairs (p, q) that are the images of some pairwise-common world point. Determining this pairing is the correspondence problem, and is a prerequisite to performing triangulation. Correspondence is much more difficult than triangulation, and thus forms the pragmatic basis for stereo. Although stereo is not difficult to understand in theory, it is not easy to solve in practice. Mathematically, stereo is an ill-posed, inverse problem: its solution essentially involves inverting a many-to-one transformation (image formation in this case), and thus is underconstrained.
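To make the triangulation step concrete, the following minimal sketch (not part of the original dissertation; the least-squares formulation is one standard choice) intersects two back-projected rays by finding the point closest to both:

```python
import numpy as np

def triangulate_midpoint(P, u, Q, v):
    """Midpoint of the shortest segment between rays P + s*u and Q + t*v.

    P, Q: centers of projection; u, v: back-projected ray directions.
    """
    # Choose s, t to minimize ||(P + s*u) - (Q + t*v)||^2 (least squares).
    A = np.column_stack([u, -v])
    (s, t), *_ = np.linalg.lstsq(A, Q - P, rcond=None)
    return 0.5 * ((P + s * u) + (Q + t * v))

# Two cameras with a unit baseline, both viewing the world point (0, 0, 5):
P, Q, X = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 0., 5.])
print(triangulate_midpoint(P, X - P, Q, X - Q))   # -> approximately [0. 0. 5.]
```

Perturbing p or q slightly makes the two rays nearly skew or nearly parallel, and the recovered midpoint can move arbitrarily far; this is precisely the sensitivity discussed next.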

[Figure 1.2: The geometry of triangulation: world point X lies at the intersection of the rays from centers of projection P, Q through respective image points p, q.]

Specifically, triangulation is very sensitive to input perturbations: small changes in the position of one or both image points can lead to arbitrarily large changes in the position of the triangulated world point. In other words, any uncertainty in the result of correspondence can potentially yield a virtually unbounded uncertainty in the result of triangulation. This difficulty is further exacerbated by the fact that correspondence is often prone to small errors.

1.2 Solving the Correspondence Problem

Thus we see that accurately solving the correspondence problem is the key to accurately solving the stereo problem. In effect, we have narrowed our question from "How can passive binocular stereopsis be done?" to "How can passive binocular correspondence be done?" The fundamental hypothesis behind multi-image correspondence is that the appearance of any sufficiently small region in the world changes little from image to image. In general, "appearance" might emphasize higher-level descriptors over raw intensity values, but in its strongest sense, this hypothesis would mean that the color of any world point remains constant from image to image.

In other words, if image points p and q are both images of some world point X, then the color values at p and q are equal. This color constancy (or brightness constancy, in the case of grayscale images) hypothesis is in fact true with ideal cameras if all visible surfaces in the world are perfectly diffuse (i.e., Lambertian). In practice, given photometric camera calibration and typical scenes, color constancy holds well enough to justify its use by most algorithms for correspondence.

The geometry of the binocular imaging process also significantly prunes the set of possible correspondences, from lying potentially anywhere within the 2D image, to lying necessarily somewhere along a 1D line embedded in that image. Suppose that we are looking for all corresponding image point pairs (p, q) involving a given point q (Figure 1.3). Then we know that the corresponding world point X, of which q is an image, must lie somewhere along the ray through q from the center of projection Q. The image of this ray Qq in the other camera's image plane Π lies on a line l that is the intersection of Π with the plane spanned by the points P, Q, q. Because X lies on Qq, its projection p on Π must lie on the corresponding epipolar line l. (When corresponding epipolar lines lie on corresponding scanlines, the images are said to be rectified; the difference in coordinates of corresponding image points is called the disparity at those points.) This observation, that given one image point, a matching point in the other image must lie on the corresponding epipolar line, is called the epipolar constraint. Use of the epipolar constraint requires geometric camera calibration, and is what typically distinguishes stereo correspondence algorithms from other, more general correspondence algorithms.

Based on color constancy and the epipolar constraint, correspondence might proceed by matching every point in one image to every point with exactly the same color in its corresponding epipolar line. However, this is obviously flawed: there would be not only missed matches at the slightest deviation from color constancy, but also potentially many spurious matches from anything else that happens to be the same color. Moreover, with real cameras, sensor noise and finite pixel sizes lead to additional imprecision in solving the correspondence problem. It is apparent that color constancy and the epipolar constraint are not enough to determine correspondence with sufficient accuracy for reliable triangulation.

[Figure 1.3: The geometry of the epipolar constraint: the image point p corresponding to image point q must lie on epipolar line l, which is the intersection of image plane Π with the plane spanned by q and the centers of projection P, Q.]

Thus, some additional constraint is needed in order to reconstruct a meaningful three-dimensional model. What other information can we use to solve the correspondence problem? Marr and Poggio [52] proposed two such additional rules to guide binocular correspondence: uniqueness, which states that each item from each image may be assigned at most one disparity value, and continuity, which states that disparity varies smoothly almost everywhere. In explaining the uniqueness rule, Marr and Poggio specified that each item corresponds to something that has a unique physical position, and suggested that detected features such as edges or corners could be used. They explicitly cautioned against equating an item with a gray-level point, describing a scene with transparency as a contraindicating example. However, this latter interpretation, that each image location be assigned at most one disparity value, is nonetheless very prevalent in practice; only a small number of stereo algorithms (such as [71]) attempt to find more than one disparity value per pixel. This common simplification is in fact justifiable, if pixels are regarded as point samples rather than area samples, under the assumption that the scene consists of opaque objects: in that case, each image point receives light from, and is the projection of, only the one closest world point along its optical ray.

In explaining the continuity rule, Marr and Poggio observed that "matter is cohesive, it is separated into objects, and the surfaces of objects are generally smooth compared with their distance from the viewer" [52]. These smooth surfaces, whose normals vary slowly, generally meet or intersect in smooth edges, whose tangents vary slowly [36]. When projected onto a two-dimensional image plane, these three-dimensional features result in smoothly varying disparity values almost everywhere in the image, with "only a small fraction of the area of an image... composed of boundaries that are discontinuous in depth" [52]. In other words, a reconstructed disparity map can be expected to be piecewise smooth, consisting of smooth surface patches separated by cleanly defined, smooth boundaries. These two rules further disambiguate the correspondence problem. Together with color constancy and the epipolar constraint, uniqueness and continuity typically provide sufficient constraints to yield a reasonable solution to the stereo correspondence problem.

1.3 Surface Interiors vs. Boundaries

A closer look at the continuity rule shows a clear distinction between the interiors and the boundaries of surfaces: depth is smooth at the former, and non-smooth at the latter. This bifurcation of continuity into two complementary aspects is often reflected in the design of stereo algorithms, because, as noted by Belhumeur [7], depth, surface orientation, occluding contours, and creases should be estimated simultaneously. On the one hand, it is important to recover surface interiors by estimating their depth and orientation, because such regions typically constitute the vast majority of the image area. On the other hand, it is important to recover surface boundaries by estimating occluding contours and creases, because boundaries typically are the most salient image features. (In fact, much of the earliest work on the three-dimensional interpretation of images focussed on line drawing interpretation, in which boundaries are the only image features.)

Birchfield [13] in particular emphasized the importance of discontinuities, which generally coincide with surface boundaries. Moreover, in general, neither surface interiors nor surface boundaries can be unambiguously derived solely from the other. For example, given only a circular boundary that is discontinuous in depth, the interior could be either a fronto-parallel circle or the front hemisphere of a ball; this difficulty is inherent in the sparseness of boundaries. Conversely, given the disparity value at every pixel in an image, while one could threshold the difference between neighboring values to detect depth discontinuities, the selection of an appropriate threshold would be tricky at best; detecting creases by thresholding second-order differences would be even more problematic. This difficulty is inherent in the discrete nature of pixel-based reconstructions that do not otherwise indicate the presence or absence of discontinuities. Furthermore, not only should surface interiors and surface boundaries each be estimated directly, but because boundaries and interiors are interdependent, with the former bordering on the latter, they should in fact be estimated cooperatively within a single algorithm, rather than independently by separate algorithms. In other words, a stereo algorithm should explicitly and simultaneously consider both of the two complementary aspects of the continuity rule: smoothness over surface interiors, and discontinuity across surface boundaries.

1.4 A Categorization of Stereo Algorithms

Many stereo algorithms are based upon the four constraints listed in Section 1.2: the epipolar constraint, color constancy, continuity, and uniqueness. Of these, the former two are relatively straightforward, but the manner in which the latter two are applied varies greatly [4, 22, 27, 65]. We propose a three-axis categorization of binocular stereo algorithms according to their interpretations of continuity and uniqueness, where we subdivide continuity according to the discussion of Section 1.3. In the following subsections, for each of the three axes, we list last the category that we consider to be the most preferable.

[Figure 1.4: An example of a smooth surface (left and right images shown).]

1.4.1 Continuity

The first axis describes the modeling of continuity over disparity values within smooth surface patches. As an example of a smooth surface patch, consider a slanted plane, with left and right images as shown in Figure 1.4. Then the true disparity over the surface patch would also be a slanted plane (i.e., a linear function of the x and y coordinates within the image plane). How might a stereo algorithm model the disparity along this slanted plane, or along any one smooth surface in general? Using this example as an illustration, we propose to categorize smooth surface models into three broad groups; most models used by prior stereo algorithms fall into one of these categories.

Constant

In these most restricted models, every point within any one smooth surface patch is assigned the same disparity value. This value is usually chosen from a finite, predetermined set of possible disparities, such as the set of all integers within a given range, or the set of all multiples of a given fraction (e.g., 1/4 or 1/2) within a given range. Examples of prior work in this category include traditional sum-of-squared-differences correlation, as well as [17, 30, 44, 47, 52]. Applied to our example, these models would likely recover several distinct fronto-parallel surfaces. This would be a poor approximation to the true answer.

While one could lump together multiple constant-disparity surfaces to simulate slanted or curved surfaces, such a grouping would likely contain undesirably large internal jumps in disparity, especially in textureless regions. It would be desirable to be able to represent directly not only fronto-parallel surfaces, but also slanted and curved surfaces.

Discrete

In these intermediate models, disparities are again limited to discrete values, but with multiple distinct values permitted within each surface patch. Surface smoothness in this context means that within each surface, neighboring pixels should have disparity values that are numerically as close as possible to one another. In other words, intra-surface discontinuities are expected to be small. For identical discretization, the smooth surfaces expressible by these models are a strict superset of those expressible by Constant models. Examples of prior work in this category include [7, 41, 60, 86]. Applied to our example, this category would improve upon the previous one by shrinking the jumps in disparity to the resolution of the disparity discretization. However, it would be even better if the jumps were completely removed. Although one could fit a smooth surface to the discretized data from these models, such a fit would still be subject to error; e.g., if our slanted plane had a disparity range of less than one discretization unit, these models would likely recover a fronto-parallel surface.

Real

In these most general models, disparities within each smooth surface patch vary smoothly over the real numbers (or some computer approximation thereof). This category can be thought of as the limit of the Discrete category as the discretization step approaches zero. Various interpretations of smoothness can be used; most try to minimize local first- or second-order differences in disparity. Examples of prior work in this category include [1, 3, 11, 70, 75]. Applied to our example, in the absence of other sources of error, these models should correctly find the true disparity. Therefore, among these three categories for modeling smooth surfaces, we find this one to be the most preferable, because it allows for the greatest precision in estimating depth.
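The practical difference between the three categories can be seen numerically on the slanted-plane example. In the following sketch (hypothetical coefficients, chosen only for illustration), the disparity range across the patch is under one unit, the failure case mentioned above:

```python
import numpy as np

x = np.arange(8)
true_disp = 2.0 + 0.1 * x                 # slanted plane; total range < 1 unit

constant = np.full_like(true_disp, np.round(true_disp.mean()))  # one value per patch
discrete = np.round(true_disp)            # per-pixel, but quantized to integers
real = true_disp.copy()                   # continuous-valued; exact in this noiseless case

print(constant)   # [2. 2. 2. 2. 2. 2. 2. 2.]   fronto-parallel, as predicted
print(discrete)   # [2. 2. 2. 2. 2. 2. 3. 3.]   staircase with a unit jump
print(real)       # [2.  2.1 2.2 ... 2.7]       recovers the true slant
```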

[Figure 1.5: An example of discontinuities with occlusion. Top: left and right images. Bottom: left and right disparity maps (dark gray = small disparity; light gray = large disparity; white = no disparity).]

1.4.2 Discontinuity

The second axis describes the treatment of discontinuities at the boundaries of smooth surface patches. As an example of a scene with boundaries discontinuous in depth, consider a small, fronto-parallel square floating in front of a larger one, with left and right images as shown in Figure 1.5. Then the true disparity for this scene would be a small square of larger disparity inside a larger square of smaller disparity, with step edges along the top and bottom of the smaller square. How might a stereo algorithm model the disparity across this depth discontinuity? Using this example as an illustration, we propose to categorize discontinuity models into four broad groups; most models used by prior stereo algorithms fall into one of these categories. Specifically, the penalty associated with a discontinuity is examined as a function of the size of the jump of the discontinuity.

Free

In this category, discontinuities are not specifically penalized. That is, with all else being equal, no preference is given for continuity in the final, reconstructed disparity map. In particular, these methods often fail to resolve the ambiguity caused by periodic textures or textureless regions. Examples of prior work in this category include traditional sum-of-squared-differences correlation, as well as [44, 52, 74, 84, 86]. Applied to our example, these models would likely produce frequent, scattered errors throughout the interior of the squares: the cross-hatched pattern is perfectly periodic, so the identity of the single best match would likely be determined by random perturbations.

Infinite

In this category, discontinuities are penalized infinitely; i.e., they are disallowed. The entire image is treated as one smooth surface. That is, the entire image, as a unit, is subject to the chosen model of smooth surface interiors; "almost everywhere" continuity in fact applies everywhere. The recovered disparity map is smooth everywhere, although potentially not uniformly so. (Note, however, that the surface smoothness model may itself allow small discontinuities within a single surface.) Examples of prior work in this category include [1, 5, 37, 70]. Applied to our example, these models would not separate the foreground and background squares, but would instead connect them by smoothing over their common boundary. The width of the blurred boundary can vary depending on the specific algorithm, but typically, the boundary will be at least several pixels wide.

Convex

In this category, discontinuities are allowed, but a penalty is imposed that is a finite, positive, convex function of the size of the jump of the discontinuity. Typically, that convex cost function is either the square or the absolute value of the size of the jump. The resulting discontinuities often tend to be somewhat blurred, because the cost of two adjacent discontinuities is no more than that of a single discontinuity of the same total size. Examples of prior work in this category include [41, 60, 75].

Applied to our example, these models would likely separate the foreground and background squares successfully. However, at the top and bottom edges of the smaller square, where there is just a horizontal line, these models might output a disparity value in between those of the foreground and background.

Non-convex

In this category, discontinuities are allowed, but a penalty is imposed that is a non-convex function of the size of the jump of the discontinuity. One common choice for that non-convex cost function is the Potts energy [57], which assesses a constant penalty for any non-zero discontinuity, regardless of size. The resulting discontinuities usually tend to be fairly clean, because the cost of two adjacent discontinuities is generally more than that of a single discontinuity of the same total size. Examples of prior work in this category include [7, 21, 24, 30]. Applied to our example, these models would likely separate the foreground and background squares successfully. Moreover, the recovered disparity values would likely contain only two distinct depths: those of the foreground and the background. Therefore, among these four categories for modeling discontinuities, we find this one to be the most preferable, because it reconstructs boundaries the most cleanly, with minimal warping of surface shape.
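The four penalty regimes can be written as simple jump-cost functions; the following sketch (constants are arbitrary, for illustration only, and not taken from the dissertation) makes the blurring argument concrete:

```python
import numpy as np

def free(jump):                 # no penalty at all
    return 0.0

def infinite(jump):             # discontinuities disallowed outright
    return 0.0 if jump == 0 else np.inf

def convex(jump, lam=1.0):      # e.g., absolute value of the jump size
    return lam * abs(jump)

def potts(jump, lam=1.0):       # non-convex: flat charge for any non-zero jump
    return 0.0 if jump == 0 else lam

# One sharp jump of size 2 versus two adjacent jumps of size 1:
print(convex(2), convex(1) + convex(1))   # 2.0 2.0 -- convex cost is indifferent
print(potts(2), potts(1) + potts(1))      # 1.0 2.0 -- Potts prefers the single sharp jump
```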

1.4.3 Uniqueness

The third axis describes the application of uniqueness, especially to the occlusions that accompany depth discontinuities. As an example of a scene with occlusions, consider again the example of the floating square shown in Figure 1.5. The true disparity for this scene would be a small square of larger disparity inside a larger square of smaller disparity, with an occlusion region of no disparity to the left or right side of the smaller square in the left or right images, respectively. The occlusion region is the portion of the background square which, in the other image, is occluded by the foreground square. Points within the occlusion region have no disparity because they do not have corresponding points in the other image, and we define disparity using correspondence (as opposed to inverse depth). In general, disparity discontinuities are always accompanied by occlusion regions [30], except when the boundary lies along an epipolar line. This is a consequence of uniqueness and symmetry. How might a stereo algorithm model the disparity in this scene, or in others with discontinuities and occlusions? Using this example as an illustration, we propose to categorize occlusion models into four broad groups; most models used by prior stereo algorithms fall into one of these categories.

Transparent

In this category, uniqueness is not assumed; these models allow for transparency. Our floating-squares example does not exhibit transparency, so these models would be of little benefit for it; however, for scenes that do exhibit transparency, these models would be essential for adequate reconstruction. Furthermore, natural scenes often contain fine details (such as tree branches against the sky) that are only a few pixels in width; because of pixelization, these images effectively contain transparency as well [62]. Unfortunately, stereo reconstruction with transparency is a very challenging problem with few existing solutions; one such example of prior work is [71].

One-way

In this category, uniqueness is assumed within a chosen reference image, but not considered within the other. That is, each location in the reference image is assigned at most one disparity, but the disparities at multiple locations in the reference image may point to the same location in the other image. Typically, each location in the reference image is assigned exactly one disparity, and occlusion relationships are ignored. That is, these models generally search for correspondences within occlusion regions as well as within non-occlusion regions. Examples of prior work in this category include traditional SSD correlation, as well as [11, 21, 44, 84]. Applied to our example, these models would likely find the correct disparity within the unoccluded regions. Points within the occluded regions will typically be assigned some intermediate disparity value between those of the foreground and background.

Note that such an assignment would result in the occluded point and a different, unoccluded point both being paired with the same point in the other image. This collision, or failure of reciprocal uniqueness, is an undesirable yet readily detectable condition; these models allow it and are thus less than ideal.

Asymmetric Two-way

In this category, uniqueness is encouraged for both images, but the two images are treated unequally. That is, reasoning about occlusion is done, and the occlusions that accompany depth discontinuities are qualitatively recovered, but there is still one chosen reference image, resulting in asymmetries in the reconstructed result. Examples of prior work in this category include [3, 17, 52, 75, 86]. Applied to our example, these models would likely find the correct disparity within the unoccluded regions, and most of the occlusion region would be marked as such. However, some occluded pixels near the edge of the occlusion region might mistakenly be assigned to the nearer surface; the outline of the reconstructed smaller square would likely look different on its left versus right edges.

Symmetric Two-way

In this category, uniqueness is enforced in both images symmetrically; detected occlusion regions are marked as being without correspondence. Examples of prior work in this category include [7, 30, 41, 47]. Applied to our example, these models would likely find the correct disparity (or lack thereof) everywhere, barring other sources of error. Therefore, among these four categories for modeling uniqueness and occlusions, we find this one to be the most preferable for our purposes, because it encourages the greatest precision in localizing boundaries in image space by fully utilizing the assumption of uniqueness.

1.5 A Brief Survey of Stereo Methods

Section 1.1 explained that stereo correspondence is fundamentally an underconstrained, inverse problem. Section 1.2 proposed that it is fairly straightforward to impose color constancy and the epipolar constraint (by matching each image point with every image point of the same color in the corresponding epipolar line), but observed that additional constraints are necessary.

Section 1.2 concluded by claiming that uniqueness and continuity generally suffice as those further constraints, without discussing how they are applied. This section motivates and reviews a few selected approaches to stereo that have been used in the past, discusses how they use uniqueness and continuity, and explains how they fit within our three-axis categorization.

1.5.1 Pointwise Color Matching

Under the assumption of opacity, uniqueness implies that each image point may correspond to at most one other image point. Thus, the simplest way to apply uniqueness on top of color constancy and the epipolar constraint would be to match each image point with the one image point of the most similar color in the corresponding epipolar line. This naive technique might work well in an ideal, Lambertian world in which every world point has a unique color, but in practice, the discretized color values of digital images can cause problems. As an extreme example, let us consider a binary random-dot stereogram, in which each image consists of pixels that are randomly and independently black or white. With pointwise correspondence, there will be a true match at the correct disparity, but there will also be a 50% chance of a false match at any incorrect disparity. This is because looking at the color at a single point does not provide enough information to uniquely identify that point. Thus we see that even with the use of color constancy, uniqueness, and the epipolar constraint, without continuity, direct pointwise stereo correspondence is still ambiguous, in that false matches may appear as good as the correct match.
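The 50% figure is easy to check empirically. This sketch (illustrative only, not from the dissertation) generates binary random-dot scanlines and tests how often a candidate at a wrong disparity matches pointwise anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
width, trials, hits = 256, 10_000, 0

for _ in range(trials):
    row = rng.integers(0, 2, size=width)   # one binary random-dot scanline
    p = width // 2                          # the pixel we try to match
    wrong = p + 3                           # a candidate at an incorrect disparity
    hits += row[p] == row[wrong]            # does the false match look perfect?

print(hits / trials)                        # ~0.5, as the text predicts
```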

1.5.2 Windowed Correlation

With the use of continuity, which implies that neighboring image points will likely have similar disparities, one can pool information among neighboring points to reduce ambiguity and false matches. This is the basic idea behind windowed correlation, on which many early stereo methods are based. Classical windowed correlation consists of comparing a fixed-size window of pixels, rather than individual pixels, and choosing the disparity that yields the best match over the whole window. Windowed correlation is simple, efficient, and effective at reducing false matches. For example, let us reconsider the aforementioned random-dot stereogram. Whereas matching individual pixels gave a false match rate of 50%, with a 5×5 window of pixels for correlation, the probability of an exact false match would be reduced to 2^-25, or under 1 in 33 million. However, for a window to match exactly, the disparity within the window must be constant. Otherwise, no disparity will yield a perfect match, and the algorithm will pick whichever disparity gives the smallest mismatch, which may or may not be the disparity at the center of the window. In other words, windowed correlation methods depend on the implicit assumption that disparities are locally constant; these methods work best where that is indeed the case. The meaning of "locally" above is determined by the size and shape of the correlation window. However, choosing the configuration of the window is not easy. On the one hand, if the window is too small, spurious matches will remain, and many incorrect matches will look "good"; larger windows are better at reducing ambiguity by minimizing false matches. On the other hand, if the window is too large, it will be unlikely to contain a single disparity value, and even the correct match will look "bad"; larger windows are also less likely to contain only a single disparity value, and thus more likely to reject true matches. To some extent, this problem of choosing a window size can be alleviated by using adaptive windows, which shift and/or shrink to avoid depth discontinuities, while remaining larger away from discontinuities. Kanade and Okutomi [44] use explicit, adaptive, rectangular windows. Hirschmüller [35] uses adaptive, piecewise square windows. Scharstein and Szeliski [63] use implicit windows formed by iterative, nonlinear diffusion.
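For reference, the classical fixed-window scheme that these adaptive variants refine can be sketched as follows (grayscale float images, integer disparities, a chosen reference image, and no occlusion handling are all simplifying assumptions of this sketch):

```python
import numpy as np

def ssd_disparity(left, right, max_disp, radius=2):
    """Integer disparity map for `left` via fixed-window SSD matching."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius + max_disp, w - radius):
            patch = left[y-radius:y+radius+1, x-radius:x+radius+1]
            costs = [np.sum((patch - right[y-radius:y+radius+1,
                                           x-d-radius:x-d+radius+1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))   # best match over the whole window
    return disp
```

With radius=2 this is the 5×5 configuration of the example above; note that the window, not the individual pixel, decides the winner, which is exactly why discontinuities blur.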

In practice, windowed correlation techniques work fairly well within smooth, textured regions, but tend to blur across any discontinuities. Moreover, they generally perform poorly in textureless regions, because they do not specifically penalize discontinuities in the recovered depth map. That is, although these methods assume continuity by their use of windows, they do not directly encourage continuity in the case of ambiguous matches.

1.5.3 Regularization

One way to enforce continuity, even in the presence of ambiguous matches, is through the use of regularization. Generically, regularization is a technique for stabilizing inverse problems by explicitly quantifying smoothness and adding it as one more simultaneous goal to optimize. Applied to stereo, it treats disparity as a real-valued function of image location, defines a functional measuring the smoothness of such a disparity function, and tries to maximize that functional while simultaneously maximizing color constancy. Such smoothness can be quantified in many ways, but in general, nearby image locations should have similar disparities. Horn and Schunck [38] popularized both the use of regularization on otherwise underconstrained image correspondence problems, and the use of variational methods to solve the resulting energy minimization problems in a continuous domain. Horn [37], Poggio et al. [56], and Barnard [5] suggested several formulae for quantifying smoothness, all of which impose uniform smoothness and forbid discontinuities. Terzopoulos [77] and Lee and Pavlidis [49] investigated regularization with discontinuities. Rivera and Marroquín [58] formulated a higher-order, edge-preserving method that does not penalize constant, non-zero slopes. Computationally, regularization tends to yield challenging nonlinear optimization problems that, without fairly sophisticated optimization algorithms, can be highly dependent on good initial conditions. Often, multiscale or multigrid methods are needed [76], but Akgul et al. [1] presented an alternate method of ensuring reliable convergence, by starting with two initial conditions and evolving them cooperatively until they coincide. Allowing discontinuities further complicates the optimization process; Blake and Zisserman [16] propose a method for optimizing certain specific models of regularization with discontinuities.
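As a concrete instance of the generic recipe, a minimal discretized regularization energy might look like the sketch below (squared color-constancy error plus first-order quadratic smoothness; the weight lam and the per-scanline interpolation are assumptions of this sketch):

```python
import numpy as np

def regularization_energy(d, left, right, lam=0.5):
    """Data term (color constancy) + lam * smoothness term, both quadratic.

    d: real-valued disparity map for the left image; every pixel is matched,
    so occlusions cannot be represented, as discussed in the text.
    """
    h, w = left.shape
    xs = np.arange(w)
    data = 0.0
    for y in range(h):
        warped = np.interp(xs - d[y], xs, right[y])   # sample right image at x - d(x, y)
        data += np.sum((left[y] - warped) ** 2)
    smooth = np.sum(np.diff(d, axis=0) ** 2) + np.sum(np.diff(d, axis=1) ** 2)
    return data + lam * smooth
```

A variational or gradient-based optimizer would then descend on this energy over the real-valued field d.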

Aside from optimization challenges, the primary weakness of regularization methods is that they do not readily allow occlusions to be represented. Every image point is forced to have some disparity; no point can remain unmatched, as would be required for proper occlusions under the constraint of uniqueness.

1.5.4 Cooperative Methods

We have seen that neither windowed correlation nor regularization methods support proper occlusions: they try to find unique disparity values for one reference image, but without checking for collisions in the other image. In effect, uniqueness, although assumed, is not enforced; only one-way uniqueness is applied. Inspired by biological nervous systems, cooperative methods directly implement the assumptions of continuity and two-way uniqueness in an iterative, locally connected, massively parallel system. These techniques operate directly in the space of correspondences (referred to as the matching score volume by [85], and as the disparity-space image, or DSI, by [17, 40, 65]), rather than in image space, evolving a 3D lattice of continuous-valued weights via mutual excitation and inhibition. This space of possible correspondences can be parameterized in several ways. Typically, (x, y, d) is used, with (x, y) representing position in the chosen reference image, and d representing disparity. Assuming rectified input images, however, an alternate, symmetric parameterization is (x_l, x_r, y). Qualitatively, a weight at (x_l, x_r, y) in such a coordinate system represents the likelihood that (x_l, y) in the left image and (x_r, y) in the right image correspond to one another (i.e., are images of the same physical world point). Initially, this matching score volume is populated with local similarity measures, typically obtained via correlation with small windows. Subsequently, the weights are updated in parallel as follows: if a weight at (x_l, x_r, y) is large, then for uniqueness, weights at (x, x_r, y) and (x_l, x, y) are inhibited, and for continuity, weights at any other, non-inhibited points near (x_l, x_r, y) are excited. Upon convergence of this relaxation algorithm, these real-valued weights are compared with one another and thresholded to determine final correspondences (or the lack thereof).
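One parallel sweep of such a relaxation can be sketched as follows (the neighborhood shape and the excitation/inhibition constants are illustrative assumptions, not taken from any particular paper):

```python
import numpy as np

def cooperative_sweep(S, excite=0.05, inhibit=0.1):
    """One update of the matching-score volume S[y, x_l, x_r]."""
    ny, nl, nr = S.shape
    new = np.empty_like(S)
    for y in range(ny):
        for xl in range(nl):
            for xr in range(nr):
                # Continuity: support from nearby candidate matches.
                nb = S[max(y-1, 0):y+2, max(xl-1, 0):xl+2, max(xr-1, 0):xr+2]
                support = nb.sum() - S[y, xl, xr]
                # Uniqueness: competitors sharing x_l or x_r are inhibited.
                rivals = S[y, xl, :].sum() + S[y, :, xr].sum() - 2 * S[y, xl, xr]
                new[y, xl, xr] = S[y, xl, xr] + excite * support - inhibit * rivals
    return np.clip(new, 0.0, 1.0)
```

Iterating such sweeps until the volume stabilizes, then thresholding, yields the final correspondences or marks a pixel as unmatched.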

Different cooperative algorithms use different models of excitation, corresponding to different models of smooth surfaces. Marr and Poggio [52] use a fixed, 2D excitation region for Constant surfaces; moreover, their algorithm is only defined for binary-valued (e.g., black and white) input images. Zitnick and Kanade [86] use a fixed, 3D excitation region for Discrete surfaces; their algorithm is designed for real-valued images. In practice, [86] can give very good results, but because it uses a fixed window for excitation, boundaries can be rounded or blurred (analogous to classical windowed correlation). To improve boundary localization, Zhang and Kambhamettu [85] use a variable, 3D excitation region that is dependent on an initial color segmentation of the input images; the idea is that depth discontinuities will likely correlate well with monocular color edges. Regarding convergence, because the cooperative update is local in nature, accurate results depend upon good initialization. In particular, although a limited number of false matches can start with a good initial score, true matches must start with a good initial score. These methods support discontinuities with non-convex penalties; two-way uniqueness is encouraged, generally asymmetrically.

1.5.5 Dynamic Programming

Like cooperative methods, dynamic programming methods also operate in a discretized disparity space in order to encourage bidirectional uniqueness along with continuity. However, while cooperative methods are iterative and find a locally optimal set of real-valued weights that must then be thresholded, dynamic programming is non-iterative and finds a globally optimal set of binary weights that directly translate into the presence or absence of each candidate correspondence. Thus, dynamic programming methods for stereo are much faster, and apparently also more principled, than their cooperative counterparts. However, the downsides to dynamic programming are twofold. First, dynamic programming can only optimize one scanline at a time. Many desired interactions among scanlines, such as those required for continuity between scanlines, require less principled, ad hoc post-processing, usually with no guarantee of optimality.

Second, dynamic programming depends upon the validity of the ordering constraint, which states that in each pair of corresponding scanlines, corresponding pixels appear in the same order in the left and right scanlines. Because the ordering constraint is a generalization that is not always true [28], and because optimizing scanlines independently can be rather prone to noise, dynamic programming is better suited for applications where speed is an important consideration. Various dynamic programming approaches differ in their treatment of continuity. Baker and Binford [2] and Ohta and Kanade [54] impose continuity simply and directly, by first matching edges, then interpolating over the untextured regions. Unfortunately, such interpolation does not preserve sharp discontinuities. Taking the opposite approach, Intille and Bobick [17, 40] do not use continuity at all, relying upon ground control points and the ordering constraint to obviate the need for any external smoothness constraints. Their asymmetric method uses neither intra- nor inter-scanline smoothness, and treats each scanline independently. The method of Geiger, Ladendorf, and Yuille [30] also treats scanlines independently, but supposes that disparities are piecewise constant along each scanline, and symmetrically enforces a strict correspondence between discontinuities in one image and occlusions in the other. In contrast, both Cox et al. [24] and Belhumeur and Mumford [8] impose 2D continuity through inter-scanline constraints. Cox et al. [24] count the total number of depth discontinuities (horizontal plus vertical), and specify that this number should be minimized as a subordinate goal; they suggest either one or two passes of dynamic programming as efficient methods for approximating this minimization. Belhumeur and Mumford [8] also require the minimization of the number of pixels at which discontinuities are present, but Belhumeur [6, 7] generalizes the notion of discontinuity, counting both step edges and crease edges. Belhumeur formulates a symmetric energy functional that incorporates this count, and proposes that it be minimized with iterated stochastic dynamic programming. Aside from depending upon the ordering constraint, all of these methods have discrete, pixelized approaches to continuity that are at most one-dimensional. These limitations are the primary weaknesses of dynamic programming for stereo.
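A minimal cost recursion in this spirit (squared intensity difference as the match cost and a constant occlusion penalty are assumptions of this sketch, not of any one cited method) looks like:

```python
import numpy as np

def dp_scanline_cost(left_row, right_row, occ=1.0):
    """Optimal alignment cost of two rectified scanlines under ordering.

    C[i, j] = cheapest way to explain left_row[:i] and right_row[:j]; each
    pixel is either matched or skipped at a constant occlusion penalty.
    """
    n, m = len(left_row), len(right_row)
    C = np.zeros((n + 1, m + 1))
    C[0, :] = occ * np.arange(m + 1)
    C[:, 0] = occ * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            C[i, j] = min(C[i-1, j-1] + (left_row[i-1] - right_row[j-1]) ** 2,
                          C[i-1, j] + occ,    # left pixel occluded
                          C[i, j-1] + occ)    # right pixel occluded
    return C[n, m]   # backtracking through C recovers the matches themselves
```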

1.5.6 Graph-Based Methods

As do dynamic programming methods, graph-based methods leverage combinatorial optimization techniques for their power, but unlike dynamic programming methods, graph-based methods are able to optimize continuity over the entire 2D image, instead of only along individual 1D scanlines. Graph-based methods are based upon efficient algorithms [32, 46] for calculating the minimum-cost cut (or, equivalently, the maximum flow [29]) through a network graph. There are two general flavors of graph-based stereo methods. One flavor computes the global minimum of a convex energy functional with a single minimum-cost cut; typically, the cost of a discontinuity is a linear function of its size. Roy and Cox [60] propose one such method, which discards the ordering constraint used in dynamic programming in favor of a local coherence constraint. Their method uses an undirected graph built upon a chosen reference image, and finds exactly one disparity for each pixel therein. Ishikawa and Geiger [41] propose another such method, which retains the ordering constraint, and furthermore distinguishes among ordinary, edge, and junction pixels. Their method uses a directed graph, and symmetrically enforces two-way uniqueness. Both of these methods tend to produce discontinuities that are somewhat blurred, because they are incapable of using non-convex, sub-linear penalties for discontinuities. The other flavor of graph-based methods computes a strong local minimum of a non-convex energy functional with iterated minimum-cost cuts. Boykov, Veksler, and Zabih [18] developed one such optimization technique that is applicable to an extremely wide variety of non-convex energies. Boykov et al. [19] subsequently developed another such optimization technique that is somewhat less widely applicable, but which produces results that are provably within a constant factor of being globally optimal [79]. Boykov et al. [20, 21] apply these techniques to stereo, again using an undirected graph built upon a chosen reference image, and finding exactly one disparity for each pixel therein. Kolmogorov and Zabih [47] build more complex graphs; their method enforces symmetric, two-way uniqueness, but is limited to constant-disparity continuity.
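The non-convex objective that the iterated-cut methods attack can be stated compactly; the sketch below evaluates a Potts-smoothness energy for a candidate labeling (the min-cut machinery that actually minimizes it is omitted, and lam is an arbitrary weight):

```python
import numpy as np

def potts_energy(labels, data_cost, lam=1.0):
    """Data term + Potts smoothness for a pixelwise disparity labeling.

    labels:    shape (h, w), an integer disparity label per pixel.
    data_cost: shape (h, w, n_labels); matching cost of each label per pixel.
    """
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    data = data_cost[ys, xs, labels].sum()
    # Constant charge per neighboring pair with differing labels (Potts [57]).
    cuts = np.count_nonzero(np.diff(labels, axis=0)) + \
           np.count_nonzero(np.diff(labels, axis=1))
    return data + lam * cuts
```

An expansion-move algorithm repeatedly fixes a candidate label and solves a binary min-cut deciding which pixels switch to it, monotonically decreasing this energy.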

In general, graph-based methods are not only quite powerful, but also fairly efficient for what they do. For our purposes, their main weakness is their restriction to computing discrete-valued disparities, due to their inherently combinatorial nature.

1.5.7 Layered Methods

Like regularization with discontinuities, layered models [25, 26, 80] estimate real-valued disparities while allowing discontinuities, producing piecewise-smooth surface reconstructions. However, while the former methods represent all surface patches together with a single function mapping image location to disparity value, layered methods separately represent each surface patch with its own such function, and combine them into a single depth map through the use of support maps, which define the image regions in which each surface is active. The primary consequence of this representational enrichment is that when combining support maps, there need not be exactly one active surface per pixel. In particular, it is trivial to represent pixels at which no surface is active. This lets layered methods readily model occlusion regions, enabling them to consider two-way uniqueness. Another consequence of modeling surfaces with layers is that each surface gets its own identity, independent of its support map. This allows image regions to be grouped semantically, rather than merely topologically. In other words, layered models have the advantage of being able to model hidden connections among visible surface patches that are separated by occluding objects. For example, consider a scene seen through a chain-link fence. With standard regularization, either the fence must be ignored, or the remainder of the scene must be cut into many independent pieces. With a layered model, the fence and the background can both survive intact. Baker, Szeliski, and Anandan [3] developed a layered method for stereo reconstruction that is based upon minimizing the resynthesis error obtained by comparing input images warped according to the recovered depth map. Their method models the disparity of each surface patch as a plane with small, local deviations. Their theory includes transparency, but their implementation uses asymmetric, two-way uniqueness.

Like windowed correlation methods, they achieve some degree of continuity by spatially blurring the match likelihood. Birchfield and Tomasi [11] developed a method that models each surface as a connected, slanted plane. They estimate the assignment of pixels to surfaces with the graph-based techniques of [18], and favor placing boundaries along intensity edges, again yielding exactly one disparity value for each image location. Among prior work, their algorithm is the most similar to ours.

1.6 Our Proposed Approach

In Section 1.4, we proposed a three-axis categorization of binocular stereo algorithms according to their treatment of continuity and uniqueness. In the remainder of this dissertation, we propose an algorithm that simultaneously lies in the most preferable category along all three axes: real-valued disparities, non-convex discontinuity penalties, and symmetric two-way occlusions. To the author's knowledge, ours is the first such algorithm for binocular stereo. We contend that, for scenes consisting of smooth surfaces, our algorithm improves upon the current state of the art, achieving both more accurate localization in depth of surface interiors via subpixel disparity estimation, and more accurate localization in the image plane of surface boundaries via the symmetric treatment of images with proper handling of occluded regions.

1.7 Outline of Dissertation

In Chapter 2, we describe our mathematical model of the stereo problem and solutions thereof. In Chapters 3 and 4, we describe surface fitting and boundary localization, respectively. In Chapter 5, we describe the interaction between surface fitting and boundary localization, and give the overall optimization algorithm. In Chapter 6, we present some promising qualitative and quantitative experimental results. Finally, in Chapter 7, we offer a few concluding remarks.

Chapter 2

Preliminaries

In this chapter, we develop a mathematical abstraction of the stereo problem. This abstract formulation is defined within a continuous domain; discretization of the problem for computational feasibility will be discussed in subsequent chapters.

2.1 Design Principles

Because the stereo problem is so sensitive to perturbations, in order to get the best results, it is especially important that the algorithm be designed to minimize the unnecessary introduction and propagation of errors. To this end, we follow two guiding principles: least commitment, and least discretization. In the computation of our final answer, we would like to make the best possible use of all available data. This means that, at any particular stage in the computation, we would like to be the least committed possible to any particular interpretation of the data. Because of this, it is better to match images directly, instead of matching only extracted features (such as edges and corners): we don't want to discard the dense image data so early in the process. Similarly, it is better to directly estimate subpixel disparities, rather than fit smooth surfaces to pre-calculated, integer-only disparities, for the same reason. In a similar spirit, we also would like to avoid rounding errors as much as possible, so our computations are done in a continuous space as much as possible.

Most basically, our algorithm estimates floating-point disparity values defined on a continuous domain; these disparity values are only discretized in our implementation by finite machine precision. In addition, since we are trying to recover subpixel (non-integer) disparity values, we need to match image appearance at inter-pixel image positions. This means that we must define image appearance at inter-pixel image positions; that is, we must interpolate the input images. We describe how we do this in Appendix A.

2.2 Mathematical Abstraction

As motivated in Section 1.5, in order to place in the most preferable category along each of our three proposed axes, we use a layered model [25, 26, 80] to represent possible solutions to the stereo problem. Our stereo algorithm follows the common practice of assuming that input images have been normalized with respect to both photometric and geometric calibration. In particular, we assume that the images are rectified. Let

    I = {p = (x, y, t)} = R × R × {left, right}

be the space of image locations, and let I : I → R^m be the given input image pair. Typically, m = 3 (with the components of R^m representing red, green, and blue) for color images, and m = 1 for grayscale images, but our algorithm does not depend on the semantic interpretation of R^m; any feature space can be used. Note that the image space is defined to be continuous, not discrete; we discuss this matter further in Appendix A. Our abstract model of a hypothesized solution consists of a labeling (or segmentation) f, which assigns each point of the two input images to zero or one of N surfaces, plus N disparity maps d[k], each of which assigns a disparity value to each point of the two input images:

the two input images:

[segmentation]   $f : \mathcal{I} \to \{0, 1, \ldots, N\}$
[disparity map]  $d[k] : \mathcal{I} \to \mathbb{R}$   for $k \in \{1, 2, \ldots, N\}$

In other words, these functions are the independent unknowns that are to be estimated.
The segmentation function $f$ specifies to which one of the $N$ surfaces, if any, each image location belongs. We take "belonging" to mean the existence of a world point which (a) projects to the image location in question, and (b) is visible in both images. For each surface, the signed disparity function $d[k]$ defines the correspondence (or matching) function $m[k]$ between image locations:

$m[k] : \mathcal{I} \to \mathcal{I}$
$m[k](x, y, t) = \big( x + d[k](x, y, t),\; y,\; \bar{t} \big)$

where $\overline{\text{left}} = \text{right}$ and vice versa. That is, for each surface $k$, $m[k]$ maps each location in one image to the corresponding location in the other image. Note that, for all $k$, $d[k]$ and $m[k]$ are both defined for all $(x, y, t)$, regardless of the value of $f(x, y, t)$. Furthermore, for standard camera configurations, $d[k]$ will generally be positive in the right image and negative in the left image, if it represents a real surface.
Thus, the interpretation of this model is, for all $p$:

$f(p) = k$ with $k > 0$  $\Rightarrow$  $p$ corresponds to $m[k](p)$
$f(p) = 0$  $\Rightarrow$  $p$ corresponds to no location in the other image

That is, a hypothesized solution specifies a set of correspondences between left and right image locations, where each image location is a member of at most one correspondence.
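As a concrete illustration, the following minimal sketch (ours, not the dissertation's Matlab/C implementation; all names, sizes, and array layouts are illustrative assumptions) shows one way this representation could be held in memory, with the segmentation sampled on the pixel grid and each disparity map kept as a callable that accepts real-valued positions:

    import numpy as np

    # Illustrative sketch of the hypothesized-solution representation.
    N = 3                                    # number of hypothesized surfaces
    H, W = 288, 384                          # example image height and width

    # Segmentation f, sampled on the pixel grid of both images:
    # f[t, y, x] in {0, 1, ..., N}, where 0 means "no correspondence".
    f = np.zeros((2, H, W), dtype=np.int32)  # index t: 0 = left, 1 = right

    def make_m(d_k):
        """Build the correspondence map m[k] from the disparity map d[k];
        d_k(x, y, t) may be evaluated at real-valued (subpixel) positions."""
        def m(x, y, t):
            return (x + d_k(x, y, t), y, 1 - t)   # 1 - t flips left/right
        return m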

2.3 Desired Properties

Given this abstract representation of a solution, how can we evaluate any particular hypothesized solution? What are some conditions that characterize a good solution? We propose three desired properties: consistency, smoothness, and non-triviality.

2.3.1 Consistency

Correspondence of image locations should be bidirectional. In other words, if points $p$ and $q$ are images of the same world point, then each corresponds to the other; otherwise, neither corresponds to the other. If a hypothesized solution were to say that $p$ corresponds to $q$ but that $q$ does not correspond to $p$, it would make no sense; we call such a solution inconsistent. Within each surface, this translates into a constraint on $m[k]$ that

for all $k, p$: $\quad m[k](m[k](p)) = p \quad (2.1)$

which is equivalent to a constraint on each $d[k]$. In particular, for each $k$, given one of $d[k](\cdot, \cdot, \text{left})$ or $d[k](\cdot, \cdot, \text{right})$, the other is uniquely determined. This reflects the notion that $d[k](x, y, \text{left})$ and $d[k](x, y, \text{right})$ are two representations of the same surface.
Regarding segmentation, we also have the constraint on $f$ that

for all $p$: $\quad f(p) = k$ with $k > 0 \;\Rightarrow\; f(m[k](p)) = k \quad (2.2)$

Ideally, these consistency constraints should be satisfied exactly, but for computational purposes, we merely attempt to maximize consistency.

2.3.2 Smoothness

Continuity dictates that a recovered disparity map should be piecewise smooth, consisting of smooth surface patches separated by cleanly defined, smooth boundaries.

Thus, in trying to estimate the best reconstruction, we would like to maximize the smoothness, both of the surface shapes defined by $d[k]$, and of the boundaries defined by $f$.
Because the disparity maps $d[k]$ are continuous-valued functions, they are amenable to the usual meaning of smoothness. We take smoothness of $d[k]$ to mean differentiability, with the magnitude of higher derivatives being relatively small. Because the segmentation function $f$ can only take on the integer values $0 \ldots N$, it is piecewise constant, with line-like boundaries separating those pieces. We take smoothness of $f$ to mean simplicity of these boundaries, with the total boundary length being relatively small.

2.3.3 Non-triviality

Good solutions should conform to, and explain, rather than ignore, the input data as much as possible. For example, any two input images could be interpreted as views of two painted, planar surfaces, each presented to one camera. Such a trivial interpretation, yielding no correspondence for any image location, would be valid but undesirable. In general, we expect that a correspondence exists for most image locations; i.e., we expect that the segmentation function $f$ is mostly non-zero:

for most $p$: $\quad f(p) > 0$

Moreover, although color constancy is sometimes violated (e.g., due to specularities), and smoothness and consistency are needed to fill in the gap, a solution that supposes a perfectly smooth and consistent surface, at the expense of violating color constancy everywhere, is also not desirable. In other words, we expect that color constancy holds for most image locations:

for most $p$ where $f(p) > 0$: $\quad I\big( m[f(p)](p) \big) = I(p)$

Intuitively, using the language of differential equations, consistency and smoothness provide the homogeneous terms that result in the general solution,

while non-triviality provides the non-homogeneous terms that result in the particular solution.

                    disparity maps            segmentation
non-triviality      $E_{\text{match}\,I}$     $E_{\text{unassigned}}$
smoothness          $E_{\text{smooth}\,d}$    $E_{\text{smooth}\,f}$
consistency         $E_{\text{match}\,d}$     $E_{\text{match}\,f}$

Table 2.1: Contributions to energy.

2.4 Energy Minimization

Now that we have defined the form of a solution, and stated its desired properties, how do we find the best solution? We formalize the stereo problem in the framework of energy minimization. In general, energy minimization approaches split a problem into two parts: defining the cost, or energy, of all hypothesized solutions [31], and finding the best solution by minimizing that energy. This separation is advantageous because it facilitates the use of general-purpose minimization techniques, enabling more focus upon the unique aspects of the specific application.
For our application, we formulate six energy terms, corresponding to each of the three desired properties, applied to both surface interiors and surface boundaries (see Table 2.1). These terms are developed in the next two chapters; total energy is the sum of these terms.
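In code, this bookkeeping amounts to nothing more than a sum; the sketch below (ours; the six term functions are placeholders for the definitions developed in Chapters 3 through 5) makes the decomposition of Table 2.1 explicit:

    # Sketch of the total energy as the sum of the six terms in Table 2.1.
    def total_energy(f, d):
        return (E_match_I(f, d)       # non-triviality of disparity maps
                + E_unassigned(f)     # non-triviality of segmentation
                + E_smooth_d(d)       # smoothness of disparity maps
                + E_smooth_f(f)       # smoothness of segmentation
                + E_match_d(d)        # consistency of disparity maps
                + E_match_f(f, d))    # consistency of segmentation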

Chapter 3

Surface Fitting

In this chapter, we consider a restricted subproblem. Rather than simultaneously estimating both the 3D shape (given by $d[k]$) and the 2D support (given by $f$) of each surface, we consider the problem of estimating 3D shape, given 2D support. That is, supposing that the segmentation $f$ is known, how can the disparity maps $d[k]$ be found? Using this context, we explain our model of smooth surfaces; formulate the three energy terms that encourage surface non-triviality, smoothness, and consistency; and discuss the minimization of these energy terms.

3.1 Defining Surface Smoothness

Fitting a smooth surface to sparse and/or noisy data is a classic mathematical problem. Sometimes there are obvious gaps in a data set, and one would like to fill them in; other times, the data set is complete but is a mixture of signal and noise, and one would like to extract the signal. In either of these cases, the key to determining the solution is the exact specification of the smoothness that one expects to find. What are some ways to define, and subsequently impose, surface smoothness?
Perhaps the simplest way to impose surface smoothness is to decree that the surface belong to some pre-defined class of known smooth surfaces, for example planar or quadric surfaces. This approach is extremely efficient: because qualified surfaces can be described by a small number of parameters (e.g., horizontal tilt,

vertical tilt, and perpendicular distance for planar surfaces), there are only a few degrees of freedom, and it is relatively easy to find the optimal surface. Moreover, this approach can also be quite robust and tolerant of perturbations, for the same reason. Thus, when they adequately approximate true scene geometry, and efficiency is of concern, global parametric models can be a good choice (e.g., [3, 11]).
Unfortunately, true scene geometry often contains local details that cannot be represented by a global parametric model. Low-order parametric models simply lack sufficient degrees of freedom, and generally smooth over all but the coarsest details. High-order global parametric models are less well-behaved, and can produce reconstructions with large-amplitude ringing. Thus, when the reconstruction of accurate details is required, global parametric models are generally unsuitable.
At the other end of the spectrum are regularized, pixel-level lattices, where the complete disparity map is specified not by a few global parameters, but instead by the individual disparity values at every pixel. Each of these individual disparities is a separate degree of freedom, so fine detail can be represented faithfully, but some degree of interdependence between them is imposed, to encourage solutions that are regular, or smooth, in some sense. This is generally accomplished by defining a quantitative measure of departure from smoothness, and weakly minimizing it while simultaneously achieving the primary goal (typically color constancy). By varying the type and the strength of the regularization, one can usually balance the competing requirements for accurate details and overall smoothness.
However, a pixel-level lattice alone is obviously discrete, specifying disparity values only at its grid points, while the underlying surface has a value at every point. In order to determine the disparity at subpixel positions, one must therefore interpolate between pixel positions. Moreover, the method of interpolation should be smooth enough not to interfere with the chosen form of regularization; for example, one would not use nearest-neighbor interpolation while minimizing quadratic variation. On the other hand, smoothly interpolating between pixels amounts to fitting a spline surface to the grid of data points, with one control point per pixel, and with regularization acting upon the control points.
We generalize this idea of a regularized spline by allowing the spacing of the

control point grid to vary; for example, there could be one spline control point at every $n$-th pixel in each direction. This model subsumes both global parametric models and pixel-level lattices: the former correspond to splines where one patch covers the entire image, and the latter correspond to splines where there is a control point at every pixel.

3.2 Surfaces as 2D Splines

We model the disparity map of each surface as a bicubic B-spline. This gives us the flexibility to represent a wide range of slanted or curved surfaces with subpixel disparity precision, while ensuring that disparity values and gradients vary smoothly over the surface. In addition, splines define analytic expressions for disparity values and gradients everywhere, and in particular at subpixel positions, as required by our abstract mathematical model.
The control points of the bicubic B-spline are placed on a regular grid with fixed image coordinates (but variable disparity value). The resulting spline surface can be thought of as a linear combination of shifted basis functions, with shifts constrained to the grid. Mathematically, we restrict each $d[k]$ to take the form of a bicubic spline with control points on a fairly coarse, uniform rectangular grid:

$d[k](x, y, t) = \sum_{i,j} \Big( D[k][i, j, t] \; b(x - in,\; y - jn) \Big) \quad (3.1)$

where $b$ is the bicubic basis function, $D$ is the lattice of control points, and $n$ is the spacing thereof.
In general, the spacing of the grid of spline control points should be fine enough that surface shape details can be recovered, but not much finer than that, so that computational costs are not inflated unnecessarily. In our experiments, for each view (left and right) of each hypothesized surface, we use a fixed, $5 \times 5$ grid of control points, giving a $2 \times 2$ grid of spline patches that divide the image into equal quadrants.
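The sketch below (our illustration, not the dissertation's implementation) evaluates Equation (3.1) at an arbitrary subpixel position for one view of one surface; here the control-point grid D is assumed to be a small NumPy array (e.g., 5 x 5), n is the control-point spacing in pixels, and the basis b is evaluated in units of that spacing:

    import numpy as np

    def cubic_bspline(u):
        """Centered uniform cubic B-spline basis (support |u| < 2)."""
        u = abs(u)
        if u < 1.0:
            return 2.0 / 3.0 - u * u + 0.5 * u ** 3
        if u < 2.0:
            return (2.0 - u) ** 3 / 6.0
        return 0.0

    def disparity(D, n, x, y):
        """d(x, y) = sum over i, j of D[i, j] * b(x - i*n, y - j*n)."""
        val = 0.0
        for i in range(D.shape[0]):
            for j in range(D.shape[1]):
                val += (D[i, j] * cubic_bspline((x - i * n) / n)
                                * cubic_bspline((y - j * n) / n))
        return val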

3.3 Surface Non-triviality

This energy term, often called the data term in other literature, expresses the basic assumption of color constancy, penalizing any deviation therefrom. There are many possible ways to quantify deviation from color constancy; commonly used measures include the absolute difference, squared difference, truncated quadratic, and other robust norms [14, 34, 39]. For simplicity with differentiability, we use a scaled sum of squared differences:

$E_{\text{match}\,I} = \sum_p \begin{cases} g\big( I(m[k](p)) - I(p);\; A(p) \big) & \text{if } f(p) = k \text{ with } k > 0, \\ 0 & \text{otherwise,} \end{cases} \quad (3.2)$

where $g(v; A) = v^{\mathsf{T}} A \, v$, and where $A(p)$ is a space-variant measure of certainty defined in Appendix A. Qualitatively, $A(p)$ normalizes for the local contrast of $I$ around $p$.
Note that, ideally, $E_{\text{match}\,I}$ would be defined as an integral over all $p \in \mathcal{I}$. However, for computational convenience, we approximate the integral with a finite sum over discrete pixel positions, for this and other energy terms. This is a reasonable approximation if the summand is spatially smooth.

3.4 Surface Smoothness

Since we expect smooth surfaces to be more likely to occur, we would like to quantify and penalize any deviation from smoothness. Although our spline model already ensures some degree of surface smoothness, this inherent smoothness is limited to a spatial scale not much larger than that of the spline control point grid. On the other hand, we would like the option to encourage additional smoothness on a more global scale; hence we impose an additional energy term.
We take the class of perfectly smooth surfaces to be the set of planar surfaces (including both fronto-parallel and slanted planes). The usual measures of deviation

from planarity are the squared Laplacian,

$E = (u_{xx} + u_{yy})^2$

and the quadratic variation, or thin plate spline bending energy [16, 33, 77]:

$E = u_{xx}^2 + 2u_{xy}^2 + u_{yy}^2$

However, these measures have the disadvantage of using second derivatives, which can be susceptible to noise, and which tend to place too much emphasis on high-frequency, local deviations, relative to low-frequency, global deviations. Instead, in addition to restricting $d[k]$ to take the form of a spline, we add an energy term which, loosely speaking, is proportional to the global variance of the surface slope:

$E_{\text{smooth}\,d}[k] = \lambda_{\text{smooth}\,d} \sum_p \big\| \nabla d[k](p) - \text{mean}(\nabla d[k]) \big\|^2$

where the summation and the mean are both taken over all discrete pixel positions $p$, independent of the segmentation $f$. This energy term attempts to quantify deviations from global planarity. In our experiments, this energy term is given a very small weight, and mainly serves to accelerate the convergence of numerical optimization by shrinking the nullspace of the total energy function. This term does not prevent surfaces from being non-planar.

3.5 Surface Consistency

For perfect consistency, a surface should have left and right views that coincide exactly with one another, as specified in Equation (2.1). In some prior work, this constraint has been enforced directly through specified update equations, without an explicit energy term being given [51]. However, with left and right views simultaneously constrained each to have the form of Equation (3.1), exact coincidence is generally no longer possible. That is, a surface shape that conforms to Equation (3.1)

in one view will no longer necessarily conform to Equation (3.1) when warped to the other view. Therefore, to allow but discourage any non-coincidence, we propose the energy term

$E_{\text{match}\,d}[k] = \lambda_{\text{match}\,d} \sum_p \big( m[k](m[k](p)) - p \big)^2$

or equivalently,

$E_{\text{match}\,d}[k] = \lambda_{\text{match}\,d} \sum_p \big( d[k](p) + d[k](m[k](p)) \big)^2$

which, intuitively, measures the distance between the surfaces defined by the left and right views. Again, the summation is taken over all discrete pixel positions $p$, independent of the segmentation $f$.

3.6 Surface Optimization

Given a particular $k$, this chapter's subproblem is to minimize (or reduce) total energy by varying $d[k]$, while holding $f$ and the remaining $d[j]$ constant. Total energy is a sum of six terms; in this chapter, three of them were shown to depend smoothly on $d[k]$. In Chapter 4, the two terms $E_{\text{unassigned}}$ and $E_{\text{smooth}\,f}$ are shown to depend only on $f$, and thus can be considered constant for the present subproblem. In Chapter 5, the remaining term $E_{\text{match}\,f}$ is shown to depend smoothly on $d[k]$. Therefore, the total energy as a function of $d[k]$ is differentiable, and can be minimized with standard gradient-based numerical methods.
For convenience, we use Matlab's optimization toolbox. The specific algorithm chosen is a trust region method with a 2D quadratic subproblem. This is a greedy descent method; at each step, it minimizes a quadratic model within a 2D trust region spanned by the gradient and the Newton direction. Experimentally, this algorithm exhibited more reliable convergence than the quasi-Newton methods with line

searches, and although it requires the calculation of the Hessian, in our implementation, that expense is relatively small compared to the total computational requirements of solving the stereo problem.
In this chapter, we have shown how to minimize total energy by varying each $d[k]$ individually. As long as $f$ remains fixed, there is nothing to be gained by varying all $d[k]$ simultaneously. For each $k > 0$, we call minimizing over $d[k]$ a surface-fitting step, and consider it to be a building block towards a complete algorithm for minimizing total energy.
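The corresponding computation can be sketched as follows (ours; the dissertation's implementation uses Matlab's optimization toolbox, whereas this sketch uses SciPy's Newton-CG trust-region method, and energy_and_grad and hessian are placeholders for the terms assembled in Chapters 3 through 5):

    import numpy as np
    from scipy.optimize import minimize

    def surface_fitting_step(D_k, energy_and_grad, hessian):
        """One surface-fitting step: minimize total energy as a function of
        the spline control points D[k] of surface k, holding f and the other
        disparity maps fixed."""
        res = minimize(
            fun=lambda D: energy_and_grad(D)[0],   # total energy E(D)
            jac=lambda D: energy_and_grad(D)[1],   # analytic gradient dE/dD
            hess=hessian,                          # analytic Hessian of E
            x0=D_k.ravel(),
            method='trust-ncg')
        return res.x.reshape(D_k.shape)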

Chapter 4

Segmentation

In this chapter, we consider a restricted subproblem. Rather than simultaneously estimating both the 3D shape (given by $d[k]$) and the 2D support (given by $f$) of each surface, we consider the problem of estimating 2D support, given 3D shape. That is, supposing that the disparity maps $d[k]$ are known for all surfaces, how can the segmentation $f$ be found? Using this context, we explain our model of segmentation; formulate the three energy terms that encourage segmentation non-triviality, smoothness, and consistency; and discuss the minimization of these energy terms.

4.1 Segmentation by Graph Cuts

Boykov, Veksler, and Zabih [21] showed that certain labeling problems can be formulated as energy minimization problems and solved efficiently by repeatedly using maximum flow techniques to find minimum-cost cuts of associated network graphs. Generally speaking, such problems seek to assign one of a given set of labels to each of a given set of items, simultaneously optimizing not only for items' individual preferences for each label, but also for interactions between pairs of items, where interacting items additionally prefer to be assigned similar labels, for some measure of similarity.
Formally, let $\mathcal{L}$ be a finite set of labels, $\mathcal{P}$ be a finite set of items, and $\mathcal{N} \subseteq \mathcal{P} \times \mathcal{P}$ be the set of interacting pairs of items. The methods of [21] find a labeling $f$ that assigns exactly one label $f_p \in \mathcal{L}$ to each item $p \in \mathcal{P}$, subject to the constraint that

an energy function of the form

$E(f) = \sum_{(p,q) \in \mathcal{N}} V_{p,q}(f_p, f_q) + \sum_{p \in \mathcal{P}} D_p(f_p) \quad (4.1)$

be minimized. Individual energies $D_p$ should be nonnegative but can otherwise be arbitrary; interaction energies $V_{p,q}$ should be either semi-metric or metric, where $V$ is a semi-metric if it is symmetric and positive precisely for distinct labels, and where $V$ is a metric if it additionally satisfies the triangle inequality. Kolmogorov and Zabih [48] generalize these results, deriving necessary and sufficient conditions on the form of the energy $E$ in order for it to be minimizable with graph cut methods.
Given an energy function in the form of Equation (4.1) satisfying the relevant conditions, the methods of [21] are extremely effective at finding a minimizing labeling, both in terms of computational complexity (which does grow with the sizes of both $\mathcal{L}$ and $\mathcal{P}$) and in terms of the optimality of the final solution. For this reason, we have chosen to use these methods to solve our segmentation subproblem.
This generic formulation of an energy-minimizing labeling problem maps to our formulation of the stereo problem as follows: the labels are the integers $0 \ldots N$ that are the possible values of the segmentation function $f$, and the items are the pixels of each input image. This is in contrast to [21], in which the items are the pixels of a single reference image, and to [47], in which the items are pairs of potentially corresponding pixels. In our formulation, the individual preferences stem from testing color constancy at varying disparities, and the interactions stem from the expectations of smoothness and consistency.
Although graph cut methods could conceivably be used to estimate a layered model with transparency, in such a case, each image point could receive contributions from any subset of the set of all layers, making the number of possible labels exponential in the number of layers. Unfortunately, this would make computational costs prohibitively high for any significant number of layers, so for feasibility, our algorithm assumes that all objects are restricted piecewise to be either completely opaque or completely invisible.
Similarly, as implied by our principle of least discretization, ideally we would be

able to model the location of surface boundaries to arbitrary precision. However, because graph cut methods treat each item as being indivisible, any increase in precision would require corresponding increases in the number of items and hence in computational complexity. In fact, our algorithm should require only minor modifications in order to estimate boundary locations with any given (but fixed) subpixel precision. However, the usefulness of high precision is questionable, since accuracy would nonetheless be limited by the pixelization of the input images. Because of both this and the computational costs, in our algorithm, pixels are prohibited from being split spatially among several surfaces, but instead are constrained to be indivisible, forcing surface boundaries to lie on pixel boundaries.
In representing the continuous-domain segmentation function $f$ with a finite number of unknowns on a discrete grid of pixels, we essentially perform nearest-neighbor interpolation:

$f(x, y, t) = F(\text{round}(x), \text{round}(y), t)$

where $F$ is defined on an integer lattice.
We now further explain the preferences of individual pixels and the nature of pairwise interactions, and define the corresponding energy terms to be minimized.

4.2 Segmentation Non-triviality

The primary goal of the segmentation subproblem is to assign each pixel to the surface it fits best. This is accomplished by minimizing the deviation from color constancy,

$E_{\text{match}\,I} = \sum_p \begin{cases} g\big( I(m[k](p)) - I(p);\; A(p) \big) & \text{if } f(p) = k \text{ with } k > 0, \\ 0 & \text{otherwise.} \end{cases}$

This is identical to Equation (3.2), only now we consider it as a functional of $f$ with $m[k]$ being constant, instead of vice versa.

Note that we allow $f(p) = 0$ for some $p$, to model occluded pixels, which should remain unmatched and thus unassigned to any surface. However, the astute reader will notice that since $g(\cdot)$ is nonnegative, $E_{\text{match}\,I}$ is trivially minimized by $f(p) \equiv 0$. To discourage solutions with a large number of unassigned pixels, we add a fixed penalty for each unassigned pixel, in order to try to minimize the total area of unassigned regions:

$E_{\text{unassigned}} = \sum_p \begin{cases} \lambda_{\text{unassigned}} & \text{if } f(p) = 0, \\ 0 & \text{otherwise,} \end{cases}$

where $\lambda_{\text{unassigned}}$ is a constant. While it is not uncommon among stereo algorithms to have an occlusion penalty such as this one, it should be noted that this term is not solely for handling occlusions; for example, it also limits the influence of gross outliers in the input image data.
Thus, the underlying segmentation problem, for the moment ignoring smoothness and consistency, is to find the labeling $f$ that minimizes the sum $E_{\text{match}\,I} + E_{\text{unassigned}}$. Put into the form of Equation (4.1), this corresponds to the following definition of individual pixel preferences:

$D_p(f_p) = g\big( I(m[k](p)) - I(p);\; A(p) \big)$ for $f_p = k > 0$,
$D_p(0) = \lambda_{\text{unassigned}}$.

4.3 Segmentation Smoothness

In addition to minimizing pointwise costs, we would also like to encourage a simple segmentation with smooth boundaries of surface extents. There are several attributes which can be used to formalize this notion, including boundary length and curvature [16, 53]. We choose to minimize boundary length without separate regard for boundary curvature, because it is simpler to optimize, and works fairly well in practice.

In addition to this a priori expectation of simple boundaries, there is also an expectation that boundaries will generally be correlated with monocular image features (called static cues in the terminology of [21]). That is, when viewed in a two-dimensional image, the boundary of a surface will statistically look more like an edge than average. Thus, we would like to reward the placement of boundaries at edge-like image locations. Again, there are many ways to estimate edge likelihood, ranging from thresholded gradient magnitudes to color distribution distances [61]; we use a function of gradients and local contrast. This measure of edge likelihood at each point is then used to adjust the cost per unit length of boundaries passing through that point.
There is one more issue to consider: which boundaries do we want to minimize? Intuitively, minimizing the length of the boundary of any particular region will tend to shorten or remove any protrusions or indentations that are long and thin. This makes sense for regions that correspond to surfaces, because such narrow structures are generally less common than wider ones. However, occlusion regions are very typically long and thin; minimizing their boundary length would greatly hinder their accurate recovery.
To encourage a simple segmentation, we therefore would like to minimize the total length of the boundaries of each surface, with adjustments made to consider monocular cues. We define this energy term for each surface $k > 0$ accordingly:

$E_{\text{smooth}\,f}[k] = \sum_{p \text{ adjacent to } q} \begin{cases} w_s(p, q) & \text{if } f(p) = k \text{ xor } f(q) = k, \\ 0 & \text{otherwise,} \end{cases}$

where adjacency is according to 4-connectedness within each image. The value of the weighting function $w_s(p, q)$ should decrease as monocular cues become more indicative of an edge, but the minimum and maximum values of $w_s(p, q)$ should not be too far apart, since monocular cues should not override the goal of minimizing boundary length. We define the weighting function as follows:

$w_s(p, q) = \lambda_{\text{smooth}\,f} \Big( 1 + e^{-(\nabla I^{\mathsf{T}} A \, \nabla I)/\tau} \Big)$

for $p$ adjacent to $q$, where $\lambda_{\text{smooth}\,f}$ and $\tau$ are constants, and $\nabla I$ and $A$ are both evaluated at the subpixel position $(p + q)/2$.
Put into the form of Equation (4.1), $E_{\text{smooth}\,f}[k]$ corresponds to this penalty function for intra-image, pairwise interactions:

$V_{p,q}(f_p, f_q) = \sum_{k>0} w_s(p, q) \, T\big( f_p = k \text{ xor } f_q = k \big)$
$\qquad = w_s(p, q) \cdot \begin{cases} 0 & \text{if } f_p = f_q, \\ 1 & \text{if } f_p \neq f_q \text{ with } f_p = 0 \text{ or } f_q = 0, \\ 2 & \text{if } f_p \neq f_q \text{ with } f_p > 0 \text{ and } f_q > 0, \end{cases}$

for $p$ adjacent to $q$, where $T(\cdot)$ equals 1 if its argument is true, and equals 0 otherwise.

4.4 Segmentation Consistency

The energy terms we have so far given in this chapter together encourage color constancy and continuity. However, two-way uniqueness has yet to be enforced. For perfect consistency, the segmentation $f$ should satisfy Equation (2.2). To quantify and discourage any segmentation inconsistencies, we add an energy term for each surface $k > 0$:

$E_{\text{match}\,f}[k] = \sum_p \begin{cases} \lambda_{\text{match}\,f} & \text{if } f(p) = k \text{ xor } f(m[k](p)) = k, \\ 0 & \text{otherwise,} \end{cases}$

which approximates the area of inconsistent regions, where (2.2) does not hold. As before, this term should ideally be defined with an integral, but in this case, a naive finite sum is not an adequate substitute, as shall be explained in Chapter 5.
Put into the form of Equation (4.1), $E_{\text{match}\,f}[k]$ corresponds to this penalty function for inter-image, pairwise interactions:

$V_{p,q}(f_p, f_q) = \sum_{k>0} w_c[k](p, q) \, T\big( f_p = k \text{ xor } f_q = k \big),$

where, naively,

$w_c[k](p, q) = \lambda_{\text{match}\,f} \Big( T\big( m[k](p) = q \big) + T\big( m[k](q) = p \big) \Big),$

for $p$ and $q$ in corresponding scanlines.

4.5 Segmentation Optimization

This chapter's subproblem is to minimize (or reduce) total energy by varying $f$, while holding all $d[k]$ constant. Total energy is a sum of six terms, two of which ($E_{\text{smooth}\,d}$ and $E_{\text{match}\,d}$) are independent of $f$. In this chapter, the remaining four terms are written in the form of Equation (4.1); moreover, the penalty functions $V_{p,q}$ can be verified to be metrics. Therefore, the total energy as a function of $f$ can be optimized with graph cut methods.
We use a modified version of the expansion algorithm of [21]. This greedy algorithm is built from expansion moves, and gets its power from the generality of such moves: an expansion move on a label $k$ finds the best configuration reachable by relabeling any subset of pixels with $k$. Our modification is to precede each expansion with a contraction of the same label, which strictly enlarges the set of reachable configurations. We call such a contraction-expansion pair on any one label a segmentation step, and consider it to be a building block towards a complete algorithm for minimizing total energy.
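The cost tables handed to the graph-cut solver can be sketched as follows (our illustration; match_cost, w_s, and w_c stand for the quantities defined above, and in practice these costs are passed to an expansion-move solver in the style of [21] rather than evaluated exhaustively):

    # Individual preference of pixel p for a label (Section 4.2).
    def D_p(p, label, match_cost, lam_unassigned):
        return lam_unassigned if label == 0 else match_cost(p, label)

    # Intra-image smoothness interaction for 4-connected p, q (Section 4.3).
    def V_smooth(f_p, f_q, p, q, w_s):
        if f_p == f_q:
            return 0.0
        both_assigned = (f_p > 0) and (f_q > 0)
        return w_s(p, q) * (2.0 if both_assigned else 1.0)

    # Inter-image consistency interaction for p, q on corresponding
    # scanlines (Section 4.4); w_c is nonzero only near correspondence.
    def V_consistency(f_p, f_q, p, q, w_c):
        total = 0.0
        for k in {f_p, f_q}:
            if k > 0 and (f_p == k) != (f_q == k):   # the xor condition
                total += w_c(k, p, q)
        return total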

Chapter 5

Integration

In this chapter, we consider the general problem of simultaneously determining surface shape in the form of disparity maps, and surface support in the form of segmentation, when both are initially unknown. First, we consider the interaction between surface fitting in a continuous domain and segmentation in a discrete domain, and discuss the integration of the two into a single, well-behaved energy function. Then, we describe a complete algorithm for minimizing the total energy.

5.1 Segmentation Consistency, Revisited

Let us take another look at the issue of segmentation consistency, and how it causes an interdependence between the disparity maps $d[k]$ and the segmentation $f$. Consider one pair of corresponding scanlines $l$ and $r$, and one particular surface $k$. How are the labelings $f(l)$ and $f(r)$ and the disparity maps $d[k](l)$ and $d[k](r)$ of these scanlines related to one another?
For simplicity, suppose that

$(x \leq x_l) \Leftrightarrow \big( f(x, y_0, \text{left}) = k \big),$
$(x \leq x_r) \Leftrightarrow \big( f(x, y_0, \text{right}) = k \big);$

that is, the boundary of surface $k$ intersects the corresponding scanlines once each,

with the same orientation. Let us call $x_r - x_l$ the boundary disparity $d_b$: it is the disparity between the left and right labeling boundaries. Let us call $d[k](x_l, y_0, \text{left})$ the surface disparity $d_s$ at the boundary: it is the disparity given by the surface disparity map. Then it follows from Equation (2.2) that

$x_l - x_r = d[k](x_r, y_0, \text{right}),$
$d_b = x_r - x_l = d[k](x_l, y_0, \text{left}) = d_s.$

In other words, the segmentation consistency constraint requires that the boundary disparity and surface disparity be equal. To achieve this equality in an energy minimization framework, let us define

$E_0 = h_0(d_b - d_s)$ where $h_0(\Delta d) = |\Delta d|$.

Then, if arbitrary disparities are allowed, exact agreement between boundary disparity and surface disparity follows from minimizing this energy. This is in fact a special case of minimizing the area of the inconsistent region as described in Section 4.4.
However, because of pixel-wise segmentation, arbitrary boundary disparities are in fact not allowed; only integer boundary disparities are possible. If exact agreement were still enforced, this would imply that surface disparities at the boundary would also be restricted to be integral. Such a restriction would be undesirable, because one should not expect the position of objects in the world to be correlated with an arbitrary discretization of images into pixels.
Instead of exact agreement, then, only nearest-integer agreement between boundary disparity and surface disparity should be encouraged. That is, since for any surface disparity $d_s$, the closest possible boundary disparity is $d_b = \text{round}(d_s)$, it follows that any surface disparity within $\pm\tfrac{1}{2}$ pixel of a given boundary disparity should be considered equally good. This can be accomplished with a modified energy function:

$E = h(d_b - d_s)$ where $h(\Delta d) = \max\big( \tfrac{1}{2}, |\Delta d| \big)$.
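A minimal sketch of this clamped penalty (ours; the final assertion merely illustrates the flat region):

    # Clamped boundary-consistency penalty: surface disparities within
    # +/- 1/2 pixel of the integer boundary disparity are equally good,
    # so subpixel surface disparities are not forced toward integers.
    def h(delta_d):
        return max(0.5, abs(delta_d))

    d_s = 7.3                     # a real-valued surface disparity
    d_b = round(d_s)              # the closest admissible boundary disparity
    assert h(d_b - d_s) == 0.5    # the floor value is attained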

This energy for an isolated boundary is in turn generalized for arbitrary segmentations as

$E_{\text{match}\,f}[k] = \lambda_{\text{match}\,f} \sum_{p,q} \begin{cases} \hat{h}\big( |m[k](p) - q| \big) & \text{if } f(p) = k \text{ xor } f(q) = k, \\ 0 & \text{otherwise,} \end{cases}$

where $p$ and $q$ are on conjugate epipolar lines, and where

$\hat{h}(\Delta d) = \begin{cases} 1 & \text{for } |\Delta d| \leq \tfrac{1}{2}, \\ \tfrac{3}{2} - |\Delta d| & \text{for } \tfrac{1}{2} < |\Delta d| < \tfrac{3}{2}, \\ 0 & \text{for } |\Delta d| \geq \tfrac{3}{2}. \end{cases}$

Our implementation modifies $\hat{h}$ by rounding its corners (at $|\Delta d| = \tfrac{1}{2}$ and $|\Delta d| = \tfrac{3}{2}$) so that total energy remains differentiable with respect to $d[k]$.
Put into the form of Equation (4.1), this new $E_{\text{match}\,f}[k]$ again corresponds to

$V_{p,q}(f_p, f_q) = \sum_{k>0} w_c[k](p, q) \, T\big( f_p = k \text{ xor } f_q = k \big)$

as specified in Section 4.4, but now with

$w_c[k](p, q) = \lambda_{\text{match}\,f} \Big( \hat{h}\big( |m[k](p) - q| \big) + \hat{h}\big( |m[k](q) - p| \big) \Big),$

again for $p$ and $q$ in corresponding scanlines.

5.2 Overall Optimization

We have now defined each of the six energy terms in Table 2.1; total energy is the sum of these terms. We have also defined surface-fitting steps and segmentation steps, which are building blocks for the minimization of total energy. How are these building blocks put together to form a complete algorithm for overall optimization? We list the overall algorithm in Table 5.1, and explain its details in the remainder of this section.

1. Initialize hypothesis with fronto-parallel surfaces at integer disparity; set $f \equiv 0$.
2. Repeat:
   (a) Alternately apply segmentation and surface-fitting steps until progress becomes negligible.
   (b) For each hypothesized surface: attempt to merge it; until either some merge succeeds or all merges fail.
   until all merges fail.
3. Optionally post-process to fill in unmatched regions.

Table 5.1: Our overall optimization algorithm.

5.2.1 Iterative Descent

Total energy is a multi-variable functional of the unknowns $\{f, d[1], \ldots, d[N]\}$ (each of which is a function itself); each building block attempts to reduce total energy by changing one of these unknowns. As commonly occurs in multi-variable optimization, these unknowns are coupled. In particular, the segmentation $f$ is coupled with the disparity maps $\{d[1], \ldots, d[N]\}$, in that if total energy is sequentially optimized, first with respect to $f$, then with respect to $\{d[1], \ldots, d[N]\}$, the end result will generally not remain optimal with respect to $f$.
This coupling between disparity maps and segmentation is caused by the requirement for segmentation consistency: as explained in the previous section, there must be agreement between surface disparity as specified by $d[k]$, and boundary disparity as specified by $f$. To elaborate, let us again consider the previous section's example of a single boundary on a pair of scanlines. Suppose that, for this boundary, $d_s = d_b = d_0$ for some $d_0$. Then, in order to maintain consistency, adjusting either $d[k]$ or $f$ alone must leave this disparity unchanged; only by simultaneously adjusting

both $d[k]$ and $f$ can this disparity be changed without violating consistency. Unfortunately, neither surface-fitting steps nor segmentation steps allow such simultaneous adjustment.
Because of this coupling, small violations of segmentation consistency must temporarily be allowed as the solution evolves toward an optimum. This is accomplished by ensuring that $\lambda_{\text{match}\,f}$ is not too large. In this case, however, segmentation consistency nonetheless hinders evolution at boundaries, as surface disparities and boundary disparities are constrained to remain close to one another while being restricted to move only one at a time. Thus it is necessary to alternate iteratively between surface fitting and segmentation, to allow boundaries to evolve satisfactorily.
With this in mind, our general strategy for overall optimization is simple: given an initial hypothesis, repeatedly try all possible building blocks for the reduction of total energy, until none gives any further improvement. Using the surface-fitting and segmentation steps defined in Chapters 3 and 4 as our only building blocks, this works well in practice as long as the initial hypothesis is somewhat close to the final solution, with the surfaces of each roughly in one-to-one correspondence. Otherwise, if the initial hypothesis does not match reality, several scenarios can occur.

5.2.2 Merging Surfaces

If the initial hypothesis contains enough distinct surfaces, those initial surfaces will generally select among themselves in the process of competing for representation of the actual surfaces, with the stronger ones (with better initial fit) naturally pushing away the weaker ones (with poorer initial fit). It is possible that this situation will end with one hypothesized surface on each actual surface, with the extra hypothesized surfaces having been naturally driven to extinction (i.e., their support in image space becomes the empty set); this outcome yields the correct solution. It is also possible that two or more hypothesized surfaces will end in a deadlock over a single actual surface. This corresponds to a local minimum of the energy functional.
Because of this possibility for deadlock, we consider another building block, called a merge step. A merge step must first be preceded by the saving of a checkpoint of the current state. A merge step then begins with the forceful extinction of a

selected hypothesized surface. This involves relabeling all of that surface's supporting pixels with $f = 0$, and removing its disparity map from all subsequent calculations. At this point, the total energy will most likely have increased drastically, due to $E_{\text{unassigned}}$. The orphaned pixels are then immediately redistributed among the remaining surfaces by a series of segmentation steps. Further surface-fitting and segmentation steps are subsequently taken, until either the total energy falls below that of the saved checkpoint, in which case the merge succeeds and the checkpoint is discarded, or the total energy plateaus above that of the checkpoint, in which case the merge fails and the checkpoint is restored.
Merge steps are taken whenever no significant progress can be achieved through surface-fitting and segmentation steps alone, in order to reduce the chances of getting trapped in a poor local minimum. However, because merge steps generally require a large amount of speculative computation, this is the only time they are taken.
If the initial hypothesis contains too few surfaces, another scenario can arise. If one hypothesized surface comes to span several actual surfaces, and there are no extra hypothesized surfaces to take over support, it is possible that none of our building blocks will be able to remedy the situation. In such a scenario, our algorithm will produce an incorrect answer.

5.2.3 Initialization

In light of this vulnerability, it would be desirable to choose an initial hypothesis where no hypothesized surface spans more than one actual surface. This could be achieved by a sufficiently dense seeding over the areas of the left and right images, where the initial disparity of a seed can be determined by, say, windowed correlation. Density is sufficient if every actual surface gets at least one seed; otherwise, any unseeded surfaces would immediately grab a hypothesized surface, leading straight to the described vulnerability. Unfortunately, if there are any small objects in the scene, sufficient density by this standard would require a prohibitively large number of surfaces.
Because of this difficulty, our chosen method of initialization aims only to ensure that every actual surface be covered by some initial surface, regardless of whether

1. Take result of main, energy-minimization algorithm.
2. Repeat a few times:
   (a) Find all pixels in violation of segmentation consistency. Reassign them to $f = 0$.
   (b) For all pixels where $f = 0$:
       i. Look at the labels of the nearest pixels to the left and right where $f > 0$.
       ii. Reassign to whichever label corresponds to the smaller disparity.
   separated by:
   (a) Re-estimate disparity maps from segmentation.
   (b) Re-estimate segmentation from disparity maps.
   using modified parameters $\lambda_{\text{match}\,f} = 0$ and $\lambda_{\text{unassigned}} \gg 1$.

Table 5.2: Our post-processing algorithm.

that initial surface also covers a different actual surface. Our algorithm requires that, alongside the input image pair, a range of possible disparities also be specified; the initial hypothesis is then formed by placing one fronto-parallel surface at every integer disparity within that range, with all pixels initially unassigned ($f \equiv 0$).
This strategy works in practice because of two observations. First, left and right image regions will generally appear to match one another within a disparity range of $\pm 1$ around the true disparity, in that such a match would have lower energy than remaining unmatched. Second, given a stereo pair of images, the set of possible disparities for any object therein is usually relatively small.

5.2.4 Post-Processing

For some applications, it might be desirable to estimate a disparity value for every pixel, whether occluded in the opposite image or not. For such occasions, we give

an ad hoc method of post-processing to assign a disparity value to any pixels that are marked as occluded by our main algorithm. This post-processing algorithm is listed in Table 5.2. This procedure is a departure from the energy minimization framework, and in general will not converge if iterated indefinitely. However, in our experiments, using two or three iterations seemed to produce reasonably accurate results in practice.
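A skeleton of the overall loop of Table 5.1 (ours; the step functions and the state object are placeholders for the building blocks defined in Chapters 3 through 5, and their names and signatures are illustrative):

    def optimize(state, surfaces):
        """Alternate descent steps; when they stall, speculate on merges."""
        while True:
            # (a) Alternate segmentation and surface-fitting steps until
            # progress becomes negligible.
            improved = True
            while improved:
                improved = False
                for k in surfaces:
                    improved |= segmentation_step(state, k)
                    improved |= surface_fitting_step(state, k)
            # (b) Try merging each hypothesized surface; a failed merge
            # (energy plateaus above the checkpoint) is rolled back.
            merged = False
            for k in list(surfaces):
                checkpoint = state.copy()
                if try_merge(state, k, surfaces):
                    merged = True
                    break            # a merge succeeded; resume descent
                state = checkpoint   # roll back the speculative merge
            if not merged:
                return state         # all merges failed; we are done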

Chapter 6

Experimental Results

In Chapters 2 through 5, we presented a new algorithm for binocular stereopsis, and compared portions of its design to those of other stereo algorithms. We have implemented this algorithm using a combination of Matlab and C, and tested it on several non-synthetic stereo pairs available online [12, 50, 64]. In this chapter, we present the results of our experiments on these images, which together span a range of attributes (color and grayscale; well textured and untextured) and contain a variety of features (slanted planes and curved surfaces; small step edges, large step edges, and crease edges), testing the generality of our algorithm. We compare our results both quantitatively and qualitatively to those achieved by other algorithms.

6.1 Quantitative Evaluation Metric

Before we can make a quantitative comparison, we need a method for evaluating the accuracy of a stereo reconstruction. Szeliski and Zabih [73] propose two such methods. If ground truth is available, one can directly compare estimated and true disparity maps. If ground truth is not available, one can instead use the estimated disparity map to warp a reference image of the scene into a novel view, and compare the resulting image to the actual image from the novel viewpoint. The latter method is useful in such applications as image-based rendering, where small errors in textureless

regions are relatively unimportant, but the former method is more appropriate when accuracy in the scene structure itself is the desired goal.
Adopting the method of direct comparison with ground truth, Scharstein and Szeliski [64, 65] provide sample stereo pairs with ground truth, propose a metric for comparing results with ground truth, and tabulate results for twenty algorithms. To evaluate the accuracy of our algorithm, we use their method and data as well, to facilitate comparison with other algorithms. We note that their ground truth disparities are based on depth and not necessarily correspondence, for they include values for the entire reference (left) image, including at positions corresponding to regions not visible in the other (right) image.
Scharstein and Szeliski [65] evaluate results by measuring the percentage of bad pixels within various subsets of the entire reference image, where a bad pixel is one whose estimated and ground truth disparities differ by more than a given threshold. Some of the various subsets focus on particularly challenging image regions, including the set of pixels near any discontinuity, and the set of pixels where texture is weak or nonexistent; these allow one to evaluate the performance of an algorithm specifically in such difficult situations. To evaluate overall accuracy, however, one should include in the comparison as much of the reference image as feasible. The favored overall measure in [65] does indeed correspond to their most comprehensive image subset, which contains all pixels except border pixels within ten pixels of the image edge, and occluded pixels not visible in the right image according to the given ground truth. Scharstein and Szeliski explain the exclusion of occluded pixels as follows [65]:

    We exclude the occluded regions for now since few of the algorithms in this study explicitly model occlusions, and most perform quite poorly in these regions. As algorithms get better at matching occluded regions, however, we will likely focus more on the total matching error....

Given that estimating depth where no correspondence exists is fundamentally a different problem from estimating an existing correspondence, we agree that finding the correct disparity in the absence of correspondence is in some sense less critical than doing so in the presence thereof. However, we believe that it is nonetheless

undesirable for a stereo algorithm to return incorrect disparity values in such an occlusion region, as such a result could easily convey a misleading impression of reality. Instead, it would be best if an algorithm reported such occlusion regions as being without correspondence, or as having an unknown or undefined disparity. Unfortunately, appropriately handling such results with holes would require a more sophisticated, and more difficult to interpret, evaluation metric. Therefore, in our application of Scharstein and Szeliski's evaluation metric, we choose to include occluded and non-occluded pixels equally, and exclude only border pixels.
Scharstein and Szeliski exclusively use a fixed disparity error threshold of plus or minus one pixel to classify disparity estimates as good or bad. This makes sense with the selection of algorithms in their study, because few of them compute subpixel disparities. However, since we do compute subpixel disparities, and would like to evaluate the accuracy thereof, we consider a range of thresholds instead. In Section 6.2, we plot the fraction of bad pixels as a function of this threshold to give a somewhat more complete picture of the accuracy of any particular result. These plots are essentially cumulative histograms of disparity error.
The described error measure evaluates dense depth maps that have exactly one disparity value per pixel. It does not handle pixels without an assigned disparity value, nor does it pay any attention to labeled discontinuities, both of which our algorithm produces. Regarding the latter, we shall resort to qualitative observations, but regarding the former, we apply some simple post-processing (described in Subsection 5.2.4) to fill in the missing disparities, before evaluating the results with this error measure.

6.2 Quantitative Results

We tested our algorithm on the four stereo pairs used in Scharstein and Szeliski [65], available online at [64].
For these four stereo pairs, we obtained results for our algorithm using two sets of parameters. One set prefers a coarser segmentation, by giving a larger weight to

$E_{\text{smooth}\,f}$ relative to the other energy terms; the other prefers a finer segmentation, by giving a smaller weight to $E_{\text{smooth}\,f}$. All other parameters are unchanged between the two sets.
Due to space considerations, we show complete results for only one of the two sets of parameters for each stereo pair, in Figures 6.1, 6.3, 6.5, and 6.7. Table 6.1 shows the layout of these figures.

Row 1: (empty) | left input image | right input image
Row 2: ground truth disparity with occlusions (left image; grayscale) | ground truth disparity without occlusions (left image; grayscale) | disparity difference without occlusions (left image; colored)
Row 3: estimated disparity with occlusions (left image; grayscale) | estimated disparity without occlusions (left image; grayscale) | estimated segmentation without occlusions (left image; grayscale)
Row 4: estimated disparity with occlusions (right image; colored) | estimated disparity without occlusions (right image; colored) | estimated segmentation without occlusions (right image; grayscale)

Table 6.1: Layout for figures of complete results; see text for descriptions.

In this table, estimated results "with occlusions" refer to those obtained by the main, energy minimization algorithm without any post-processing, while estimated results "without occlusions" refer to those obtained after post-processing. Ground truth disparities "without occlusions" refer to the original disparity maps provided by [64], while ground truth disparities "with occlusions" are masked by the binary occlusion maps also provided by [64]. Grayscale disparity maps use the same scaling used in [64]. Color-coded disparity maps use a hue-modulated color map, to highlight isocontours and smaller disparity differences. Each hue cycle corresponds to a disparity increment of one pixel; as with the grayscale maps, lighter shades of any particular hue are closer to the viewer. Finally, the color-coded disparity difference images show positive and negative errors in red and blue, respectively.
However, in the graphs of bad pixels versus disparity error threshold

(Figures 6.2, 6.4, 6.6, and 6.8), we summarize results for both sets of parameters (labeled "coarse" and "fine"). We also compare our algorithm with the four algorithms which appear to be the most accurate among the nineteen remaining algorithms tabulated in [65].

6.2.1 Map

This grayscale stereo pair (Figure 6.1) shows two highly textured, moderately slanted, planar surfaces; a very simple boundary separates the two. The disparity difference between the two surfaces is relatively large, resulting in a significant occlusion region on either side of the foreground surface. Because texture is present throughout the images, and because the structure of the scene is so simple, it is relatively easy to determine the correct correspondence for image points for which a correspondence exists. However, the large occlusion region emphasizes any errors an algorithm might make in treating image points for which a correspondence does not exist. This is in fact what is observed for many of the algorithms in [65]: results are computed with quite good accuracy for most of the image, but errors in the occlusion region severely hurt overall performance.
Our algorithm correctly handles the large occlusion area. Before post-processing, the energy minimization algorithm alone takes a small bite out of the left edge of the foreground surface; a closer inspection of the input images suggests that this is likely because, at this location, the right image appears slightly darker than the left. In any case, post-processing patches this gap in the foreground surface, and places the final boundary everywhere within one pixel of its correct location.

6.2.2 Venus

This color stereo pair (Figure 6.3) shows five slanted planes with varying amounts of texture, including some regions with virtually no texture. Two of the surfaces are joined by a crease edge; the remaining boundaries are all step edges.
Although the textureless regions of this stereo pair do cause difficulties for some of the algorithms in [65], the most frequent error seems to occur in a region where texture

Figure 6.1: Map stereo pair: results for coarse parameter set. Key: see Table 6.1.

[Figure 6.2: Map stereo pair: error distributions (percent bad pixels versus error threshold, in pixels) for our algorithm and [9, 35, 47, 66]; curves: Lin-Tomasi (coarse), Lin-Tomasi (fine), Kolmogorov-Zabih, Birchfield-Tomasi (1998), Hirschmuller, Shao.]

is in fact present but aliased. At high magnification, it can be seen that toward the left center of the input images, the horizontal dotted lines consist of dots whose size appears to vary slightly between images. This discrepancy is apparently significant enough to overcome the continuity constraint in a majority of the algorithms tabulated in [65]. However, our algorithm is not fooled by this aliasing, most likely due to the consideration of uncertainty as described in Section A.2.
Regarding disparity estimation, our algorithm does very well, as can be seen from Figure 6.4. The largest error occurs at the corner of the V-shaped depth discontinuity, where our penalty for boundary length causes the tip of the V to be missed. This type of behavior is a typical result of minimizing boundary length without regard for boundary curvature and junctions. Regarding segmentation, our algorithm recovers only four distinct surfaces, missing the vertical crease in the sports page. This is likely because the area of the rightmost pane is too small, compared to the length of the crease; again, the penalty for boundary length dominates.
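The curves in Figures 6.2, 6.4, 6.6, and 6.8 are cumulative histograms of absolute disparity error; a sketch of their computation (ours, under the evaluation choices stated in Section 6.1: border pixels excluded, occluded pixels deliberately included):

    import numpy as np

    def bad_pixel_curve(d_est, d_true, thresholds, border=10):
        """Percent of bad pixels as a function of the disparity error
        threshold, over the whole image except a 10-pixel border."""
        err = np.abs(d_est - d_true)[border:-border, border:-border]
        return [100.0 * np.mean(err > t) for t in thresholds]

    # Example: thresholds spanning subpixel to one-pixel accuracy.
    # curve = bad_pixel_curve(d_est, d_true, np.linspace(0.25, 1.0, 4))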

Figure 6.3: Venus stereo pair: results for coarse parameter set. Key: see Table 6.1.

[Figure 6.4: Venus stereo pair: error distributions (percent bad pixels versus error threshold, in pixels) for our algorithm and [11, 35, 47, 69]; curves: Lin-Tomasi (coarse), Lin-Tomasi (fine), Kolmogorov-Zabih, Birchfield-Tomasi (1999), Hirschmuller, Sun-Shum-Zheng.]

6.2.3 Sawtooth

This color stereo pair (Figure 6.5) shows three slanted planes with varying amounts of texture; boundaries are all discontinuous in depth, and consist of many straight line segments joined by relatively sharp angles. As with the Venus stereo pair, our algorithm tends to truncate some of these angles, only more severely so for this stereo pair. Note that a large fraction of the area of these erroneous regions corresponds to occlusion regions not visible in the right image. Our algorithm does not truncate the upward-pointing tips, which are completely visible in both images.

6.2.4 Tsukuba

This color stereo pair (Figure 6.7), courtesy of Y. Ohta and Y. Nakamura of the University of Tsukuba, shows a lab scene consisting of various planar, smoothly curved, and non-smooth objects. Object boundaries are relatively complex, with

Figure 6.5: Sawtooth stereo pair: results for coarse parameter set. Key: see Table 6.1.

[Figure 6.6: Sawtooth stereo pair: error distributions (percent bad pixels versus error threshold, in pixels) for our algorithm and [11, 19, 35, 47]; curves: Lin-Tomasi (coarse), Lin-Tomasi (fine), Kolmogorov-Zabih, Birchfield-Tomasi (1999), Hirschmuller, Boykov et al. (expansion).]

several long and thin structures. These narrow structures (e.g., the tripod legs and handle, and the lamp arm and cord) are problematic for many of the algorithms in [65], tending to be lost because their area is insufficient to support their boundary length. Our algorithm also tends to over-simplify these extended boundaries, even with the parameter set that prefers a finer segmentation. However, it should be noted that the results of our main algorithm only, without post-processing, are significantly more accurate than those obtained after post-processing: it is the post-processing that causes the tripod handle to be missed and the lamp arm to be filled in.
It is also notable that while the given ground truth represents all surfaces as being fronto-parallel at integer disparity, our algorithm produces curved surfaces. In particular, our algorithm models the entire head as one curved surface, with the nose and chin being closest to the camera, and the left and right sides of the head being farther by approximately one half pixel of disparity.

Figure 6.7: Tsukuba stereo pair: results for fine parameter set. Key: see Table 6.1.

[Figure 6.8: Tsukuba stereo pair: error distributions (percent bad pixels versus error threshold, in pixels) for our algorithm and [18, 19, 47, 69]; curves: Lin-Tomasi (coarse), Lin-Tomasi (fine), Kolmogorov-Zabih, Sun-Shum-Zheng, Boykov et al. (swap), Boykov et al. (expansion).]

6.3 Qualitative Results

Among the four stereo pairs used in the benchmark by Scharstein and Szeliski, all but one are full-color. Furthermore, among the three pairs that consist of extended smooth surfaces, there is a total of one crease edge. To verify both that our algorithm can recover crease edges, and also that it does not need color, we therefore tested it on two of the grayscale stereo pairs used in Birchfield and Tomasi [11, 12].
The first two stereo pairs below, courtesy of Birchfield [12], show varying amounts of texture, and are each well approximated by five slanted planes. Most of the surface boundaries are crease edges, and those that are step edges have disparity jumps of only a few pixels, so there are relatively few occluded pixels overall.
The original versions of these two stereo pairs, as they appear in [12], show both minor geometric distortion and minor photometric variations between the left and right images. Here, we use modified versions, from which the photometric variations have been mostly removed. Because of this, our results are not necessarily directly

Note that the geometric distortion was left in place; this manifests itself in the apparent curvature of the floor, as recovered by our algorithm. Although we were unable to locate ground truth disparities for these scenes, we present the results of our algorithm for qualitative evaluation. Since ground truth is unavailable, Figures 6.9 and 6.10 for these images are laid out according to the two rightmost columns of Table 6.1.

Cheerios

In this stereo pair (Figure 6.9), disparity edges are fairly well marked by intensity edges. Birchfield and Tomasi's multiway cut algorithm [11] does very well on these images; its primary error is the splitting of the upper-left surface of the books into two nearly coplanar pieces. Our algorithm also does fairly well, but in contrast, makes essentially the opposite error: the Cheerios box is represented by only one surface. This error is analogous to that which occurred on the sports page of the Venus stereo pair.

Clorox

In this stereo pair (Figure 6.10), disparity edges are less well marked by intensity edges; furthermore, there are distracting, strong intensity edges that do not accompany disparity edges. Birchfield and Tomasi's multiway cut algorithm [11] fares more poorly, deceived by the misleading intensity edges into misplacing the crease edges there. Our algorithm does not have this problem, producing results similar to those obtained on the Cheerios images. Our relative immunity to such deception is likely due in part to the contrast-normalized edge-weighting function described in Section
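The exact form of that weighting function is defined earlier in the dissertation and is not reproduced in this chapter; as a generic, purely illustrative sketch of the idea, an edge can be scored by its gradient magnitude relative to the local intensity spread, so that boundary placement responds to locally salient edges rather than to absolute contrast:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def contrast_normalized_edge_strength(image, window=7, eps=1.0):
        """Gradient magnitude divided by the local standard deviation of
        intensity: a generic contrast-normalized edge score (illustrative
        only; not the dissertation's exact weighting function)."""
        img = image.astype(np.float64)
        gy, gx = np.gradient(img)
        grad = np.hypot(gx, gy)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img * img, size=window)
        local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        return grad / (local_std + eps)

Under such a scheme, a boundary penalty that decreases with this score makes segment boundaries cheap along locally salient edges, while edges that are strong only because the whole region is high-contrast gain no special advantage.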

[Figure 6.9: Cheerios stereo pair. Key: see Table 6.1.]

[Figure 6.10: Clorox stereo pair. Key: see Table 6.1.]

Umbrella

Among the six stereo pairs presented so far, only the Tsukuba set depicts a few curved surfaces, all of which are fairly small. Okutomi et al. [55] exhibit an image set that features a larger curved surface, but that surface is densely textured, which simplifies the recovery of its shape. To verify that our algorithm can reconstruct curved surfaces in the absence of dense texture, we therefore tested it on a stereo pair of our own creation. Although we were unable to obtain accurate ground truth disparities for this scene, we present the results of our algorithm for qualitative evaluation.

This stereo pair (Figure 6.11, also laid out according to the two rightmost columns of Table 6.1) shows five surfaces. The carpeted floor has some fine-grained, low-contrast, stochastic texture, and is planar with a large disparity gradient. Both checkerboard patterns can be considered to have high-contrast, quasi-periodic texture; like the floor, the right checkerboard is also planar, but the left checkerboard, mounted on a sheet of poster board, is slightly warped. The rear surface is a large, unmarked, virtually textureless sheet of cardboard that is more severely warped. The red and white Stanford umbrella is strongly curved, and rests on the floor, but does not contact the rear sheet of cardboard; it is composed of fairly large, virtually textureless panels joined together by high-contrast color edges that are not disparity edges. The combination of these features makes this stereo pair particularly challenging.

To create this stereo pair, we manually arranged these objects, along with a third checkerboard pattern much closer to the viewer. We photographed the scene from several angles with a 4-megapixel, Bayer-pattern CCD digital still camera, and chose two viewpoints whose images were visually in epipolar alignment. We corrected for lens distortion using intrinsic camera parameters obtained from additional photographs of a checkerboard pattern, and performed stereo rectification using extrinsic camera parameters derived from the checkerboard patterns in this scene. Finally, we cropped and resized the images down to a manageable number of pixels, in the process removing the third checkerboard pattern and reducing the artifacts caused by the Bayer-pattern CCD.

Although we do not have results from other algorithms for this stereo pair, we note that few of the algorithms tabulated in [65] are capable of representing smoothly curved surfaces with subpixel disparity values, and among those, fewer still readily reproduce sharp discontinuities in the disparity map.
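The undistort-and-rectify step of the acquisition pipeline just described can be sketched with OpenCV. This is a simplified stand-in, not the dissertation's actual code: it assumes one set of intrinsics (K, dist) serves both views and that the relative pose (R, T) between the two chosen viewpoints has already been estimated from the checkerboards.

    import cv2

    def rectify_pair(left, right, K, dist, R, T):
        """Undistort and row-align a stereo pair given shared intrinsics
        (K, dist) and the relative pose (R, T) between the two viewpoints.
        Simplified sketch; the dissertation derived these parameters from
        checkerboard photographs."""
        size = (left.shape[1], left.shape[0])  # (width, height)
        R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, size, R, T)
        map_l = cv2.initUndistortRectifyMap(K, dist, R1, P1, size, cv2.CV_32FC1)
        map_r = cv2.initUndistortRectifyMap(K, dist, R2, P2, size, cv2.CV_32FC1)
        left_rect = cv2.remap(left, map_l[0], map_l[1], cv2.INTER_LINEAR)
        right_rect = cv2.remap(right, map_r[0], map_r[1], cv2.INTER_LINEAR)
        return left_rect, right_rect

After remapping, corresponding scene points lie on the same image row in both views, reducing the correspondence search to a one-dimensional problem along each scanline.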

[Figure 6.11: Umbrella stereo pair. Key: see Table 6.1.]
