Medial Visual Fragments as an Intermediate Image Representation for Segmentation and Perceptual Grouping

I. M. Anonymous
My Department, My Institute, City, STATE zipcode

M. Y. Coauthor
Coauthor Department, Coauthor Institute, City, STATE zipcode

Abstract

We present a novel representation of images based on a decomposition into atomic patches which we call medial visual fragments. The medial axis/shock graph of a contour map partitions the image domain into non-overlapping regions, which together with the image information define the visual fragments. The main advantage of such a representation is that both contour and regional information are explicitly available, so that in the presence of partial evidence and ambiguity in maps indicating edges and regional homogeneity, both aspects can be used simultaneously for perceptual grouping of fragments into a coherent whole. Grouping of visual fragments is represented as a set of canonical transformations of visual fragments, the gap and loop transforms. The advantage of this representation over perceptual grouping using only contour continuity or only region grouping is demonstrated on synthetic and realistic examples.

1 Introduction

The drive to produce complete object boundaries directly from local image features cannot succeed in the presence of occlusion and other visual variations unless suitable stable intermediate representations are formed in the process. These representations must deal with partial evidence and ambiguity, whether region-based or edge-based. On the one hand, contours can have a diffused profile such that only impractically large edge operators can detect their presence, and contours of low contrast but good geometric continuity are salient yet can occasionally fall below the operator threshold, leading to gaps, etc.; see Figure 2. On the other hand, distinct regions are often merged when they are apparently similar in intensity or other attributes, while gradually changing image areas are often broken into distinct regions, Figure 2. There is a natural tradeoff between the number of false positives and the number of missed contours/regions, so that there is considerable ambiguity in the resulting low-level description of the image. In this paper we argue that the inherent representation of images beyond this level of description must include both region-based and edge-based attributes, as purely region-based or purely edge-based methods fundamentally limit the ability of the segmentation process to resolve local ambiguities. We discuss each case in turn.

Figure 1: This synthetically generated image illustrates several issues that plague region-based and contour-based representations: [A] diffused edges, [B] low-contrast edges, [C] textured regions, [D] contours broken up by gaps, and [E] internal contours.

The goal of region-based segmentation is to group pixels into coherent regions. The basic intermediate representation underlying this type of segmentation is a set of closed, connected, and non-overlapping regions, which we will call region fragments, such that each pixel belongs to a region fragment. Among all partitionings of the image domain into regions, the one that optimizes some measure of intra-region coherence (intensity, color, texture, etc.) and penalizes inter-region difference is selected.
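As a concrete and deliberately minimal illustration of this idea, the sketch below grows region fragments greedily over a grayscale image, admitting a pixel while its intensity stays within a tolerance of the running mean of the fragment. It only illustrates intra-region coherence, not any of the segmentation algorithms discussed in this paper, and the tolerance parameter is an assumption for the sake of the example.

```python
# Minimal region-growing sketch (illustrative only, not the method of this paper):
# pixels are grouped into region fragments while their intensity stays within a
# fixed tolerance of the running mean of the fragment being grown.
from collections import deque

import numpy as np


def grow_regions(img, tol=10.0):
    """Partition a grayscale image into region fragments by greedy growing."""
    h, w = img.shape
    labels = np.full((h, w), -1, dtype=int)      # -1 means "not yet assigned"
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            # Start a new region fragment from this seed pixel.
            queue = deque([(sy, sx)])
            labels[sy, sx] = current
            total, count = float(img[sy, sx]), 1
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        # Intra-region coherence test against the running mean.
                        if abs(float(img[ny, nx]) - total / count) <= tol:
                            labels[ny, nx] = current
                            total += float(img[ny, nx])
                            count += 1
                            queue.append((ny, nx))
            current += 1
    return labels


if __name__ == "__main__":
    # Two homogeneous halves separated by a step edge yield two region fragments.
    demo = np.zeros((8, 8))
    demo[:, 4:] = 100.0
    print(grow_regions(demo, tol=10.0))
```

A practical coherence measure would of course be richer (color, texture, scale), but the sketch makes the notion of a partition into region fragments concrete.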
Region-based algorithms differ by whether they are local or global, greedy or optimal, etc., ranging from traditional region growing to modern graph-theoretic segmentation using normalized cuts [14] and segmentation by weighted aggregation (SWA) [13]. The goal of contour-based segmentation is to group pixels into coherent closed contours which delineate the image into groups of objects. This typically involves a progression from local edge detection to linked contour fragments, and finally a closure of these contour fragments. The ambiguity in grouping distinct edge elements into contour fragments is typically handled in two stages, by first defining an affinity between pairs of edge elements (curvilinear continuity) and secondly by selecting, among those groupings, the one maximizing an overall measure; see [15] for a review. The contour fragments are then closed in a final step, e.g., by searching for cycles in a sparse graph representation [3, 6, 12].

Figure 2: [Left] Contours produced by a topology-based edge detector from [11] at different thresholds and aggressiveness. [Right] Regions produced by the SWA algorithm at scales 7, 8, and 9.

A fundamental drawback in using coherent regions as an intermediate representation is that the outer perimeter of each region serves two functionally distinct roles: portions of the perimeter are contours in the sense that they separate two distinct objects, while the remaining portions of the perimeter are simply delimiters of homogeneous patches. These delimiting contours are a result of the segmentation process and the competition among region fragments, not an indicator of the intrinsic properties of the image and the underlying objects. This subtle but rather significant distinction can be illustrated by examining a region-based segmentation of the synthetic image of Figure 1 in Figure 3. Observe how the mutual boundaries of some region pairs are simply where the coherence between the two cannot be reconciled; these boundaries are spurious in the sense that they cannot possibly be indicative of apparent or internal contours of an object. While one can minimize these spurious contours, this comes at the expense of losing some real contours: changes in the parameters controlling the coherence measure intended to merge across such boundaries (as is usually done to deal with over-segmentation) also remove some crucial boundaries. This tradeoff between over-segmented versus over-grouped segmentation is a fundamental aspect of the region-based approach and partially derives from the dual functional roles assigned to the perimeter of each closed region. A representation that allows for a distinction between the two types of region perimeters would also allow for perceptual grouping based on both geometric continuity of the boundary and similarity of the regions it bounds.

Figure 3: Fundamental problem with a region-based representation: the perimeter of each closed region serves two functional roles, one to denote a true contour and the other to delimit the area of coherence. Notice especially how the nearly uniform background has been fragmented into region fragments whose shared contours (red), as delimiters of coherence, are an artifact of the segmentation algorithm and do not reflect an intrinsic image attribute. Only the region boundaries on the background have been highlighted for clarity; as one can easily see, the interior of the object is equally plagued by this problem.

A fundamental drawback in using closed contours is a similar one: some portions of a closed contour separate two distinct regions while other portions act as a smooth continuation for the purpose of closure and connectivity, see Figure 4. In analogy to the region-based representation, there is a trade-off in setting the parameters controlling the linking process: some parameter settings link edges conservatively, leading to a reliable but over-fragmented set of contours embedded in numerous edges, while with other settings edges are aggressively linked to produce long smooth contours, which can lead to erroneous linking.
In either setting, when producing closed contours, the contours serve dual purposes, one as separators of distinct regions and the other as connectors for the sake of producing coherent (long, smooth, etc.) contours. Ideally, region fragments should differentiate between those portions of their perimeter that indicate image contours and those that delimit homogeneous patches. Similarly, contour fragments should be differentiated based on whether they separate distinct regions or whether they are simply connectors. This distinction has been implicit in approaches which assign roles to both regions and contours. For example, in [10] a PDE for anisotropic diffusion in regions bounded by an edge functional is coupled with a PDE defining an edge functional flanked by smooth regions. Our proposal here makes these dual roles explicit in a common representation.

Figure 4: One of the results from Figure 2 is used to illustrate a fundamental problem with a contour-based representation: some contours reflect significant image contours (blue) while others are an artifact of the linking process (red). Clearly, these two different types of contours serve different roles in producing a set of contour fragments, and they must be used differently when grouping contour fragments to form coherent wholes. The blue contours are produced by a conservative linking process while the red contours are produced by an aggressive linking process. A contour-based representation does not distinguish between these two types of contours.

A key advantage of a common representation of region fragments and contour fragments is the increased ability to deal with partial and ambiguous information. As an example of an image area that depicts partial contour and region evidence, consider region D in Figure 1. Only certain portions of the boundary can be clearly delineated by an edge process; see Figure 5 for a realistic example. In contrast, variations of intensity not related to any geometric structure can produce spurious edge responses (region C in Figure 1). It would require a major leap of faith to form closed regions from this edge-based local evidence alone. Similarly, a local regional homogeneity measurement indicates the existence of distinct elongated region fragments in region D of Figure 1, but grouping them into a coherent whole is beyond the capabilities of a purely region-based process. The simultaneous spatial arrangement of highly salient contour fragments supported by highly salient regional homogeneity is not represented by either the contour-based or the region-based fragments alone. This deficiency motivates our proposal for a novel type of image representation: the shock graph of a set of contour fragments represents their spatial arrangement and divides the space into regions indicated by pairs of contour fragments, Figure 6.

The contribution of this paper is in presenting a novel representation for images that is based on transforming the image coordinate system into a collection of coordinate systems, each defining a visual fragment. In Section 2, we formally define a fragment-based coordinate system so that each point of an image belongs to a fragment and is described in its coordinate system. This maps the image into a non-overlapping collage of image fragments. We then show in Section 3 that both edge-based and region-based visual grouping processes can be represented as operations on the medial visual fragments, with the clear advantage that the combined grouping process is more selective in the presence of ambiguity.

Figure 5: The contours on the vase are well defined. However, an edge process only produces fragmented contours. The gaps are large enough to render contour grouping impractical for bringing out all the perceived regions.
These fragmented contours can only be faithfully linked if the regional information between them is also used, via a region continuation operation.

2 Representing Images via Medial Visual Fragments

The journey from pixels to objects necessarily involves a progressive transformation of extrinsic image coordinates to match the intrinsic object coordinates. As a portion of the object is segregated from the background it must be represented as an object fragment with its own coordinate system. Such an atomic object fragment consists of boundary fragments bounding coherent region fragments. For example, the parallel strips in region D, at the bottom of Figure 1, lead to a series of broken contour fragments bounding regions that are roughly homogeneous in intensity. The view that a medial axis segment is really just a joint representation of a pair of contours suggests that the medial segment and its influence zone (defined by the burnt region in a grassfire analogy) constitute a fragment of an image. Informally, we define a visual fragment as the portion of the image in the influence zone of a shock segment arising from a pair of image contours. Formally, the shock graph of a contour map partitions the image into fragments with a well-defined transformation from each image point to a fragment and vice versa. These shock fragments are the atomic fragments which are then grouped to form visual fragments.

Figure 6: (a) The shock fragment is the influence zone of each shock segment. Each point P in this region has a closest contour point P+ which in turn maps to a shock point. Observe how part of the shock fragment perimeter is a real contour while the remaining portion is a delimiter of the region only. (b) A synthetic example showing a multitude of open contour fragments paired by shocks. (c) Shock fragments. (d) When the contour fragments are grouped, the shock fragments organize into visual fragments. The convention used throughout the paper is that contours are shown in blue, shocks in red, and visual fragments are filled with a random color.

Figure 7: This figure illustrates the coordinate system imposed by each shock fragment. Observe that the atomic shock fragments are immune to various visual transformations such as occlusion.

Definition 1 (Shock Fragment): In the grassfire analogy of Blum, the burnt region corresponding to each shock segment is a shock fragment, Figure 6(a). In other words, the shock fragment is the union of all pairs of rays (PP+, PP-) arising from all shock points P along the shock segment. Recall that the shock graph is a refinement of the medial axis resulting from a sense of shock flow. Each shock point is described by geometry (tangent and curvature) as well as dynamics (velocity and acceleration). Figure 7 shows the shock fragments of a closed curve. Observe that the coordinate system for each shock fragment is an intrinsic, object-based coordinate system. The proposition below shows that shock fragments, when applied to non-closed curves, partition the image.

Proposition 1: An image with an associated contour map (a set of curve segments) is partitioned into a set of shock fragments, i.e., for every point $P = (x, y)$ in the image there exists a shock segment $k$, described by a curve $\gamma_k$ parameterized by arclength $s \in [0, L]$ with a local coordinate system of tangent/normal axes $(\vec{T}(s), \vec{N}(s))$ and velocity $v(s)$, such that for some $t \in [0, r(s)]$,

$$ (x, y) = \gamma_k(s) + t \left( -\frac{1}{v}\,\vec{T} \pm \frac{\sqrt{v^2 - 1}}{v}\,\vec{N} \right). $$

The proof requires developing some background from [4], so we do not present it here. Figure 6(c) illustrates the shock fragments for the contour map sketched in Figure 6(b). Observe that a shock fragment represents an atomic fragment: when a pair of longer contours with some structure is considered, the area between the two contours, which is described by several shock fragments, can be grouped, Figure 6(d), leading to the notion of a (medial) visual fragment. See also Figure 8 for additional examples.

Definition 2 (Medial Visual Fragment): A visual fragment corresponding to a pair of contours is the union of all shock fragments that arise from both contours.

Figure 8: Visual fragments formed by various arrangements of contour fragments are illustrated: (a) between a pair of open contours, (b) enclosed by a single open contour, (c) enclosed by a single closed contour, and (d) enclosed by a pair of closed contours.
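As a rough computational illustration of this mapping (every image point associated with its closest contour point, and hence with that point's contour fragment), the sketch below uses a Euclidean distance transform to label each pixel with the fragment containing its nearest contour point. It only approximates the partition of Proposition 1 for visualization and is not the shock-graph construction of [4]; the encoding of the input `contour_labels` array (0 for non-contour pixels, a positive integer id per contour fragment) is an assumption made for this example.

```python
# Sketch: approximate the partition induced by a contour map by assigning each
# image point to its closest contour point (and hence to that point's contour
# fragment). Illustrative only; the paper's construction relies on the shock graph.
import numpy as np
from scipy import ndimage


def fragment_labels(contour_labels):
    """contour_labels: 2D int array, 0 = no contour, k > 0 = id of contour fragment k.

    Returns (labels, dist): for every pixel, the id of the contour fragment
    containing its nearest contour point, and the distance to that point
    (the grassfire arrival time).
    """
    off_contour = contour_labels == 0                    # True away from contours
    dist, (iy, ix) = ndimage.distance_transform_edt(off_contour,
                                                    return_indices=True)
    labels = contour_labels[iy, ix]                      # id of nearest contour pixel
    return labels, dist


if __name__ == "__main__":
    demo = np.zeros((6, 9), dtype=int)
    demo[1, 1:4] = 1      # contour fragment 1
    demo[4, 5:8] = 2      # contour fragment 2
    lab, d = fragment_labels(demo)
    print(lab)            # every pixel tagged by its nearest contour fragment
```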

3 Reasoning with Visual Fragments

In this section we show that visual fragments offer distinct advantages over region fragments and contour fragments. Recall that while region coherence is the main driving force in forming region fragments, good continuation and good form are the main driving forces in forming contour fragments. The presence of ambiguity in low-level feature maps, mainly due to numerous visual transformations, requires that both cues be used to disambiguate the low-level evidence into a coherent whole.

Figure 9 illustrates the drawbacks of using only good continuation in a contour map. The completion of the contour behind the occluder in Figure 9(a) and of the gap in Figure 9(b) are consistent with the underlying object, while the completion of the identical contours in Figure 9(c) is not; the situation in Figure 9(d) requires additional information. Clearly, completion based solely on the contour map can be misleading. What is lacking is the notion that not a contour but an object fragment needs to be continued and matched on the other side of the occluder. This would imply a pair of continuations. In Figure 9(a-b) one of the contours is interrupted while the second one is intact; in Figure 9(e-f) both contours need to be simultaneously completed. This form of joint contour continuity, however, does not prevent a cross-over in the completion contour, Figure 9(g), thus motivating the notion of skeletal continuity. We are proposing that skeletal continuity captures object fragment continuity better than individual contour continuity. A second aspect of object fragment continuity that is not captured by contour continuity is the good continuation of surface cues. Consider Figures 9(i-j), where the object fragment on the left can be geometrically continued equally well to either of the object fragments on the right. Clearly, good continuation of the region properties is the dominant factor in deciding the grouping of fragments in this case. This is precisely what region-based segmentation would do if a mechanism for crossing the occluder were somehow included.

Figure 9: (a),(b) The use of contours as an intermediate-level representation allows for grouping of edges across occlusions or gaps based on good continuation. However, while this is effective in these examples, when significant ambiguity is present, good continuation of contours is not sufficient. Rather, in matching object fragments, both boundaries must be successfully completed, depicting good silhouette continuity, as in (a,b,e,f) but not in (i,j). In addition they must satisfy surface cue continuity, or good continuation of the interior object properties (g,h,i). Shape continuity consists of both silhouette continuity and surface cue continuity, can be captured by the shock segments representing each visual fragment (j,k,l), and is used to disambiguate the grouping of visual fragments in (g-i).

Figure 10: Gradient-descent perceptual grouping. From [7].

Visual fragments are capable of representing both good skeletal continuity and surface continuity, as both the pair of contours and the region between them are represented, Figure 9(j-l). This is pursued more specifically below, where we consider several canonical situations and describe the grouping process as a transformation of the underlying visual fragments. This is an extension of the approach presented in [7], which only considered transformations of the shock graph affecting the contour map.
In this approach, the completion of a gap is cast as a well-defined transformation of the underlying shock graph (the gap transform), while the removal of a spurious edge element is another transform (the loop transform). A gradient descent approach selects the transformation of the shock graph that optimizes a move towards good form. Figure 10 illustrates this process by showing several samples along the transformation sequence. Ideally, all transformation sequences, or some viable subset, must be searched to select the optimal sequence, but this issue is not the focus of this paper. Here, we show that transformations of the visual fragments can integrate both contour continuity and regional coherence, and thus serve as a substrate for more powerful grouping.

Figure 11: [Top] Schematic illustration: gaps form in the contour as a result of edge detection, see region B in Figure 1. The gap transform considers both contour continuity and region continuity. [Middle] Grayscale example from region B in Figure 1, and its visual fragments. The salience of this transform as a grouping derives from both contour continuity and regional continuity. [Bottom] Transformed visual fragments together with the average intensity pasted on each fragment.

Gap Transform: Consider a contour which is broken into two contour fragments C1 and C2 as in Figure 11. The notion of a visual fragment allows for the inclusion of both contour continuity and surface continuation in the completion process: (i) the completion of the gap between C1 and C2 must satisfy good contour continuity, e.g., as defined via the elastica [2] or the Euler spiral [8]; (ii) the completion of the gap between C1 and C2 requires that the region fragments B, C, D, E, and F be merged on one side, and A, G, H, I, and J be merged on the other side of the contour. A measure of regional continuity can be based on region-based segmentation methods such as segmentation by weighted aggregation (SWA) [13]. Considering the inherent ambiguity in perceptual grouping, the addition of regional continuity to contour continuity should provide a powerful constraint for disambiguating possible continuations. Other realistic examples are shown in Figure 16.

Figure 12: Loops form around internal contours in three different ways: (a) around an open contour, (b) around a closed contour, and (c) around a boundary between two fragments. [Bottom] The resulting grouped fragment after a loop transform. A loop transform moves these contours into another layer; they are not removed. Any texture would likewise be moved into another layer and attached to the shape fragment as a surface property.

Loop Transform: The flip side of completing across a missing contour is the removal of a spurious contour. Spurious contours arise from texture elements, internal contours, and noise, among other factors. They are only spurious in the sense that they are not likely to be part of the boundary of the object fragment. A rather frequent example arises from a slight surface protrusion which leads to a single, non-closed ridge contour in the image [9]. The removal of such a contour in fact separates it from the existing contours and moves it to another layer. This layer is a map attached to each visual fragment that forms after removing the spurious contour. In this way, regular structural texture can be detected and represented in this layer. The visual fragment representation allows for an integration of both contour continuity and regional continuity in the process of determining the salience of a spurious edge. The cues for spuriousness of an edge are (i) poor continuation with neighboring edge elements and (ii) good surface continuity transversal to the spurious contour. Both measures are computable from the visual fragments which arise from this contour, Figures 12 and 13.
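To give a sense of how these two cues might be combined numerically, the following is a minimal sketch of a cost for a candidate gap transform: contour continuity is approximated by a discrete elastica-like bending energy of a proposed completion curve, and regional continuity by how homogeneous the fragments to be merged on each side are. The specific measures, the weights, and the inputs (a sampled completion curve and the mean intensities of the fragments on each side) are illustrative assumptions, not definitions taken from this paper.

```python
# Sketch of a combined cost for a candidate gap transform. Contour continuity is
# approximated by a discrete elastica energy (integral of squared curvature) of
# the proposed completion curve; regional continuity by how homogeneous the
# fragments merged on each side of the completed contour are.
import numpy as np


def elastica_energy(curve):
    """curve: (n, 2) array of points bridging the gap between two contour fragments."""
    d = np.diff(curve, axis=0)                         # chord vectors
    seg = np.linalg.norm(d, axis=1)
    t = d / seg[:, None]                               # unit tangents
    turn = np.arccos(np.clip((t[:-1] * t[1:]).sum(axis=1), -1.0, 1.0))
    ds = 0.5 * (seg[:-1] + seg[1:])
    kappa = turn / ds                                  # discrete curvature estimate
    return float(np.sum(kappa ** 2 * ds))              # ~ integral of curvature^2 ds


def regional_incoherence(fragment_means):
    """Spread of the mean intensities of the fragments merged on one side."""
    return float(np.std(np.asarray(fragment_means, dtype=float)))


def gap_transform_cost(curve, left_means, right_means, w_contour=1.0, w_region=0.05):
    """Lower cost = more plausible completion: good contour continuation AND
    coherent region fragments on each side of the completed contour."""
    region_term = regional_incoherence(left_means) + regional_incoherence(right_means)
    return w_contour * elastica_energy(np.asarray(curve, dtype=float)) + w_region * region_term


if __name__ == "__main__":
    # A nearly straight completion flanked by homogeneous fragments scores well.
    x = np.linspace(0.0, 10.0, 11)
    completion = np.stack([x, 0.05 * np.sin(x)], axis=1)
    print(gap_transform_cost(completion, [100.0, 101.5, 99.0], [42.0, 40.5, 41.0]))
```

The same two terms, with the roles of "merge" and "separate" exchanged, could be used to score a loop transform, where poor contour continuation and good surface continuity across the contour both argue for spuriousness.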
The transformation pertaining to the removal of this contour is to propagate waves from the shock loop representing it so as to complete the shocks corresponding to the contour map without the spurious element. The intensity in these regions is filled in using the recent exemplar-based filling-in process [1].

Figure 13: Examples of loop transforms applied to Figure 1 at various locations: (a) around an open contour, which signals that the contour is significant; (b) around a closed contour, where the regions around the contour can be grouped by pushing the closed region onto another layer; and (c) around a boundary between two fragments of similar texture produced by a region-based algorithm. This loop suggests that the regions can be merged by removing the common boundary.

Figure 14: Two types of occlusion transforms: [Left] one that has support from the complementary contour and [Right] another that needs to be jointly completed via skeletal continuity. The removal of the occluding object is equivalent to a loop transform; the loop is shown in yellow. Once the occluding object is removed, the situation reduces to a gap scenario. The gap transform closes the gap and fills in the texture from the participating visual fragments.

Figure 15: [Top row] An occluded torus image and its edges. [Middle row] [Left] Visual fragments produced from the edge map. [Right] Visual fragments produced after the occluder is removed onto another layer. The remaining transforms are gap transforms that link the individual visual fragments into a coherent whole. [Bottom row] [Left] Since there is more evidence to link up the outer edges of the torus, this happens first. The grouping of the contour fragments of the outer edge produces a grouping of the visual fragments into larger ones, as illustrated. [Right] After the outer edges are grouped, there is more evidence for the inner contour fragments to link up, producing a pair of closed contours. This shapes the fragment into a torus, as illustrated.

The loop transform is not restricted solely to the removal of spurious contours. When certain object fragments have formed, they can be moved to another layer as they have been occluding another object. For example, in Figure 14, once the occluding object is segregated it can be moved to another layer, and the region fragment in the area behind it can be completed by a loop transform of the visual fragments, leaving behind gaps in the process. This is based on the observation that perceptually the occluding edge belongs to the occluder [5].
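The exemplar-based filling-in step cited above [1] can be sketched in highly simplified form as follows: each unknown pixel on the fill front is synthesized by matching the known part of its neighborhood against all fully known patches of the image and copying the center of the best match. The window size, the greedy onion-peel ordering, and the exact-best selection are simplifying assumptions of this sketch; [1] uses randomized selection among near-best matches, among other differences.

```python
# Minimal sketch of exemplar-based filling-in in the spirit of [1]: unknown pixels
# are synthesized one at a time by matching the known part of their neighborhood
# against fully known patches of the image and copying the best match's center.
import numpy as np


def fill_in(img, mask, win=5):
    """img: 2D float array; mask: bool array, True where the value is to be filled."""
    out = img.astype(float).copy()
    known = ~mask
    r = win // 2
    h, w = out.shape
    while not known.all():
        progressed = False
        for y, x in np.argwhere(~known):
            y0, y1, x0, x1 = y - r, y + r + 1, x - r, x + r + 1
            if y0 < 0 or x0 < 0 or y1 > h or x1 > w:
                continue                          # skip border pixels in this sketch
            k = known[y0:y1, x0:x1]
            if not k.any():
                continue                          # not yet on the fill front
            target = out[y0:y1, x0:x1]
            best_val, best = np.inf, None
            for sy in range(r, h - r):
                for sx in range(r, w - r):
                    if not known[sy - r:sy + r + 1, sx - r:sx + r + 1].all():
                        continue                  # source patches must be fully known
                    src = out[sy - r:sy + r + 1, sx - r:sx + r + 1]
                    d = float(np.sum(((src - target) ** 2)[k]))  # compare known pixels
                    if d < best_val:
                        best_val, best = d, float(src[r, r])
            if best is not None:
                out[y, x] = best                  # copy the center of the best match
                known[y, x] = True
                progressed = True
        if not progressed:
            break                                 # nothing more can be filled
    return out


if __name__ == "__main__":
    texture = np.tile(np.array([[0.0, 1.0], [1.0, 0.0]]), (6, 6))   # checkerboard
    hole = np.zeros_like(texture, dtype=bool)
    hole[5:8, 5:8] = True
    filled = fill_in(np.where(hole, 0.0, texture), hole)
    print(np.round(filled[4:9, 4:9], 1))          # the regular texture is restored
```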

Figure 16: Some examples of the visual fragment transforms applied to the fruit basket image [Top]. [Left] Gap transform example 1. [Middle] Gap transform example 2. [Right] Loop transform example.

4 Conclusion

The perceptual organization of an image from pixels to objects uses many intermediate representations. At a first stage, a local grouping of pixels is described by low-level feature maps consisting of edges and regions. At a second stage, these form visual fragments, which are initially numerous atomic shock fragments but, as grouping proceeds, come to resemble object fragments and describe object parts and their 3D structure. Finally, whole objects are segregated by reasoning about their parts. This paper has proposed an intermediate representation of the image which spans the gap from low-level image descriptors to high-level object parts. The proposed visual fragments encode both edge-based and region-based properties, thus enabling a grouping process to simultaneously take advantage of both cues, which can potentially disambiguate grouping ambiguities.

References

[1] A. A. Efros and T. K. Leung. Texture synthesis by nonparametric sampling. In IEEE International Conference on Computer Vision, Corfu, Greece, September 1999.

[2] E. Sharon, A. Brandt, and R. Basri. Completion energies and scale. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(10), 2000.

[3] J. Elder and S. Zucker. Local scale control for edge detection and blur estimation. In ECCV, pages II:57-69, 1996.

[4]

[5] B. Gillam. New evidence for closure in perception. Perception and Psychophysics, 17(5).

[6] D. W. Jacobs. Robust and efficient detection of salient convex groups. IEEE Trans. Pattern Analysis and Machine Intelligence, 18(1):23-37, 1996.

[7]

[8]

[9] J. J. Koenderink and A. J. van Doorn. The shape of smooth objects and the way contours end. Perception, 11.

[10] M. Proesmans, E. Pauwels, and L. Van Gool. Coupled geometry-driven diffusion equations for low-level vision. In Geometry-Driven Diffusion in Computer Vision. Kluwer.

[11] C. Rothwell, J. Mundy, W. Hoffman, and V.-D. Nguyen. Driving vision by topology. In IEEE International Symposium on Computer Vision.

[12] S. Mahamud, L. R. Williams, K. K. Thornber, and K. Xu. Segmentation of multiple salient closed contours from real images. IEEE Trans. Pattern Analysis and Machine Intelligence, 25(4), 2003.

[13] E. Sharon, A. Brandt, and R. Basri. Segmentation and boundary detection using multiscale intensity measurements. In IEEE Conference on Computer Vision and Pattern Recognition.

[14] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8), 2000.

[15] L. Williams and K. Thornber. A comparison of measures for detecting natural shapes in cluttered backgrounds. IJCV, 34(2-3):81-96, November 1999.
