Medial Visual Fragments as an Intermediate Image Representation for Segmentation and Perceptual Grouping


I. M. Anonymous, M. Y. Coauthor
My Department, My Institute, City, STATE zipcode
Coauthor Department, Coauthor Institute, City, STATE zipcode

Abstract

We present a novel representation of images based on a decomposition into atomic patches which we call medial visual fragments. The medial axis/shock graph of a contour map partitions the image domain into non-overlapping regions, which together with the image information define the visual fragments. The main advantage of such a representation is that both contour and regional information are explicitly available, so that in the presence of partial evidence and ambiguity in maps indicating edges and regional homogeneity, both aspects can be used simultaneously for perceptual grouping of fragments into a coherent whole. Grouping of visual fragments is represented as a set of canonical transformations of visual fragments, the gap and loop transforms. The advantage of this representation over perceptual grouping using only contour continuity or only region grouping is demonstrated on synthetic and realistic examples.

1 Introduction

The drive to produce complete object boundaries directly from local image features cannot succeed in the presence of occlusion and other visual variations unless suitable stable intermediate representations are formed in the process. These representations must deal with partial evidence and ambiguity, whether they are region-based or edge-based. On the one hand, contours can have a diffused profile such that only impractically large edge operators can detect their presence, and contours of low contrast but good geometric continuity are salient but can occasionally fall below the operator threshold, leading to gaps; see Figure 2. On the other hand, distinct regions are often merged when they are apparently similar in intensity or other attributes, while gradually changing image areas are often broken into distinct regions, Figure 2. There is a natural tradeoff between the number of false positives and the number of missed contours/regions, so that there is considerable ambiguity in the resulting low-level description of the image. In this paper we argue that the inherent representation of images beyond this level of description must include both region-based and edge-based attributes, because purely region-based or purely edge-based methods fundamentally limit the ability of the segmentation process to resolve local ambiguities. We discuss each case in turn.

Figure 1: This synthetically generated image illustrates several issues that plague region-based and contour-based representations: [A] diffused edges, [B] low-contrast edges, [C] textured regions, [D] contours broken up by gaps, and [E] internal contours.

The goal of region-based segmentation is to group pixels into coherent regions. The basic intermediate representation underlying this type of segmentation is a set of closed, connected, and non-overlapping regions, which we will call region fragments, such that each pixel belongs to a region fragment. Among all the partitionings of an image domain into regions, the one that optimizes some measure of intra-region coherence (intensity, color, texture, etc.) and penalizes inter-region difference is selected.
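To make the selection criterion concrete, the following sketch compares two candidate region fragments by a toy coherence score; the intensity-based measure and its constants are our own illustrative simplification, not a criterion taken from any particular algorithm.

```python
import numpy as np

def merge_score(frag_a, frag_b):
    """Toy intra/inter-region criterion: low when the inter-region
    difference is small relative to the intra-region spread, i.e. when
    merging the two fragments would still yield a coherent region."""
    gap = abs(frag_a.mean() - frag_b.mean())      # inter-region difference
    spread = frag_a.std() + frag_b.std() + 1e-6   # intra-region coherence
    return gap / spread

# Two fragments drawn from the same smooth background merge cheaply;
# a fragment taken from a darker object does not.
background_a = np.array([0.50, 0.52, 0.49, 0.51])
background_b = np.array([0.48, 0.50, 0.53, 0.47])
dark_object  = np.array([0.10, 0.12, 0.09, 0.11])
print(merge_score(background_a, background_b))    # small -> merge
print(merge_score(background_a, dark_object))     # large -> keep separate
```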
Region-based algorithms differ in whether they are local or global, greedy or optimal, and so on, ranging from traditional region growing to modern graph-theoretic segmentation using normalized cuts [14] and segmentation by weighted aggregation (SWA) [13]. The goal of contour-based segmentation is to group pixels into coherent closed contours which delineate the image into groups of objects. This typically involves a progression from local edge detection to linked contour fragments, and finally a closure of these contour fragments. The ambiguity in grouping distinct edge elements into contour fragments is typically handled in two stages: first by defining an affinity between pairs of edge elements (curvilinear continuity), and second by selecting among those groupings the one that maximizes an overall measure; see [15] for a review. The contour fragments are then closed in a final step, e.g., by searching for cycles in a sparse graph representation [3, 6, 12].
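The pairwise affinities used in this first stage reward pairs of edge elements that lie on a smooth common curve. The sketch below is a simplified co-circularity measure of our own; the exponential form and the constants are placeholders, not the specific measures compared in [15].

```python
import numpy as np

def curvilinear_affinity(p1, theta1, p2, theta2, sigma=0.5, scale=20.0):
    """Toy affinity between two oriented edge elements (position p,
    tangent orientation theta): high when they are nearby and little
    bending is needed to join them with a smooth curve."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    dist = np.linalg.norm(d) + 1e-9
    phi = np.arctan2(d[1], d[0])                       # orientation of the joining chord
    wrap = lambda a: np.abs(np.arctan2(np.sin(a), np.cos(a)))
    bending = wrap(theta1 - phi) + wrap(theta2 - phi)  # total turning to join the edgels
    return np.exp(-bending**2 / (2 * sigma**2)) * np.exp(-dist / scale)

# Nearly collinear edgels score high; a perpendicular pair scores low.
print(curvilinear_affinity((0, 0), 0.0, (5, 0.5), 0.1))
print(curvilinear_affinity((0, 0), 0.0, (5, 0.5), np.pi / 2))
```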

Figure 2: [Left] Contours produced by a topology-based edge detector from [11] at different thresholds and levels of aggressiveness. [Right] Regions produced by the SWA algorithm at scales 7, 8 and 9.

A fundamental drawback in using coherent regions as an intermediate representation is that the outer perimeter of each region serves two functionally distinct roles: portions of the perimeter are contours in the sense that they separate two distinct objects, while the remaining portions of the perimeter are simply delimiters of homogeneous patches. These delimiting contours are a result of the segmentation process and the competition among region fragments, not an indicator of the intrinsic properties of the image and the underlying objects. This subtle but significant distinction can be illustrated by examining a region-based segmentation of the synthetic image of Figure 1, shown in Figure 3. Observe how the mutual boundaries of some region pairs are simply where the coherence between the two cannot be reconciled; these boundaries are spurious in the sense that they cannot possibly be indicative of apparent or internal contours of an object. While one can minimize these spurious contours, this comes at the expense of losing some real contours: changes in the parameters controlling the coherence measure intended to merge across such boundaries (as is usually done to deal with over-segmentation) also remove some crucial boundaries. This tradeoff between over-segmented and over-grouped segmentation is a fundamental aspect of the region-based approach and derives in part from the dual functional roles assigned to the perimeter of each closed region. A representation that distinguishes between the two types of region perimeter would also allow for perceptual grouping based on both the geometric continuity of the boundary and the similarity of the regions it bounds.

Figure 3: Fundamental problem with a region-based representation: the perimeter of each closed region serves two functional roles, one to denote a true contour and the other to delimit the area of coherence. Notice in particular how the nearly uniform background has been fragmented into region fragments whose shared contours (red), as delimiters of coherence, are an artifact of the segmentation algorithm and do not reflect an intrinsic image attribute. Only the region boundaries on the background have been highlighted for clarity; the interior of the object is equally plagued by this problem.

A fundamental drawback in using closed contours is a similar one: some portions of a closed contour separate two distinct regions, while other portions act as a smooth continuation for the purpose of closure and connectivity, see Figure 4. In analogy to the region-based representation, there is a trade-off in setting the parameters controlling the linking process: some parameter settings link edges conservatively, leading to a reliable but over-fragmented set of contours embedded in numerous edges, while with other settings edges are linked aggressively to produce long smooth contours, which can lead to erroneous links.
In either setting, when producing closed contours, the contours serve dual purposes, one as separators of distinct regions and the other as connectors for the sake of producing coherent (long, smooth, etc.) contours. Ideally, region fragments should differentiate between those portions of their perimeter that indicate image contours and those that delimit homogeneous patches.

Figure 4: One of the results from Figure 2 is used to illustrate a fundamental problem with a contour-based representation: some contours reflect significant image contours (blue) while others are an artifact of the linking process (red). Clearly, these two types of contours serve different roles in producing a set of contour fragments, and they must be used differently when grouping contour fragments to form coherent wholes. The blue contours are produced by a conservative linking process while the red contours are produced by an aggressive linking process. A contour-based representation does not distinguish between these two types of contours.

Similarly, contour fragments should be differentiated based on whether they separate distinct regions or whether they are simply connectors. This distinction has been implicit in approaches which assign roles to both regions and contours. For example, in [10] a PDE for anisotropic diffusion in regions bounded by an edge functional is coupled with a PDE defining an edge functional flanked by smooth regions. Our proposal here makes these dual roles explicit in a common representation.

A key advantage of a common representation of region fragments and contour fragments is the increased ability to deal with partial and ambiguous information. As an example of an image area that presents partial contour and region evidence, consider region D in Figure 1. Only certain portions of the boundary can be clearly delineated by an edge process; see Figure 5 for a realistic example. In contrast, variations of intensity not related to any geometric structure can produce spurious edge responses (region C in Figure 1). It would require a major leap of faith to form closed regions from this edge-based local evidence alone. Similarly, a local regional homogeneity measurement indicates the existence of distinct elongated region fragments in region D of Figure 1, but grouping them into a coherent whole is beyond the capabilities of a purely region-based process. The simultaneous spatial arrangement of highly salient contour fragments supported by highly salient regional homogeneity is not represented by either the contour-based or the region-based fragments alone. This deficiency motivates our proposal for a novel type of image representation: the shock graph of a set of contour fragments represents their spatial arrangement and divides the space into regions indicated by pairs of contour fragments, Figure 6.

The contribution of this paper is a novel representation for images based on transforming the image coordinate system into a collection of coordinate systems, each defining a visual fragment. In Section 2, we formally define a fragment-based coordinate system so that each point of an image belongs to a fragment and is described in its coordinate system. This maps the image into a non-overlapping collage of image fragments. We then show in Section 3 that both edge-based and region-based visual grouping processes can be represented as operations on the medial visual fragments, with the clear advantage that the combined grouping process is more selective in the presence of ambiguity.

Figure 5: The contours on the vase are well defined. However, an edge process produces only fragmented contours, and the gaps are large enough to render contour grouping impractical for bringing out all the perceived regions. These fragmented contours can only be faithfully linked if the regional information between them is also used, via a region continuation operation.
2 Representing Images via Medial Visual Fragments

The journey from pixels to objects necessarily involves a progressive transformation of extrinsic image coordinates to match the intrinsic object coordinates. As a portion of the object is segregated from the background, it must be represented as an object fragment with its own coordinate system. Such an atomic object fragment consists of boundary fragments bounding coherent region fragments. For example, the parallel strips in region D at the bottom of Figure 1 lead to a series of broken contour fragments bounding regions that are roughly homogeneous in intensity. The view that a medial axis segment is really just a joint representation of a pair of contours suggests that the medial segment and its influence zone (defined by the burnt region in a grassfire analogy) constitute a fragment of the image.

Figure 6: (a) The shock fragment is the influence zone of each shock segment. Each point P in this region has a closest contour point P+, which in turn maps to a shock point. Observe how part of the shock fragment perimeter is a real contour while the remaining portion is a delimiter of the region only. (b) A synthetic example showing a multitude of open contour fragments paired by shocks. (c) Shock fragments. (d) When the contour fragments are grouped, the shock fragments organize into visual fragments. The convention used throughout the paper is that contours are shown in blue, shocks in red, and visual fragments are filled with a random color.

Informally, we define a visual fragment as the portion of the image in the influence zone of a shock segment arising from a pair of image contours. Formally, the shock graph of a contour map partitions the image into fragments with a well-defined transformation from each image point to a fragment and vice versa. These shock fragments are the atomic fragments which are then grouped to form visual fragments.

Figure 7: The coordinate system imposed by each shock fragment. Observe that the atomic shock fragments are immune to various visual transformations such as occlusion.

Definition 1 (Shock Fragment): In the grassfire analogy of Blum, the burnt region corresponding to each shock segment is a shock fragment, Figure 6(a). In other words, the shock fragment is the union of all pairs of rays (PP+, PP-) arising from all shock points P along the shock segment.

Recall that the shock graph is a refinement of the medial axis resulting from a sense of shock flow. Each shock point is described by geometry (tangent and curvature) as well as dynamics (velocity and acceleration). Figure 7 shows the shock fragments of a closed curve. Observe that the coordinate system of each shock fragment is an intrinsic, object-based coordinate system. The proposition below shows that shock fragments, when applied to non-closed curves, partition the image.

Proposition 1: An image with an associated contour map (a set of curve segments) is partitioned into a set of shock fragments, i.e., for every point P(x, y) in the image there exists a shock segment k, described by a curve γ_k parameterized by arclength s ∈ [0, L], with a local tangent/normal coordinate frame (T(s), N(s)) and velocity v(s), such that for some t ∈ [0, r(s)],

(x, y) = γ_k(s) + t ( (1/v) T(s) ± (√(v² − 1)/v) N(s) ).

The proof requires developing background from [4], so we do not present it here. Figure 6(c) illustrates the shock fragments for the contour map sketched in Figure 6(b). Observe that a shock fragment represents an atomic fragment: when a pair of longer contours with some structure is considered, the area between the two contours, which is described by several shock fragments, can be grouped, Figure 6(d), leading to the notion of a (medial) visual fragment. See also Figure 8 for additional examples.

Definition 2 (Medial Visual Fragment): A visual fragment corresponding to a pair of contours is the union of all shock fragments that arise from both contours.

Figure 8: Visual fragments formed by various arrangements of contour fragments: (a) between a pair of open contours, (b) enclosed by a single open contour, (c) enclosed by a single closed contour and (d) enclosed by a pair of closed contours.
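The decomposition of Proposition 1 can be approximated on a discrete contour map by assigning every pixel to its nearest contour element, which is the "influence zone" half of the construction; splitting each zone further along the shock segments would complete it. The sketch below uses a Euclidean distance transform for this assignment; the function name and labeling scheme are ours, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def nearest_contour_partition(contour_labels):
    """Assign every pixel to the contour fragment whose points lie
    closest to it.  `contour_labels` is an integer array with 0 for
    background pixels and k > 0 identifying the contour fragment that
    passes through a pixel."""
    background = contour_labels == 0
    # For each pixel: distance to, and coordinates of, the nearest contour pixel.
    _, (iy, ix) = distance_transform_edt(background, return_indices=True)
    return contour_labels[iy, ix]      # each pixel inherits its nearest fragment id

# Toy contour map with two short horizontal contour fragments.
cmap = np.zeros((9, 9), dtype=int)
cmap[2, 1:4] = 1
cmap[6, 5:8] = 2
print(nearest_contour_partition(cmap))  # the image splits into two influence zones
```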

3 Reasoning with Visual Fragments

In this section we show that visual fragments offer distinct advantages over region fragments and contour fragments. Recall that while region coherence is the main driving force in forming region fragments, good continuation and good form are the main driving forces in forming contour fragments. The presence of ambiguity in low-level feature maps, mainly due to numerous visual transformations, requires that both cues be used to disambiguate the low-level evidence into a coherent whole.

Figure 9 illustrates the drawbacks of using only good continuation in a contour map. The completion of the contour behind the occluder in Figure 9(a) and across the gap in Figure 9(b) is consistent with the underlying object, while the completion of the identical contours in Figure 9(c) is not; the situation in Figure 9(d) requires additional information. Clearly, completion based solely on the contour map can be misleading. What is lacking is the notion that it is not a contour but an object fragment that needs to be continued and matched on the other side of the occluder. This implies a pair of continuations: in Figures 9(a-b) one of the contours is interrupted while the second one is intact; in Figures 9(e-f) both contours need to be completed simultaneously. This form of joint contour continuity, however, does not prevent a cross-over in the completion contour, Figure 9(g), thus motivating the notion of skeletal continuity. We propose that skeletal continuity captures object fragment continuity better than individual contour continuity. A second aspect of object fragment continuity that is not captured by contour continuity is the good continuation of surface cues. Consider Figures 9(i-j), where the object fragment on the left can be geometrically continued equally well to either of the object fragments on the right. Clearly, good continuation of the region properties is the dominant factor in deciding the grouping of fragments in this case. This is precisely what region-based segmentation would do if a mechanism for crossing the occluder were somehow included.

Figure 10: Gradient descent perceptual grouping, from [7].

Figure 9: (a),(b) The use of contours as an intermediate-level representation allows for grouping of edges across occlusions or gaps based on good continuation. While this is effective in these examples, when significant ambiguity is present good continuation of contours is not sufficient. Rather, in matching object fragments, both boundaries must be successfully completed, depicting good silhouette continuity, as in (a,b,e,f) but not in (i,j). In addition, they must satisfy surface cue continuity, or good continuation of the interior object properties (g,h,i). Shape continuity consists of both silhouette continuity and surface cue continuity, can be captured by the shock segments representing each visual fragment (j,k,l), and is used to disambiguate the grouping of visual fragments in (g-i).

Visual fragments are capable of representing both good skeletal continuity and surface continuity, since both the pair of contours and the region between them are represented, Figure 9(j-l). This is pursued more specifically below, where we consider several canonical situations and describe the grouping process as a transformation of the underlying visual fragments. This is an extension of the approach presented in [7], which considered only transformations of the shock graph affecting the contour map.
In this approach, the completion of a gap is cast as a well-defined transformation of the underlying shock graph (the gap transform), while the removal of a spurious edge element is another transform (the loop transform). A gradient descent approach selects the transformation of the shock graph that optimizes a move towards good form. Figure 10 illustrates this process by showing several samples along the transformation sequence. Ideally, all transformation sequences, or some viable subset, would be searched to select the optimal sequence, but this issue is not the focus of this paper. Here, we show that transformations of the visual fragments can integrate both contour continuity and regional coherence, and thus serve as a substrate for more powerful grouping.
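As a minimal sketch of this gradient-descent selection, the loop below repeatedly applies whichever candidate gap or loop transform most improves a "good form" cost and stops at a local minimum. The callables `propose_transforms`, `cost` and the transforms' `apply` method are assumed interfaces standing in for the shock-graph machinery of [7], not an implementation of it.

```python
def greedy_grouping(fragments, propose_transforms, cost):
    """Greedy (gradient-descent-like) grouping: at each step evaluate the
    candidate transforms of the current visual fragments and apply the one
    that lowers the good-form cost the most; stop when none improves it."""
    current = cost(fragments)
    while True:
        moves = [(cost(t.apply(fragments)), t) for t in propose_transforms(fragments)]
        if not moves:
            return fragments
        best_cost, best = min(moves, key=lambda m: m[0])
        if best_cost >= current:       # local minimum of the good-form cost
            return fragments
        fragments, current = best.apply(fragments), best_cost
```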

Figure 12: Loops form around internal contours in three different ways: (a) around an open contour, (b) around a closed contour and (c) around a boundary between two fragments. [Bottom] The resulting grouped fragment after a loop transform. A loop transform moves these contours into another layer; they are not removed. Any texture is likewise moved into another layer and is attached to the shape fragment as a surface property.

Figure 11: [Top] Schematic illustration: gaps form in the contour as a result of edge detection, see region B in Figure 1. The gap transform considers both the contour continuity and the region continuity. [Middle] Grayscale example from region B in Figure 1, and its visual fragments. The salience of this transform as a grouping derives from both contour continuity and regional continuity. [Bottom] Transformed visual fragments together with the average intensity pasted on each fragment.

Gap Transform: Consider a contour which is broken into two contour fragments C1 and C2, as in Figure 11. The notion of a visual fragment allows for the inclusion of both contour continuity and surface continuity in the completion process: (i) the completion of the gap between C1 and C2 must satisfy good contour continuity, e.g., as defined via the Elastica [2] or the Euler spiral [8]; (ii) the completion of the gap between C1 and C2 requires that the region fragments B, C, D, E and F be merged on one side of the contour, and A, G, H, I and J on the other. A measure of regional continuity can be based on region-based segmentation methods such as segmentation by weighted aggregation (SWA) [13]. Considering the inherent ambiguity in perceptual grouping, the addition of regional continuity to contour continuity should provide a powerful constraint for disambiguating possible continuations. Other realistic examples are shown in Figure 16.

Loop Transform: The flip side of completing across a missing contour is the removal of a spurious contour. Spurious contours arise from texture elements, internal contours, and noise, among other factors. They are spurious only in the sense that they are not likely to be part of the boundary of the object fragment. A rather frequent example arises from a slight surface protrusion, which leads to a single, non-closed ridge contour in the image [9]. The removal of such a contour in fact consists of separating it from the existing contour map and moving it to another layer. This layer is a map attached to each visual fragment that forms after removing the spurious contour; in this way, regular structural texture can be detected and represented in this layer. The visual fragment representation allows for an integration of both contour continuity and regional continuity in the process of determining the salience of a spurious edge. The cues for spuriousness of an edge are (i) poor continuation with neighboring edge elements and (ii) good surface continuity transversal to the spurious contour. Both measures are computable from the visual fragments which arise from this contour, Figures 12 and 13.
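The two cues can be folded into a single score; the product form below, the interval conventions and the example thresholds are our own illustrative choices, not the measure used in the paper.

```python
import numpy as np

def spuriousness(edge_continuity, left_stats, right_stats):
    """Toy combination of the two loop-transform cues: an internal contour
    is a candidate for removal when (i) it continues poorly into
    neighboring edge elements and (ii) the surface statistics of the
    visual fragments on its two sides match well across it.
    `edge_continuity` lies in [0, 1] (1 = smoothly continued);
    left/right stats are (mean, std) intensity of the flanking fragments."""
    (mu_l, sd_l), (mu_r, sd_r) = left_stats, right_stats
    surface_match = np.exp(-abs(mu_l - mu_r) / (sd_l + sd_r + 1e-6))
    return (1.0 - edge_continuity) * surface_match   # high -> apply the loop transform

# A poorly continued contour between two near-identical regions scores high;
# a well continued contour between distinct regions scores low.
print(spuriousness(0.1, (0.50, 0.03), (0.51, 0.04)))
print(spuriousness(0.9, (0.20, 0.03), (0.70, 0.04)))
```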
The transformation pertaining to the removal of this contour is to propagate waves from the shock loop representing it so as to complete the shocks corresponding to the contour map without the spurious element. The intensity in these regions is filled in using the recent exemplar-based filling-in process of [1].

Figure 13: Examples of loop transforms applied to Figure 1 at various locations. [Left] (a) Around an open contour, but signaling that the contour is significant; (b) around a closed contour, where the regions around the contour can be grouped by pushing the closed region onto another layer; and (c) around a boundary between two fragments of similar texture produced by a region-based algorithm. This loop suggests that the regions can be merged by removing the common boundary.

Figure 15: [Top Row] An occluded torus image and its edges. [Middle Row] [Left] Visual fragments produced from the edge map. [Right] Visual fragments produced after the occluder is removed onto another layer; the remaining transforms are gap transforms that link the individual visual fragments into a coherent whole. [Bottom Row] [Left] Since there is more evidence to link up the outer edges of the torus, this happens first; the grouping of the contour fragments of the outer edge produces a grouping of the visual fragments into larger ones, as illustrated. [Right] After the outer edges are grouped, there is more evidence for the inner contour fragments to link up, producing a pair of closed contours. This defines the fragment as the shape of a torus, as illustrated.

Figure 14: Two types of occlusion transforms: [Left] one that has support from the complementary contour and [Right] another that needs to be jointly completed via skeletal continuity. The removal of the occluding object is equivalent to a loop transform; the loop is shown in yellow. Once the occluding object is removed, the situation reduces to a gap scenario: the gap transform closes the gap and fills in the texture from the participating visual fragments.

The loop transform is not restricted solely to the removal of spurious contours. Once certain object fragments have formed, they can be moved to another layer if they have been occluding another object. For example, in Figure 14, once the occluding object is segregated it can be moved to another layer, and the region fragment in the area behind it can be completed by a loop transform of the visual fragments, leaving behind gaps in the process. This is based on the observation that perceptually the occluding edge belongs to the occluder [5].

Figure 16: Examples of the visual fragment transforms applied to the fruit basket image [Top]. [Left] Gap transform example 1. [Middle] Gap transform example 2. [Right] Loop transform example.

4 Conclusion

The perceptual organization of an image from pixels to objects uses many intermediate representations. At a first stage, local groupings of pixels are described by low-level feature maps consisting of edges and regions. At a second stage, these form visual fragments, which are initially numerous atomic shock fragments but which, as grouping proceeds, come to resemble object fragments and describe object parts and their 3D structure. Finally, whole objects are segregated by reasoning about their parts. This paper has proposed an intermediate representation of the image which spans the gap from low-level image descriptors to high-level object parts. The proposed visual fragments encode both edge-based and region-based properties, enabling a grouping process to take advantage of both cues simultaneously and thus potentially to disambiguate grouping ambiguities.

References

[1] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In IEEE International Conference on Computer Vision, Corfu, Greece, September 1999.
[2] E. Sharon, A. Brandt, and R. Basri. Completion energies and scale. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(10), 2000.
[3] J. Elder and S. Zucker. Local scale control for edge detection and blur estimation. In ECCV 1996, pages II:57-69.
[4]
[5] B. Gillam. New evidence for closure in perception. Perception and Psychophysics, 17(5).
[6] D. W. Jacobs. Robust and efficient detection of salient convex groups. IEEE Trans. Pattern Analysis and Machine Intelligence, 18(1):23-37, 1996.
[7]
[8]
[9] J. J. Koenderink and A. J. van Doorn. The shape of smooth objects and the way contours end. Perception, 11, 1982.
[10] M. Proesmans, E. Pauwels, and L. V. Gool. Coupled geometry-driven diffusion equations for low-level vision. In Geometry-Driven Diffusion in Computer Vision. Kluwer.
[11] C. Rothwell, J. Mundy, W. Hoffman, and V.-D. Nguyen. Driving vision by topology. In IEEE International Symposium on Computer Vision, 1995.
[12] S. Mahamud, L. R. Williams, K. K. Thornber, and K. Xu. Segmentation of multiple salient closed contours from real images. IEEE Trans. Pattern Analysis and Machine Intelligence, 25(4), 2003.
[13] E. Sharon, A. Brandt, and R. Basri. Segmentation and boundary detection using multiscale intensity measurements. In IEEE Conference on Computer Vision and Pattern Recognition.
[14] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 2000.
[15] L. Williams and K. Thornber. A comparison of measures for detecting natural shapes in cluttered backgrounds. International Journal of Computer Vision, 34(2-3):81-96, November 1999.
