Efficient image region and shape detection by perceptual contour grouping
Huiqiong Chen, Faculty of Computer Science, Dalhousie University, Halifax, Canada
Qigang Gao, Faculty of Computer Science, Dalhousie University, Halifax, Canada

Abstract - Image region detection aims to extract meaningful regions from an image. This task may be achieved equivalently by finding either the interiors or the boundaries of regions. The advantage of the second strategy is that once a closure is detected, not only is its shape information available, but its interior properties can also be estimated with minimal effort. In this paper, we present a novel method that detects regions through region contour grouping based on Generic Edge Tokens (GETs). GETs are a set of perceptually distinguishable edge segment types covering both linear and non-linear features. In our method, an image is first transformed on the fly into GET space, represented by a GET graph. A GET graph encodes the perceptual organization of GET associations. Two types of perceptual closures, basic contour closures and object contour closures, from which all meaningful regions can be constructed, are defined and then detected. The detection is achieved by tracking approximately adjacent edges along the GET graph to group the contour closures. Because of the descriptive nature of the GET representation, the perceptual structure of a detected region's shape can be estimated easily from its contour GET types. Using our method, all and only perceptual closures can be extracted quickly. The proposed method is useful for image analysis applications, especially real-time systems such as robot navigation and other vision-based automation tasks. Experiments are provided to demonstrate the concept and potential of the method. Index Terms - region detection, perceptual closure, GET data

I. Introduction

An image region is defined as a set of connected pixels with homogeneous properties such as color or texture, which has obvious contrast to its surroundings.
Although region detection plays a significant role in many image analysis applications, it is challenging to obtain meaningful and accurate regions from images efficiently, particularly for real-time applications. Region detection methods can be broadly classified into region-based and boundary-based methods [1]. Region-based methods merge similar pixels together to form coherent regions. Region membership decisions, under/over-segmentation and inaccurate boundaries are their common problems. In [2], detection groups the regions of an over-segmented image by properties such as continuity and region ratio. To reduce over-segmentation in the watershed method, Gaussian convolution with different deviations is used in [3]. Edge information can be used for refinement, such as seed placement selection, homogeneity criterion control and accuracy improvement [4]. Region-based methods usually treat a region as a set of pixels, so it is difficult to obtain accurate region shape or object structure through them. Boundary-based methods employ edge information: appropriate edge pixels are grouped into region boundaries. Their advantage is that once a closure forms, both the region boundary and its interior properties can be obtained easily. However, when edge data is provided at the pixel level, performance is largely affected by edge gaps and by the boundary pixel recognition strategy. Perceptual organization can be used to find perceptual regions/objects [5][6], but it typically suffers from intensive computation and difficulty in detecting regions of arbitrary shape. In this paper we present a novel method that detects meaningful regions of arbitrary shape efficiently. Using innovative perceptual edge features called Generic Edge Tokens (GETs), meaningful regions can be detected promptly through perceptual region contour detection.
Perceptual region contour closures are first defined to represent meaningful regions and are then detected through closure detection in GET space. Because of the simplicity of the GET representation and the inherent structure information carried by each feature, the perceptual region shape can be obtained easily, as well as region boundary and interior attributes. Our method is therefore suitable for image analysis applications, especially real-time systems. This paper is organized as follows: Section II introduces Generic Edge Tokens. Section III presents the perceptual region hierarchy and perceptual region representation. The contour closure detection method is provided in Section IV. Experiments are given in Section V. Section VI provides conclusions and future work.

II. Generic Edge Tokens

Generic Edge Tokens (GETs) are perceptually significant image primitives which represent classes of qualitatively equivalent structural elements. A complete set of GETs includes both Generic Segments (GSs) and curve partition points (CPPs). Each GS is a perceptually distinguishable edge segment with linear or non-linear features, while each CPP is a junction of GSs. Gao and Wong proposed a curve partition method in a very generic form which performs partitioning following objectives similar to human visual perception. The image is scanned horizontally and vertically at an interval to find strong edge pixels as tracking starting pixels; edge pixels on the traces are then selected according to the GS definitions. This method is robust with low time expense. A detailed explanation of GET detection can be found in [7]. Fig. 1 (a) shows curve partition examples.
Fig. 1. Curve partition examples.

A GS is an image segment with monotonically increasing or decreasing behavior. GSs can be perceptually classified into eight categories, as Fig. 1 (b) shows. Each GS type must satisfy properties described by the monotonicities of the function y = f(x) and its inverse x = ϕ(y), as Table 1 illustrates. The monotonicity is checked along each GS. A CPP is a perceptually significant point where adjacent GSs meet and curve turning takes place [8]. No intersection among GSs exists other than at CPPs. CPPs group GSs into perceptual structures. There are four basic categories of CPPs according to junction type, as Fig. 1 (c) shows. The intrinsic perceptual features of GETs facilitate the detection of perceptual closures formed by GETs and ease the extraction of perceptual shape from regions described by GET closures.

III. Perceptual region hierarchy and representation

According to the MPEG-7 standard, region or object shape can be described by Contour Shape descriptors based on its contour [9]. A contour closure, formed by region boundaries, is therefore a desirable representation for meaningful regions. It possesses both low-level features and perceptual concepts: closure interior pixels carry low-level attributes, while the closure boundary carries structure/shape information at a higher level.

A. Perceptual region concept hierarchy

From a perceptual view, meaningful regions arise from objects, and can be represented by the object outline and the object's basic inner parts (called basic regions). An object is constituted by one or more basic regions. As Fig. 2 shows, object A is built from 3 basic regions, whose contours can be represented by the basic contour closures e1e2e3e4, e2e5e6e7 and e3e7e8e9 respectively, all formed by GETs. Other closures are constituted by basic contour closures, among which e1e5e6e8e9e4 represents the object outline.
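The relation between the basic contour closures and the object outline in this example can be sketched in C++ (the paper reports a C++ implementation). An edge shared by two basic closures is an interior boundary, so the outline consists of edges that appear in an odd number of basic closures. This counting rule is our generalization, consistent with the Fig. 2 example; it assumes basic regions tile the object without overlap, and the function name and integer edge ids are illustrative, not the authors' data structures.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

// Each basic contour closure is given as the set of its GET (edge) ids.
// Edges shared by two basic closures are interior boundaries, so the
// object outline is the set of edges occurring an odd number of times.
std::set<int> objectOutline(const std::vector<std::set<int>>& basicClosures) {
    std::map<int, int> count;
    for (const auto& closure : basicClosures)
        for (int e : closure) ++count[e];
    std::set<int> outline;
    for (const auto& [edge, c] : count)
        if (c % 2 == 1) outline.insert(edge);
    return outline;
}
```

For object A above, the closures {e1,e2,e3,e4}, {e2,e5,e6,e7} and {e3,e7,e8,e9} yield the outline {e1,e4,e5,e6,e8,e9}, i.e., the closure e1e5e6e8e9e4 named in the text.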
Therefore all closures formed by GETs in an image can be classified into three types according to region contour properties: basic contour closures, which represent the contours of basic regions; object contour closures, which represent the contours of object outlines or of separated object component outlines; and other composite closures. All meaningful regions in an object can be represented by the first two types of GET closures, while GET closures can be described by GET organizations. As Fig. 2 illustrates, object B has a separate hole inside, which can be regarded as a separate object component constituted by one single basic region. The remaining part of B other than the hole is another basic region, whose contour is described by the basic contour closure e1e2e3e4 together with the inner object contour closure e5e6e7e8, which represents the outline of the hole component.

TABLE 1. Definitions of GS types (M+ and M- denote monotonically increasing and decreasing properties; c denotes a constant)

  GS type  LS1  LS2  LS3  LS4  CS1  CS2  CS3  CS4
  f(x)     c    n/a  M+   M-   M+   M-   M-   M+
  ϕ(y)     n/a  c    M+   M+   M+   M-   M-   M-
  f'(x)    0    n/a  c    c    M-   M-   M+   M+
  ϕ'(y)    n/a  0    c    c    M+   M+   M-   M-

From the analysis above we conclude that basic contour closures and object contour closures carry most of the perceptual information of meaningful regions, while other composite closures carry little.

Definition 1: Meaningful regions in an image can be represented by two types of perceptual contour closures: basic contour closures and object contour closures. A basic contour closure is a basic GET closure which cannot be split into other closures; an object contour closure is a GET closure representing the outline of an object or of an individual object component. (Assume no occlusion occurs.)

This definition has several advantages. First, it leads to a hierarchical descriptor for image content: image content can be represented by objects, while an object can be described by the hierarchy in Fig. 2.
Second, both types of perceptual closures are supported in the MPEG-7 framework by contour shape descriptors [10]. Last, it ensures that useful perceptual closures are picked up in detection while meaningless composite closures are not detected. A region/closure concept hierarchy is constructed in Fig. 2. In this hierarchy, GETs are organized perceptually into contour closures, which build meaningful regions in objects. To find perceptual contour closures from the GET data, we use a GET graph to encode the perceptual organization of GET associations in the image.

Fig. 2. Perceptual region/closure concept hierarchy.
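The monotonicity checks behind the GS typing of Section II (Table 1) can be sketched as follows. This is a minimal illustration covering the four linear types only; it assumes the tracked points are ordered along the segment, and the function names are ours, not the authors'. The curve types CS1-CS4 would additionally test the monotonicity of the derivative rows f'(x) and ϕ'(y).

```cpp
#include <cassert>
#include <string>
#include <vector>

// Monotonicity status of a numeric sequence: "M+" (monotonically
// increasing), "M-" (decreasing), "c" (constant), or "none".
std::string monotonicity(const std::vector<double>& v) {
    bool inc = true, dec = true, cst = true;
    for (size_t i = 1; i < v.size(); ++i) {
        if (v[i] < v[i - 1]) inc = false;
        if (v[i] > v[i - 1]) dec = false;
        if (v[i] != v[i - 1]) cst = false;
    }
    if (cst) return "c";
    if (inc) return "M+";
    if (dec) return "M-";
    return "none";
}

struct Pt { double x, y; };

// Classify a tracked segment into one of the four linear GS types of
// Table 1 from the monotonicities of y = f(x) and x = phi(y).
std::string classifyLinearGS(const std::vector<Pt>& pts) {
    std::vector<double> xs, ys;
    for (const Pt& p : pts) { xs.push_back(p.x); ys.push_back(p.y); }
    std::string fx = monotonicity(ys);  // y along the traversal
    std::string py = monotonicity(xs);  // x along the traversal
    if (fx == "c") return "LS1";        // horizontal: f(x) constant
    if (py == "c") return "LS2";        // vertical: phi(y) constant
    if (fx == "M+") return "LS3";
    if (fx == "M-") return "LS4";
    return "CS?";                       // non-linear: one of CS1-CS4
}
```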
B. Perceptual organization representation by GET graph

A graph named the GET graph is derived from pre-extracted GET data to encode the perceptual structure of the image. Perceptual contour closures, i.e., basic contour closures and object contour closures, can then be detected by perceptual cycle search in the GET graph, exploiting the correspondence between GET closures and graph cycles [11]. For real-time processing, the GET graph is reduced and search starting edges are selected before the search, to avoid duplicates and reduce the computation burden. Starting from the selected edges, a novel perceptual contour closure detection method extracts all perceptual closures from the GET graph, and the results are then classified into perceptual types. With all GETs extracted from the image, the GET graph is constructed by converting GSs and CPPs into graph edges and vertices respectively. Each edge in the GET graph represents a perceptually distinguishable edge segment in the image, while the organization of the GET graph represents the perceptual structures of the image. If no occlusion occurs, each connected component of the GET graph corresponds to an object or individual object component in the image, constituted by basic inner parts which can be represented by basic contour closures. Therefore basic contour closures in the image correspond one-to-one to basic cycles in the GET graph, while object contour closures correspond one-to-one to the outlines of graph connected components. For convenience we sometimes do not distinguish between these terms in this paper. The GET graph has some unique characteristics: (1) besides representing graph structure, the edges and vertices of the GET graph also represent real segments and their junction points in the image;
(2) according to GET properties, no intersection among edges exists other than at edge endpoints. The GET graph is constructed as follows: let G = (V, E) be an undirected graph, where V is the set of vertices and E is the set of edges of G. Initially V = ∅ and E = ∅. For each GS_i (0 ≤ i < m, where m is the number of GSs in the image), insert it into E as edge e_i. For each CPP_j (0 ≤ j < n, where n is the number of CPPs in the image), insert it into V as vertex v_j. In practice, there always exist edge gaps in the image, as Fig. 3 (a) shows. The gaps may be caused by image noise or by broken edges produced by the edge detector. To bridge possible gaps, we amend the concept of adjacency for the GET graph and regard close edges within a small common region as adjacent. For edges e_i and e_j, if e_i's endpoint v_m is close to e_j's endpoint v_n, the normalized distance between v_m and v_n is defined as

  dist(v_m, v_n) = d(v_m, v_n) * [l(v_m) + l(v_n)] / ( 2 * ([l(v_m) + l(v_n)] + δ) )   (1)

where l(v_m), l(v_n) are the lengths of the longest edges among all edges connecting to v_m and v_n respectively, δ is the mean length of all edges in E, and d(v_m, v_n) is the distance between v_m and v_n. If dist(v_m, v_n) ≤ threshold T, the gap between e_i and e_j should be bridged. A virtual vertex m is used as the combination of the real vertices v_m and v_n: m is the virtual intersection of e_i and e_j, obtained by extending e_i and e_j beyond their endpoints v_m and v_n respectively. Thereafter m substitutes for v_m and v_n as an endpoint of e_i and e_j.

Definition 2: A graph vertex v_j (whether real or virtual) is incident with edge e_i iff v_j is an endpoint of e_i. Edges e_i and e_j are adjacent iff they (a) share a real vertex as a common endpoint, or (b) share a virtual vertex as a common endpoint within threshold T. We denote this as e_i ~ e_j. The degree of vertex v_j, denoted deg(v_j), is the number of edges incident with v_j within T. Fig. 3 (b) illustrates GET graph construction.
If dist(v0, v1) ≤ T, the edge gap between e1 and e10/e12 is small enough to be bridged by virtual vertex m1; e1, e10 and e12 are adjacent through m1, and deg(m1) = 3. Otherwise the gap cannot be ignored: e1 is not adjacent to e10/e12, deg(v0) = 2 and deg(v1) = 1. Similarly, the gap between e4 and e5 can be bridged by m2 if dist(v5, v6) ≤ T. The GET space is reduced by graph reduction, which removes noise edges not belonging to any graph cycle. This decreases the subsequent search burden and false closures. An edge e_i with endpoints v_m and v_n is a noise edge iff: (a) deg(v_m) = 1; or (b) besides e_i, v_m is incident only with noise edges; or (c) deg(v_n) = 1; or (d) besides e_i, v_n is incident only with noise edges. Fig. 4 (b) shows a GET graph and (c) shows its reduction.

IV. Perceptual contour closure detection

Perceptual closure detection is carried out using GETs pre-extracted from the image, as Fig. 5 shows. After graph reduction, every edge in the GET graph must be an element of some closure. A perceptual closure in the image is detected by finding the corresponding cycle in the GET graph.

Fig. 3. Graph construction in GET space using GET data: (a) GETs in an image with edge gaps; (b) GET graph with virtual vertices.
Fig. 4. (a) Original image; (b) GET graph; (c) GET graph reduction.
Fig. 5. Perceptual contour closure detection architecture: closure definition; GET detection; GET graph construction and reduction; closure detection (starting edge selection, closure detection by cycle search); result closure classification.
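The gap-bridging test of Eq. (1) can be sketched as follows. The exact grouping of the formula is hard to recover from this copy of the paper, so the reading implemented here — dist = d·(l(v_m)+l(v_n)) / (2·([l(v_m)+l(v_n)]+δ)) — is an assumption, not necessarily the authors' exact form; the function names are ours.

```cpp
#include <cassert>

// Normalized distance between two endpoints, as read from Eq. (1):
//   d     - Euclidean gap between the endpoints v_m and v_n
//   lm,ln - lengths of the longest edges incident to v_m and v_n
//   delta - mean length of all edges in E
// NOTE: this grouping of Eq. (1) is an assumption (the equation is
// garbled in the available copy).
double normalizedDist(double d, double lm, double ln, double delta) {
    double s = lm + ln;
    return d * s / (2.0 * (s + delta));
}

// Bridge the gap (insert a virtual vertex) iff the normalized
// distance does not exceed the threshold T.
bool shouldBridge(double d, double lm, double ln, double delta, double T) {
    return normalizedDist(d, lm, ln, delta) <= T;
}
```

Whatever the exact grouping, the intent stated in the text is preserved: the raw gap d is scaled by the local edge lengths, so a small gap between long, prominent edges is bridged while the same gap between short noise edges is not.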
Starting from one endpoint of an edge and tracking along adjacent edges, a graph cycle is formed when a path back to the starting point is found. All closures could be detected from the GET graph this way by starting from closure edges. However, this approach has two problems. First, proper starting edges should be selected instead of starting the search from every graph edge; this reduces computation cost and duplicates, and prevents the time expense from growing greatly with the number of graph edges. Second, a search strategy is needed during tracking to guarantee that the results are perceptual closures. Spanning trees are employed for the first problem; for the second, a new perceptual search strategy is proposed.

A. Starting edge selection

A spanning tree is a connected sub-graph with tree structure that includes all vertices of a graph connected component.

Lemma 1: All perceptual closures can be obtained by tracking in the graph starting from edges not belonging to spanning trees. That is, edges not belonging to spanning trees should be selected as starting edges.

Proof of Lemma 1: Let G_i = (V_i, E_i) be the i-th connected component of the GET graph, (V_i', E_i') the vertex and edge sets of a spanning tree of G_i, and N_i = E_i - E_i'. We need to prove that (a) every e ∈ N_i is an element of a basic contour closure; (b) each basic contour closure in G_i has at least one edge e such that e ∈ N_i; and (c) the object contour closure corresponding to the component outline has at least one edge e such that e ∈ N_i.

(a) As a spanning tree, V_i' = V_i and (V_i', E_i') contains no cycle. For e ∈ N_i with endpoints v_x and v_y, we have v_x ∈ V_i', v_y ∈ V_i', and a single path p between v_x and v_y in the tree. If e is added to the tree, e combines with path p to form a cycle. This cycle either is a basic cycle or is built from basic cycles; e belongs to one of these basic cycles, i.e., e belongs to a basic contour closure.
(b) Since the spanning tree contains no cycle, any basic cycle in (V_i, E_i) must have at least one edge e excluded from (V_i', E_i') to avoid forming that cycle, i.e., e ∈ N_i. (c) The proof of (c) is similar to (b). From (a)-(c), we conclude that Lemma 1 is true.

B. Perceptual cycle search algorithm

In connected component i, a basic cycle is a cycle without any sub-cycle inside, and the component outline is a cycle without any cycle outside. Starting from e_i ∈ N_i, which must belong to some basic cycle of G_i, a basic contour closure can be formed by basic cycle search in the graph. The search tracks along adjacent edges, always selecting the innermost edge in the given direction (anticlockwise/clockwise) from all candidates at each step of path selection. If e_i is on the component outline, the object contour closure can also be extracted by selecting the outermost edge in the given direction at each step (which amounts to selecting the innermost edge in the other direction). A new closure detection method is proposed to find all perceptual closures by the perceptual cycle search algorithm. The algorithm is based on the following hypothesis, proven below as Lemma 2: for e_i ∈ N_i with endpoints v_x and v_y, starting from v_x, a perceptual closure (object contour closure or basic contour closure) can be found by cycle search in the graph which always selects, at each step, the innermost edge in some direction among all adjacent edges as the next edge. Since the innermost edge is direction sensitive, the cycle search must keep the same direction at every step. Fig. 6 (a) illustrates the cycle search. If a clockwise search starts from v7 of e7, e7 is the current edge and v7 the current vertex; e5, e6 ~ e7 through the current vertex v7. As the clockwise innermost edge for e7, e5 is selected as the next cycle edge. In the next step, e5 is the current edge and e5's other endpoint m2 is the current vertex; e4 ~ e5 through m2, so e4 is selected.
The search stops when the newly selected e10 meets e7 at v9, and the basic contour closure e7e5e4e3e2e1e10 is formed. Another basic contour closure, e7e6e8e9, can be formed by an anticlockwise search starting from v7 of e7. Object contour closures can be extracted as well: starting from v10 of object contour edge e11, the object contour closure e11e8e6e5e4e3e2e1e12, constituted by clockwise outermost edges, is obtained by anticlockwise cycle search.

Algorithm 1: Let e_i with endpoints v_x and v_y be the starting edge. The cycle search algorithm in direction d is:
Step 1: initially, current edge ce := e_i, current vertex cv := v_x. Let C be the set of closure edges, C = {e_i}.
Step 2: if ce = e_i and cv = v_y, the search stops and a perceptual closure is formed; otherwise perform steps 3-4.
Step 3: for each edge e_j adjacent to ce through cv, calculate the included angle from cv to e_j in direction d. Select the adjacent edge with the minimal included angle as the newly selected edge ne; C = C ∪ {ne}.
Step 4: update cv and ce: cv := ne's other endpoint, which is not incident with ce; ce := ne. Go to step 2.

Lemma 2: The closure extracted by the perceptual cycle search algorithm must be a perceptual closure.

Proof of Lemma 2: Let C be the cycle search result in direction d starting from e_i. We prove that C must represent either a basic contour closure or an object contour closure. (1) If e_i is not on the component outline, C cannot be an object contour closure. More than one basic cycle shares e_i; assume C is not a basic cycle; then there must exist a basic cycle C' inside C and a basic cycle C'' outside C, both sharing e_i with C. If C, C' and C'' have identical edges e_i ~ e_j for the first several steps of the search, let e_m, e_n and e_l be the next edges selected in C, C' and C'' respectively. (a) If e_m, e_n and e_l are all different edges, e_n must lie inside the area between e_j and e_m.
Thus e_m cannot be the innermost edge against e_j in either search direction, which contradicts the fact that the edges of C are the innermost edges in direction d.

Fig. 6. (a) Detecting perceptual closures: dashes are starting edges and arrows indicate the search direction. (b) Simulating a closure by a polygon: the polygon simulates the closure formed by the clockwise search starting from v7 of e7; dashes indicate polygon edges that differ from graph edges.

(b) If
e_m is the same as e_n but differs from e_l, let e_k be the last identical edge of C and C'', and let e_p and e_q be the next edges of C and C'' respectively. e_q is inside C and e_l is outside C. Whether d is clockwise or anticlockwise, either e_p is not the innermost edge against e_k (e_q is the innermost instead) or e_m is not the innermost edge against e_j (e_n is the innermost). This contradicts the previous fact. (c) If e_m = e_l but e_m differs from e_n, the proof is similar to (b). Summarizing (a)-(c), if e_i is not on the component outline, the prior assumption cannot hold; C must be a basic contour closure. (2) If e_i is on the component outline, the proof is similar to (1). From (1)-(2), we conclude that Lemma 2 is true.

C. Perceptual closure detection via cycle search

Perceptual closure detection proceeds as follows:
Step 1: construct the GET graph and remove noise graph edges.
Step 2: find all connected components of the GET graph. For each component, perform step 3.
Step 3: extract a spanning tree. Starting from each edge e_i not in the tree, perform step 4.
Step 4: perform both clockwise and anticlockwise closure searches by applying the perceptual cycle search algorithm.

From Lemma 2 it follows that two closures are extracted by the clockwise and anticlockwise searches starting from a given edge e, with two possibilities: (1) if e is not on the component outline, both detected closures are basic contour closures; (2) if e is on the component outline, the two closures comprise one basic contour closure and one object contour closure.

Lemma 3: The perceptual closure detection method is valid: (1) all extracted closures are perceptual closures; (2) all perceptual closures are extracted.

Proof of Lemma 3: (1) was established in Lemma 2. (2) First we prove that every object contour closure must be detected.
Based on Lemma 1, there must be an object contour closure edge e_j ∈ N_i available as a starting edge. Starting from this edge, one of the two results must be the object contour closure. Second, we prove that all basic contour closures must also be extracted. Assume there is a basic contour closure C that cannot be detected by our method; C must have an edge e_j ∈ N_i. If e_j is not on the component outline, C must share e_j with two other basic contour closures C' and C'', both of which are detected by cycle searches starting from e_j. No intersection exists among the three closures, so C would have to lie outside the area of (C' ∪ C''), while e_j is an edge inside the area of (C' ∪ C''). This contradicts the fact that e_j is an edge of C. If e_j is on the component outline, C shares e_j with a basic contour closure C' and an object contour closure C''. The areas of C and C' do not intersect, while both lie inside C''. Therefore C would have to lie inside the area of (C'' - C'), which cannot include e_j. This also conflicts with the previous fact. Thus no such C can exist, regardless of e_j's type.

D. Closure classification

The perceptual closure detection method above finds all perceptual closures without regard to closure type. The resulting closures should be classified into the two types: basic contour closure and object contour closure. The classification is achieved by summing all included angles of a polygon that simulates the extracted closure. The polygon is formed during cycle search by simulating each selected edge as a straight line between its two endpoints, and the included angles between successive simulation lines, measured in the search direction, are recorded. Fig. 6 (b) shows an example: in the clockwise search starting from v7 of e7, the first selected edge is e5, so the clockwise included angle θ(v7v9, v7m2) from e7 to e5 is recorded. The next selected edge e4 is simulated by the line from m2 to v4, so θ(m2v7, m2v4) is recorded. The polygon is complete when θ(v9m1, v9v7) is recorded.
For an n-edge closure, let θ_Total be the sum of all n included angles recorded during the simulation. If the closure is a basic contour closure, the recorded included angles are the interior angles of the polygon, since the simulation passes along the inside of the closure at each step, as the arrows in Fig. 6 (b) show: θ_Total = (sum of all interior angles of the polygon) = 180°*(n-2). If the closure is an object contour closure, the recorded included angles are the exterior complements of the interior angles (each recorded angle is 360° minus the corresponding interior angle), so θ_Total = 360°*n - 180°*(n-2) = 180°*(n+2). Therefore we conclude: if θ_Total = 180°*(n-2), the closure is a basic contour closure; if θ_Total = 180°*(n+2), it is an object contour closure, where n is the number of edges in the closure. Fig. 7 illustrates the process of closure detection and classification.

V. Experiment Results

To evaluate the validity of our method, the algorithms were implemented in C++. A Sun Fire 4800, a multi-user UNIX server running Solaris 8 with 16 GB RAM and 4 UltraSPARC-III processors at 900 MHz, was used for the tests (server load average 2.64 during testing). The test images, whose sizes range from 256*256 to 512*512 pixels, include both synthetic and real images. The experiment first extracts GETs from the image, then detects regions based on the GET data. In our tests, the average times for GET extraction and region detection are 418 and 447 milliseconds respectively. The number of perceptual closures in each image and the correct/error/missing counts in the results were determined manually by human perception; tiny regions are dismissed as noise. Experiment results are listed in Table 2; average detection correctness is 91.13%. The experiments show that our method can recover arbitrary region shapes with high correctness and short processing time, which exhibits its potential for image analysis applications. Fig. 8 gives detection examples.
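Returning to Section IV-B, the perceptual cycle search of Algorithm 1 can be sketched on a toy GET graph whose GSs are simulated by straight lines between their endpoints (curved GSs would need their true tangent geometry at each endpoint). The graph layout, vertex ids and helper names are illustrative, not the authors' implementation: at each step the adjacent edge with the minimal included angle in one rotational direction is chosen, which traces a basic cycle; the opposite direction traces the other closure sharing the starting edge.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct V { double x, y; };   // vertex (CPP or virtual vertex) with coordinates
struct E { int a, b; };      // edge (GS) given by its endpoint vertex ids

static const double PI = std::acos(-1.0);

// Clockwise angle from direction (bx,by) to direction (wx,wy), in (0, 2*pi].
static double cwAngle(double bx, double by, double wx, double wy) {
    double a = std::atan2(by, bx) - std::atan2(wy, wx);
    while (a <= 0) a += 2 * PI;
    return a;
}

// Track from edge `start` (traversed a -> b), always choosing the adjacent
// edge with the minimal included angle in the given rotational direction,
// until the starting edge is met again (Algorithm 1, sketched).
std::vector<int> cycleSearch(const std::vector<V>& vs, const std::vector<E>& es,
                             int start, bool clockwise) {
    std::vector<int> cycle = {start};
    int prev = es[start].a, cur = es[start].b, ce = start;
    while (true) {
        int best = -1;
        double bestAng = 1e9;
        double bx = vs[prev].x - vs[cur].x, by = vs[prev].y - vs[cur].y;
        for (int j = 0; j < (int)es.size(); ++j) {
            if (j == ce) continue;
            int other;
            if (es[j].a == cur) other = es[j].b;
            else if (es[j].b == cur) other = es[j].a;
            else continue;                  // not incident with cv
            double wx = vs[other].x - vs[cur].x, wy = vs[other].y - vs[cur].y;
            double ang = clockwise ? cwAngle(bx, by, wx, wy)
                                   : 2 * PI - cwAngle(bx, by, wx, wy);
            if (ang < bestAng) { bestAng = ang; best = j; }
        }
        if (best < 0) return {};            // dead end: graph not reduced
        if (best == start) return cycle;    // closure formed
        cycle.push_back(best);
        prev = cur;
        cur = (es[best].a == cur) ? es[best].b : es[best].a;
        ce = best;
    }
}
```

On a unit square with one diagonal (edges 0-1, 1-2, 2-3, 3-0 and 0-2), searching from the boundary edge 0-1 in one direction yields a 3-edge basic closure (a triangle) and in the other direction the 4-edge component outline, matching the two-search behavior described in Section IV-C.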
The errors/misses may be caused by false edge selection during search: (1) although a normalized threshold is used to bridge gaps between closure edges, it cannot perceptually distinguish edge gaps between relevant edges from cracks between irrelevant edges. A larger threshold bridges larger gaps but raises the risk of treating unrelated edges as adjacent, thus increasing false detections. (2) Detection depends heavily on the GET data obtained from the GET tracker: a region contour cannot be found if one of its contour edges is missed during GET tracking, i.e., closure detection cannot recover lost data through knowledge. (3) GETs derived from image noise may also mix with valid GETs and mislead the search.
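The angle-sum classification of Section IV-D can be sketched as follows, assuming the closure has already been simulated by a polygon whose vertices are listed in traversal order. The measurement convention here (the clockwise included angle at each vertex, from the reversed incoming edge to the outgoing edge) is an assumption consistent with the 180°*(n-2) / 180°*(n+2) totals stated in the text; the struct and function names are ours.

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

struct P { double x, y; };

// Sum the included angles recorded along a closed polygon traversal and
// classify the closure. With the angle measured clockwise at each vertex,
// a traversal that keeps the interior on the recorded side sums to
// 180*(n-2) degrees (basic contour closure); the opposite traversal records
// the 360-degree complements and sums to 180*(n+2) (object contour closure).
std::string classifyClosure(const std::vector<P>& poly) {
    const double PI = std::acos(-1.0);
    int n = (int)poly.size();
    double total = 0;
    for (int i = 0; i < n; ++i) {
        const P& prev = poly[(i + n - 1) % n];
        const P& cur  = poly[i];
        const P& next = poly[(i + 1) % n];
        double a = std::atan2(prev.y - cur.y, prev.x - cur.x)
                 - std::atan2(next.y - cur.y, next.x - cur.x);
        while (a <= 0) a += 2 * PI;         // clockwise included angle
        total += a * 180.0 / PI;
    }
    if (std::fabs(total - 180.0 * (n - 2)) < 1e-6) return "basic";
    if (std::fabs(total - 180.0 * (n + 2)) < 1e-6) return "object";
    return "unknown";
}
```

A unit square listed counterclockwise records four 90° interior angles (sum 360° = 180°*2, basic), while the same square listed clockwise records four 270° complements (sum 1080° = 180°*6, object).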
Fig. 7. Closure extraction and classification for Fig. 4 (a); the left-most figure shows all contours; the other small figures show all basic contour closures.

Fig. 8. Detection result examples. Each row, from left to right: original image; extracted GETs; region contours formed by GETs; filled regions.

TABLE 2. Experiment results per image (columns: image no., correct, error, missing, closures in image, percentage correct). Average correctness = 91.13%.

VI. Conclusions and Future Work

In this paper, we propose a GET-based method for extracting meaningful regions from images through contour closure detection. Perceptual closures are defined to represent meaningful regions and are then extracted from GET data. The proposed method has the following advantages: (1) it provides real-time region segmentation in which region shape structure can be extracted as well; (2) it achieves high accuracy of detected regions with low computation cost; (3) it is suitable for segmenting objects with arbitrary shapes; (4) it can detect both object contours and their components, from which all meaningful regions and objects can be constructed hierarchically, as Fig. 2 shows. Possible extensions of this method include the following. Other features, such as color and texture, can be added to the process to increase the robustness of detection and cover a wider range of applications; the added features would also provide a complete set of region descriptions, so that a region may be encoded with all attributes, i.e., shape, color and texture. The segmentation technique may support various application domains including robot navigation, surveillance motion analysis, image retrieval, etc.

References
[1] A. Narendra, "On Detection and Structure Representation of Multiscale Low-Level Image," ACM Computing Surveys, vol. 27, no. 3, 1995.
[2] T. Tamaki, T. Yamamura, and N. Ohnishi, "Image Segmentation and Object Extraction based on Geometric Features of Regions," SPIE Conf.
on Visual Communication and Image Processing, vol. 3653, 1999.
[3] J. M. Gauch, "Image Segmentation and Analysis via Multiscale Gradient Watershed Hierarchies," IEEE Trans. on Image Processing, vol. 8, no. 1, 1999.
[4] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cufi, "Yet Another Survey on Image Segmentation: Region and Boundary Information Integration," Proc. of the 7th European Conf. on Computer Vision, 2002.
[5] A. L. Ralescu and J. G. Shanahan, "Perceptual Organisation for Inferring Object Boundaries in an Image," Pattern Recognition, vol. 32, pp. 1923-1933, 1999.
[6] S. Sarkar and K. Boyer, "Perceptual Organization in Computer Vision: A Review and a Proposal for Classificatory Structure," IEEE Trans. on Systems, Man and Cybernetics, vol. 23, no. 2, pp. 382-398, 1993.
[7] Q. Gao and A. Wong, "Curve Detection based on Perceptual Organization," Pattern Recognition, vol. 26, no. 1, pp. 1039-1046, 1993.
[8] X. Zheng and Q. Gao, "Generic Edge Tokens: Representation, Segmentation and Grouping," Proc. of the 16th International Conf. on Vision Interface, 2003.
[9] MPEG-7 Overview, /standards/mpeg-7/mpeg-7.htm, Mar. 2003.
[10] T. Sikora, "The MPEG-7 Visual Standard for Content Description - An Overview," IEEE Trans. on Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 716-719, 2001.
[11] Q. Iqbal and J. K. , "Retrieval by Classification of Images Containing Large Manmade Objects using Perceptual Grouping," Pattern Recognition, vol. 35, no. 7, 2002.
601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department
More informationDirection-Length Code (DLC) To Represent Binary Objects
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 2, Ver. I (Mar-Apr. 2016), PP 29-35 www.iosrjournals.org Direction-Length Code (DLC) To Represent Binary
More informationCS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves
CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines Perceptual Grouping and Segmentation
More informationHidden Loop Recovery for Handwriting Recognition
Hidden Loop Recovery for Handwriting Recognition David Doermann Institute of Advanced Computer Studies, University of Maryland, College Park, USA E-mail: doermann@cfar.umd.edu Nathan Intrator School of
More informationCORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM
CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM 1 PHYO THET KHIN, 2 LAI LAI WIN KYI 1,2 Department of Information Technology, Mandalay Technological University The Republic of the Union of Myanmar
More informationTowards Knowledge-Based Extraction of Roads from 1m-resolution Satellite Images
Towards Knowledge-Based Extraction of Roads from 1m-resolution Satellite Images Hae Yeoun Lee* Wonkyu Park** Heung-Kyu Lee* Tak-gon Kim*** * Dept. of Computer Science, Korea Advanced Institute of Science
More informationTexture Segmentation by Windowed Projection
Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw
More informationEdge Grouping Combining Boundary and Region Information
University of South Carolina Scholar Commons Faculty Publications Computer Science and Engineering, Department of 10-1-2007 Edge Grouping Combining Boundary and Region Information Joachim S. Stahl Song
More informationOutdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera
Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute
More informationMoving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation
IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial
More informationEDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT
EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT Fan ZHANG*, Xianfeng HUANG, Xiaoguang CHENG, Deren LI State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing,
More informationCombining Top-down and Bottom-up Segmentation
Combining Top-down and Bottom-up Segmentation Authors: Eran Borenstein, Eitan Sharon, Shimon Ullman Presenter: Collin McCarthy Introduction Goal Separate object from background Problems Inaccuracies Top-down
More informationCS 664 Segmentation. Daniel Huttenlocher
CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical
More informationTriangular Mesh Segmentation Based On Surface Normal
ACCV2002: The 5th Asian Conference on Computer Vision, 23--25 January 2002, Melbourne, Australia. Triangular Mesh Segmentation Based On Surface Normal Dong Hwan Kim School of Electrical Eng. Seoul Nat
More informationPractical Image and Video Processing Using MATLAB
Practical Image and Video Processing Using MATLAB Chapter 18 Feature extraction and representation What will we learn? What is feature extraction and why is it a critical step in most computer vision and
More informationMotion Detection. Final project by. Neta Sokolovsky
Motion Detection Final project by Neta Sokolovsky Introduction The goal of this project is to recognize a motion of objects found in the two given images. This functionality is useful in the video processing
More informationA Robust Method for Circle / Ellipse Extraction Based Canny Edge Detection
International Journal of Research Studies in Science, Engineering and Technology Volume 2, Issue 5, May 2015, PP 49-57 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) A Robust Method for Circle / Ellipse
More informationImage retrieval based on region shape similarity
Image retrieval based on region shape similarity Cheng Chang Liu Wenyin Hongjiang Zhang Microsoft Research China, 49 Zhichun Road, Beijing 8, China {wyliu, hjzhang}@microsoft.com ABSTRACT This paper presents
More informationBipartite Graph Partitioning and Content-based Image Clustering
Bipartite Graph Partitioning and Content-based Image Clustering Guoping Qiu School of Computer Science The University of Nottingham qiu @ cs.nott.ac.uk Abstract This paper presents a method to model the
More informationImage Segmentation Based on Watershed and Edge Detection Techniques
0 The International Arab Journal of Information Technology, Vol., No., April 00 Image Segmentation Based on Watershed and Edge Detection Techniques Nassir Salman Computer Science Department, Zarqa Private
More informationGENERAL AUTOMATED FLAW DETECTION SCHEME FOR NDE X-RAY IMAGES
GENERAL AUTOMATED FLAW DETECTION SCHEME FOR NDE X-RAY IMAGES Karl W. Ulmer and John P. Basart Center for Nondestructive Evaluation Department of Electrical and Computer Engineering Iowa State University
More informationShape from Texture: Surface Recovery Through Texture-Element Extraction
Shape from Texture: Surface Recovery Through Texture-Element Extraction Vincent Levesque 1 Abstract Various visual cues are used by humans to recover 3D information from D images. One such cue is the distortion
More informationA Robust Wipe Detection Algorithm
A Robust Wipe Detection Algorithm C. W. Ngo, T. C. Pong & R. T. Chin Department of Computer Science The Hong Kong University of Science & Technology Clear Water Bay, Kowloon, Hong Kong Email: fcwngo, tcpong,
More informationSalient Boundary Detection using Ratio Contour
Salient Boundary Detection using Ratio Contour Song Wang, Toshiro Kubota Dept. Computer Science & Engineering University of South Carolina Columbia, SC 29208 {songwang kubota}@cse.sc.edu Jeffrey Mark Siskind
More informationAn Introduction to Content Based Image Retrieval
CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and
More informationA Reduction of Conway s Thrackle Conjecture
A Reduction of Conway s Thrackle Conjecture Wei Li, Karen Daniels, and Konstantin Rybnikov Department of Computer Science and Department of Mathematical Sciences University of Massachusetts, Lowell 01854
More informationNoise Reduction in Image Sequences using an Effective Fuzzy Algorithm
Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm Mahmoud Saeid Khadijeh Saeid Mahmoud Khaleghi Abstract In this paper, we propose a novel spatiotemporal fuzzy based algorithm for noise
More informationOptimal Grouping of Line Segments into Convex Sets 1
Optimal Grouping of Line Segments into Convex Sets 1 B. Parvin and S. Viswanathan Imaging and Distributed Computing Group Information and Computing Sciences Division Lawrence Berkeley National Laboratory,
More informationIntroduction to Medical Imaging (5XSA0) Module 5
Introduction to Medical Imaging (5XSA0) Module 5 Segmentation Jungong Han, Dirk Farin, Sveta Zinger ( s.zinger@tue.nl ) 1 Outline Introduction Color Segmentation region-growing region-merging watershed
More informationEDGE BASED REGION GROWING
EDGE BASED REGION GROWING Rupinder Singh, Jarnail Singh Preetkamal Sharma, Sudhir Sharma Abstract Image segmentation is a decomposition of scene into its components. It is a key step in image analysis.
More informationCPS 102: Discrete Mathematics. Quiz 3 Date: Wednesday November 30, Instructor: Bruce Maggs NAME: Prob # Score. Total 60
CPS 102: Discrete Mathematics Instructor: Bruce Maggs Quiz 3 Date: Wednesday November 30, 2011 NAME: Prob # Score Max Score 1 10 2 10 3 10 4 10 5 10 6 10 Total 60 1 Problem 1 [10 points] Find a minimum-cost
More informationAutomatic Logo Detection and Removal
Automatic Logo Detection and Removal Miriam Cha, Pooya Khorrami and Matthew Wagner Electrical and Computer Engineering Carnegie Mellon University Pittsburgh, PA 15213 {mcha,pkhorrami,mwagner}@ece.cmu.edu
More informationRobot localization method based on visual features and their geometric relationship
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationMotion Estimation for Video Coding Standards
Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression
More informationA New Approach to Computation of Curvature Scale Space Image for Shape Similarity Retrieval
A New Approach to Computation of Curvature Scale Space Image for Shape Similarity Retrieval Farzin Mokhtarian, Sadegh Abbasi and Josef Kittler Centre for Vision Speech and Signal Processing Department
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationAs a consequence of the operation, there are new incidences between edges and triangles that did not exist in K; see Figure II.9.
II.4 Surface Simplification 37 II.4 Surface Simplification In applications it is often necessary to simplify the data or its representation. One reason is measurement noise, which we would like to eliminate,
More informationRepositorio Institucional de la Universidad Autónoma de Madrid.
Repositorio Institucional de la Universidad Autónoma de Madrid https://repositorio.uam.es Esta es la versión de autor de la comunicación de congreso publicada en: This is an author produced version of
More informationI. INTRODUCTION. Figure-1 Basic block of text analysis
ISSN: 2349-7637 (Online) (RHIMRJ) Research Paper Available online at: www.rhimrj.com Detection and Localization of Texts from Natural Scene Images: A Hybrid Approach Priyanka Muchhadiya Post Graduate Fellow,
More informationText Information Extraction And Analysis From Images Using Digital Image Processing Techniques
Text Information Extraction And Analysis From Images Using Digital Image Processing Techniques Partha Sarathi Giri Department of Electronics and Communication, M.E.M.S, Balasore, Odisha Abstract Text data
More informationCHAPTER 6 QUANTITATIVE PERFORMANCE ANALYSIS OF THE PROPOSED COLOR TEXTURE SEGMENTATION ALGORITHMS
145 CHAPTER 6 QUANTITATIVE PERFORMANCE ANALYSIS OF THE PROPOSED COLOR TEXTURE SEGMENTATION ALGORITHMS 6.1 INTRODUCTION This chapter analyzes the performance of the three proposed colortexture segmentation
More informationOverview. Original. The context of the problem Nearest related work Our contributions The intuition behind the algorithm. 3.
Overview Page 1 of 19 The context of the problem Nearest related work Our contributions The intuition behind the algorithm Some details Qualtitative, quantitative results and proofs Conclusion Original
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationDYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song
DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN Gengjian Xue, Jun Sun, Li Song Institute of Image Communication and Information Processing, Shanghai Jiao
More informationUlrik Söderström 16 Feb Image Processing. Segmentation
Ulrik Söderström ulrik.soderstrom@tfe.umu.se 16 Feb 2011 Image Processing Segmentation What is Image Segmentation? To be able to extract information from an image it is common to subdivide it into background
More informationMa/CS 6b Class 26: Art Galleries and Politicians
Ma/CS 6b Class 26: Art Galleries and Politicians By Adam Sheffer The Art Gallery Problem Problem. We wish to place security cameras at a gallery, such that they cover it completely. Every camera can cover
More informationAn Approach for Real Time Moving Object Extraction based on Edge Region Determination
An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More informationA reversible data hiding based on adaptive prediction technique and histogram shifting
A reversible data hiding based on adaptive prediction technique and histogram shifting Rui Liu, Rongrong Ni, Yao Zhao Institute of Information Science Beijing Jiaotong University E-mail: rrni@bjtu.edu.cn
More informationTexture Image Segmentation using FCM
Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M
More informationLogical Templates for Feature Extraction in Fingerprint Images
Logical Templates for Feature Extraction in Fingerprint Images Bir Bhanu, Michael Boshra and Xuejun Tan Center for Research in Intelligent Systems University of Califomia, Riverside, CA 9252 1, USA Email:
More informationLayout Segmentation of Scanned Newspaper Documents
, pp-05-10 Layout Segmentation of Scanned Newspaper Documents A.Bandyopadhyay, A. Ganguly and U.Pal CVPR Unit, Indian Statistical Institute 203 B T Road, Kolkata, India. Abstract: Layout segmentation algorithms
More informationObject Classification Using Tripod Operators
Object Classification Using Tripod Operators David Bonanno, Frank Pipitone, G. Charmaine Gilbreath, Kristen Nock, Carlos A. Font, and Chadwick T. Hawley US Naval Research Laboratory, 4555 Overlook Ave.
More informationN.Priya. Keywords Compass mask, Threshold, Morphological Operators, Statistical Measures, Text extraction
Volume, Issue 8, August ISSN: 77 8X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A Combined Edge-Based Text
More informationChapter 3. Sukhwinder Singh
Chapter 3 Sukhwinder Singh PIXEL ADDRESSING AND OBJECT GEOMETRY Object descriptions are given in a world reference frame, chosen to suit a particular application, and input world coordinates are ultimately
More informationFeature-level Fusion for Effective Palmprint Authentication
Feature-level Fusion for Effective Palmprint Authentication Adams Wai-Kin Kong 1, 2 and David Zhang 1 1 Biometric Research Center, Department of Computing The Hong Kong Polytechnic University, Kowloon,
More informationSilhouette Coherence for Camera Calibration under Circular Motion
Silhouette Coherence for Camera Calibration under Circular Motion Carlos Hernández, Francis Schmitt and Roberto Cipolla Appendix I 2 I. ERROR ANALYSIS OF THE SILHOUETTE COHERENCE AS A FUNCTION OF SILHOUETTE
More informationMULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES
MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada
More informationImage Edge Detection
K. Vikram 1, Niraj Upashyaya 2, Kavuri Roshan 3 & A. Govardhan 4 1 CSE Department, Medak College of Engineering & Technology, Siddipet Medak (D), 2&3 JBIET, Mpoinabad, Hyderabad, Indi & 4 CSE Dept., JNTUH,
More informationReview on Image Segmentation Techniques and its Types
1 Review on Image Segmentation Techniques and its Types Ritu Sharma 1, Rajesh Sharma 2 Research Scholar 1 Assistant Professor 2 CT Group of Institutions, Jalandhar. 1 rits_243@yahoo.in, 2 rajeshsharma1234@gmail.com
More informationLatest development in image feature representation and extraction
International Journal of Advanced Research and Development ISSN: 2455-4030, Impact Factor: RJIF 5.24 www.advancedjournal.com Volume 2; Issue 1; January 2017; Page No. 05-09 Latest development in image
More informationcoding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight
Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image
More informationA Feature Point Matching Based Approach for Video Objects Segmentation
A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer
More information4.5 VISIBLE SURFACE DETECTION METHODES
4.5 VISIBLE SURFACE DETECTION METHODES A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position. There
More informationAutomated Segmentation Using a Fast Implementation of the Chan-Vese Models
Automated Segmentation Using a Fast Implementation of the Chan-Vese Models Huan Xu, and Xiao-Feng Wang,,3 Intelligent Computation Lab, Hefei Institute of Intelligent Machines, Chinese Academy of Science,
More informationEE 701 ROBOT VISION. Segmentation
EE 701 ROBOT VISION Regions and Image Segmentation Histogram-based Segmentation Automatic Thresholding K-means Clustering Spatial Coherence Merging and Splitting Graph Theoretic Segmentation Region Growing
More informationContent-based Image and Video Retrieval. Image Segmentation
Content-based Image and Video Retrieval Vorlesung, SS 2011 Image Segmentation 2.5.2011 / 9.5.2011 Image Segmentation One of the key problem in computer vision Identification of homogenous region in the
More informationIMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur
IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important
More informationContext based optimal shape coding
IEEE Signal Processing Society 1999 Workshop on Multimedia Signal Processing September 13-15, 1999, Copenhagen, Denmark Electronic Proceedings 1999 IEEE Context based optimal shape coding Gerry Melnikov,
More informationMobile Human Detection Systems based on Sliding Windows Approach-A Review
Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg
More informationAn Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners
An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners Mohammad Asiful Hossain, Abdul Kawsar Tushar, and Shofiullah Babor Computer Science and Engineering Department,
More informationExample 1: Regions. Image Segmentation. Example 3: Lines and Circular Arcs. Example 2: Straight Lines. Region Segmentation: Segmentation Criteria
Image Segmentation Image segmentation is the operation of partitioning an image into a collection of connected sets of pixels. 1. into regions, which usually cover the image Example 1: Regions. into linear
More informationExample 2: Straight Lines. Image Segmentation. Example 3: Lines and Circular Arcs. Example 1: Regions
Image Segmentation Image segmentation is the operation of partitioning an image into a collection of connected sets of pixels. 1. into regions, which usually cover the image Example : Straight Lines. into
More informationRobust Shape Retrieval Using Maximum Likelihood Theory
Robust Shape Retrieval Using Maximum Likelihood Theory Naif Alajlan 1, Paul Fieguth 2, and Mohamed Kamel 1 1 PAMI Lab, E & CE Dept., UW, Waterloo, ON, N2L 3G1, Canada. naif, mkamel@pami.uwaterloo.ca 2
More informationIntermediate Representation in Model Based Recognition Using Straight Line and Ellipsoidal Arc Primitives
Intermediate Representation in Model Based Recognition Using Straight Line and Ellipsoidal Arc Primitives Sergiu Nedevschi, Tiberiu Marita, Daniela Puiu Technical University of Cluj-Napoca, ROMANIA Sergiu.Nedevschi@cs.utcluj.ro
More informationMULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION
MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of
More informationObject Extraction Using Image Segmentation and Adaptive Constraint Propagation
Object Extraction Using Image Segmentation and Adaptive Constraint Propagation 1 Rajeshwary Patel, 2 Swarndeep Saket 1 Student, 2 Assistant Professor 1 2 Department of Computer Engineering, 1 2 L. J. Institutes
More informationAUTOMATIC OBJECT DETECTION IN VIDEO SEQUENCES WITH CAMERA IN MOTION. Ninad Thakoor, Jean Gao and Huamei Chen
AUTOMATIC OBJECT DETECTION IN VIDEO SEQUENCES WITH CAMERA IN MOTION Ninad Thakoor, Jean Gao and Huamei Chen Computer Science and Engineering Department The University of Texas Arlington TX 76019, USA ABSTRACT
More informationSome Thoughts on Visibility
Some Thoughts on Visibility Frédo Durand MIT Lab for Computer Science Visibility is hot! 4 papers at Siggraph 4 papers at the EG rendering workshop A wonderful dedicated workshop in Corsica! A big industrial
More informationWater-Filling: A Novel Way for Image Structural Feature Extraction
Water-Filling: A Novel Way for Image Structural Feature Extraction Xiang Sean Zhou Yong Rui Thomas S. Huang Beckman Institute for Advanced Science and Technology University of Illinois at Urbana Champaign,
More informationThe Vehicle Logo Location System based on saliency model
ISSN 746-7659, England, UK Journal of Information and Computing Science Vol. 0, No. 3, 205, pp. 73-77 The Vehicle Logo Location System based on saliency model Shangbing Gao,2, Liangliang Wang, Hongyang
More informationApplying Catastrophe Theory to Image Segmentation
Applying Catastrophe Theory to Image Segmentation Mohamad Raad, Majd Ghareeb, Ali Bazzi Department of computer and communications engineering Lebanese International University Beirut, Lebanon Abstract
More informationEdge and corner detection
Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements
More informationEXTREME POINTS AND AFFINE EQUIVALENCE
EXTREME POINTS AND AFFINE EQUIVALENCE The purpose of this note is to use the notions of extreme points and affine transformations which are studied in the file affine-convex.pdf to prove that certain standard
More informationDetecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds
9 1th International Conference on Document Analysis and Recognition Detecting Printed and Handwritten Partial Copies of Line Drawings Embedded in Complex Backgrounds Weihan Sun, Koichi Kise Graduate School
More informationA NOVEL FEATURE EXTRACTION METHOD BASED ON SEGMENTATION OVER EDGE FIELD FOR MULTIMEDIA INDEXING AND RETRIEVAL
A NOVEL FEATURE EXTRACTION METHOD BASED ON SEGMENTATION OVER EDGE FIELD FOR MULTIMEDIA INDEXING AND RETRIEVAL Serkan Kiranyaz, Miguel Ferreira and Moncef Gabbouj Institute of Signal Processing, Tampere
More informationConsistent Line Clusters for Building Recognition in CBIR
Consistent Line Clusters for Building Recognition in CBIR Yi Li and Linda G. Shapiro Department of Computer Science and Engineering University of Washington Seattle, WA 98195-250 shapiro,yi @cs.washington.edu
More informationSUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS.
SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. 1. 3D AIRWAY TUBE RECONSTRUCTION. RELATED TO FIGURE 1 AND STAR METHODS
More informationLecture 3: Art Gallery Problems and Polygon Triangulation
EECS 396/496: Computational Geometry Fall 2017 Lecture 3: Art Gallery Problems and Polygon Triangulation Lecturer: Huck Bennett In this lecture, we study the problem of guarding an art gallery (specified
More informationImage Segmentation. Selim Aksoy. Bilkent University
Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]
More informationOCCLUSION BOUNDARIES ESTIMATION FROM A HIGH-RESOLUTION SAR IMAGE
OCCLUSION BOUNDARIES ESTIMATION FROM A HIGH-RESOLUTION SAR IMAGE Wenju He, Marc Jäger, and Olaf Hellwich Berlin University of Technology FR3-1, Franklinstr. 28, 10587 Berlin, Germany {wenjuhe, jaeger,
More informationImage Segmentation. Selim Aksoy. Bilkent University
Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]
More information