Use of Clustering Technique for Spatial Color Indexing Based on Silhouette Moment


P J Dutta, D K Bhattacharyya, M Dutta
Department of Comp Sc & IT, Tezpur University, Napaam, India
{pjd,dkb,malay}@tezu.ernet.in

J K Kalita
Department of Computer Science, University of Colorado, Colorado Springs, USA
kalita@pikespeak.uccs.edu

ABSTRACT

This paper presents an efficient spatial indexing technique for content-based image retrieval, based on silhouette moments, that makes the index robust with respect to the three basic transformations. The spatial index is generated by a fast and robust clustering technique that can recognize color clusters of any shape. The new clustering technique has been found to be more efficient, in terms of both time complexity and cluster quality, than many of its counterparts. The indices are stored in a spatial cluster tree and an object tree, variants of the B-tree that help speed up retrieval. A matching engine has been devised to retrieve images from the image database, with the capacity for both global and regional similarity search. Experimentally, the performance of the new indexing technique has been found to be better than that of its counterparts, providing a robust and compact index, better analysis of the color content of the image, higher precision and recall in retrieval, and better retrieval time.

Keywords: Content-based, spatial index, color cluster, object, silhouette moment, precision, recall.

1. INTRODUCTION

With the explosive advancement in imaging technologies, image retrieval has attracted increasing interest from researchers in the fields of digital libraries, image processing and database systems. Motivated by the goal of automatically computing efficient and effective descriptors that symbolize various properties of images, recent research on image retrieval systems has been directed towards the development of content-based retrieval techniques for the management of

visual information such as color, texture, shape and spatial constraints ([1]-[5]). As color plays an important role in image composition, many color indexing techniques have been studied. Among them, the color histogram is one of the most important techniques for content-based image retrieval [1] because of its efficiency and effectiveness. However, due to its statistical nature, a color histogram can index the color content of images only in a limited way, i.e., it does not include spatial information about the colors in the image. To make the color histogram more effective for image indexing, spatial information is also essential ([3],[6],[7],[8]). George et al. [14] have utilized chromaticity moments for content-based image retrieval (CBIR); their method is based on the concept of the chromaticity diagram and extracts a set of two-dimensional moments from it to characterize the shape and distribution of the chromaticities of an image. This representation is compact (only a few chromaticity moments per image) and of constant size (independent of the size and content of the image), which helps in global similarity search but lacks in regional similarity search. Any colored image can be viewed as a distribution of colored pixels in a two-dimensional plane. This distribution of colored pixels forms color clusters of arbitrary shape within the image. These clusters may have sharp edges demarcating a boundary between changes of color distribution, or diffused, fuzzy boundaries. Hence, the image pixels can be viewed as a data set having two dimensions, viz., color and position. Considering the color content and the positional aspects of each pixel, an image can be characterized by a set of color clusters of arbitrary shapes. Moreover, the image is also formed by a number of objects present in it. These objects can be characterized by a single color cluster or a combination of several color clusters. The similarity search may be global or regional [8].
In global search, all the color clusters present in the query image are useful. In regional or object level search, the selected objects

formed by the color clusters of the query image are of particular use. In this paper, we have utilized the power of the BOO-Clustering algorithm that we have devised ourselves [16], and of Generalized Density-Based Spatial Clustering of Applications with Noise (GDBSCAN) [13], to extract the color clusters from images. Once the color clusters are extracted, the objects are formed by choosing one or a few color clusters in an interactive manner. Second-order silhouette moments, invariant under rotation, translation, and uniform and non-uniform scaling, are used to generate indices, separately, for the color clusters produced by the BOO-Clustering and GDBSCAN [13] algorithms and for the objects generated in an interactive manner; these indices are finally stored in the color cluster and object index database. Once the color clusters and objects are identified, global or regional similarity search of images becomes easier. The indexing technique is significant for its robustness, compactness, higher precision and recall, and support for global and regional similarity search.

2. RELATED WORK

Several techniques have been proposed to integrate spatial information with color histograms. Hsu et al. [3] integrate spatial information with color histograms by first selecting a set of representative colors and then analyzing the spatial information of the selected colors using maximum entropy quantization with an event covering method. Stricker and Dimai [2] partition an image into five partially overlapping fuzzy regions, extract the first three moments of the color distribution of each region, and then organize them into a feature vector of small dimension. Smith and Chang [7] apply back-projection on binary color sets to extract color regions. Pass and Zabih [9] define the concept of the color coherence vector (CCV) and use it to split a color histogram vector into two parts: a coherent vector and a non-coherent vector. A pixel is

coherent if it is part of some sizable similar-colored region, i.e., if its connected component is large enough, and non-coherent otherwise. The color coherence vector of an image is the histogram over all coherent pixels of the image. Later, Huang [4] proposed the color correlogram for refining histograms. A correlogram is a table indexed by color pairs and distance, where the entry for the pair <i, j> at distance k specifies the probability of finding a pixel of color j at a distance k from a pixel of color i. Rao et al. [8] have used annular, angular and hybrid histograms for indexing the spatial distribution density of colors. The two-dimensional image is uniformly partitioned into N concentric circles with origin at the centroid of the image. Similarly, the image is uniformly subdivided into N fan-like sectors with the centroid of the image as the origin. The annular and angular histograms are defined by the distribution of the pixels in each of these rings and sectors, respectively. By combining the two approaches, a hybrid histogram is defined. Moments have been used extensively in CBIR, and in image processing generally, because they carry spatial information about the image. They also yield translation-, rotation- and scale-invariant properties of the image. Stricker et al. [17] and Mandal et al. [18] have employed cumulative histograms and Legendre histogram moments, respectively, for CBIR. George et al. [14] have used chromaticity moments for indexing the spatial distribution density of colors. This method is based on the concept of the chromaticity diagram as defined within the CIE xyY color space. In particular, each pixel in a given image yields a pair of (x, y) chromaticity values, thus forming a set of chromaticities for the entire image. This set of (x, y) pairs is called the chromaticity set of the specific image, while the corresponding xy Euclidean space is called the chromaticity space. In addition, more than one pixel in a given image may yield the same

chromaticity pair of values (x, y), thus leading to the formation of a 2D distribution (histogram) over the chromaticity space. This method has the following advantages:

1. It is compact, i.e., only a few chromaticity moments are required (typically 6-12).
2. It is of constant size, i.e., it does not depend on the size of the given image or its content in terms of the spectral variability of the depicted scenes.
3. It is comparably effective in terms of retrieval performance.

But this method has the following drawbacks:

1. As the chromaticity moments are calculated over the whole image, the index stores the characteristics of the image as a whole. Hence this index cannot locate a particular object within the image, i.e., it helps in global similarity of images but lacks regional or object-level similarity.
2. The chromaticity pairs (x, y) are quantized over the chromaticity space a priori, and the chromaticity space is fixed for all the images in the image base. This is not desirable because, to get proper color clusters from an image, the quantization parameter may need to vary from image to image. For example, an image with very feeble color differences, or a highly textured image, needs a smaller quantization value to extract color clusters. On the other hand, an image with large color differences, or a less textured image, needs a higher quantization value to extract the color clusters.
3. The moments are calculated over the quantized chromaticity space and not on the chromaticity set itself. Hence the spatial information of the chromaticity set is not reflected in the index.

These drawbacks are overcome by the method we describe in this paper.
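To illustrate drawback 3 concretely, the following sketch computes a few chromaticity moments over a quantized chromaticity trace, in the spirit of [14]. The function name, bin count, and the three moment orders shown are our own illustrative choices (the actual method uses 6-12 moments and its own quantization), so this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def chromaticity_moments(rgb, bins=16, orders=((0, 1), (1, 0), (1, 1))):
    """Sketch of chromaticity-moment indexing in the spirit of [14].

    rgb: H x W x 3 float array. The (x, y) chromaticities are binned into
    a fixed bins x bins 'trace' of the chromaticity diagram, and 2-D
    moments are taken over that quantized space -- not over the
    chromaticity set itself, which is exactly the drawback noted above."""
    s = rgb.sum(axis=2).astype(float)
    s[s == 0] = 1.0                       # avoid division by zero on black pixels
    x = rgb[..., 0] / s                   # chromaticity x = R / (R + G + B)
    y = rgb[..., 1] / s                   # chromaticity y = G / (R + G + B)
    trace, _, _ = np.histogram2d(x.ravel(), y.ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
    trace = (trace > 0).astype(float)     # presence (trace) of each cell only
    i, j = np.meshgrid(np.arange(bins), np.arange(bins), indexing="ij")
    return [float((i ** p * j ** q * trace).sum()) for p, q in orders]
```

Note that any two images whose pixels fall in the same occupied cells produce identical moments here, regardless of where those pixels sit in the image plane.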

This paper presents a new silhouette-moment-based spatial color indexing scheme which differs from the approaches reported so far in the following ways. Although quantization is necessary for all classes of images, it may cause deterioration in the quality of the index generated. Based on observation, we find that in the average case the color content of an image is limited to a few colors only. Hence quantization has been kept as a user parameter so that the user can choose the proper quantization values for extracting proper color clusters from the input images; this is discussed in Section 3.3. Two robust spatial clustering algorithms, BOO-Clustering [16] and GDBSCAN [13], have been used to extract color clusters from the input image. Once the color clusters are identified, objects formed by the color clusters of the image are extracted in an interactive manner. Second-order silhouette moments are calculated for both the color clusters and the objects of interest; these form the index of the image and carry spatial information about the image. For faster retrieval, the indices are stored in a spatial cluster index tree and an object index tree [16], both variants of the B-tree.

Figure 1. Architecture of the proposed CBIR system.

3. THE SPATIAL INDEXING SCHEME

Our indexing scheme works in four steps. In step 1, it accepts the input query image and generates all possible color clusters of the image using the BOO-Clustering algorithm and GDBSCAN. Not all of these color clusters of the input image may be of interest for index generation. Hence the color clusters, and the objects formed by color clusters, that are of importance are identified in step 2 in an interactive manner. In step 3, indices are generated for the identified color clusters and objects of interest. In step 4, query results are given based on inferences made by a matching engine. The architecture of the proposed scheme is depicted in Figure 1. In the next few subsections we describe each of the modules present in the architecture in brief.

3.1 Color Space and Quantization

A color space is used to specify a three-dimensional color co-ordinate system and a subspace of the system in which colors are represented as points. The most common color space for digital images and computer graphics is the RGB color space, in which colors are represented as a linear combination of red, green, and blue color channels. The properties that are most important in color space processing are uniformity, completeness, and uniqueness. The RGB color space is not

perceptually uniform: the metric distance between two points in the color space does not indicate how similar or dissimilar the two colors are. Additionally, the three color channels of the RGB color space do not vary consistently with one another with respect to brightness. Therefore, the pixels of the images in the image database, henceforth referred to as the imagebase, and of the query examples must be transformed into an alternative color space that satisfies the three properties, viz., uniformity, completeness, and uniqueness. A uniform color space is one in which the metric distance between points in the color space corresponds to the perceptual distance (or similarity) of the points. A complete color space contains points for all perceptually discernible colors. A color space is unique if two distinct points in the color space represent two perceptually different colors. Other color spaces such as CIE-LAB, CIE-LUV and Munsell offer improved perceptual uniformity [12,7]. In general they represent with equal emphasis the three parameters that characterize color: hue, value and saturation. Hue is the attribute of a color by which we distinguish one color from another; value indicates the lightness of a color; and saturation indicates the density of color pigments, i.e., the amount of color present. Quantization of the color space is necessary to reduce the dimensionality of the index that characterizes an image, at the cost of the quality of the index. The proposed scheme quantizes the color contents of the query image over the HSV color space. It attempts to quantize the hue component of each color pixel by balancing the visual fidelity and the dimensionality of the resulting quantization. The Human Visual System (HVS) discerns changes in the hue component at much smaller gaps than changes in saturation and value. In our experiment,

quantization over the hue component is a user parameter that facilitates the extraction of proper color clusters and objects of interest from the image.

3.2 Spatial Data Clustering Using a Density/Neighborhood Approach

Considering the color content and the position of each pixel, an image can be characterized by a set of color clusters, and a set of objects of interest, of arbitrary shapes. Hence, to identify the color clusters in an image, any density-based clustering technique can be applied. In density-based clustering, a cluster is defined as a connected dense component, growing in any direction where density leads. Several useful clustering techniques have been proposed based on this concept. However, none of these operates in linear time. We call our clustering technique, a variant of a color segmentation technique that operates in linear time, BOO-Clustering.

Figure 2. A Sample Template Shown in an Example Image

Terms Used in BOO-Clustering

The clustering technique uses the following terms:

Definition 1: Unclassified pixels are pixels that have not yet been clustered. Classified pixels are pixels that belong to a particular color cluster.

Definition 2: The target pixel (t) is the pixel currently in hand to be classified.

Definition 3: A template (T) of a target pixel t at position (x, y) is the set of pixels that are already classified and are within a neighborhood distance d of t. The mathematical model is defined in the next section. A sample template (T) is shown in Figure 2.

Definition 4: The pixel threshold (PT) is the minimum number of classified pixels required in a cluster so that it does not become a noisy cluster. Experimentally, it has been found that a cluster having fewer pixels than the number of pixels in the template does not carry useful meaning for the image. Hence, these noisy clusters are discarded.

The BOO-Clustering Technique

Figure 2 depicts an image where each block represents a pixel. Here we have not shown a color in each block, but some blocks are marked as classified (C), some are marked as unclassified (U), and some are marked as pixels from a template (TC). One block is marked as the target pixel (t). The process of clustering starts from position (0, 0) of the image (i.e., the upper left corner) and proceeds successively until the last pixel is classified. After completion of the clustering process, each pixel has been assigned a Cluster Id. The idea behind how BOO-Clustering assigns a Cluster Id to a target pixel (t) is that it searches the template (T) of the target pixel for similarly colored pixels. The template (T) of the target pixel (t) is constituted of a set of pixels (TC) which are already classified, i.e., pixels which have already been assigned a Cluster Id. Three cases may arise after the search operation. First, not a single pixel is found with the same color as the target pixel. In such a case, a new Cluster Id is assigned to the target pixel (t). It

implies that there is no cluster with the same color as the target pixel nearby. Second, if some or all of the found pixels have the same color as the target pixel and share a common Cluster Id, that common Cluster Id is assigned to the target pixel (t). This implies that there is a single cluster with the same color as the target pixel nearby. Third, if some or all of the found pixels have the same color as the target pixel but carry two or more Cluster Ids, there is more than one cluster with the same color as the target pixel. But, since they appear in the same template containing the target pixel, they should belong to the same cluster. Hence, in such a situation, the algorithm assigns any one of the found Cluster Ids to the target pixel and merges the found color clusters of the template with different Cluster Ids into one cluster. While merging, the algorithm traverses back, re-clustering successively, pixel by pixel, until it reaches the first pixel or no longer finds any of those clusters in the template of the backtracking pixels. After assigning a Cluster Id (new or already assigned) to a target pixel, the algorithm takes up the next pixel as the target pixel and continues the process of clustering. The BOO-Clustering algorithm proceeds sequentially from the first pixel until it encounters the last pixel. If, during this pass, the third case arises, the algorithm backtracks towards the first pixel to merge clusters (re-clustering) and then returns to the position where it left off and resumes clustering from the immediately following pixel. This back-and-forth motion is the heart of the algorithm, which works like a BOOMERANG; hence the name BOO-Clustering. In this form the algorithm is of order Θ(N²). To make it linear, instead of backtracking, a link is maintained in a graph for those Cluster Ids which are found more than once in the template of a target pixel.
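The graph of linked Cluster Ids is, in effect, a record of connected components: ids linked through any chain of templates belong to one cluster. As an illustrative aside (the paper itself records the links in an adjacency-matrix graph; names here are ours), the same merging can be resolved with a union-find structure:

```python
class ClusterMerger:
    """Union-find over Cluster Ids: a sketch of the linear-time
    alternative to backtracking described above. Union-find is an
    equivalent way to resolve the connected components that the paper
    keeps in an adjacency-matrix graph."""

    def __init__(self):
        self.parent = {}

    def make(self, cid):
        self.parent.setdefault(cid, cid)

    def find(self, cid):
        # path compression keeps later lookups near constant time
        root = cid
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[cid] != root:
            self.parent[cid], cid = root, self.parent[cid]
        return root

    def union(self, a, b):
        # called when the third case finds two Cluster Ids in one template
        self.make(a); self.make(b)
        self.parent[self.find(b)] = self.find(a)

m = ClusterMerger()
m.union(1, 3)   # clusters 1 and 3 met in one template
m.union(3, 7)   # later, 3 and 7 meet: all three now form one cluster
```

After the single raster pass, one final `find` per pixel yields the merged Cluster Id, so no backtracking over the image is ever needed.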
The size of the image is m × n = N pixels, and d is a number that defines the size of the neighborhood template T for an unclassified target pixel. Figure 2 depicts a template for an unclassified target pixel (t) where d = 3. Thus, the total number of pixels in the template with d = 3 is 2(d² + d) = 24. The algorithm runs

for N pixels. To assign a Cluster Id to each pixel, it searches for similarly colored pixels in the neighborhood template, at most 2(d² + d) positions. If pixels with more than one distinct Cluster Id are found, generate_graph is called to keep a link between the clusters. Hence the running time of the algorithm is N · 2(d² + d). Here d is very small compared to N, so 2(d² + d) is a constant factor, and the complexity of BOO-Clustering is Θ(N). The proposed clustering technique is advantageous in comparison to its competitor algorithms [13] because (i) it operates in linear time, (ii) it can identify clusters of all shapes (concave as well as convex), (iii) it can successfully handle overlapping sparse distributions of colors, i.e., fuzzy distributions, and (iv) it is based on a simplified data structure which makes the scheme efficient in storage as well as execution time. Next, based on the clustering results, and by considering the other important spatial and shape features of each cluster and each object, the index of the image is generated.

Data Structures/Symbols Used for the BOO-Clustering Algorithm:

Image(m, n): array holding the spatial color data of an input color image, where m is the number of rows and n the number of columns.

Cluster(m, n): array holding the Cluster Id of the corresponding pixel of the Image.

d: number that defines the size of the template.

Template (T): for a target pixel P at (x, y),
T = { P(x−i, y−j) : i = 1, 2, ..., d; j = 0, 1, ..., d; x−i ≥ 0; y−j ≥ 0 } ∪ { P(x+i, y−j) : i = 0, 1, ..., d; j = 1, 2, ..., d; x+i ≤ n; y−j ≥ 0 }.

Graph: keeps the connected components of clusters in an adjacency matrix.
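The template definition can be transcribed directly; the short sketch below (function name ours) also confirms the 2(d² + d) pixel count used in the complexity analysis.

```python
def template(x, y, d, n):
    """Neighborhood template T of a target pixel at (x, y): the
    already-classified pixels to the left and in the d rows above,
    clipped at the image border (n columns)."""
    left_part = [(x - i, y - j)
                 for i in range(1, d + 1)
                 for j in range(0, d + 1)
                 if x - i >= 0 and y - j >= 0]
    right_part = [(x + i, y - j)
                  for i in range(0, d + 1)
                  for j in range(1, d + 1)
                  if x + i <= n and y - j >= 0]
    return left_part + right_part

# In the interior of the image the template has 2(d^2 + d) pixels,
# e.g. 24 for d = 3, matching the count used in the complexity analysis.
print(len(template(10, 10, 3, 100)))   # 24
```

At the very first pixel (0, 0) the template is empty, which is why the pseudocode below assigns that pixel Cluster Id 1 unconditionally.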

Algorithm BOO-Clustering(Image(m, n))
    for row = 1 to m
        for col = 1 to n
            if (row == 1) and (col == 1)
                Cluster(row, col) = 1;
            else
                Search_in_template(row, col, Image(row, col));
            endif

Search_in_template(row, col, Target_pixel)
    search the template for pixels having the same color value as the Target_pixel;
    Case 1: not a single pixel found:
        assign Cluster(row, col) a new Cluster Id;
    Case 2: some pixels found, all with the same Cluster Id:
        assign Cluster(row, col) the found Cluster Id;
    Case 3: some pixels found, with different Cluster Ids:
        assign Cluster(row, col) any one of the found Cluster Ids;
        generate_graph(row, col, list of found Cluster Ids);

generate_graph(row, col, list)
    record the Cluster Ids in list as connected in the adjacency matrix;

3.3 Identification of Color Clusters and Objects of Interest in an Image

An iterative method is the only effective way to identify the proper color clusters and objects of interest in an image, because the color clusters and objects depend on the distribution

of color within the image (controlled by the d parameter) and on the quantization of the color of the pixels (controlled by the quantization parameter). Both these parameters vary from image to image, and no single global setting can be applied to all kinds of images. Hence, during color cluster generation, two parameters are given as input to the BOO-Clustering algorithm: the quantization over the hue component of the color of the image, and d, the size of the neighborhood template. Quantization has been kept a user parameter because, in our experiments, we found that in some images the hue components of the colors present differ widely; in such images the quantization parameter can be kept large to get clear clusters. On the other hand, for images that have small differences in the hue component, the quantization parameter is kept smaller to get distinct clusters. Hence this quantization parameter varies from image to image depending on the colors present. The other parameter that affects the clustering of colors is d, the size of the neighborhood template. A larger d value generates color clusters from pixels that are sparsely as well as densely distributed, while a smaller d value yields color clusters only where the pixel distribution is dense. Hence a proper choice of the d value and the quantization parameter is necessary to get proper color clusters. By selecting specific color clusters from the set of generated color clusters of the image, the objects of interest in the image are retrieved. Figure 3 shows clustering results of the BOO-Clustering algorithm and the extraction of an object of interest (the facial part is extracted from the green background). In our experiment we have also used GDBSCAN [13]. However, the GDBSCAN algorithm uses, besides the quantization and d parameters, a third parameter, min_point.
This parameter defines the minimum number of same-colored pixels that must be present in the template (defined by the d parameter) for the target pixel to become a core object.
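Combining the two inputs of this section, the pre-quantized color values and the template size d, the clustering pass can be sketched in Python. This is a minimal illustrative rendering (names ours, not the authors' code) that uses union-find in place of the paper's adjacency-matrix graph to keep the pass linear:

```python
def boo_clustering(image, d=1):
    """Sketch of the BOO-Clustering pass: one raster scan over
    pre-quantized color values, assigning each pixel the Cluster Id of a
    same-colored classified pixel in its neighborhood template, and
    recording Case-3 merges in a union-find structure."""
    m, n = len(image), len(image[0])
    cluster = [[0] * n for _ in range(m)]
    parent = {}                                    # union-find over Cluster Ids

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]          # path halving
            c = parent[c]
        return c

    next_id = 0
    for row in range(m):
        for col in range(n):
            # classified same-colored pixels within the d-neighborhood
            found = {find(cluster[r][c])
                     for r in range(max(0, row - d), row + 1)
                     for c in range(max(0, col - d), min(n, col + d + 1))
                     if (r, c) < (row, col) and image[r][c] == image[row][col]}
            if not found:                          # Case 1: start a new cluster
                next_id += 1
                parent[next_id] = next_id
                cluster[row][col] = next_id
            else:                                  # Cases 2 and 3
                ids = sorted(found)
                cluster[row][col] = ids[0]
                for other in ids[1:]:              # Case 3: merge the clusters
                    parent[find(other)] = find(ids[0])
    # resolve merged ids so each cluster carries one representative id
    return [[find(cluster[r][c]) for c in range(n)] for r in range(m)]

grid = boo_clustering([["a", "a", "b"],
                       ["b", "a", "a"]], d=1)
# the four connected 'a' pixels share one Cluster Id; the two
# non-adjacent 'b' pixels keep separate Ids
```

A coarser quantization would map nearby hues to the same value before this pass, merging feebly different regions, which is precisely why the quantization parameter is left to the user.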

Figure 3. Clustering results of BOO-Clustering and generation of objects of interest.

3.4 The Spatial Cluster Index and Object Index

The output of the BOO-Clustering algorithm and the GDBSCAN algorithm is a set of color clusters based on the positional distribution of colors. These color clusters can have sharp or diffused boundaries depending on the type of image. The objects of interest in the image are formed by selecting one or more color clusters from the set of color clusters of the image: to form a particular object, the color clusters that constitute it are chosen. These color clusters and objects of interest need to be indexed separately, using parameters suitable for similarity search. A robust index is one which is invariant under translation, rotation, uniform scaling and non-uniform scaling (i.e., change of aspect ratio). To achieve this goal, we use silhouette moments [15] of second order in our experiments. A generated color cluster or object of interest consists of a set of pixels, denoted S_q, having an average color C_p, that defines an area in the image with a sharp or diffused boundary. Hence these image segments can be treated as silhouette images. Silhouette images are binary images whose intensity takes only two values, viz., 0 and 1 [15]. In our case, a pixel that

forms a cluster or an object is assigned the value 1, i.e., f(x, y) = 1, and all other points are assigned the value 0, i.e., f(x, y) = 0. For a silhouette image, the point (x_0, y_0) gives the geometric center of the image region. When moments are calculated by shifting the origin of the reference system to the intensity centroid of the image, they are called central moments. This transformation makes the moment computation independent of the position of the image reference system, and hence translation invariant. The central moment [15] can be defined as

    \mu_{pq} = \int \int (x - x_0)^p (y - y_0)^q f(x, y) \, dx \, dy    (1)

    \mu_{pq} = \frac{1}{mn} \sum_x \sum_y (x - x_0)^p (y - y_0)^q f(x, y)    (2)

where p, q = 0, 1, 2, 3, ... is the order of the moment. To make the index scale invariant, let the image be scaled by a uniform factor k. Then we have

    x' = kx;  y' = ky.    (3)

Hence, the above transformation also leads to the following expression for the scaled area element:

    dx' \, dy' = k^2 \, dx \, dy.    (4)

The moments of the scaled image can now be expressed in terms of the moments of the original image as

    m'_{pq} = k^{p+q+2} m_{pq}.    (5)

From the above equation we also get

    m'_{00} = k^2 m_{00}.    (6)

By eliminating the unknown scale factor k from the above two equations, we get

    \frac{m'_{pq}}{(m'_{00})^{(p+q+2)/2}} = \frac{m_{pq}}{(m_{00})^{(p+q+2)/2}}.    (7)

Thus the term

    \eta_{pq} = \frac{\mu_{pq}}{(\mu_{00})^{(p+q+2)/2}}    (8)

is invariant under both translation and scale variation of an image. If an image is transformed with unequal scale factors k_1, k_2 along the x and y axes respectively, then the transformed moments can be obtained as

    m'_{pq} = (k_1)^{p+1} (k_2)^{q+1} m_{pq}.    (9)

Writing the explicit equations for the first few orders of the moments, and eliminating k_1 and k_2 from these equations, we get

    \eta_{pq} = \frac{\mu_{pq} \, (\mu_{00})^{(p+q+2)/2}}{(\mu_{20})^{(p+1)/2} (\mu_{02})^{(q+1)/2}}    (10)

which is invariant with respect to non-uniform scaling of the image and is known as the aspect-ratio invariant. The rotation of a silhouette image by an angle \theta has an associated pixel co-ordinate transformation given by

    \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (11)

The second-order moments of the rotated silhouette image can be related to the initial moments of the image by the equations below:

    m'_{20} = m_{20} \cos^2\theta + m_{11} \sin 2\theta + m_{02} \sin^2\theta    (12)

    m'_{11} = \tfrac{1}{2}(m_{02} - m_{20}) \sin 2\theta + m_{11} \cos 2\theta    (13)

    m'_{02} = m_{20} \sin^2\theta - m_{11} \sin 2\theta + m_{02} \cos^2\theta    (14)

From the above three equations we get the term (m_{20} + m_{02}) as a rotation invariant. Using the \eta_{pq} given in equation (10) in place of m_{pq} in the expressions for the rotation invariants, we get functions that are invariant with respect to translation, rotation, scale, and aspect ratio. A set of second-order moment invariants is

    \varphi_1 = \eta_{20} + \eta_{02}    (15)

    \varphi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2    (16)

    \varphi_3 = \eta_{20}\eta_{02} - \eta_{11}^2.    (17)

In our experiment, equations (15), (16), and (17) are utilized for indexing the color clusters and the objects generated. Hence the indices for the color clusters and the objects of interest consist of four parameters each, viz., <average color of the cluster, ϕ1, ϕ2, ϕ3> and <average color of the object, ϕ1, ϕ2, ϕ3> respectively.

3.5 Robustness & Compactness of the Index

For an index to be robust, it has to be invariant with respect to the three basic transformations, viz., translation, rotation, and uniform and non-uniform scaling. The clusters produced by BOO-Clustering are more reasonable than those of GDBSCAN [13] for the following reasons:
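The index computation of equations (2), (8) and (15)-(17) can be sketched for a binary mask as follows. This is a sketch with our own names; for simplicity it uses the uniform-scale normalization η of equation (8) (the paper substitutes the η of equation (10) for full aspect-ratio invariance), and the central moments are taken as plain sums over the silhouette pixels.

```python
import numpy as np

def silhouette_invariants(mask):
    """Second-order silhouette-moment invariants of a binary mask
    (cluster/object pixels = 1): central moments about the intensity
    centroid, normalized per eq. (8), combined per eqs. (15)-(17)."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()             # intensity centroid (x_0, y_0)
    mu = lambda p, q: ((xs - x0) ** p * (ys - y0) ** q).sum()
    mu00 = mu(0, 0)                           # silhouette area
    eta = lambda p, q: mu(p, q) / mu00 ** ((p + q + 2) / 2)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    phi1 = e20 + e02                          # eq. (15)
    phi2 = (e20 - e02) ** 2 + 4 * e11 ** 2    # eq. (16)
    phi3 = e20 * e02 - e11 ** 2               # eq. (17)
    return phi1, phi2, phi3

# Invariance check: the same rectangle, translated and uniformly scaled
# by 2, yields the same invariants up to discretization error.
a = np.zeros((20, 20)); a[2:6, 3:11] = 1          # 4 x 8 block
b = np.zeros((40, 40)); b[10:18, 5:21] = 1        # same block, x2, moved
print(np.allclose(silhouette_invariants(a),
                  silhouette_invariants(b), atol=1e-2))   # -> True
```

A 90° rotation of the mask leaves the triple unchanged exactly, since it merely swaps η20 and η02 in (15)-(17).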

1. The time complexity of BOO-Clustering is Θ(N), as compared to Θ(N log N) for GDBSCAN when applied to image data.

2. Both the GDBSCAN and BOO-Clustering algorithms can identify color clusters of arbitrary shape present in the image, viz., concave and convex. But a drawback of GDBSCAN is that it depends on two parameters, viz., d and min_points, whereas BOO-Clustering depends only on one parameter, viz., d, apart from the quantization parameter, which is equally important for both algorithms. Hence, keeping d and the quantization parameter fixed, we always get the same set of color clusters for a particular image with the BOO-Clustering algorithm. But with GDBSCAN, keeping d and the quantization parameter fixed while altering the third parameter, min_points, we get different sets of color clusters. Hence the quality of the clusters produced by BOO-Clustering is better than that of GDBSCAN for shape recognition.

The three parameters ϕ1, ϕ2, ϕ3 have already been proved to be robust with respect to the three basic transformations [15]. The other parameter, the average color of the cluster or object, is also robust with respect to the three basic transformations, because the average color of a cluster or object always remains the same even when the image undergoes these transformations. Moreover, if the color of the image is altered, the other three components of the index will not change. Thus the four-dimensional index is robust in all respects. As regards the compactness of the index, both the BOO-Clustering and GDBSCAN algorithms produce the color clusters present in the image while eliminating noisy clusters. From these color clusters, objects of interest are extracted. For each color cluster and each object of interest,

a four-tuple index is calculated. Hence, the size of the index varies from image to image depending on the color clusters and objects of interest present in the image. The index generated by the George et al. [14] method is more compact than ours, because their method generates only one set of twelve chromaticity moments for each image, fixed for any image. Considering retrieval capability, however, ours is better than its counterparts: our method can perform global search as well as region- or object-level search, whereas the George et al. method cannot search at the object level.

Figure 4. Clustering result by the BOO-Clustering algorithm.

Figure 5. Clustering result by the GDBSCAN algorithm.

3.6 Database Organization

The output of the BOO-Clustering algorithm is a set of color clusters, from which the objects of interest are separated out of the image in an interactive manner. Both the color clusters and the objects of interest are silhouette images, and second-order moments are calculated for indexing. These are termed the spatial cluster index and the spatial object index, respectively. A spatial cluster index is represented by the parameters <average color of cluster, ϕ1, ϕ2, ϕ3>; a spatial object index is represented by the parameters <average color of object, ϕ1, ϕ2, ϕ3>. Hence, a search on the spatial cluster indices of all the color clusters present in the image represents a global similarity search, while a search on a spatial object index, or on one or a few spatial cluster indices of the image, represents a regional similarity search. The spatial cluster indices are stored in a spatial cluster tree and the spatial object indices in a spatial object tree. Both the cluster tree and the object tree are variants of the B-tree (Figures 6 & 7). The root node of each tree points to k independent tree structures, where k is the number of parameters in the index. If a new parameter is to be accommodated (i.e., for a (k+1)-dimensional index), the root node has to be updated by insertion of a new pointer, and an associated tree structure has to be generated accordingly. Each of the parameter trees maintains the parameter key value (i.e., the cluster/object color value, ϕ1 value, ϕ2 value, or ϕ3 value), along with a pointer to a list of Image IDs (i.e., PIDs). For an image, the spatial index, which is a set of clusters or objects, will have the same Image ID in many Image ID lists of different clusters or objects of the spatial index.
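A minimal sketch of one such parameter tree, with a sorted key list standing in for the B-tree variant (class and method names are our own; the paper does not prescribe this interface), showing keys mapped to Image ID lists and a tolerance-range lookup:

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

class ParameterTree:
    """Sketch of one parameter tree of the spatial cluster/object index:
    key values (color, phi1, phi2 or phi3) map to sets of Image IDs.
    The paper stores these in a B-tree variant; a sorted key list over a
    dict stands in for it here and supports the same range lookups."""

    def __init__(self):
        self.posting = defaultdict(set)   # key value -> set of Image IDs

    def insert(self, key, image_id):
        self.posting[round(key, 4)].add(image_id)

    def search(self, key, tol=0.0):
        """All Image IDs whose key lies within +/- tol of `key`."""
        keys = sorted(self.posting)
        lo = bisect_left(keys, round(key - tol, 4))
        hi = bisect_right(keys, round(key + tol, 4))
        ids = set()
        for k in keys[lo:hi]:
            ids |= self.posting[k]
        return ids

# One tree per index parameter, hung off a root node:
root = {name: ParameterTree() for name in ("color", "phi1", "phi2", "phi3")}
root["phi1"].insert(0.203, "img7")
root["phi1"].insert(0.207, "img9")
print(root["phi1"].search(0.205, tol=0.005))   # both images match
```

Adding a (k+1)-th parameter amounts to adding one more entry to `root`, mirroring the pointer insertion described above.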

3.7 Matching Engine

The matching engine supports searching the imagebase on the spatial cluster index as well as on the spatial object index. The conjunctions AND and OR can be applied both to clusters/objects and to their parameters <average color of cluster/object, ϕ1, ϕ2, ϕ3>. For example, for a query image having five clusters/objects (O1, O2, O3, O4, O5), a query can be: "Search for all images matching the color value AND the ϕ1 value OR the ϕ3 value of clusters/objects O1 AND O2 OR O5". For this query, the search starts from cluster/object O1. O1 has four parameters, of which ϕ2 is not considered. The matching engine finds the image IDs that match the color value of O1, and likewise the image IDs matching its ϕ1 and ϕ3 values. These three image ID lists are combined according to the given query (color value AND ϕ1 value OR ϕ3 value) into a single image ID list for O1. The engine similarly generates image ID lists for O2 and O5, and finally merges the three lists into one according to the query (O1 AND O2 OR O5). This final list of image IDs is the retrieved list of images for the given query. The matching time depends on the number of clusters/objects in the query image and the total number of images S in the imagebase: each parameter lookup in a tree costs O(log S), so the matching process costs O(|O| · log S), where |O| is the number of clusters/objects. Since |O| is very small for a query image compared to the number of images S in the imagebase, the complexity of the search is effectively O(log S).
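The list combination the engine performs can be illustrated with set algebra. This is a hypothetical helper (the name combine is not from the paper), assuming AND means intersection and OR means union, applied left to right as in the example query:

```python
# Combine per-parameter (or per-object) image-ID sets left to right,
# using AND = intersection and OR = union.
def combine(id_lists, conjunctions):
    """id_lists: list of sets of image IDs; conjunctions: one 'AND'/'OR'
    between each adjacent pair of lists."""
    result = set(id_lists[0])
    for conj, ids in zip(conjunctions, id_lists[1:]):
        result = result & ids if conj == "AND" else result | ids
    return result

color_ids = {"img1", "img2", "img3"}   # matched O1's color value
phi1_ids  = {"img2", "img3"}           # matched O1's phi1 value
phi3_ids  = {"img7"}                   # matched O1's phi3 value
# (color AND phi1) OR phi3 -> {'img2', 'img3', 'img7'}
final = combine([color_ids, phi1_ids, phi3_ids], ["AND", "OR"])
```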

Figure 6. The Spatial Color Cluster Tree. Figure 7. The Spatial Object Tree.

Data Structures and Symbols Used for the Matching Engine:

Spatial_Index(O, P): a two-dimensional array for the query image, where O is the number of clusters/objects in the index and P is the number of parameters per object.

Parameter(4): an array holding the input parameters (Color, ϕ1, ϕ2, ϕ3) on which the search is to be performed.
PNO: the total number of parameters in Parameter(4).
Conj_Parameter(3): an array holding the conjunctions (AND/OR) between the input parameters.
Conj_Object(O-1): an array holding the conjunctions (AND/OR) between the objects in the spatial index.
Retrieved_Parameter_Image_List: a linked list storing the image IDs retrieved per parameter.
Retrieved_Object_Image_List: a linked list storing the image IDs retrieved per object.

Matching_engine(Spatial_Index, Parameter, Conj_Parameter, Conj_Object)
  Initialize Retrieved_Object_Image_List.
  For i = 1 to O
    Initialize Retrieved_Parameter_Image_List.
    For j = 1 to PNO
      Go to the root node and select the pointer for Parameter(j).
      Search the selected tree for the corresponding Spatial_Index(i, j) value.
      If found()
        If j == 1
          Copy the list of image IDs from the tree node to Retrieved_Parameter_Image_List.
        Else
          Merge the list of image IDs from the tree node into Retrieved_Parameter_Image_List
          using the conjunction in Conj_Parameter(j-1).
    If i == 1
      Copy Retrieved_Parameter_Image_List to Retrieved_Object_Image_List.
    Else
      Merge Retrieved_Parameter_Image_List into Retrieved_Object_Image_List
      using the conjunction in Conj_Object(i-1).

3.8 Efficiency of Retrieval
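A minimal runnable translation of the pseudocode above, under the assumption that each parameter tree behaves like a map from key value to a set of image IDs (all names are illustrative, not the authors' code):

```python
# Runnable sketch of Matching_engine. param_trees maps each parameter
# name to {key value -> set of image IDs}, standing in for the B-trees.
def matching_engine(spatial_index, parameters, conj_parameter, conj_object,
                    param_trees):
    object_ids = set()
    for i, obj in enumerate(spatial_index):       # one entry per cluster/object
        param_ids = set()
        for j, p in enumerate(parameters):        # search each requested tree
            found = param_trees[p].get(obj[p], set())
            if j == 0:
                param_ids = set(found)            # copy for the first parameter
            elif conj_parameter[j - 1] == "AND":
                param_ids &= found                # merge with AND
            else:
                param_ids |= found                # merge with OR
        if i == 0:
            object_ids = param_ids                # copy for the first object
        elif conj_object[i - 1] == "AND":
            object_ids &= param_ids
        else:
            object_ids |= param_ids
    return object_ids

trees = {"color": {42: {"a", "b"}, 7: {"c"}},
         "phi1": {0.3: {"a"}, 0.5: {"c"}}}
query = [{"color": 42, "phi1": 0.3},              # object O1
         {"color": 7, "phi1": 0.5}]               # object O2
result = matching_engine(query, ["color", "phi1"], ["AND"], ["OR"], trees)
```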

There are two phases to the computation involved in querying an imagebase: first, calculation of the index for the query image; second, comparison of the generated index with the stored indices of the images in the imagebase and subsequent retrieval of the similar images.

Retrieval Time by Chromaticity Moments:
1. Computation of chromaticity moments = M · Q² · (2µ + α), where M = number of chromaticity moments = 12, Q = number of histogram bins, and µ, α are the costs of a multiplication and an addition (two multiplications and one addition per histogram bin).
2. Histogram computation time = N, where N = total number of pixels in the image.
3. Time taken to read the chromaticity moments of each stored image = S · M · ρ, where S = total number of images in the imagebase and ρ = time to read a number from disk.
4. Time to compute the distance to each stored image = S · M · (α + d), where d = time taken to compute an absolute value.
Hence the total retrieval time = (1) + (2) + (3) + (4) = M · Q² · (2µ + α) + N + S · M · ρ + S · M · (α + d), which, dropping the constant per-operation costs, is of order M · Q² + N + S · M + S · M = M · (Q² + 2S) + N

= O(Q² + 2S + N), since M = 12 is a constant.

Retrieval Time by the BOO-Clustering and GDBSCAN techniques:
1. Time taken for cluster generation by the BOO-Clustering technique = N.
2. Time taken for cluster generation by the GDBSCAN technique = N · log N.
3. Time taken for index calculation for an object having C_i pixels = time to calculate the central moments µ00, µ20, µ02, µ11 + time to calculate the aspect-ratio-invariant moments η20, η02, η11 + time to calculate the rotation-, translation-, scale- and aspect-ratio-invariant moments ϕ1, ϕ2, ϕ3 = 4 · C_i + ψ + ν.
4. Time taken for index calculation for O objects = 4 · (Σ_{i=1}^{O} C_i) + ψ + ν = 4N + ψ + ν ≈ 4N.
5. Retrieval time = O · log S ≈ log S.
6. Total retrieval time taken by the BOO-Clustering technique = (1) + (4) + (5) = N + 4N + log S = 5N + log S.
7. Total retrieval time taken by the GDBSCAN technique = (2) + (4) + (5) = N · log N + 4N + log S ≈ N · log N + log S.
Here, Q² < 5N and Q² < N · log N, but 2S >> log S. If the number of images in the imagebase is small, the chromaticity-moment index has a time advantage over the BOO-Clustering and GDBSCAN techniques, at the cost of precision and recall. As the number of images in the imagebase increases, both the BOO-Clustering and GDBSCAN techniques gain a time advantage over chromaticity moments, with good precision and recall.
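The per-pixel moment computation of step 3 can be sketched as follows. This is not the authors' formulation: their aspect-ratio-invariant moments follow [15]. As a labeled stand-in, the classical second-order central moments and normalized invariants are used here, to illustrate the O(C_i) cost per moment and the translation invariance of the index:

```python
# Illustrative stand-in: second-order central moments (mu00, mu20, mu02,
# mu11) of a binary silhouette, normalized moments, and the classical
# second-order rotation invariants (NOT the paper's exact phi1..phi3).
def silhouette_moments(pixels):
    """pixels: iterable of (x, y) coordinates belonging to the silhouette."""
    pts = list(pixels)
    m00 = len(pts)                                   # area = mu00
    cx = sum(x for x, _ in pts) / m00                # centroid
    cy = sum(y for _, y in pts) / m00
    mu20 = sum((x - cx) ** 2 for x, _ in pts)        # each sum is one O(C_i)
    mu02 = sum((y - cy) ** 2 for _, y in pts)        # pass, matching the
    mu11 = sum((x - cx) * (y - cy) for x, y in pts)  # 4*C_i cost term
    # scale-normalized moments: eta_pq = mu_pq / mu00^((p+q)/2 + 1)
    eta20, eta02, eta11 = (m / m00 ** 2 for m in (mu20, mu02, mu11))
    phi1 = eta20 + eta02                             # rotation invariant
    phi2 = (eta20 - eta02) ** 2 + 4 * eta11 ** 2
    return m00, phi1, phi2

# a 3x3 square silhouette: the invariants are unchanged under translation
square = [(x, y) for x in range(3) for y in range(3)]
shifted = [(x + 10, y + 7) for x, y in square]
assert silhouette_moments(square) == silhouette_moments(shifted)
```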

Table 1. Comparison of the BOO-Clustering algorithm vs. GDBSCAN at MinPoints = 5 to 30, with quantization parameter = 0.01 for both algorithms; D is the neighborhood distance, T the execution time in milliseconds, and C the number of clusters detected.

4. EXPERIMENTAL RESULTS

To test the technique, we used a downloaded database ((a) the Cohn-Kanade Facial Expression Database; (b) ftp://ftp.eecs.umich.edu.groups /ai/dberwick/essbthm.zip; and other images) consisting of 5000 real-world and synthetic images divided into 100 similarity groups such as facial expressions, scenery, animals, cars, flowers and space. The chromaticity-moment-based technique was implemented alongside the proposed cluster- and object-based techniques using the BOO-Clustering and GDBSCAN methods in the HSV color space. All the methods were implemented in Java on an Intel Pentium III machine. For any input image, indices are generated for the color clusters and objects of interest in an interactive manner for both the BOO-Clustering and GDBSCAN methods. Similarity search is done on clusters of interest or objects of interest

using a matching engine. We performed a three-way test to establish the superiority of the BOO-Clustering method over the Chromaticity Moment and GDBSCAN methods:
1. Comparison of the cluster-computing performance of the BOO-Clustering method vs. the GDBSCAN method.
2. Retrieval-time comparison of the BOO-Clustering method vs. the GDBSCAN and Chromaticity Moment methods.
3. Retrieval effectiveness of the three methods for global and regional search.

Figure 8. Capacity of cluster detection by the BOO-Clustering algorithm vs. the GDBSCAN algorithm (number of clusters detected vs. neighborhood distance D, for MinPoints = 5 to 30).

Figure 9. Execution-time comparison of the BOO-Clustering algorithm vs. the GDBSCAN algorithm.

4.1 Comparison of the cluster-computing performance of the BOO-Clustering algorithm vs. the GDBSCAN algorithm

The input parameters to the BOO-Clustering algorithm are the neighborhood distance d, which defines the template, and a quantization parameter over the hue component of the HSV model. The input parameters of the GDBSCAN algorithm are the neighborhood distance d, min_points, and the quantization parameter. To test the algorithms, the quantization parameter was kept constant at 0.01. Since the clusters generated by the BOO-Clustering algorithm depend only on the d parameter, there is only one column entry for execution time and number of clusters detected by BOO-Clustering in Table 1, whereas GDBSCAN depends on both d and the various min_points values. Figure 8 plots BOO-Clustering vs. GDBSCAN with respect to d and the number of clusters detected. At d = 2 the number of clusters detected by the BOO-Clustering algorithm is the highest, but for GDBSCAN with min_points = 20 to 30 the number of clusters detected is 0, because the number of pixels in the neighborhood of a pixel for d = 2 is less than min_points = 20, 25 or 30. From d = 30 to 50, both algorithms behave similarly, i.e., they detect the same number of clusters; comparing the time taken by both algorithms at this point therefore compares their efficiency. Figure 9 shows the cluster computation time taken by both algorithms for d = 30 to 50. It is clear from both plots that the BOO-Clustering algorithm can detect the smallest color clusters present in the image with reasonably faster execution time.

4.2 Retrieval-time comparison of BOO-Clustering vs. the GDBSCAN and Chromaticity Moment methods

In this experiment we plot the average time taken by the three methods vs. the number of images in the imagebase for global and regional search. In global search, all clusters generated by the GDBSCAN and BOO-Clustering methods are taken into account in the search operation; in regional search, only some significant clusters or objects produced by the two methods are. To compare retrieval times, we took 100 images from the imagebase, and the average retrieval times over these 100 images for both global and regional search were plotted for the BOO-Clustering, GDBSCAN and Chromaticity Moment methods at different database sizes (Figures 10 and 11). It can be clearly seen that as the size of the imagebase increases, the BOO-Clustering method has a time advantage over the GDBSCAN and Chromaticity Moment methods.

4.3 Retrieval effectiveness of the three methods for global and regional search

Retrieval effectiveness is measured by the average precision-recall (APR) curve. In our experiment, the APR was plotted for the best 200 of 500 queries taken at random over the downloaded imagebase. Numerically, the area [9] enclosed by an APR curve and the axes is used as a performance metric, called the performance area, defined by the trapezoidal sum

(1/2) Σ_{i=1}^{N-1} (x_{i+1} - x_i)(y_{i+1} + y_i)     (7)

where (x_i, y_i) is the (recall, precision) pair when the number of retrieved images is i and N is the total number of top matches.
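Equation (7) amounts to the trapezoidal area under the precision-recall curve, which can be sketched as follows (the function name is illustrative):

```python
# Performance area of Eq. (7): trapezoidal area under a precision-recall
# curve given (recall, precision) pairs ordered by number of retrievals.
def performance_area(points):
    """points: list of (recall, precision) pairs with non-decreasing recall."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += 0.5 * (x1 - x0) * (y1 + y0)  # one trapezoid per adjacent pair
    return area

# an ideal retriever holds precision 1.0 at every recall level,
# so its APR curve encloses (approximately) unit area
ideal = [(r / 10, 1.0) for r in range(11)]
print(performance_area(ideal))
```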

Our new method was compared with the chromaticity-moment-based technique [11] and the GDBSCAN method for both global and regional search. In global search, all the clusters produced by BOO-Clustering and GDBSCAN for a query image are taken into account in the search operation for both techniques. Figure 12 shows the query results of global search for the three techniques. Experimentally, the performance areas for the curves are 387 (BOO-Clustering-based retrieval), 321 (GDBSCAN-based retrieval) and 271 (chromaticity-moment-based retrieval). Hence the improvements produced by the proposed BOO-Clustering-based method over the GDBSCAN- and chromaticity-moment-based techniques are 20.56% and 42.8%, respectively. Our new technique was also tested on specific clusters and objects (formed by combining selected clusters) of the query image, alongside its competitors. For fine-tuning the search operation we used AND/OR conjunctions between the clusters and objects of some of the query images. That is, if five significant clusters (C1, C2, C3, C4, C5) of a query image are chosen for the search operation, the query can be: "Search for all the images in the imagebase where clusters C1 AND C2 AND C3 OR C4 AND C5 match the query image". Figure 13 shows the query results for regional search using the three techniques. Experimentally, the performance areas for the curves are 309 (BOO-Clustering-based retrieval), 258 (GDBSCAN-based retrieval) and 202 (chromaticity-moment-based retrieval). Hence, the improvements produced by the proposed BOO-Clustering-based method over the GDBSCAN- and chromaticity-moment-based techniques are 19.77% and 52.97%, respectively.

Figure 10. Retrieval time comparison for regional search. Figure 11. Retrieval time comparison for global search (time in milliseconds vs. number of images in the imagebase, for the BOO-Clustering, GDBSCAN and Chromaticity Moment methods).

5. CONCLUSION

In this paper, we have presented an improved content-based indexing scheme. For any color image, the scheme generates a compact, four-dimensional, transformation-invariant index for the color clusters and objects of interest produced by a robust data clustering technique. The indices support global and regional similarity search for images, and are stored in a spatial tree structure for faster retrieval. The new scheme is shown to be superior to its counterparts [14]. The main drawback of the scheme is that the clusters generated by the BOO-Clustering and GDBSCAN algorithms must be selected in an interactive manner for object detection and subsequent index generation for all images while creating the spatial cluster and object index database; hence, creating this database takes a large amount of time. The scheme

will be more efficient if this phase could be automated; research is underway to achieve this with the help of cognitive learning processes.

Figure 12. Precision-recall for the BOO-Clustering vs. GDBSCAN and chromaticity-moment-based techniques for global search. Figure 13. Precision-recall for the BOO-Clustering vs. GDBSCAN and chromaticity-moment-based techniques for regional search.

6. REFERENCES

[1] M. J. Swain and D. H. Ballard. Color Indexing. International Journal of Computer Vision, 7(1), 1991.
[2] M. Stricker and A. Dimai. Color Indexing with Weak Spatial Constraints. SPIE Proceedings, 2670:29-40, February 1996.
[3] W. Hsu, T. S. Chua, and H. K. Pung. An Integrated Color-Spatial Approach to Content-Based Image Retrieval. ACM Multimedia Conference, 1995.
[4] J. Huang. Color-Spatial Image Indexing and Applications. PhD thesis, Cornell University, 1998.
[5] J. M. Zachary. An Information Theoretic Approach to Content Based Image Retrieval. PhD thesis, Louisiana State University.
[6] R. Rickman and J. Stonham. Content-Based Image Retrieval Using Color Tuple Histograms. SPIE Proceedings, pages 2-7.
[7] J. Smith and S. F. Chang. Tools and Techniques for Color Image Retrieval. SPIE Proceedings.
[8] A. Rao, R. K. Srihari, and Z. Zhang. Spatial Color Histograms for Content-Based Image Retrieval. 11th IEEE International Conference on Tools with Artificial Intelligence, Chicago, Illinois, November.
[9] G. Pass and R. Zabih. Histogram Refinement for Content-Based Image Retrieval. IEEE Workshop on Applications of Computer Vision, December.
[10] E. Kolatch. Clustering Algorithms for Spatial Databases: A Survey.

[11] R. K. Srihari, Z. F. Zhang, and A. Rao. Image Background Search: Combining Object Detection Techniques with Content-Based Image Retrieval (CBIR) Systems. Proceedings of the IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL 99), in conjunction with CVPR 99, June 1999.
[12] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods. John Wiley and Sons.
[13] J. Sander, M. Ester, H.-P. Kriegel, and X. Xu. Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications.
[14] G. Paschos, I. Radev, and N. Prabakar. Image Content-Based Retrieval Using Chromaticity Moments. IEEE Transactions on Knowledge and Data Engineering, 15(5), September/October.
[15] R. Mukundan and K. R. Ramakrishnan. Moment Functions in Image Analysis. World Scientific.
[16] P. J. Dutta, D. K. Bhattacharyya, J. Kalita, and M. Dutta. Spatial Color Indexing Using Clustering Technique. The 8th World Multi-Conference on Systemics, Cybernetics and Informatics (SCI 2004), Vol. VI: Image, Acoustics, Signal Processing & Optical Systems, Technologies and Applications, Orlando, Florida, July 2004.
[17] M. Stricker and M. Orengo. Similarity of Color Images. SPIE Proceedings: Storage and Retrieval for Image and Video Databases III, Vol. 2420, 1995.
[18] M. K. Mandal, T. Aboulnasr, and S. Panchanathan. Image Indexing Using Moments and Wavelets. IEEE Transactions on Consumer Electronics, 42(3), 1996.

Figure 14. Some retrieval results of the proposed method.


More information

Factorization with Missing and Noisy Data

Factorization with Missing and Noisy Data Factorization with Missing and Noisy Data Carme Julià, Angel Sappa, Felipe Lumbreras, Joan Serrat, and Antonio López Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona,

More information

Lecture 8 Object Descriptors

Lecture 8 Object Descriptors Lecture 8 Object Descriptors Azadeh Fakhrzadeh Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapter 11.1 11.4 in G-W Azadeh Fakhrzadeh

More information

Content Based Image Retrieval Using Color Quantizes, EDBTC and LBP Features

Content Based Image Retrieval Using Color Quantizes, EDBTC and LBP Features Content Based Image Retrieval Using Color Quantizes, EDBTC and LBP Features 1 Kum Sharanamma, 2 Krishnapriya Sharma 1,2 SIR MVIT Abstract- To describe the image features the Local binary pattern (LBP)

More information

CLASSIFICATION OF BOUNDARY AND REGION SHAPES USING HU-MOMENT INVARIANTS

CLASSIFICATION OF BOUNDARY AND REGION SHAPES USING HU-MOMENT INVARIANTS CLASSIFICATION OF BOUNDARY AND REGION SHAPES USING HU-MOMENT INVARIANTS B.Vanajakshi Department of Electronics & Communications Engg. Assoc.prof. Sri Viveka Institute of Technology Vijayawada, India E-mail:

More information

A Computer Vision System for Graphical Pattern Recognition and Semantic Object Detection

A Computer Vision System for Graphical Pattern Recognition and Semantic Object Detection A Computer Vision System for Graphical Pattern Recognition and Semantic Object Detection Tudor Barbu Institute of Computer Science, Iaşi, Romania Abstract We have focused on a set of problems related to

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

Boundary descriptors. Representation REPRESENTATION & DESCRIPTION. Descriptors. Moore boundary tracking

Boundary descriptors. Representation REPRESENTATION & DESCRIPTION. Descriptors. Moore boundary tracking Representation REPRESENTATION & DESCRIPTION After image segmentation the resulting collection of regions is usually represented and described in a form suitable for higher level processing. Most important

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Beyond Mere Pixels: How Can Computers Interpret and Compare Digital Images? Nicholas R. Howe Cornell University

Beyond Mere Pixels: How Can Computers Interpret and Compare Digital Images? Nicholas R. Howe Cornell University Beyond Mere Pixels: How Can Computers Interpret and Compare Digital Images? Nicholas R. Howe Cornell University Why Image Retrieval? World Wide Web: Millions of hosts Billions of images Growth of video

More information

A Miniature-Based Image Retrieval System

A Miniature-Based Image Retrieval System A Miniature-Based Image Retrieval System Md. Saiful Islam 1 and Md. Haider Ali 2 Institute of Information Technology 1, Dept. of Computer Science and Engineering 2, University of Dhaka 1, 2, Dhaka-1000,

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1 Machine vision systems Problem definition Image acquisition Image segmentation Connected component analysis Machine vision systems - 1 Problem definition Design a vision system to see a flat world Page

More information

Robust color segmentation algorithms in illumination variation conditions

Robust color segmentation algorithms in illumination variation conditions 286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,

More information

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37 Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The

More information

Tools for texture/color based search of images

Tools for texture/color based search of images pp 496-507, SPIE Int. Conf. 3106, Human Vision and Electronic Imaging II, Feb. 1997. Tools for texture/color based search of images W. Y. Ma, Yining Deng, and B. S. Manjunath Department of Electrical and

More information

Normalized cuts and image segmentation

Normalized cuts and image segmentation Normalized cuts and image segmentation Department of EE University of Washington Yeping Su Xiaodan Song Normalized Cuts and Image Segmentation, IEEE Trans. PAMI, August 2000 5/20/2003 1 Outline 1. Image

More information

6. Object Identification L AK S H M O U. E D U

6. Object Identification L AK S H M O U. E D U 6. Object Identification L AK S H M AN @ O U. E D U Objects Information extracted from spatial grids often need to be associated with objects not just an individual pixel Group of pixels that form a real-world

More information

Digital Image Processing. Prof. P.K. Biswas. Department of Electronics & Electrical Communication Engineering

Digital Image Processing. Prof. P.K. Biswas. Department of Electronics & Electrical Communication Engineering Digital Image Processing Prof. P.K. Biswas Department of Electronics & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Image Segmentation - III Lecture - 31 Hello, welcome

More information

Textural Features for Image Database Retrieval

Textural Features for Image Database Retrieval Textural Features for Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington Seattle, WA 98195-2500 {aksoy,haralick}@@isl.ee.washington.edu

More information

Indexing by Shape of Image Databases Based on Extended Grid Files

Indexing by Shape of Image Databases Based on Extended Grid Files Indexing by Shape of Image Databases Based on Extended Grid Files Carlo Combi, Gian Luca Foresti, Massimo Franceschet, Angelo Montanari Department of Mathematics and ComputerScience, University of Udine

More information

CS4733 Class Notes, Computer Vision

CS4733 Class Notes, Computer Vision CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision

More information

A METHOD FOR CONTENT-BASED SEARCHING OF 3D MODEL DATABASES

A METHOD FOR CONTENT-BASED SEARCHING OF 3D MODEL DATABASES A METHOD FOR CONTENT-BASED SEARCHING OF 3D MODEL DATABASES Jiale Wang *, Hongming Cai 2 and Yuanjun He * Department of Computer Science & Technology, Shanghai Jiaotong University, China Email: wjl8026@yahoo.com.cn

More information

Indexing Tamper Resistant Features for Image Copy Detection

Indexing Tamper Resistant Features for Image Copy Detection Indexing Tamper Resistant Features for Image Copy Detection Peter Mork y, Beitao Li z, Edward Chang z, Junghoo Cho y,chenli y, and James Wang yλ Abstract In this paper we present the image copy detection

More information

CIE L*a*b* color model

CIE L*a*b* color model CIE L*a*b* color model To further strengthen the correlation between the color model and human perception, we apply the following non-linear transformation: with where (X n,y n,z n ) are the tristimulus

More information

Enhanced Hemisphere Concept for Color Pixel Classification

Enhanced Hemisphere Concept for Color Pixel Classification 2016 International Conference on Multimedia Systems and Signal Processing Enhanced Hemisphere Concept for Color Pixel Classification Van Ng Graduate School of Information Sciences Tohoku University Sendai,

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervised Learning and Clustering Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2008 CS 551, Spring 2008 c 2008, Selim Aksoy (Bilkent University)

More information

COLOR AND SHAPE BASED IMAGE RETRIEVAL

COLOR AND SHAPE BASED IMAGE RETRIEVAL International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR) ISSN 2249-6831 Vol.2, Issue 4, Dec 2012 39-44 TJPRC Pvt. Ltd. COLOR AND SHAPE BASED IMAGE RETRIEVAL

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Efficient Content Based Image Retrieval System with Metadata Processing

Efficient Content Based Image Retrieval System with Metadata Processing IJIRST International Journal for Innovative Research in Science & Technology Volume 1 Issue 10 March 2015 ISSN (online): 2349-6010 Efficient Content Based Image Retrieval System with Metadata Processing

More information

Dr. Ulas Bagci

Dr. Ulas Bagci CAP5415-Computer Vision Lecture 11-Image Segmentation (BASICS): Thresholding, Region Growing, Clustering Dr. Ulas Bagci bagci@ucf.edu 1 Image Segmentation Aim: to partition an image into a collection of

More information

Semantics-based Image Retrieval by Region Saliency

Semantics-based Image Retrieval by Region Saliency Semantics-based Image Retrieval by Region Saliency Wei Wang, Yuqing Song and Aidong Zhang Department of Computer Science and Engineering, State University of New York at Buffalo, Buffalo, NY 14260, USA

More information

Generic Fourier Descriptor for Shape-based Image Retrieval

Generic Fourier Descriptor for Shape-based Image Retrieval 1 Generic Fourier Descriptor for Shape-based Image Retrieval Dengsheng Zhang, Guojun Lu Gippsland School of Comp. & Info Tech Monash University Churchill, VIC 3842 Australia dengsheng.zhang@infotech.monash.edu.au

More information

Tracking and Recognizing People in Colour using the Earth Mover s Distance

Tracking and Recognizing People in Colour using the Earth Mover s Distance Tracking and Recognizing People in Colour using the Earth Mover s Distance DANIEL WOJTASZEK, ROBERT LAGANIÈRE S.I.T.E. University of Ottawa, Ottawa, Ontario, Canada K1N 6N5 danielw@site.uottawa.ca, laganier@site.uottawa.ca

More information

Invarianceness for Character Recognition Using Geo-Discretization Features

Invarianceness for Character Recognition Using Geo-Discretization Features Computer and Information Science; Vol. 9, No. 2; 2016 ISSN 1913-8989 E-ISSN 1913-8997 Published by Canadian Center of Science and Education Invarianceness for Character Recognition Using Geo-Discretization

More information

A Graph Theoretic Approach to Image Database Retrieval

A Graph Theoretic Approach to Image Database Retrieval A Graph Theoretic Approach to Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington, Seattle, WA 98195-2500

More information

Lecture 8: Fitting. Tuesday, Sept 25

Lecture 8: Fitting. Tuesday, Sept 25 Lecture 8: Fitting Tuesday, Sept 25 Announcements, schedule Grad student extensions Due end of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review

More information

Selective Search for Object Recognition

Selective Search for Object Recognition Selective Search for Object Recognition Uijlings et al. Schuyler Smith Overview Introduction Object Recognition Selective Search Similarity Metrics Results Object Recognition Kitten Goal: Problem: Where

More information

Improving the Efficiency of Fast Using Semantic Similarity Algorithm

Improving the Efficiency of Fast Using Semantic Similarity Algorithm International Journal of Scientific and Research Publications, Volume 4, Issue 1, January 2014 1 Improving the Efficiency of Fast Using Semantic Similarity Algorithm D.KARTHIKA 1, S. DIVAKAR 2 Final year

More information

Unsupervised learning in Vision

Unsupervised learning in Vision Chapter 7 Unsupervised learning in Vision The fields of Computer Vision and Machine Learning complement each other in a very natural way: the aim of the former is to extract useful information from visual

More information

Computer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction

Computer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction Computer vision: models, learning and inference Chapter 13 Image preprocessing and feature extraction Preprocessing The goal of pre-processing is to try to reduce unwanted variation in image due to lighting,

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

An Efficient Semantic Image Retrieval based on Color and Texture Features and Data Mining Techniques

An Efficient Semantic Image Retrieval based on Color and Texture Features and Data Mining Techniques An Efficient Semantic Image Retrieval based on Color and Texture Features and Data Mining Techniques Doaa M. Alebiary Department of computer Science, Faculty of computers and informatics Benha University

More information

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom

More information

Clustering CS 550: Machine Learning

Clustering CS 550: Machine Learning Clustering CS 550: Machine Learning This slide set mainly uses the slides given in the following links: http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf http://www-users.cs.umn.edu/~kumar/dmbook/dmslides/chap8_basic_cluster_analysis.pdf

More information

Face and Nose Detection in Digital Images using Local Binary Patterns

Face and Nose Detection in Digital Images using Local Binary Patterns Face and Nose Detection in Digital Images using Local Binary Patterns Stanko Kružić Post-graduate student University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

CS 534: Computer Vision Segmentation and Perceptual Grouping

CS 534: Computer Vision Segmentation and Perceptual Grouping CS 534: Computer Vision Segmentation and Perceptual Grouping Ahmed Elgammal Dept of Computer Science CS 534 Segmentation - 1 Outlines Mid-level vision What is segmentation Perceptual Grouping Segmentation

More information

Detecting and Identifying Moving Objects in Real-Time

Detecting and Identifying Moving Objects in Real-Time Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary

More information

Applications. Foreground / background segmentation Finding skin-colored regions. Finding the moving objects. Intelligent scissors

Applications. Foreground / background segmentation Finding skin-colored regions. Finding the moving objects. Intelligent scissors Segmentation I Goal Separate image into coherent regions Berkeley segmentation database: http://www.eecs.berkeley.edu/research/projects/cs/vision/grouping/segbench/ Slide by L. Lazebnik Applications Intelligent

More information

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT 2.1 BRIEF OUTLINE The classification of digital imagery is to extract useful thematic information which is one

More information

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK REVIEW ON CONTENT BASED IMAGE RETRIEVAL BY USING VISUAL SEARCH RANKING MS. PRAGATI

More information