CHAPTER 4 SEMANTIC REGION-BASED IMAGE RETRIEVAL (SRBIR)


4.1 INTRODUCTION

The Semantic Region-Based Image Retrieval (SRBIR) system automatically segments the dominant foreground region of an image and performs retrieval using semantic learning. The system segments the image into different regions based on semantic concepts and finds the dominant foreground region. The low-level features of the dominant foreground region are then extracted. A Support Vector Machine-Binary Decision Tree (SVM-BDT) is used for semantic learning; it determines the semantic category of the image. The low-level features of the dominant regions of the images in each category are used to find the semantic template of that category, and the SVM-BDT is constructed from these semantic templates. A high-level concept of the query image is obtained using the SVM-BDT. Similarity matching is performed between the query image and the set of images belonging to the semantic category of the query image, and the top images with the least distance are retrieved. The training and testing phases of the SRBIR are explained in the next section.

4.2 PHASES OF THE SRBIR

An image consists of many regions. Features can be extracted either from the whole image or from its regions. Normally, the user is more interested in a specific region than in the whole image. The

representation of images at the region level is closer to human perception. CBIR based on image regions is called Region-Based Image Retrieval (RBIR). The SRBIR system compares the dominant region of the query image with the dominant region of each DB image; the dominant foreground region represents the semantics of the image. The block diagram of the training phase is given in Figure 4.1.

Figure 4.1 Block Diagram of the Training Phase of the SRBIR (training images -> determination of the dominant region of each image -> extraction of the low-level features of the dominant region -> feature DB of the dominant-region feature vectors -> construction of the SVM-BDT)

The dominant foreground region is extracted from each image in the DB. The low-level features, namely the color and texture features, are extracted from the dominant region and stored in a feature DB. The image DB contains images of several semantic categories. The color-texture semantic template of each semantic category is computed from the low-level features of the dominant regions of the images in that category. These semantic templates are used in constructing the SVM-BDT. Hence, at the end of the training phase, the SVM-BDT is constructed

for classification. Upon receiving a query image feature vector, the SVM-BDT predicts its label, i.e., the semantic category of the query image.

Figure 4.2 Block Diagram of the Testing Phase of the SRBIR (RGB query image -> determination of the dominant region -> extraction of its low-level features -> prediction of the class label using the SVM-BDT -> retrieval of the low-level features of all the images in the predicted class from the feature DB -> similarity computation between each such image and the query image -> retrieval of the top 15 images with the least similarity distance from the raw image DB)

The block diagram of the testing phase is given in Figure 4.2. Given the query image, the dominant foreground region is extracted from it.
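The testing phase just described can be sketched compactly. The following is an illustrative Python stand-in for the MATLAB implementation, not the thesis code: the function names, the nearest-centroid rule standing in for the SVM-BDT, and the toy 2-dimensional features are all assumptions for the sketch.

```python
import numpy as np

def retrieve(query_feat, predict_class, class_of, feature_db, top_k=15):
    """Testing phase of SRBIR: predict the semantic class of the query's
    dominant-region feature vector, then rank only that class's images."""
    label = predict_class(query_feat)
    candidates = np.where(class_of == label)[0]        # search only one class
    d = np.linalg.norm(feature_db[candidates] - query_feat, axis=1)
    order = np.argsort(d)                              # ascending distance
    return candidates[order[:top_k]]                   # indices of the top images

# toy example: 2-dim features; a nearest-centroid rule stands in for the SVM-BDT
feature_db = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
class_of = np.array([0, 0, 1, 1])
centroids = np.array([[0.1, 0.05], [5.05, 4.95]])
predict = lambda x: int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
print(retrieve(np.array([0.1, 0.0]), predict, class_of, feature_db, top_k=2))  # → [0 1]
```

Restricting the distance computation to the predicted class is what gives the reduced search space and search time described later in Section 4.7.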

The color and texture low-level features are extracted from this dominant region and given as input to the SVM-BDT, which predicts the category of the query image. The low-level features of all the images in the predicted category are taken from the feature DB. The similarity distance is calculated between the low-level features of the dominant region of the query image and those of the dominant region of each image in the predicted category. The similarity distances are sorted in ascending order, and the top images with the least distance from the query image are retrieved from the image DB and displayed. The next section explains the segmentation of the dominant foreground region from the image.

4.3 SEGMENTATION OF THE DOMINANT FOREGROUND REGION

The dominant foreground region occupies most of the space in the image, and in most cases the dominant object in the image's foreground conveys the semantics of the image. Natural photographs contain some dominant foreground region. The SRBIR extracts the dominant foreground region as a solid region, not merely an outline, so the extracted region has reduced noise and the low-level features extracted from it do not suffer much distortion.

The query image is an RGB image, which uses 24 bits per pixel (8 bits per color channel). An indexed image is one in which the pixels do not contain the full specification of their color, but only an index into the color map. An indexed image requires a color map and an image matrix. The color map is an ordered set of values representing the colors in the image. For each image pixel, the image matrix contains the corresponding index into the color map. The size of the color map is n × 3 for an image containing n colors. Each row of the color map matrix is a 1 × 3 vector,

containing the values of the red, green and blue components that correspond to a single color in the image. For each pixel in the image matrix, the color corresponding to the index in the color map is displayed. Hence, in an indexed image, the colors available in the image are preserved in the color map, and the image matrix contains only indices into the color map. The MATLAB command rgb2ind converts an RGB image to an indexed image. Equation (4.1) gives the format of the rgb2ind command.

[X, map] = rgb2ind(rgb, n) (4.1)

Equation (4.1) converts the RGB image rgb to an indexed image X using minimum variance quantization; map is the color map, which contains at most n colors. n must be less than or equal to 65,536. When the image has more than n colors, minimum variance quantization is applied to reduce the number of colors. The values in the resultant image matrix X are indices into the color map map and cannot be used in filtering operations. For example, the RGB image is read into the variable im with the imread MATLAB command and converted to the indexed image ind_img and the color map col_map:

im = imread('c:\101_objectcategories\lotus\image_0002.jpg');
[ind_img, col_map] = rgb2ind(im, 65536);

A portion of the indexed image ind_img is given in Table 4.1, and a portion of the color map in Table 4.2. In Table 4.1, the indexed image contains the indices of the color map. The pixel value at position (1, 1) of Table 4.1 is 4549, and it is shaded. The corresponding color value is obtained from the color map entry at index 4549, which is shaded in Table 4.2. The RGB value at this index is substituted at pixel position

(1, 1) of the indexed image. In the same way, each pixel value in the indexed image is substituted from the color map and the image is displayed.

Table 4.1 Portion of the Indexed Image (10 rows and 10 columns)
>> ind_img(1:10, 1:10)
(the index values are not reproduced here)

The RGB image im stores the red, green and blue component values of every pixel, so it is an M × N × 3 array. The indexed image ind_img is an M × N array, and each of its elements contains an index into the color map. Since the number of colors in the image im is 5534, the size of the color map col_map is 5534 × 3; each color map entry has 3 values corresponding to R, G and B, as in Table 4.2. The amount of memory needed to store the RGB image is M × N × 3 = 2,66,400 pixel values, whereas the indexed image needs only M × N + 5534 × 3 = 1,05,402 values. Since images normally contain repeated colors, the

number of entries in the color map of an indexed image will be small, and hence the amount of memory needed for an indexed image will be less. Also, in an indexed image, the colors of the image are preserved in the color map.

Table 4.2 Portion of the Color Map
>> col_map(4543:4550, :)
(the Row, R, G and B values are not reproduced here)

The given RGB image is converted to an indexed image. The RGB image is also converted to a gray-scale image, and the noise is removed using a median filter. The median filter reduces noise while preserving edges; it replaces each pixel with the median value in the m × n neighborhood around the corresponding pixel in the input image. The edges of the image are found using canny edge detection. Then, the edges of the image are smoothed to reduce the number of connected components, which are found using 8-connectivity. The component number of the background is 0. The biggest connected component in the image is found. For the pixels that are in the maximum connected component, the original pixel value from the indexed image is copied, and for all the remaining pixels the value is set to

zero. This biggest connected component is treated as the dominant region. The dominant region obtained is not a solid region. To make it solid, each row of this new image is scanned, and the pixels in between two non-zero values are also set to the original value from the indexed image. The obtained solid region is converted back into a color image using the color map. The steps involved in the extraction of the dominant foreground region are presented in Figure 4.3.

Algorithm DOMINANT_FOREGROUND_REGION(im)
/* The dominant foreground region of the RGB image im is determined. */
1. [Obtain the indexed image from the RGB image.]
   img ← imread(im)
   [ind_img, map] ← rgb2ind(img, 65536)
   /* map is the color map, which contains at most 65,536 colors. */
2. [Obtain the gray-scale image from the color image.]
   gray_img ← rgb2gray(img)
3. [Remove noise by applying the median filter.]
   filt_img ← medfilt2(gray_img, [3 3])
   /* [3 3] is the size of the neighborhood around the corresponding pixel in the input image. */
4. [Find the edges of the image using the canny edge detector.]
   BW ← edge(filt_img, 'canny')
   /* The grayscale image filt_img is the input to the function edge, which returns a binary image BW of the same size as filt_img. BW is set to 1 where an edge is found in filt_img, and 0 otherwise. */

Figure 4.3 (Continued)

5. [Smooth the edges to reduce the number of connected components.]
   B ← conv2(BW, mask)
   /* mask is a small smoothing kernel; conv2(BW, mask) computes the two-dimensional convolution of the matrices BW and mask. */
6. [Find the connected components of the image using 8-connectivity.]
   [L, num] ← bwlabel(B, 8)
   /* The function bwlabel() returns a matrix L, of the same size as its first parameter B, containing the labels of the connected objects in B. It also returns the value num, the number of connected objects found in B. */
7. [Find the biggest connected component in the image foreground.]
   max_ind ← label of the biggest connected component
8. [For the pixels that are in the maximum connected component, copy the original pixel value from the indexed image; set all the remaining pixels to zero.]
   [M, N] ← size of the image
   Repeat for i = 1, 2, ..., M
      Repeat for j = 1, 2, ..., N
         If (L(i, j) == max_ind)
            n(i, j) ← ind_img(i, j)
         Else
            n(i, j) ← 0

Figure 4.3 (Continued)

         End If
      End
   End
9. [Convert the dominant region to a solid region.]
   Repeat for i = 1, 2, ..., M
      Repeat for j = 1, 2, ..., N
         If (n(i, j) = 0)
            If (n(i, j) lies between two non-zero values, either in the horizontal or in the vertical direction)
               n(i, j) ← ind_img(i, j)
            End If
         End If
      End
   End
10. [Convert the solid region into a color image by applying the color map.]

Figure 4.3 Algorithm for Extracting the Dominant Foreground Region of an Image

Figure 4.4 shows the result of segmenting the dominant region from an image. The first column contains the original images, and the second column contains the corresponding dominant regions obtained by the automatic segmentation. The first four images in Figure 4.4, namely the rose, bus, mountain and dinosaur, are segmented properly, while the segmentation of the African man in Figure 4.4 has some noise. Many region-based image retrieval systems are based on the selection of a region of interest. Figure 4.5 shows the segmentation resulting from user selection; the segmentation by selecting the region of interest has been carried out using the SegTool. The region-based image retrieval experiments were carried out using both the automatic segmentation of the dominant foreground region and the region-selection method. After obtaining the region of interest, the low-level features are extracted from it. The next section describes how the low-level features are extracted from the dominant region.
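The core of Figure 4.3 (steps 6-9) can be sketched in Python. This is a simplified stand-in, not the thesis code: a small flood-fill routine replaces MATLAB's bwlabel, the toy indexed image is an assumption, and only the horizontal fill of step 9 is shown (the vertical direction is handled analogously).

```python
import numpy as np

def label_components(bw):
    """8-connectivity labeling of a binary image (a stand-in for bwlabel)."""
    labels = np.zeros(bw.shape, dtype=int)
    current = 0
    for si in range(bw.shape[0]):
        for sj in range(bw.shape[1]):
            if bw[si, sj] and labels[si, sj] == 0:
                current += 1
                labels[si, sj] = current
                stack = [(si, sj)]
                while stack:                      # flood fill one component
                    i, j = stack.pop()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < bw.shape[0] and 0 <= nj < bw.shape[1]
                                    and bw[ni, nj] and labels[ni, nj] == 0):
                                labels[ni, nj] = current
                                stack.append((ni, nj))
    return labels, current

def dominant_region(ind_img, bw):
    """Keep the biggest connected component, then make it solid row by row."""
    labels, num = label_components(bw)
    if num == 0:
        return np.zeros_like(ind_img)
    sizes = np.bincount(labels.ravel())[1:]       # background (label 0) excluded
    max_ind = 1 + int(np.argmax(sizes))
    n = np.where(labels == max_ind, ind_img, 0)
    for i in range(n.shape[0]):                   # fill between the first and
        nz = np.nonzero(n[i])[0]                  # last non-zero pixel in a row
        if nz.size >= 2:
            n[i, nz[0]:nz[-1] + 1] = ind_img[i, nz[0]:nz[-1] + 1]
    return n

ind_img = np.arange(1, 26).reshape(5, 5)          # toy indexed image, all non-zero
bw = np.array([[1, 0, 0, 0, 0],                   # one stray pixel and one ring
               [0, 0, 1, 1, 1],
               [0, 0, 1, 0, 1],
               [0, 0, 1, 1, 1],
               [0, 0, 0, 0, 0]], dtype=bool)
res = dominant_region(ind_img, bw)
assert res[0, 0] == 0      # the smaller component is discarded
assert res[2, 3] == 14     # the hole inside the ring is filled from ind_img
```

The ring becomes a solid 3 × 3 block of indexed-image values, mirroring how the outline of the dominant object is turned into a solid region before feature extraction.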

Figure 4.4 Image and its Dominant Foreground Region (left column: original image; right column: dominant foreground region)

Figure 4.5 The Image and the Segmented Region Using the SegTool (left column: original image; right column: region segmented using the SegTool)

4.4 FEATURE EXTRACTION FROM THE DOMINANT REGION

After the segmentation of the dominant region from the image, its color and texture features are extracted. The first and second moments of each color channel in the HSV (Hue, Saturation, Value) color space are used to represent the color feature: the six-dimensional color feature vector f_c = (μ_H, μ_S, μ_V, σ_H, σ_S, σ_V) is extracted, where μ represents the mean and σ the standard deviation of a color channel, calculated using Equations (2.1) and (2.2).

The texture information is extracted from the gray-level co-occurrence matrix. Four co-occurrence matrices for four different orientations (horizontal 0°, vertical 90° and the two diagonals 45° and 135°) are constructed. The co-occurrence matrix reveals certain properties of the spatial distribution of the gray levels in the image. Higher-order features, namely energy, contrast, homogeneity, correlation and entropy, are measured using Equations

(2.4) to (2.8) on each gray-level co-occurrence matrix, to form a five-dimensional feature vector. Finally, a twenty-dimensional texture feature vector f_t is obtained by concatenating the feature vectors of the four co-occurrence matrices. Thus, the color-texture feature vector of dimension 26 is obtained by concatenation: f_ct = f_c + f_t. The color and texture features of the dominant region of the rose image in Figure 4.4 are given in Table 4.3. The next section describes the construction of the semantic template for each semantic category of images.

Table 4.3 The Feature Vectors of the Dominant Region of the Rose Image
Color features (6-dimensional): μ_H, μ_S, μ_V, σ_H, σ_S, σ_V
Texture features (20-dimensional)
(the feature values are not reproduced here)

4.5 SEMANTIC TEMPLATE CONSTRUCTION

A semantic template is the centroid of the low-level features of all the sample regions of a class. For the j-th sample region in class i, where j = 1, 2, ..., 100 and i = 1, 2, ..., 10, let its color and texture features be f_c,ij = (f_c,ij,1, ..., f_c,ij,6) and f_t,ij = (f_t,ij,1, ..., f_t,ij,20) respectively. For the k-th dimension of the color and texture features, the centroid (Liu et al 2008) is calculated using Equations (4.2) and (4.3).
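In code, the centroid of Equations (4.2) and (4.3) is simply a per-class mean over the 26-dimensional color-texture vectors. A minimal sketch, in which the array layout and the toy 2-dimensional vectors are illustrative assumptions:

```python
import numpy as np

def semantic_templates(features, labels, m):
    """Per-class centroid of the color-texture vectors (Equations 4.2-4.5).

    features: n_total x d array of f_ct vectors (d = 26 in the SRBIR system)
    labels:   class index in 0..m-1 for each row
    Returns an m x d array whose i-th row is the template CT_i of class i.
    """
    templates = np.zeros((m, features.shape[1]))
    for i in range(m):
        templates[i] = features[labels == i].mean(axis=0)   # centroid of class i
    return templates

feats = np.array([[1.0, 3.0], [3.0, 5.0],      # class 0 (toy 2-dim vectors)
                  [10.0, 0.0], [12.0, 2.0]])   # class 1
labs = np.array([0, 0, 1, 1])
ct = semantic_templates(feats, labs, 2)
assert np.allclose(ct, [[2.0, 4.0], [11.0, 1.0]])
```

Because the mean is taken dimension by dimension, computing the color template C_i and the texture template T_i separately and concatenating them gives the same result as averaging the concatenated vectors directly.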

C_i,k = (1/n) Σ_{j=1..n} f_c,ij,k,   k = 1, 2, ..., 6,    i = 1, 2, ..., m    (4.2)

T_i,k = (1/n) Σ_{j=1..n} f_t,ij,k,   k = 1, 2, ..., 20,   i = 1, 2, ..., m    (4.3)

where n in Equations (4.2) and (4.3) is the number of images in each class (here, 100) and m is the number of semantic concepts in the image DB. Hence, the color and texture templates of concept i are C_i and T_i, as given in Equations (4.4) and (4.5) respectively:

C_i = (C_i,1, C_i,2, ..., C_i,6)    (4.4)

T_i = (T_i,1, T_i,2, ..., T_i,20)    (4.5)

The color-texture template CT_i of the i-th concept is obtained by concatenation, CT_i = C_i + T_i; hence the color-texture template is also of dimension 26. A color-texture template is found for every semantic category in the DB. The next section describes the construction of the SVM-BDT using these color-texture templates.

4.6 CONSTRUCTION OF THE SVM-BDT

The construction of the SVM binary decision tree consists of two major steps. The first step is clustering the various classes of the DB images into a Binary Decision Tree (BDT); the second associates a binary SVM classifier with each non-leaf node of the BDT. The distance between each pair of color-texture templates is found, and the SVM-BDT is constructed by grouping the classes based on this distance. The block diagram for constructing the SVM-BDT is given in Figure 4.6. The SVM-BDT predicts the class of the query image; the statistical distances between the features of the query image and the features of the images in the predicted category are then found, and the top images with the least distance are displayed as the output. The Euclidean, Bhattacharya and Mahalanobis statistical distance measures

are used for finding the similarity between the query image and the DB images.

Figure 4.6 Block Diagram for the Construction of the SVM-BDT (training images -> finding the dominant region of the images -> extraction of the low-level features -> generation of the color-texture semantic template for every category of images -> determination of the distances between the color-texture templates -> construction of the SVM-BDT by grouping the classes based on the distance)

The SVM-BDT is constructed for multi-class classification. If K classes are available in the DB, the color-texture template is found for each class. Then, the Euclidean distance between the color-texture templates of all the K classes is found, giving a K × K distance matrix, which is used for the grouping. The two classes that have the largest Euclidean distance between them are assigned to the two clustering groups, and the color-texture template of the two classes

is taken as the cluster center of the corresponding group. After this, the pair of unassigned classes closest to the two cluster centers is found and assigned to the corresponding groups, and each cluster center is updated to the color-texture template of the class most recently included in its group. The process continues by finding the next pair of unassigned classes, each closest to one of the two clustering groups, assigning them to the corresponding groups and updating the cluster centers, until all the classes are assigned to one of the two groups.

The SVM binary classifier trains on the samples in each non-leaf node of the decision tree. The classes from the first clustering group are assigned to the first (left) subtree, while the classes from the second clustering group are assigned to the second (right) subtree. The process of recursively dividing each of the groups into two subgroups continues until there is only one class per group, which forms a leaf of the decision tree (Madzarov et al 2009b). This procedure leads to a balanced SVM-BDT.

The Euclidean distances between all the semantic templates are found, and the distance matrix is given in Table 4.4. The highest value in Table 4.4 (darkly shaded) lies between the 5th and 7th classes. Hence, classes c5 and c7 are the farthest apart, and are assigned to groups G1 and G2 respectively. The color-texture template of class c5 is taken as the cluster center of group G1, and that of class c7 as the cluster center of group G2. To find the class closest to G1, the class closest to its cluster center is determined; hence, the smallest non-zero value in the row of Table 4.4 corresponding to c5 is obtained. This value (lightly shaded) lies between c5 and c3, so class c3 is closest to group G1 and is added to that group. Similarly, the class closest to G2 is found by examining the row corresponding to its cluster center c7. The smallest non-zero value in row 7 of Table 4.4 (lightly shaded) lies between c7 and c6.

Hence, class c6 is closest to group G2 and is added to that group. The cluster center of G1 is changed to the color-texture template of the newly added class c3, and the cluster center of G2 is changed to the color-texture template of class c6.

Table 4.4 Distance between the Color-Texture Templates
(the distance values are not reproduced here)

In the next step, the class closest to group G1 is examined. The smallest unassigned non-zero value in the row corresponding to c3 corresponds to class c4. Hence, class c4 is assigned to group G1, and its cluster center is changed to the color-texture template of class c4. In the same way, the class closest to group G2 is obtained from the smallest unassigned non-zero value in the row corresponding to class c6; it corresponds to class c8. Hence, c8 is assigned to group G2, and the cluster center of G2 is updated to the semantic template of class c8.
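The grouping walked through above can be sketched as follows. This is a simplified stand-in for the thesis implementation: it seeds the two groups with the farthest pair of classes and then lets the groups alternately absorb the unassigned class nearest to their current cluster center, moving the center to the class just added.

```python
import numpy as np

def split_into_two_groups(dist, classes):
    """One round of SVM-BDT grouping over a precomputed distance matrix."""
    classes = list(classes)
    sub = dist[np.ix_(classes, classes)]
    a, b = np.unravel_index(np.argmax(sub), sub.shape)   # farthest pair seeds
    g1, g2 = [classes[a]], [classes[b]]
    unassigned = [c for c in classes if c not in (classes[a], classes[b])]
    groups, centers = [g1, g2], [g1[0], g2[0]]
    turn = 0                                             # G1 picks first
    while unassigned:
        # unassigned class nearest to this group's current cluster center
        nearest = min(unassigned, key=lambda c: dist[centers[turn], c])
        groups[turn].append(nearest)
        centers[turn] = nearest                          # center moves to it
        unassigned.remove(nearest)
        turn = 1 - turn                                  # groups alternate
    return g1, g2

# toy example: four classes on a line; the two ends seed the groups
pos = np.array([0.0, 1.0, 10.0, 11.0])
dist = np.abs(pos[:, None] - pos[None, :])
g1, g2 = split_into_two_groups(dist, range(4))
assert sorted(g1) == [0, 1] and sorted(g2) == [2, 3]
```

Applying the same routine recursively to g1 and g2 until every group holds a single class yields the balanced tree of Figure 4.7.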

Figure 4.7 SVM-BDT for Semantic Learning (root SVM1 separates G1 = {c5, c3, c4, c10, c1} from G2 = {c7, c6, c8, c2, c9}; SVM2 splits G1 into G1,1 = {c4, c3, c5} and G1,2 = {c1, c10}; SVM3 splits G2 into G1,3 = {c8, c2, c9} and G1,4 = {c7, c6}; SVM4-SVM7 split these into G2,1 = {c5, c3} and c4, c1 and c10, G2,2 = {c2, c9} and c8, and c7 and c6; SVM8 and SVM9 finally separate c3 from c5 and c2 from c9)

In the next step, the next element of group G1 is obtained by finding the smallest unassigned non-zero value in the row corresponding to class c4; it corresponds to class c10. Hence, c10 is assigned to group G1, and the cluster center of G1 is updated to the color-texture template of class c10. Similarly, the next element of group G2 is obtained from the smallest unassigned non-zero value in the row corresponding to class c8; it corresponds to class c2. Hence, c2 is assigned to group G2, and the cluster center of G2 is updated to the semantic template of class c2. In the next step, the class closest to group G1 is found from the next smallest unassigned non-zero value in row 10 of Table 4.4. This corresponds to class c1, which is assigned to

group G1. In the same way, the last unassigned class c9 is assigned to group G2. This completes the first round of grouping, which defines the classes that will be transferred to the left and right subtrees of the root. Hence, the first-level grouping of the SVM-BDT shown in Figure 4.7 is {c5, c3, c4, c10, c1} and {c7, c6, c8, c2, c9}. The SVM classifier at the root of the SVM-BDT is trained by considering the samples from classes {c5, c3, c4, c10, c1} as positive and the samples from classes {c7, c6, c8, c2, c9} as negative. The grouping procedure is then repeated independently for the classes on the left and right subtrees of the root. The distances between the color-texture templates of all the classes in G1 are given in Table 4.5.

Table 4.5 Distance between the Color-Texture Templates of the Classes in G1
(the distance values are not reproduced here)

The highest value in Table 4.5 lies between classes c4 and c1. Hence, c4 is added to group G11 and c1 is added to group G12. The smallest value in row 4 corresponds to class c3, so c3 is added to group G11. The next class of group G12 is obtained by finding the smallest unassigned non-zero value in row 1; it corresponds to class c10, so c10 is added to group G12. The next class of group G11 is obtained from the smallest unassigned non-zero value in row 3; it corresponds to class c5. Hence, the classes in G1 are divided into

two groups: G11, consisting of {c4, c3, c5}, on the left side, and G12, comprising {c1, c10}, on the right side. Similarly, the classes in group G2 are divided into two groups, namely G13 with {c8, c2, c9} on its left side and G14 with {c7, c6} on its right side. On the next level, G11 is divided into G21, containing {c5, c3}, and {c4} to the right of SVM4; G12 is divided into the two groups {c1} and {c10}; G13 is divided into G22 with {c2, c9} and {c8} to the right of SVM6; and G14 is divided into {c7} and {c6}. On the next level, G21 is divided into the two groups {c5} and {c3}, and G22 into {c2} and {c9}. Since each group now contains only a single class, the construction of the SVM-BDT is complete. At each non-leaf node of the SVM-BDT, an SVM binary classifier is trained on the positive and negative samples. The next section describes how the SVM-BDT is used in predicting the class of the query image.

4.7 CLASS PREDICTION USING THE SVM-BDT AND IMAGE RETRIEVAL

In the testing phase, the user submits a query image in order to retrieve the relevant images from the DB. The dominant region is extracted from the query image, its color and texture features are extracted, and this feature vector is given to the SVM-BDT for class prediction. The SVM binary classifier at each non-leaf node branches the given input through the SVM-BDT, and thereby the class label of the query image is predicted. For example, suppose a query image belonging to class 4 is submitted. The SVM binary classifier at the root, SVM1, is consulted; it returns +1, since the query image belongs to a category in G1. Hence, the traversal branches to the left side and SVM2 is visited next. Since the category of the query image belongs to group G11, SVM2 produces +1 and the traversal again branches left, so SVM4 is visited next. SVM4 produces -1 and branches right, reaching the leaf node c4. Hence, the label of the query image is predicted as 4.
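This traversal can be sketched with stub decision rules standing in for the trained binary SVMs; the threshold rules, the 2-dimensional feature and the class labels below are illustrative assumptions only, chosen so that the walk reproduces the +1, +1, -1 path of the example.

```python
import numpy as np

def predict_class(x, node):
    """Walk an SVM-BDT: each internal node holds a binary decision function
    clf(x) returning +1 (take the left subtree) or -1 (right); a leaf is a label."""
    while isinstance(node, dict):
        node = node['left'] if node['clf'](x) == +1 else node['right']
    return node

sign = lambda v: +1 if v else -1
# stub rules on a 2-dim feature; a real node would hold a trained binary SVM
tree = {'clf': lambda x: sign(x[0] < 0.5),
        'left': {'clf': lambda x: sign(x[1] < 0.5),
                 'left': {'clf': lambda x: sign(x[0] < 0.2),
                          'left': 'c5', 'right': 'c4'},
                 'right': 'c1'},
        'right': 'c7'}
print(predict_class(np.array([0.3, 0.2]), tree))   # → c4
```

Only one classifier per level is evaluated, so predicting a label among K classes needs at most ceil(log2 K) SVM evaluations rather than K.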
All the SVM classifiers

visited during the classification of a query image belonging to category 4 are shaded in Figure 4.7. After predicting the class, statistical similarity measures are applied between the query image and only the images of that particular class. This reduces the search space and the search time. Three popular measures, namely the Euclidean, Bhattacharya and Mahalanobis distance measures, are used for finding the statistical similarity between the query image and the DB images. The distances between the feature vector of the query image and the feature vectors of the images of the predicted class are found. These distances are sorted in increasing order, the top k images with the least distance are obtained, and the corresponding images are displayed as the output. The next section provides the obtained results.

4.8 EXPERIMENTAL RESULTS

In order to verify the effectiveness and efficiency of the SRBIR system, experiments were conducted on the COREL dataset consisting of 1000 images. The training sample is the fully labeled DB. The automatic segmentation algorithm correctly segmented 86% of the images, obtaining the dominant object in the image; the remaining 14% of the images were not accurately segmented. For the construction of the SVM-BDT, the dominant region is obtained and its color and texture features are extracted. The feature vectors are stored in the feature DB. The semantic template of each category is calculated by taking the mean of the feature vectors of the images in that semantic category. Thus, ten semantic templates are calculated for the 10 semantic categories considered. Then, the Euclidean distances between all the semantic templates are calculated and represented as the distance matrix. This distance matrix is used for classification, and the training process

continues till each group contains a single class. The SVM binary classifier trains the nodes at each level and divides each group into two subgroups. Nine SVM binary classifiers are needed to perform the multi-class classification for 10 classes.

Figure 4.8 Image Retrieval Based on the Segmentation of the Dominant Foreground Region and the SVM-BDT

The dominant region of the query image is found, and its color-texture feature vector is given as the input to the SVM-BDT for predicting the semantic class of the query image. The MATLAB SVM toolbox has been used for implementing the SVM binary classifiers in the SVM-BDT. During training, the SVM-BDT is constructed, and during testing, it is used in predicting the label of the query image. The retrieval results obtained using the automatic segmentation of the dominant foreground region along with the SVM-BDT, and the region selection using the SegTool along with the SVM-BDT, are shown in Figures 4.8 and 4.9. In both methods, the query image is an elephant

image, and the retrieval results are correct. The performance of the SRBIR is compared with the other decision tree induction methods in the following section.

Figure 4.9 Image Retrieval by Region Selection Using the SegTool and the SVM-BDT

4.9 COMPARISON OF SRBIR WITH OTHER DECISION TREE INDUCTION METHODS

The results of the SRBIR system are given in Table 4.6; they reveal that if the SVM-BDT is trained with 100% of the training set images, the system produces 100% accuracy, and if it is trained with 75% of the images in the training data set, it produces 95.4% accuracy. The testing time is the same for the two cases, while the training time increases with the training set size. The percentage of performance improvement of the SRBIR over the SVM-BDT approach is given in Table 4.6, and the percentage of time reduction is calculated using Equation (3.13). The performance of the

SRBIR, which extracts the features from the dominant region, is better than that of the SVM-BDT approach, which extracts features from the whole image. When 75% of the DB images are used for training, the accuracy rate is increased by 3.9%, and it is increased by 2.4% when 100% of the DB images are used for training. The training and testing times of the two approaches are closely similar, as shown in Table 4.6. The results of the SRBIR system are also compared with those of other decision tree learning methods, such as the DT-ST, ID3, C4.5, and the RBIR with region selection and the SVM-BDT; the comparison is given in Table 4.7.

Table 4.6 Results of the SRBIR for the COREL Image Data Set

Measure              SVM-BDT (whole-image features)    SRBIR (dominant region)       Improvement of SRBIR
                     75% trained    100% trained       75% trained   100% trained    75% trained   100% trained
Accuracy rate (%)    91.5           97.6               95.4          100             3.9%          2.4%
Training time (s)    (values not reproduced here)
Testing time (s)     (values not reproduced here)

Table 4.7 Comparison with Different Induction Methods

Method                                      Classification Accuracy
SRBIR with dominant region and SVM-BDT      100%
RBIR with region selection and SVM-BDT      86%
RBIR with DT-ST (Liu et al 2008)            74.6%
ID3 (Liu et al 2008)                        63.5%
C4.5 (Liu et al 2008)                       73.8%
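The improvement figures quoted from Table 4.6 are plain accuracy differences and can be checked directly:

```python
# Accuracy figures from Table 4.6 (percent)
svm_bdt = {'75%': 91.5, '100%': 97.6}    # features from the whole image
srbir   = {'75%': 95.4, '100%': 100.0}   # features from the dominant region

# improvement of SRBIR over the whole-image SVM-BDT approach
improvement = {k: round(srbir[k] - svm_bdt[k], 1) for k in svm_bdt}
print(improvement)   # → {'75%': 3.9, '100%': 2.4}
```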

From Table 4.7, it is seen that the SRBIR produces higher accuracy than the existing RBIR techniques, because it extracts and compares the dominant foreground regions. The SRBIR technique is suitable for images which contain a dominant foreground region. Since the images in the selected COREL dataset contain a dominant region that conveys the semantics of the image, the SRBIR technique, with the extraction of the dominant region and the SVM-BDT, produces good results.

4.10 CONCLUSION

This chapter described the SRBIR system, which looks for features close to the human interpretation of images. The algorithm for the automatic segmentation of the dominant foreground region from the image provides the high-level semantics of the image. The automatic segmentation of the dominant region reduces the noise in the segmentation, and the low-level features of the region are maintained without much distortion. The low-level features are extracted from the dominant region of each of the images in the DB, and these features are used in training the SVM binary decision tree. The SVM-BDT is trained with the color-texture template of each image category and is then used in predicting the class label of the query image. Thus, only the images whose high-level semantics match those of the query image are considered for similarity matching. This reduces the testing time, and the accuracy of the system is promising when compared with the other region-based image retrieval techniques. If the query image belongs to a category which is not in the training set, the SRBIR system produces misclassifications; this is the limitation of the SRBIR system. The next chapter discusses the method of adaptively training the SRBIR system with new category images.


More information

A Content Based Image Retrieval System Based on Color Features

A Content Based Image Retrieval System Based on Color Features A Content Based Image Retrieval System Based on Features Irena Valova, University of Rousse Angel Kanchev, Department of Computer Systems and Technologies, Rousse, Bulgaria, Irena@ecs.ru.acad.bg Boris

More information

Lecture 11: Classification

Lecture 11: Classification Lecture 11: Classification 1 2009-04-28 Patrik Malm Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapters for this lecture 12.1 12.2 in

More information

Multivariate analyses in ecology. Cluster (part 2) Ordination (part 1 & 2)

Multivariate analyses in ecology. Cluster (part 2) Ordination (part 1 & 2) Multivariate analyses in ecology Cluster (part 2) Ordination (part 1 & 2) 1 Exercise 9B - solut 2 Exercise 9B - solut 3 Exercise 9B - solut 4 Exercise 9B - solut 5 Multivariate analyses in ecology Cluster

More information

CS4445 Data Mining and Knowledge Discovery in Databases. A Term 2008 Exam 2 October 14, 2008

CS4445 Data Mining and Knowledge Discovery in Databases. A Term 2008 Exam 2 October 14, 2008 CS4445 Data Mining and Knowledge Discovery in Databases. A Term 2008 Exam 2 October 14, 2008 Prof. Carolina Ruiz Department of Computer Science Worcester Polytechnic Institute NAME: Prof. Ruiz Problem

More information