CHAPTER 4 SPECTRAL HISTOGRAM: A GENERIC FEATURE FOR IMAGES


In this chapter, we propose a generic statistical feature for homogeneous texture images, which we call the spectral histogram. A similarity measure between any two image patches is then defined as the Kullback-Leibler divergence, or another distance measure, between the corresponding spectral histograms. Unlike other similarity measures, it can discriminate texture as well as intensity images and provides a unified, nonparametric similarity measure for images. We demonstrate this using examples in texture image synthesis [147], texture image classification, and content-based image retrieval. We compare several different distance measures and find that the spectral histogram is not sensitive to the particular form of the distance measure. We also compare the spectral histogram with other statistical features and find that the spectral histogram gives the best result for classification of a texture image database. We find that the distribution of local features is important for texture discrimination and classification, while the local features themselves do not appear to be critical.

4.1 Introduction

As discussed in Chapter 1, the ultimate goal of a machine vision system is to derive a description of input images. To build an efficient and effective machine vision system, a critical step is to derive meaningful features. Here meaningful means that high-level modules such as recognition should be able to use the derived features readily. To illustrate the problem, Figure 1.1(a) shows a texture image; a small patch from 1.1(a) is shown in Figure 1.1(b), and Figure 1.1(c) shows the numerical values of (b). It is extremely difficult, if not impossible, to derive a segmentation or useful representation for the given image purely from the numerical values. This example demonstrates that features need to be extracted from input images for machine vision systems as well as for biological vision systems.

While many algorithms and systems have been proposed for image segmentation, classification, and recognition, feature extraction is not well addressed. In most cases, features are chosen by assumption, for mathematical convenience, or by domain-specific heuristics. For example, Canny [13] derived the most widely used edge detection algorithm based on a step-edge model with additive Gaussian noise. Using variational techniques, he obtained the optimal filter for the assumed model and proposed the first-order derivative of Gaussian as an efficient and good approximation. Inspired by the neurophysiological experimental findings on on- and off-center simple cells [54] [53], Marr and Hildreth [89] proposed the second-order derivative of Gaussian (LoG, Laplacian of Gaussian) to model the responses of the on- and off-center cells. The simple, piece-wise smooth image model has been widely used in image segmentation algorithms. For example, Mumford and Shah [94] proposed an energy functional for

image segmentation,
\[ E(f, \Gamma) = \mu \iint_R (f - I)^2 \, dx \, dy + \iint_{R \setminus \Gamma} |\nabla f|^2 \, dx \, dy + \nu |\Gamma|. \tag{4.1} \]
Here I is a two-dimensional input image defined on R, f is the solution to be found, and Γ is the discontinuity boundary of f. One can see that the underlying assumption on the solution space is that of piece-wise smooth images. The energy functional shown in (4.1) was claimed to be a generic energy functional [92], in that most existing segmentation algorithms and techniques can be derived from it.

Another major line of research related to feature extraction is texture discrimination and classification/segmentation. There are no obvious features that work well for all texture images. The human visual system, however, can discriminate textures robustly and effectively. This observation inspired many research activities. Julesz [62] pioneered the search for feature statistics for human visual perception. He first studied k-gon statistics and conjectured that k = 2 is sufficient for human visual perception. The conjecture was experimentally disproved by synthesizing perceptually different textures with identical 2-gon statistics [11] [24]. Other early features for texture discrimination include co-occurrence matrices [44] [43], first-order statistics [61], second-order statistics [18], and Markov random fields [65] [21] [16]. These features have limited expressive power because the analysis of spatial interaction is limited to a relatively small neighborhood [147], and they have been applied successfully only to so-called micro-textures. In the 1980s, theories on human texture perception were established, largely based on available psychophysical and neurophysiological data [12] [23] and the joint spatial/frequency representation. These theories state that the human visual system transforms the retinal image into a local spatial/frequency representation, which can be

computationally simulated by convolving the input image with a bank of filters with tuned frequencies and orientations.

The mathematical framework for the local spatial/frequency representation was laid out by Gabor [33] in the context of communication systems. Under the Fourier transform, a signal can be represented in either the time or the frequency domain. The basis functions in the time domain are impulses with different time delays, and the basis functions in the frequency domain are complex sinusoid functions with different frequencies. A major problem with the Fourier transform is localization. As shown in Figure 4.1, while the impulse in the time domain precisely localizes the signal component, its Fourier transform does not provide any localization information in the frequency domain. While the sinusoid function provides accurate localization in the frequency domain, it is not possible to localize the signal in the time domain. Essentially, the Fourier transform uses two sets of basis functions, each of which provides the best localization in one domain and none in the other. Based on this observation, Gabor proposed a more general time/frequency representation [33], in which the basis functions of the Fourier transform are just two opposite extremes. By minimizing the localization uncertainty jointly in the time and frequency domains, Gabor derived basis functions that achieve this optimality simultaneously in both domains. Gabor basis functions were extended to two-dimensional images by Daugman [22]. More recently, this theory has also been supported by deriving similar feature detectors from natural images based on certain optimization criteria [1] [11] [12] [1].

The human perception theory and the local spatial/frequency representational framework inspired much research in texture classification and segmentation. Within this framework, however, statistical features still need to be chosen, because filter

Figure 4.1: Basis functions of the Fourier transform in the time and frequency domains with their Fourier transforms. (a) An impulse and its Fourier transform. (b) A sinusoid function and its Fourier transform.

responses are not homogeneous within homogeneous textures, and are not sufficient because they are linear. As shown in Figure 4.2, a Gabor filter, shown in Figure 4.2(b), responds to local, oriented structures, as shown in Figure 4.2(c), and the filter response itself does not characterize the texture. Intuitively, texture appearance cannot be characterized by very local pixel values because texture is a regional property. If we want to define a feature that is homogeneous within a texture region, it is necessary to integrate responses from filters with multiple orientations and scales. In other words, features need to be defined as statistical measures on filter responses. For example, Unser [127] used variances of responses from different filters to characterize textures. Ojala et al. [99] compared different features for texture classification based on distributions of detected local features, using a database consisting of nine images. Puzicha et al. [17] used the distribution of responses from a set of Gabor filters as features; however, they posed texture segmentation as an energy minimization based on a pair-wise discrimination matrix, and the features used were not analyzed in terms of characterizing texture appearance.

Recently, Heeger and Bergen [45] proposed a texture synthesis algorithm that can match texture appearance. The algorithm transforms a random noise image into an image with appearance similar to a given target image by independently matching the histograms of image pyramids constructed from the noise and target images. The experimental results are impressive even though no theoretical justification was given. De Bonet and Viola [8] attempted to match the joint histograms by utilizing the conditional probability defined on parent vectors. A parent vector is a vector consisting of the filter responses in the constructed image pyramid up to a

Figure 4.2: A texture image with its Gabor filter response. (a) Input texture image. (b) A Gabor filter, which is truncated to save computation. (c) The filter response obtained through convolution.

given scale. As pointed out by Zhu et al. [147], these methods do not guarantee to match the chosen statistics closely. Zhu et al. [148] [149] [15] proposed a theory for learning probability models by matching histograms based on the maximum entropy principle, and the FRAME (Filters, Random field, And Maximum Entropy) model was developed for texture synthesis. To avoid the computational problem of learning the Lagrange multipliers in the FRAME model, the Julesz ensemble is defined as the set of texture images that have the same statistics as the observed images [147]. It has been demonstrated through experiments that feature pursuit and texture synthesis can be done effectively by sampling from the Julesz ensemble using Markov chain Monte Carlo (MCMC) methods. It has been shown that the Julesz ensemble is consistent with the FRAME model [143].

In this chapter, inspired by the FRAME model [148] [149] [15] and especially the texture synthesis model [147], we define a feature that we call the spectral histogram. Given a window in an input image centered at a given pixel, we construct a

pyramid based on the local window using a bank of chosen filters. We calculate the histogram of each sub-band of the local window in the pyramid. We obtain a vector, consisting of the histograms from all filters, which we define as the spectral histogram at the given location. For chosen statistical features to be used by subsequent modules, such as classification, segmentation, and recognition, a similarity/distance measure must be defined. A distance measure between two spectral histograms is defined as the χ²-statistic or another distance measure between the spectral histograms. A distance measure using the χ²-statistic was proposed based on empirical experiments [17] [5]. However, as we will demonstrate using classification, the particular form of the distance measure is not critical for spectral histograms.

In Section 4.2, we formally define spectral histograms and give some of their properties. In Section 4.3, we show how to synthesize texture images by matching spectral histograms. In Section 4.4, we study spectral histograms for texture classification. In Section 4.5, we apply spectral histograms to the problem of content-based image retrieval. In Section 4.6, we compare different texture features and different similarity measures using classification. In Section 4.7, we apply our model to synthetic texture pair discrimination. Section 4.8 concludes this chapter with further discussion.

4.2 Spectral Histograms

Given an input image I, defined on a finite two-dimensional lattice, and a bank of filters {F^(α), α = 1, 2, ..., K}, a sub-band image I^(α) is computed for each filter through linear convolution, i.e.,
\[ I^{(\alpha)}(v) = F^{(\alpha)} * I(v) = \sum_u F^{(\alpha)}(u)\, I(v - u). \]
The sub-band images I^(α), α =

1, 2, ..., K can be considered as an image pyramid constructed from the given image I when there exist scaling relationships among the chosen filters. Here we loosely refer to I^(α), α = 1, 2, ..., K as an image pyramid even for an arbitrarily chosen bank of filters. For each sub-band image, we define the marginal distribution, or histogram,
\[ H_I^{(\alpha)}(z) = \frac{1}{|I|} \sum_v \delta\bigl(z - I^{(\alpha)}(v)\bigr). \tag{4.2} \]
We then define the spectral histogram with respect to the chosen filters as
\[ H_I = \bigl(H_I^{(1)}, H_I^{(2)}, \ldots, H_I^{(K)}\bigr). \tag{4.3} \]
The spectral histogram of an image or an image patch is essentially a vector consisting of the marginal distributions of the filter responses. The size of the input image or image patch is called the integration scale.

We define a similarity measure between two spectral histograms using different standard distance measures. The L_p-norm distance is defined as
\[ \|H_{I_1} - H_{I_2}\|_p = \Bigl( \sum_{\alpha=1}^{K} \sum_z \bigl|H_{I_1}^{(\alpha)}(z) - H_{I_2}^{(\alpha)}(z)\bigr|^p \Bigr)^{1/p}. \tag{4.4} \]
Also, because the marginal distribution of each filter response is a probability distribution, a distance can be defined based on a discrete, symmetrized Kullback-Leibler divergence [7],
\[ \mathrm{KL}(H_{I_1}, H_{I_2}) = \sum_{\alpha=1}^{K} \sum_z \bigl(H_{I_1}^{(\alpha)}(z) - H_{I_2}^{(\alpha)}(z)\bigr) \log \frac{H_{I_1}^{(\alpha)}(z)}{H_{I_2}^{(\alpha)}(z)}. \tag{4.5} \]
Another choice is the χ²-statistic, a first-order approximation of the Kullback-Leibler divergence that is widely used to compare histograms,
\[ \chi^2(H_{I_1}, H_{I_2}) = \sum_{\alpha=1}^{K} \sum_z \frac{\bigl(H_{I_1}^{(\alpha)}(z) - H_{I_2}^{(\alpha)}(z)\bigr)^2}{H_{I_1}^{(\alpha)}(z) + H_{I_2}^{(\alpha)}(z)}. \tag{4.6} \]
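To make the definitions in (4.2)-(4.6) concrete, the computation can be sketched as follows. This is a minimal illustration assuming a toy three-filter bank (intensity, a horizontal difference, and a small Laplacian kernel) and a fixed, shared binning range; it is not the 8- or 56-filter configuration used in this chapter.

```python
import numpy as np

# Toy filter bank (placeholders, not the chapter's filter sets):
# intensity (delta), horizontal difference, small Laplacian kernel.
FILTERS = [
    np.array([[1.0]]),
    np.array([[-1.0, 1.0]]),
    np.array([[0, 1, 0], [1, -4.0, 1], [0, 1, 0]]) / 8.0,
]

def subband(image, kernel):
    """Circular convolution I^(a)(v) = sum_u F(u) I(v - u) on the lattice."""
    kh, kw = kernel.shape
    out = np.zeros_like(image, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            # np.roll(image, u)[v] == image[v - u]
            out += kernel[dy, dx] * np.roll(image, (dy - kh // 2, dx - kw // 2), (0, 1))
    return out

def spectral_histogram(image, bins=11, lo=-1.0, hi=1.0):
    """Eqs. (4.2)-(4.3): concatenated, normalized marginal distributions."""
    parts = []
    for f in FILTERS:
        h, _ = np.histogram(subband(image, f), bins=bins, range=(lo, hi + 1e-9))
        parts.append(h / image.size)
    return np.concatenate(parts)

def chi2(h1, h2):
    """Eq. (4.6): chi-square statistic between two spectral histograms."""
    denom = h1 + h2
    m = denom > 0
    return np.sum((h1[m] - h2[m]) ** 2 / denom[m])

# A checkerboard, a shifted copy of it, and a flat image (values in [0, 1]).
checker = (np.indices((8, 8)).sum(0) % 2).astype(float)
shifted = np.roll(checker, 1, axis=0)
flat = np.zeros((8, 8))

d_same = chi2(spectral_histogram(checker), spectral_histogram(shifted))
d_diff = chi2(spectral_histogram(checker), spectral_histogram(flat))
print(d_same, d_diff)  # shifting leaves every marginal unchanged, so d_same is 0
```

Because the feature consists of marginal distributions only, the shifted checkerboard is indistinguishable from the original (d_same = 0), while the flat image is well separated; this is the alignment-free property noted in Section 4.2.1.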

Figure 4.3: A texture image and its spectral histogram. (a) Input image. (b) A Gabor filter. (c) The histogram of the filter response. (d) The spectral histogram of the image. There are eight filters, including the intensity filter, the gradient filters D_xx and D_yy, four LoG filters with T = √2/2, 1, 2, and 4, and a Gabor filter Gcos(12, 150°). There are 8 bins in the histograms of the intensity and gradient filters and 11 bins for the other filters.

4.2.1 Properties of Spectral Histograms

Images sharing the same spectral histogram define an ensemble, called the Julesz ensemble [147]. The equivalence between the Julesz ensemble and the FRAME model [149] [15] has been established [143]. As proven in [15], the true probability model of one type of texture image can be approximated by linear combinations of the marginal distributions given in spectral histograms. In other words, spectral histograms provide a set of basis functions for the statistical modeling of texture images.

The spectral histogram and the associated distance measure provide a unified similarity measure for images. Because the marginal distributions are independent of image size, any two image patches can be compared through their spectral histograms. Naturally, we can define a scale space using different integration scales, which can be used to measure homogeneity; this will be studied in the next chapter. Because spectral histograms are based on marginal distributions, they provide a statistical measure, and two images need not be aligned in order to be compared.

4.2.2 Choice of Filters

The filters used are consistent with the human perception theory. Following Zhu et al. [149], we use four kinds of filters.

1. The intensity filter, which is the δ() function and captures the intensity value at a given pixel.

2. Difference or gradient filters. We use four of them:

D_x and D_y, the first-order difference filters in the horizontal and vertical directions, and D_xx and D_yy, the corresponding second-order difference filters. Here C is a normalization constant.

3. Laplacian of Gaussian filters:
\[ \mathrm{LoG}(x, y \mid T) = C\,(x^2 + y^2 - T^2)\, e^{-\frac{x^2 + y^2}{T^2}}, \tag{4.7} \]
where C is a constant for normalization and T = √2 σ determines the scale of the filter, σ being the standard deviation of the Gaussian function. These filters are referred to as LoG(T).

4. Gabor filters with both sine and cosine components:
\[ \mathrm{Gabor}(x, y \mid T, \theta) = C\, e^{-\frac{1}{2T^2}\left(4(x\cos\theta + y\sin\theta)^2 + (-x\sin\theta + y\cos\theta)^2\right)}\, e^{-i\,\frac{2\pi}{T}(x\cos\theta + y\sin\theta)}. \tag{4.8} \]
Here C is a normalization constant and T is a scale. These filters are referred to as Gcos(T, θ) and Gsin(T, θ).

While there may exist an optimal filter set for a given texture, we do not change filters within a task. In general, we use more filters for texture synthesis, namely 56 filters. We use around 8 filters for texture classification and content-based image retrieval to save computation. More importantly, it seems unnecessary to use more filters

for texture classification and content-based image retrieval when a relatively small integration scale is used.

4.3 Texture Synthesis

In this section, we demonstrate the effectiveness of the spectral histogram in characterizing texture appearance through texture synthesis. We define a relation between two texture images through the divergence between their spectral histograms: the relation holds if and only if the spectral histograms are sufficiently close. It is easy to check that this defines an equivalence relation. Given observed feature statistics {H^(α)_obs, α = 1, ..., K}, which are spectral histograms computed from observed images, we define an energy function [147]
\[ E(I) = \sum_{\alpha=1}^{K} D\bigl(H^{(\alpha)}(I), H^{(\alpha)}_{\mathrm{obs}}\bigr). \tag{4.9} \]
The corresponding Gibbs distribution is
\[ q(I) = \frac{1}{Z_\Theta} \exp\Bigl(-\frac{E(I)}{\Theta}\Bigr), \tag{4.10} \]
where Θ is the temperature.

The Gibbs distribution can be sampled by a Gibbs sampler or other MCMC algorithms. Here we use the Gibbs sampler [147] given in Figure 4.4. In Figure 4.4, q(I_syn(v) | I_syn(−v)) is the conditional probability of the pixel value at v given the rest of the image, D is a distance measure, and the L₁-norm is used for texture synthesis.

For texture synthesis, we use 56 filters:

The intensity filter.

4 gradient filters.

Gibbs Sampler

Compute {H^(α)_obs, α = 1, ..., K} from the observed texture images.
Initialize I_syn as any image (e.g., white noise). Θ ← Θ₀.
Repeat
  Randomly pick a pixel location v in I_syn.
  Calculate q(I_syn(v) | I_syn(−v)).
  Randomly flip the pixel I_syn(v) under q(I_syn(v) | I_syn(−v)).
  Reduce Θ gradually.
Until D(H^(α)(I_syn), H^(α)_obs) ≤ ε for α = 1, 2, ..., K.

Figure 4.4: Gibbs sampler for texture synthesis.

7 LoG filters with T = √2/2, 1, 2, 3, 4, 5, and 6.

36 Cosine Gabor filters with T = 2, 4, 6, 8, 10, and 12 and six orientations θ = 0°, 30°, 60°, 90°, 120°, and 150° at each scale.

6 Sine Gabor filters with T = 2, 4, 6, 8, 10, and 12 and one orientation θ = 45° at each scale.

2 Sine Gabor filters with T = 2 and 12 and one orientation θ = 60°.

These filters are chosen primarily because they are used by Zhu et al. [149]. The cooling schedule is fixed for all the experiments shown in this section.

Figure 4.5(a) shows a texture image. Figure 4.5(b) shows a white noise image, which is used as the initial image. After 14 sweeps, the noise image is transformed gradually into the image shown in Figure 4.5(c) by matching the spectral histograms of the two images. Figure 4.5(d) shows the L₁-norm distance between the spectral histograms of the observed and synthesized images with respect to the number of sweeps.
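The sampling loop in Figure 4.4 can be sketched as follows. This is a minimal, brute-force toy version — 8 gray levels, a three-filter bank, an L₁ matching error, and a small lattice — rather than the 56-filter configuration above, and it recomputes the full spectral histogram for every candidate value instead of updating it incrementally.

```python
import numpy as np

rng = np.random.default_rng(0)
G, SIZE, SWEEPS = 8, 12, 15             # gray levels, lattice side, sweeps
FILTERS = [np.array([[1.0]]),           # toy bank: intensity,
           np.array([[-1.0, 1.0]]),     # horizontal difference,
           np.array([[-1.0], [1.0]])]   # vertical difference

def subband(img, k):
    kh, kw = k.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += k[dy, dx] * np.roll(img, (dy - kh // 2, dx - kw // 2), (0, 1))
    return out

def spec_hist(img):
    hs = [np.histogram(subband(img, k), bins=2 * G - 1,
                       range=(-(G - 1.0), G - 1.0 + 1e-9))[0] / img.size
          for k in FILTERS]
    return np.concatenate(hs)

def energy(img, h_target):              # Eq. (4.9) with D = L1 distance
    return np.abs(spec_hist(img) - h_target).sum()

observed = (np.indices((SIZE, SIZE))[1] % 2) * (G - 1.0)  # vertical stripes
h_obs = spec_hist(observed)
syn = rng.integers(0, G, (SIZE, SIZE)).astype(float)      # white-noise init
e_start, theta = energy(syn, h_obs), 0.5

for sweep in range(SWEEPS):
    for _ in range(SIZE * SIZE):
        y, x = rng.integers(0, SIZE, 2)
        es = np.empty(G)                # energy of every candidate gray level
        for g in range(G):
            syn[y, x] = g
            es[g] = energy(syn, h_obs)
        p = np.exp(-(es - es.min()) / theta)  # Gibbs conditional q(. | rest)
        syn[y, x] = float(rng.choice(G, p=p / p.sum()))
    theta *= 0.85                       # reduce the temperature gradually

e_end = energy(syn, h_obs)
print(round(e_start, 3), '->', round(e_end, 3))
```

With the temperature schedule above the matching error shrinks steadily; in the full algorithm the loop instead terminates once every per-filter error falls below a threshold ε.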

The matched error decreases at an exponential rate, demonstrating that the synthesis algorithm is computationally efficient. One can see that the synthesized image shown in Figure 4.5(c) is perceptually similar to the observed image. By matching the spectral histogram, the synthesized image captures the texture elements and their arrangement and gives a similar perceptual appearance.

Figure 4.6 shows the temporal evolution of a Gabor filter. Figure 4.6(a) shows the filter, which is truncated to save computation. Figure 4.6(b) shows the histogram of the filter response at different sweeps. Figure 4.6(c) shows the matching error of the filter. Figure 4.6(d) shows the difference between the histograms of the observed and synthesized images, scaled for display purposes. The biggest error among the bins is less than .3.

Figure 4.7 shows three more texture images and the images synthesized by the algorithm. The synthesized images are perceptually similar to the observed images, and their spectral histograms match closely. In Figure 4.7(b), due to local minima, there are local regions which are not reproduced well.

Figure 4.8 shows two texture examples with regular patterns. The texture image in Figure 4.8(a) shows a very regular leather surface. The synthesized image after 2 sweeps, also shown in Figure 4.8(a), is perceptually similar to the input texture, but the regularity of the patterns is blurred and each element is not as clear as in the input image. Nevertheless, the two images give quite similar percepts, and the synthesized image captures the essential arrangement of patterns and the prominent edges in the input. Figure 4.8(b) shows an example where the vertical long-range arrangements are prominent. While the synthesized image captures the local vertical arrangements,

Figure 4.5: Texture image synthesis by matching observed statistics. (a) Observed texture image. (b) Initial image. (c) Synthesized image after 14 sweeps. (d) The total matched error with respect to the number of sweeps.

Figure 4.6: Temporal evolution of a selected filter during texture synthesis. (a) A Gabor filter. (b) The histograms of the Gabor filter response: dotted line, observed histogram, which is covered by the histogram after 14 sweeps; dashed line, initial histogram; dash-dotted line, histogram after 2 sweeps; solid line, histogram after 14 sweeps. (c) The error of the chosen filter with respect to the number of sweeps. (d) The error between the observed histogram and the synthesized one after 14 sweeps; the error is scaled for display.

Figure 4.7: More texture synthesis examples. The left column shows the observed images and the right column shows the images synthesized within 15 sweeps. In (b), due to local minima, there are local regions which are not perceptually similar to the observed image.

it does not sufficiently capture the long-range arrangements, because the synthesis algorithm is purely local and long-range couplings are almost impossible to capture.

While the texture images shown above are homogeneous textures, Figure 4.9(a) shows an intensity image consisting of several regions. Figure 4.9(c) shows the corresponding synthesized image. While the spectral histogram does not capture the spatial relationships between different regions, large regions with similar gray values emerge along the temporal evolution. Due to the inhomogeneity, the Gibbs sampler converges more slowly than for homogeneous texture images, as shown in Figure 4.5(d).

Figure 4.10 shows a synthesized image for a synthetic texton image. In order to synthesize a similar texton image, Zhu et al. [149] [15] used a texton filter, which is the template of one texton element. Here we use the same filters as for the other images. As shown in Figure 4.10(b), the texton elements are reproduced well except in two small regions where the MCMC sampling is trapped in a local minimum. This example clearly demonstrates that spectral histograms provide a generic feature for different types of textures, eliminating the need for ad hoc features for a particular set of textures.

Figure 4.11 shows a synthetic example where there are two distinctive regions in the original image. As shown in Figure 4.11(b), the synthesized image captures the appearance of both regions using the spectral histogram. Here the boundary between the two regions is not reproduced because spectral histograms do not incorporate geometric constraints. With some geometric constraints, the boundary might be reproduced well, which would give a more powerful feature for images consisting of different regions. Figure 4.12 shows an interesting result for a face image. While all the elements are

Figure 4.8: Real texture images of regular patterns with images synthesized after 2 sweeps. (a) An image of a leather surface; the total matched error after 2 sweeps is .82. (b) An image of a pressed calf leather surface.

Figure 4.9: Texture synthesis for an image with different regions. (a) The observed image. This image is not a homogeneous texture image and consists mainly of two homogeneous regions. (b) The initial image. (c) The synthesized image. Even though the spectral histogram of each filter is matched well, the error is still large compared to other images; for the intensity filter in particular, the error is still about 7.44%. The synthesized image is perceptually similar to the observed image except for the geometrical relationships among the homogeneous regions. (d) The matched error with respect to the number of sweeps. Because the observed image is not homogeneous, the synthesis algorithm converges more slowly than in Figure 4.5(d).

Figure 4.10: A synthesis example for a synthetic texton image. (a) The original synthetic texton image. (b) The synthesized image.

captured in the synthesized image, the result is not meaningful unless some geometric constraints are incorporated. This will be investigated further in the future.

To evaluate the synthesis algorithm more systematically, we have applied it to all the 40 texture images shown in Figure 4.16. To save space, reduced versions of the synthesized images are shown in Figure 4.13. These examples clearly demonstrate that spectral histograms capture texture appearance well.

4.3.1 Comparison with Heeger and Bergen's Algorithm

As pointed out by Zhu et al. [147], Heeger and Bergen's algorithm does not actually match any statistical features defined on the input image. On the other hand, the synthesis algorithm described above characterizes texture appearance explicitly through the spectral histogram of the observed image(s), as demonstrated using real texture images. One critical difference is that our proposed algorithm provides a statistical model of the observed image(s), in that the algorithm only needs to know

Figure 4.11: A synthesis example for an image consisting of two regions. (a) The original synthetic image, consisting of two intensity regions. (b) The synthesized image.

Figure 4.12: A synthesis example for a face image. (a) The Lena image. (b) The synthesized image.

Fabric- Fabric-4 Fabric-7 Fabric-9 Fabric-15 Fabric-17 Fabric-18 Food- Food-5 Leaves-3 Leaves-8 Leaves-12 Leaves-13 Metal- Metal-2 Misc- Misc-2 Stone-5 Water-1 Water-2 Water-6 Water-8 Beachsand-2 Calfleath-1 Calfleath-2 Grass-1 Grass-7 Grave-5 Hexholes-2 Pigskin-1 Pigskin-2 Plasticbubs-13 Raffia-1 Raffia-2 Roughwall-5 Sand-1 Woodgrain-1 Woodgrain-2 Woolencloth-1 Woolencloth-2

Figure 4.13: The synthesized images of the 40 texture images shown in Figure 4.16. The same filters and cooling schedule are used for all the images.

Table 4.1: L₁-norm distance of the spectral histograms and RMS distance between the images.

the spectral histogram of the input image(s); it does not need the input images themselves, while Heeger and Bergen's algorithm needs the input image. For comparison, we use the texture image shown in Figure 4.3(a). For the proposed algorithm, we synthesize images starting from different initial images, as shown in Figure 4.14. One can see that different initial images are transformed into perceptually similar images by matching the spectral histogram; the matching error is shown in Figure 4.14(d). Table 4.1 shows the L₁-norm distance between the histograms of the observed and synthesized images and the root-mean-square (RMS) distance between the corresponding images. From Table 4.1, one can see clearly that even though the RMS distance between the observed image and the images synthesized from different initial conditions is large, the corresponding L₁-norm distance between their spectral histograms is quite small. Given that the synthesized images are perceptually similar to the observed image, we conclude that spectral histograms provide a statistical feature that characterizes texture appearance.

For the same input image, we have also applied the algorithm by Heeger and Bergen [45]; the implementation used here is by El-Maraghi [28]. Figure 4.15(a)-(c) shows the synthesized images after different numbers of iterations. Compared to

Figure 4.14: Synthesized images from different initial images for the texture image shown in Figure 4.3(a). (a)-(c) The left column shows the initial images and the right column the synthesized images after 2 sweeps. (d) The matched error with respect to the number of sweeps.

the synthesized images from our method shown in Figure 4.14(a)-(c), one can easily see that the synthesized images in Figure 4.14(a)-(c) are perceptually similar to the input texture image and capture the texture elements and their distributions, while the synthesized images in Figure 4.15(a)-(c) are not perceptually similar to the input texture. As shown in Figure 4.15(d), Heeger and Bergen's algorithm does not match the statistical features on which the algorithm is based, and the error does not decrease with further iterations. Because Heeger and Bergen's algorithm uses a Laplacian of Gaussian pyramid, we choose LoG and local filters for the spectral histogram here for a fair comparison.

4.4 Texture Classification

Texture classification is closely related to texture segmentation and content-based image retrieval. Here we demonstrate the discriminative power of spectral histograms. A texture image database is given first, and we extract spectral histograms for each image in the database. The classification task is then to classify all the pixels, or selected pixels, in an input image. Given a database with M texture images, we represent each image by its average spectral histogram H^m_obs at a given integration scale. We use a minimum-distance classifier, given by
\[ m^*(v) = \arg\min_m D\bigl(H_{I(v)}, H^{m}_{\mathrm{obs}}\bigr), \tag{4.11} \]
where D is a similarity measure; the χ²-statistic is used.

We use a texture image database available on-line. As shown in Figure 4.16, the database we use consists of 40 texture images from the Brodatz textures [1].
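The minimum-distance rule in (4.11) can be sketched as follows, again with a toy filter bank and two synthetic texture classes rather than the 40-image database: each class is represented by the average spectral histogram of its training patches, and each test patch is assigned to the class at minimum χ² divergence.

```python
import numpy as np

# Toy filter bank (placeholders): intensity, horizontal and vertical differences.
FILTERS = [np.array([[1.0]]),
           np.array([[-1.0, 1.0]]),
           np.array([[-1.0], [1.0]])]

def subband(img, k):
    kh, kw = k.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += k[dy, dx] * np.roll(img, (dy - kh // 2, dx - kw // 2), (0, 1))
    return out

def spec_hist(patch):
    hs = [np.histogram(subband(patch, k), bins=9, range=(-1.0, 1.0 + 1e-9))[0]
          / patch.size for k in FILTERS]
    return np.concatenate(hs)

def chi2(h1, h2):
    d = h1 + h2
    m = d > 0
    return np.sum((h1[m] - h2[m]) ** 2 / d[m])

def patches(img, s):
    return [img[y:y + s, x:x + s]
            for y in range(0, img.shape[0] - s + 1, s)
            for x in range(0, img.shape[1] - s + 1, s)]

# Two synthetic texture classes with values in [0, 1].
yy, xx = np.indices((32, 32))
textures = {'stripes': (xx % 2).astype(float),
            'checks': ((xx + yy) % 2).astype(float)}

S = 8  # integration scale (window size)
# Each class model is its average spectral histogram over training patches.
models = {n: np.mean([spec_hist(p) for p in patches(img[:16], S)], axis=0)
          for n, img in textures.items()}

correct = total = 0
for n, img in textures.items():
    for p in patches(img[16:], S):            # held-out test patches
        pred = min(models, key=lambda m: chi2(spec_hist(p), models[m]))
        correct += int(pred == n)
        total += 1
print(correct, '/', total)  # the two clean textures separate perfectly
```

The two classes share identical intensity and horizontal-difference marginals, so the separation here comes entirely from the vertical-difference histogram — an illustration of why several filters are combined.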

Figure 4.15: Synthesized images from Heeger and Bergen's algorithm and the matched spectral histogram error for the image shown in Figure 4.3(a). (a) Synthesized image at 3 iterations. (b), (c) Synthesized images after further iterations. (d) The L₁-norm error between the observed spectral histogram and the synthesized one.

Fabric- Fabric-4 Fabric-7 Fabric-9 Fabric-15 Fabric-17 Fabric-18 Food- Food-5 Leaves-3 Leaves-8 Leaves-12 Leaves-13 Metal- Metal-2 Misc- Misc-2 Stone-5 Water-1 Water-2 Water-6 Water-8 Beachsand-2 Calfleath-1 Calfleath-2 Grass-1 Grass-7 Grave-5 Hexholes-2 Pigskin-1 Pigskin-2 Plasticbubs-13 Raffia-1 Raffia-2 Roughwall-5 Sand-1 Woodgrain-1 Woodgrain-2 Woolencloth-1 Woolencloth-2

Figure 4.16: Forty texture images used in the classification experiments.

4.4.1 Classification at Fixed Scales

First we study the classification performance using the spectral histogram at a given integration scale. As discussed above, the classification algorithm is a minimum-distance classifier, and each location is classified independently of other locations. For the database shown in Figure 4.16, we use a window of 35 × 35 pixels. For each image in the database, we extract an average spectral histogram by averaging spectral histograms extracted at a larger grid spacing to save computation. For classification and content-based image retrieval, we use 8 filters to compute spectral histograms:

The intensity filter.

Two gradient filters, D_xx and D_yy.

Two LoG filters with T = 2 and 5.

Three Cosine Gabor filters with T = 6 and three orientations θ = 30°, 90°, and 150°.

Figure 4.17(a) shows the divergence between the feature vectors of the images numerically, and Figure 4.17(b) shows the divergence in (a) as an image for easier interpretation. Each texture is divided into non-overlapping image patches of size 35 × 35, and each patch is classified into one of the 40 given classes using the minimum-distance classifier. Figure 4.18 shows the error rate for each image, along with the variance within each image and the minimum divergence of each image from the other images in the database. As shown in Figure

4.18(a), the error is not evenly distributed. Five images, namely Fabric-15, Leaves-13, Water-1, Water-2, and Woolencloth-2, account for more than half of the misclassified cases, with an average error of 19.18%. For the other 35 images, the classification error is 2.1%. There are two major reasons for the large error rates of these five images. First, with respect to the given scale, which is 35 × 35, the images are not homogeneous. Figure 4.18(b) shows the variance of the spectral histograms of the images. Second, the image is similar to some other images in the database. This is measured by the minimum divergence between the feature vector of an image and the feature vectors of the other images. Figure 4.18(c) shows the minimum divergence. As shown clearly, the five images with large classification errors tend to have large variance and small minimum divergence. The dotted curve in Figure 4.18(a) shows the ratio between the variance and the minimum divergence of each image. The peaks in the classification error curve tend to coincide with the peaks in the ratio curve.

4.4.2 Classification at Different Scales

We also study the classification error with respect to different scales. We use 8 different integration scales in this experiment: 5 × 5, 9 × 9, 15 × 15, 23 × 23, 35 × 35, and 65 × 65 pixels, plus two larger windows. The classification algorithm and the procedure are the same as described for the fixed scale. The overall classification error is shown in Figure 4.19. As expected, the classification error decreases when the scale increases. For spectral histograms, the classification error decreases approximately at an exponential rate. This demonstrates that spectral histograms are very good at capturing texture characteristics.
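The classification scheme of these experiments, marginal histograms of filter responses concatenated into a spectral histogram and then a minimum distance classifier, can be sketched as follows. This is an illustrative reduction, not the exact configuration above: it uses only an intensity filter and two finite differences standing in for the gradient filters, synthetic 35 × 35 patches, and the χ2-statistic as the distance.

```python
# Sketch of the spectral histogram feature and the minimum distance
# classifier. Illustrative filter bank: intensity plus two finite
# differences standing in for the gradient filters D_xx and D_yy.
import numpy as np

def spectral_histogram(patch, n_bins=11):
    """Concatenated, normalized marginal histograms of filter responses."""
    responses = [
        patch,                      # intensity filter
        np.diff(patch, axis=1),     # horizontal difference (D_xx stand-in)
        np.diff(patch, axis=0),     # vertical difference (D_yy stand-in)
    ]
    hists = []
    for r in responses:
        h, _ = np.histogram(r, bins=n_bins, range=(-255, 255))
        hists.append(h / h.sum())   # each marginal becomes a distribution
    return np.concatenate(hists)

def chi2(p, q):
    """Chi-square statistic between two histograms."""
    d = p + q
    m = d > 0
    return np.sum((p[m] - q[m]) ** 2 / d[m])

def classify(patch, prototypes):
    """Assign the patch to the nearest prototype spectral histogram."""
    f = spectral_histogram(patch)
    return min(prototypes, key=lambda name: chi2(f, prototypes[name]))

rng = np.random.default_rng(0)
smooth = rng.integers(100, 110, (35, 35)).astype(float)  # low-contrast texture
noisy = rng.integers(0, 256, (35, 35)).astype(float)     # high-contrast texture
prototypes = {"smooth": spectral_histogram(smooth),
              "noisy": spectral_histogram(noisy)}
probe = rng.integers(100, 110, (35, 35)).astype(float)
print(classify(probe, prototypes))  # prints "smooth"
```

Since each marginal histogram is normalized separately, the full feature sums to the number of filters, and the χ2 distance of a histogram to itself is exactly zero.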

Figure 4.17: The divergence between the feature vectors of the images in the texture image database shown in Figure 4.16. (a) The cross-divergence matrix shown in numerical values. (b) The numerical values displayed as an image.

Figure 4.18: (a) The classification error for each image in the texture database, along with the ratio between the maximum and minimum divergences shown in (b) and (c), respectively. (b) The maximum divergence of the spectral histograms from the feature vector of each image. (c) The minimum divergence between each image and the other ones.
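The error predictor behind Figure 4.18, within-class feature dispersion divided by the minimum divergence from the other classes, can be sketched with synthetic stand-in histograms. The class names, centers, and noise levels below are all invented for illustration; the point is only that an inhomogeneous class lying close to another class yields a large ratio, as the five problem images do.

```python
# Sketch of the error predictor behind Figure 4.18: the ratio between a
# class's within-image feature dispersion and its minimum divergence from
# the other classes. All histograms here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def sample_hists(center, spread, n=20):
    """Noisy normalized histograms scattered around a class center."""
    h = np.clip(center + rng.normal(0.0, spread, (n, center.size)), 1e-6, None)
    return h / h.sum(axis=1, keepdims=True)

def l1(p, q):
    return np.abs(p - q).sum()

centers = {"tight": np.array([0.7, 0.1, 0.1, 0.1]),
           "loose": np.array([0.4, 0.3, 0.2, 0.1]),
           "near_loose": np.array([0.38, 0.30, 0.22, 0.10])}
spreads = {"tight": 0.01, "loose": 0.15, "near_loose": 0.15}

samples = {k: sample_hists(centers[k], spreads[k]) for k in centers}
means = {k: s.mean(axis=0) for k, s in samples.items()}

ratios = {}
for k in samples:
    # within-class dispersion: mean L1 deviation from the class mean
    dispersion = np.mean([l1(h, means[k]) for h in samples[k]])
    # distance to the closest competing class
    min_div = min(l1(means[k], means[j]) for j in means if j != k)
    ratios[k] = dispersion / min_div

# The inhomogeneous class that sits next to another class gets the
# largest ratio, mirroring the peaks in the classification-error curve.
assert ratios["near_loose"] > ratios["tight"]
```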

Figure 4.19: The classification error of the texture database with respect to the scale for feature extraction.

As one would expect, the classification error with respect to scale varies considerably across images. For example, image Hexholes-2, shown in Figure 4.20(a), is very homogeneous and visually very different from the other images in the database. For this image, the classification result is good at all the scales used in the experiment. Even using a window of 5 × 5 pixels, the classification error is 1.61%. At all other scales, the classification is 100% correct, as shown in Figure 4.20(b). Figure 4.21(a) shows image Woolencloth-2 from the database. This image is not homogeneous and is perceptually similar to some other images. For this image, the classification error is large when the scale is small. When the scale is 35 × 35, the classification error is 2.41%. When the scale is larger than 35 × 35, the classification result is perfect, as shown in Figure 4.21(b).

4.4.3 Image Classification

In this section, we classify images using the database. Each pixel in the image is classified using the minimum distance classifier. A window centered at a given

Figure 4.20: (a) Image Hexholes-2 from the texture database. (b) The classification error rate for the image. (c) The ratio between the maximum divergence and the minimum cross divergence with respect to scale.

Figure 4.21: (a) Image Woolencloth-2 from the texture database. (b) The classification error rate for the image. (c) The ratio between the maximum divergence and the minimum cross divergence with respect to scale.

pixel is used. For pixels near image borders, a window that lies inside the image and contains the pixel is used. The input texture image, shown in Figure 4.22(a), consists of five texture regions from the database. Figure 4.22(b) shows the classification result, and Figure 4.22(c) shows the divergence between the spectral histogram and the feature vector of the assigned texture image. Figure 4.22(d) shows the ground truth segmentation, and Figure 4.22(e) shows the misclassified pixels in black. One can see that the interior pixels in each homogeneous texture region are classified correctly and that the divergence at these pixels is small. Only the pixels between texture regions are misclassified, because the spectral histogram computed there is a mixture of different texture regions. At misclassified pixels, the divergence is large. This demonstrates that the proposed spectral histogram is a reliable similarity/dissimilarity measure between texture images. Furthermore, it also provides a reliable measure of the goodness of the classification. The classification result can be improved by incorporating context-sensitive feature detectors, as discussed in a later section.

4.4.4 Training Samples and Generalization

For the classification results shown in this section, training samples are not separated clearly from testing samples due to the limited number of samples, especially at large integration scales. Through experiments, we demonstrate that the number of training samples is not critical when using spectral histograms. First we re-do some of the classification experiments, using half of the samples for training and testing on the remaining half. Figure 4.23(a) shows the classification error for each image at a fixed integration scale, and Figure

Figure 4.22: (a) A texture image consisting of five texture regions from the texture database. (b) Classification result using spectral histograms. (c) Divergence between the spectral histograms and the feature vector of the assigned texture image. (d) The ground truth segmentation of the image. (e) Misclassified pixels, shown in black.
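The per-pixel labeling just described, including the shifted window near image borders, can be sketched as below. A plain intensity histogram stands in for the full spectral histogram, and the two flat "textures" are only an illustration.

```python
# Sketch of per-pixel classification: each pixel is labeled from a window
# centered on it, and near the border the window is shifted so that it
# stays inside the image while still containing the pixel.
import numpy as np

def window_origin(c, half, size):
    """Top-left offset of a window that contains pixel c, clipped to the image."""
    return min(max(c - half, 0), size - 2 * half - 1)

def classify_pixels(image, prototypes, win=7):
    half = win // 2
    H, W = image.shape
    labels = np.empty((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            oy = window_origin(y, half, H)
            ox = window_origin(x, half, W)
            patch = image[oy:oy + win, ox:ox + win]
            h, _ = np.histogram(patch, bins=8, range=(0, 256))
            f = h / h.sum()
            dists = [np.abs(f - p).sum() for p in prototypes]  # L1 distance
            labels[y, x] = int(np.argmin(dists))
    return labels

# Two flat "textures" side by side; interior pixels come out correct,
# and only pixels whose window straddles the boundary can be ambiguous.
img = np.hstack([np.full((16, 16), 40.0), np.full((16, 16), 200.0)])
protos = []
for v in (40.0, 200.0):
    h, _ = np.histogram(np.full((7, 7), v), bins=8, range=(0, 256))
    protos.append(h / h.sum())
labels = classify_pixels(img, protos, win=7)
assert labels[8, 2] == 0 and labels[8, 29] == 1
```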

Figure 4.23: (a) The classification error for each image in the database at a fixed integration scale. (b) The classification error at different integration scales. In both cases, solid line: training using half of the samples; dashed line: training using all the samples.

4.23(b) shows the overall error at different integration scales, along with the results shown before. While the classification error varies from image to image with different training samples, the overall classification error does not change much. We also examine the influence of the ratio of testing samples to training samples. Figure 4.24 shows the classification error with respect to this ratio at two integration scales. It demonstrates that the spectral histogram captures texture characteristics well from a small number of training samples.

4.4.5 Comparison with Existing Approaches

Recently, Randen and Husoy did an extensive comparative study of texture classification using different filtering-based methods [18]. We have applied our method

Figure 4.24: The classification error with respect to the ratio of testing samples to training samples. Solid line: integration scale 35 × 35; dashed line: a second integration scale.

to the same images. We use a fixed integration scale and the same 8 filters as in Section 4.4.1. We use 1/3 of the samples for training and the remaining 2/3 for testing. For the other methods, we show the average and the best performance from Tables 3, 6, 8, and 9 in [18]. The results for two groups of texture images are shown in Table 4.2. The first group, consisting of 10 texture images, is shown in Figure 4.25. In this group, each image is visually different from the other images. Our method is significantly better than the best performance reported in [18]. The second group, shown in Figure 4.26, is very challenging for filtering methods due to the inhomogeneity within each texture region and the similarity among different textures. For all the methods in [18], the performance is close to a random decision. Our method, however, gives 17.5% classification error, which dramatically improves the classification performance.

Texture group    Existing methods in [18]         Proposed
                 Average          Best            method
Figure 4.25          %            32.3 %           9.7 %
Figure 4.26          %            84.9 %          17.5 %

Table 4.2: Classification errors of the methods shown in [18] and of our method.

This comparison clearly suggests that classification based directly on filtering outputs is not sufficient to characterize texture appearance, and that an integration step after filtering must be performed; indeed, Malik and Perona pointed out that a nonlinearity after filtering is necessary for texture discrimination [87]. This comparison, along with the results on texture synthesis, strongly indicates that the spectral histogram is necessary in order to capture texture appearance.

4.5 Content-based Image Retrieval

Content-based image retrieval is closely related to image classification and segmentation. Given some desired feature statistics, one would like to find all the images in a given database that are similar to the given feature statistics. For example, one can use the intensity histogram and other statistics that are easy to compute to find images efficiently. Because such heuristic-based features do not provide sufficient modeling of images, it is not possible to provide any theoretical justification for the result. As shown in the previous sections on texture image synthesis and classification, spectral histograms provide a statistically sufficient model for natural homogeneous textures. In this section, we demonstrate how the spectral histogram and the associated similarity measure can be used to retrieve images that contain perceptually similar regions.
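This retrieval scheme can be sketched under simplified assumptions: the query patch's histogram (a plain intensity histogram standing in for the spectral histogram) is compared against histograms computed on a coarse grid inside each database image, each image is scored by its minimum matched error, and the images are ranked by that score. The database, window size, and grid step below are all illustrative.

```python
# Minimal sketch of retrieval by histogram matching: score each database
# image by the minimum divergence between the query's histogram and
# histograms computed on a coarsely sampled grid inside the image.
import numpy as np

def hist_feature(patch, bins=8):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / h.sum()

def chi2(p, q):
    d = p + q
    m = d > 0
    return np.sum((p[m] - q[m]) ** 2 / d[m])

def matched_error(query_f, image, win=8, step=8):
    """Minimum divergence between the query and grid-sampled windows."""
    H, W = image.shape
    return min(chi2(query_f, hist_feature(image[y:y + win, x:x + win]))
               for y in range(0, H - win + 1, step)
               for x in range(0, W - win + 1, step))

rng = np.random.default_rng(2)
dark = rng.integers(0, 80, (32, 32)).astype(float)
light = rng.integers(170, 256, (32, 32)).astype(float)
mixed = np.hstack([dark[:, :16], light[:, :16]])   # contains a dark region
database = {"light": light, "mixed": mixed}

query = hist_feature(rng.integers(0, 80, (8, 8)).astype(float))
errors = sorted((matched_error(query, im), name)
                for name, im in database.items())
# The image that actually contains a dark region ranks first.
assert errors[0][1] == "mixed"
```

Sorting the scores corresponds to the ranked matched-error curves discussed below; a threshold on the sorted curve then separates images that contain the query texture from those that do not.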

Figure 4.25: A group of 10 texture images used in [18].

Figure 4.26: A group of 10 texture images used in [18].

For content-based image retrieval, we use a database consisting of 100 texture images, each of which is composed of five texture regions from the texture image database used in the classification experiments. For a given image patch, we extract its spectral histogram as its footprint. As mentioned before, the same filters as for classification are used. Then we try to identify the best match within each texture image in the database. To save computation, we only compute spectral histograms on a coarsely sampled grid. Figure 4.27 shows an example. Figure 4.27(a) shows an input image patch. Figure 4.27(b) shows the minimum divergence between the given patch and each image in the database, and it shows clearly that there is a step edge: the images whose matched error is smaller than the threshold given by the step edge all actually contain the input patch. Figure 4.27(c) shows the first nine images with their matched errors. The matched errors of the first eight images are much smaller than that of the ninth image. This demonstrates that the spectral histogram characterizes texture appearance very well for homogeneous textures. Figure 4.28 shows another example. Here the texture is not very homogeneous and consists of regular texture patterns. Figure 4.28(a) shows the input image patch, and Figure 4.28(b) shows the matched errors for the 100 images in the database. While the edge is not a step edge, a ramp edge can be seen clearly. As in Figure 4.27, this ramp edge defines a threshold for the images that actually contain patches perceptually similar to the given patch. Figure 4.28(c) shows the first 12 images with the smallest matched errors. All of them are correctly retrieved. From these two examples, we can see that a fixed threshold might be chosen; in both cases, the threshold is around 0.25. Because the χ2-statistic obeys a χ2 distribution, a

Figure 4.27: Image retrieval result from a 100-image database using a given image patch, based on spectral histograms. (a) Input image patch. (b) The sorted matched errors for the 100 images in the database. (c) The first nine images with the smallest errors.

Figure 4.28: Image retrieval result from a 100-image database using a given image patch, based on spectral histograms. (a) Input image patch. (b) The sorted matched errors for the 100 images in the database. (c) The first nine images with the smallest errors.

threshold can be computed given a level of confidence. Also, the algorithm is intrinsically parallel and local; using parallel machines, the search may be implemented very efficiently.

4.6 Comparison of Statistic Features

In this section, we numerically evaluate the classification performance of different features, different filters, and different distance measures for spectral histograms. First we compare the spectral histogram with statistical values such as the mean and variance. If we assume that the underlying true image is a constant image, the mean value is the best choice for a statistic feature. If we use a Gaussian model to characterize the input image, the mean and variance are the features to be used. Here we use a linear combination of the mean and variance, where the weights are determined for the best performance at a fixed integration scale. We also use the intensity histogram of image patches as a feature vector. Figure 4.29 shows the classification error on the database used in Section 4.4 with respect to the integration scale. From Figure 4.29, we see that as the model gets more sophisticated, the performance gets better. The mean value of each image patch does not provide a sufficient feature for texture images and gives the largest classification error at all integration scales. While the performance improves as the scale increases, the improvement is not significant. The combination of mean and variance also does not give good results, suggesting that the Gaussian model, even though it is widely used in the literature, is not good enough even for homogeneous texture images. The spectral histogram gives the best performance at all integration scales. Compared with the mean and Gaussian

Figure 4.29: Classification error (in percent) on the texture database for different features. Solid line: spectral histogram of eight filters, including intensity, gradients, LoG at two scales, and Gabor at three orientations. Dotted line: mean value of the image patch. Dashed line: weighted sum of the mean and variance of the image patch, with weights determined to achieve the best result at one window size. Dash-dotted line: intensity histogram of image patches.

model, the intensity histogram gives much better performance. This indicates that the distribution is more important for texture discrimination and classification. Ojala et al. [99] compared different statistic features for texture image classification using a database consisting of only nine images. Here we compare the spectral histogram with gradient detectors, edge detectors (LoG) at different scales, and Gabor filters at different scales and orientations. Figure 4.30 shows the result. Compared to Figure 4.29, the differences between the feature detectors are not large. At large integration scales, Gabor filters give the second best result, suggesting that oriented structures are more important for large texture patches than symmetric edge detectors. Because gradient detectors are very local, they give the worst performance. But compared with the intensity histogram and the Gaussian model shown

Figure 4.30: Classification error (in percent) on the texture database for different filters. Solid line: spectral histogram of eight filters, including intensity, gradients, LoG at two scales, and Gabor at three orientations. Dotted line: gradient filters D_xx and D_yy. Dashed line: Laplacian of Gaussian filters LoG(√2/2), LoG(1), and LoG(2). Dash-dotted line: six Cosine Gabor filters with T = 4 and six orientations θ = 0, 30, 60, 90, 120, and 150.

in Figure 4.29, they still give a very good performance. This again confirms that the distribution of local features is important for texture classification and segmentation. Finally, for spectral histograms, we compare different distance measures. As given in equations (4.4), (4.5), and (4.6), we use the L1-norm, the L2-norm, the Kullback-Leibler divergence, and the χ2-statistic. As shown in Figure 4.31, the differences between the distance measures are very small, suggesting that spectral histograms do not depend on the particular form of the distance measure. This observation differs from the conclusion in [17], where the authors claim that the χ2-statistic consistently gives the best performance. The reason for the difference is that our spectral histograms at different scales are derived by integrating information from the same window, while different windows are used at different scales in [17].
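The four distance measures compared here can be written down directly; the small eps guarding empty bins in the Kullback-Leibler divergence is our own implementation detail, not something specified in the text.

```python
# The four histogram distance measures compared in this section.
import numpy as np

def l1(p, q):
    return np.abs(p - q).sum()

def l2(p, q):
    return np.sqrt(((p - q) ** 2).sum())

def kl(p, q, eps=1e-10):
    # eps keeps the logarithm finite for empty bins
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def chi2(p, q):
    d = p + q
    m = d > 0
    return np.sum((p[m] - q[m]) ** 2 / d[m])

p = np.array([0.5, 0.3, 0.2, 0.0])
q = np.array([0.4, 0.3, 0.2, 0.1])
far = np.array([0.1, 0.1, 0.1, 0.7])
for name, fn in [("L1", l1), ("L2", l2), ("KL", kl), ("chi2", chi2)]:
    print(f"{name:4s} {fn(p, q):.4f}")

# All four vanish for identical histograms and rank a near match ahead
# of a poor one, consistent with the observation that the choice of
# measure matters little here.
for fn in (l1, l2, kl, chi2):
    assert fn(p, p) < 1e-8 and fn(p, q) < fn(p, far)
```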

Figure 4.31: Classification error (in percent) on the texture database for different distance measures. Solid line: χ2-statistic. Dotted line: L1-norm. Dashed line: L2-norm. Dash-dotted line: Kullback-Leibler divergence.

4.7 A Model for Texture Discrimination

In this section we apply our model to texture discrimination, which has been widely studied in psychophysical experiments [69] using synthetic texton patterns. These texton patterns are in general not homogeneous, even within one texture region, as for example those shown in Figure 4.32. Our model is intended to characterize texture regions with homogeneous appearance, and thus the texton patterns do not fit our assumptions well. However, the result from our model is consistent with existing psychophysical data and with the data from the model by Malik and Perona [87]. We adopt procedures similar to those used by Malik and Perona [87]. Instead of using 192 filter pairs, we use two gradient filters D_xx and D_yy and three LoG filters with T = √2/2, 1, and 2, for a total of five filters. At each pixel, we extract local spectral histograms at a fixed integration scale, and the gradient is the χ2

Figure 4.32: Ten synthetic texture pairs scanned from Malik and Perona [87]: (+ O), (+ []), (L +), (L M), ( ), (+ T), (+ X), (T L), (L L M L), (R-mirror-R).

distance between the spectral histograms of the two adjacent windows. The gradient is then averaged along each column. The images used in our experiment are shown in Figure 4.32, which were scanned from Malik and Perona [87]. The texture gradients from our method for the selected texture pairs (+ O) and (R-mirror-R) are shown in Figure 4.33(b) and (d). Several observations can be made from the gradient results shown in Figure 4.33. First, the texture pattern does not give rise to a homogeneous texture


Texture Similarity Measure. Pavel Vácha. Institute of Information Theory and Automation, AS CR Faculty of Mathematics and Physics, Charles University Texture Similarity Measure Pavel Vácha Institute of Information Theory and Automation, AS CR Faculty of Mathematics and Physics, Charles University What is texture similarity? Outline 1. Introduction Julesz

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

Analysis: TextonBoost and Semantic Texton Forests. Daniel Munoz Februrary 9, 2009

Analysis: TextonBoost and Semantic Texton Forests. Daniel Munoz Februrary 9, 2009 Analysis: TextonBoost and Semantic Texton Forests Daniel Munoz 16-721 Februrary 9, 2009 Papers [shotton-eccv-06] J. Shotton, J. Winn, C. Rother, A. Criminisi, TextonBoost: Joint Appearance, Shape and Context

More information

10-701/15-781, Fall 2006, Final

10-701/15-781, Fall 2006, Final -7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly

More information

Texture. Announcements. 2) Synthesis. Issues: 1) Discrimination/Analysis

Texture. Announcements. 2) Synthesis. Issues: 1) Discrimination/Analysis Announcements For future problems sets: email matlab code by 11am, due date (same as deadline to hand in hardcopy). Today s reading: Chapter 9, except 9.4. Texture Edge detectors find differences in overall

More information

Image Analysis - Lecture 5

Image Analysis - Lecture 5 Texture Segmentation Clustering Review Image Analysis - Lecture 5 Texture and Segmentation Magnus Oskarsson Lecture 5 Texture Segmentation Clustering Review Contents Texture Textons Filter Banks Gabor

More information

Texture Segmentation Using Gabor Filters

Texture Segmentation Using Gabor Filters TEXTURE SEGMENTATION USING GABOR FILTERS 1 Texture Segmentation Using Gabor Filters Khaled Hammouda Prof. Ed Jernigan University of Waterloo, Ontario, Canada Abstract Texture segmentation is the process

More information

Statistical texture classification via histograms of wavelet filtered images

Statistical texture classification via histograms of wavelet filtered images Statistical texture classification via histograms of wavelet filtered images Author Liew, Alan Wee-Chung, Jo, Jun Hyung, Chae, Tae Byong, Chun, Yong-Sik Published 2011 Conference Title Proceedings of the

More information

Noise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions

Noise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions Others -- Noise Removal Techniques -- Edge Detection Techniques -- Geometric Operations -- Color Image Processing -- Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images

More information

Halftoning and quasi-monte Carlo

Halftoning and quasi-monte Carlo Halftoning and quasi-monte Carlo Ken Hanson CCS-2, Methods for Advanced Scientific Simulations Los Alamos National Laboratory This presentation available at http://www.lanl.gov/home/kmh/ LA-UR-04-1854

More information

Obtaining Feature Correspondences

Obtaining Feature Correspondences Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Texture. Texture. 2) Synthesis. Objectives: 1) Discrimination/Analysis

Texture. Texture. 2) Synthesis. Objectives: 1) Discrimination/Analysis Texture Texture D. Forsythe and J. Ponce Computer Vision modern approach Chapter 9 (Slides D. Lowe, UBC) Key issue: How do we represent texture? Topics: Texture segmentation Texture-based matching Texture

More information

Image features. Image Features

Image features. Image Features Image features Image features, such as edges and interest points, provide rich information on the image content. They correspond to local regions in the image and are fundamental in many applications in

More information

BRIEF Features for Texture Segmentation

BRIEF Features for Texture Segmentation BRIEF Features for Texture Segmentation Suraya Mohammad 1, Tim Morris 2 1 Communication Technology Section, Universiti Kuala Lumpur - British Malaysian Institute, Gombak, Selangor, Malaysia 2 School of

More information

Exercise: Simulating spatial processing in the visual pathway with convolution

Exercise: Simulating spatial processing in the visual pathway with convolution Exercise: Simulating spatial processing in the visual pathway with convolution This problem uses convolution to simulate the spatial filtering performed by neurons in the early stages of the visual pathway,

More information

Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image.

Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Texture Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Structural approach: a set of texels in some regular or repeated pattern

More information

Color Image Segmentation

Color Image Segmentation Color Image Segmentation Yining Deng, B. S. Manjunath and Hyundoo Shin* Department of Electrical and Computer Engineering University of California, Santa Barbara, CA 93106-9560 *Samsung Electronics Inc.

More information

Bipartite Graph Partitioning and Content-based Image Clustering

Bipartite Graph Partitioning and Content-based Image Clustering Bipartite Graph Partitioning and Content-based Image Clustering Guoping Qiu School of Computer Science The University of Nottingham qiu @ cs.nott.ac.uk Abstract This paper presents a method to model the

More information

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution Detecting Salient Contours Using Orientation Energy Distribution The Problem: How Does the Visual System Detect Salient Contours? CPSC 636 Slide12, Spring 212 Yoonsuck Choe Co-work with S. Sarma and H.-C.

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

Filters, random fields and maximum entropy (FRAME): Towards. Unified Theory for Texture Modeling

Filters, random fields and maximum entropy (FRAME): Towards. Unified Theory for Texture Modeling Filters, Random Fields and Maximum Entropy (FRAME): Towards a Unified Theory for Texture Modeling The Harvard community has made this article openly available. Please share how this access benefits you.

More information

Today: non-linear filters, and uses for the filters and representations from last time. Review pyramid representations Non-linear filtering Textures

Today: non-linear filters, and uses for the filters and representations from last time. Review pyramid representations Non-linear filtering Textures 1 Today: non-linear filters, and uses for the filters and representations from last time Review pyramid representations Non-linear filtering Textures 2 Reading Related to today s lecture: Chapter 9, Forsyth&Ponce..

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

Estimating Human Pose in Images. Navraj Singh December 11, 2009

Estimating Human Pose in Images. Navraj Singh December 11, 2009 Estimating Human Pose in Images Navraj Singh December 11, 2009 Introduction This project attempts to improve the performance of an existing method of estimating the pose of humans in still images. Tasks

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Scaled representations

Scaled representations Scaled representations Big bars (resp. spots, hands, etc.) and little bars are both interesting Stripes and hairs, say Inefficient to detect big bars with big filters And there is superfluous detail in

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 9: Representation and Description AASS Learning Systems Lab, Dep. Teknik Room T1209 (Fr, 11-12 o'clock) achim.lilienthal@oru.se Course Book Chapter 11 2011-05-17 Contents

More information

Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig

Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image Processing

More information

TEXTURE ANALYSIS USING GABOR FILTERS FIL

TEXTURE ANALYSIS USING GABOR FILTERS FIL TEXTURE ANALYSIS USING GABOR FILTERS Texture Types Definition of Texture Texture types Synthetic ti Natural Stochastic < Prev Next > Texture Definition Texture: the regular repetition of an element or

More information

CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION

CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION 122 CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION 5.1 INTRODUCTION Face recognition, means checking for the presence of a face from a database that contains many faces and could be performed

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Edge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I)

Edge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I) Edge detection Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Elaborazione delle immagini (Image processing I) academic year 2011 2012 Image segmentation Several image processing

More information

Primal Sketch: Integrating Texture and Structure

Primal Sketch: Integrating Texture and Structure Primal Sketch: Integrating Texture and Structure Cheng-en Guo, Song-Chun Zhu, and Ying Nian Wu Departments of Statistics and Computer Science University of California, Los Angeles Los Angeles, CA 90095

More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

Subpixel accurate refinement of disparity maps using stereo correspondences

Subpixel accurate refinement of disparity maps using stereo correspondences Subpixel accurate refinement of disparity maps using stereo correspondences Matthias Demant Lehrstuhl für Mustererkennung, Universität Freiburg Outline 1 Introduction and Overview 2 Refining the Cost Volume

More information

Concepts in. Edge Detection

Concepts in. Edge Detection Concepts in Edge Detection Dr. Sukhendu Das Deptt. of Computer Science and Engg., Indian Institute of Technology, Madras Chennai 636, India. http://www.cs.iitm.ernet.in/~sdas Email: sdas@iitm.ac.in Edge

More information

Texture Segmentation Using Multichannel Gabor Filtering

Texture Segmentation Using Multichannel Gabor Filtering IOSR Journal of Electronics and Communication Engineering (IOSRJECE) ISSN : 2278-2834 Volume 2, Issue 6 (Sep-Oct 2012), PP 22-26 Texture Segmentation Using Multichannel Gabor Filtering M. Sivalingamaiah

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

Final Review CMSC 733 Fall 2014

Final Review CMSC 733 Fall 2014 Final Review CMSC 733 Fall 2014 We have covered a lot of material in this course. One way to organize this material is around a set of key equations and algorithms. You should be familiar with all of these,

More information

Dietrich Paulus Joachim Hornegger. Pattern Recognition of Images and Speech in C++

Dietrich Paulus Joachim Hornegger. Pattern Recognition of Images and Speech in C++ Dietrich Paulus Joachim Hornegger Pattern Recognition of Images and Speech in C++ To Dorothea, Belinda, and Dominik In the text we use the following names which are protected, trademarks owned by a company

More information

CRF Based Point Cloud Segmentation Jonathan Nation

CRF Based Point Cloud Segmentation Jonathan Nation CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to

More information

NCC 2009, January 16-18, IIT Guwahati 267

NCC 2009, January 16-18, IIT Guwahati 267 NCC 2009, January 6-8, IIT Guwahati 267 Unsupervised texture segmentation based on Hadamard transform Tathagata Ray, Pranab Kumar Dutta Department Of Electrical Engineering Indian Institute of Technology

More information

Image Analysis. Edge Detection

Image Analysis. Edge Detection Image Analysis Edge Detection Christophoros Nikou cnikou@cs.uoi.gr Images taken from: Computer Vision course by Kristen Grauman, University of Texas at Austin (http://www.cs.utexas.edu/~grauman/courses/spring2011/index.html).

More information

Edge Detection for Dental X-ray Image Segmentation using Neural Network approach

Edge Detection for Dental X-ray Image Segmentation using Neural Network approach Volume 1, No. 7, September 2012 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Edge Detection

More information

Edge Detection Lecture 03 Computer Vision

Edge Detection Lecture 03 Computer Vision Edge Detection Lecture 3 Computer Vision Suggested readings Chapter 5 Linda G. Shapiro and George Stockman, Computer Vision, Upper Saddle River, NJ, Prentice Hall,. Chapter David A. Forsyth and Jean Ponce,

More information

Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang

Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang NICTA & CSE UNSW COMP9314 Advanced Database S1 2007 jzhang@cse.unsw.edu.au Reference Papers and Resources Papers: Colour spaces-perceptual, historical

More information

CONDITIONAL SIMULATION OF TRUNCATED RANDOM FIELDS USING GRADIENT METHODS

CONDITIONAL SIMULATION OF TRUNCATED RANDOM FIELDS USING GRADIENT METHODS CONDITIONAL SIMULATION OF TRUNCATED RANDOM FIELDS USING GRADIENT METHODS Introduction Ning Liu and Dean S. Oliver University of Oklahoma, Norman, Oklahoma, USA; ning@ou.edu The problem of estimating the

More information

Experimentation on the use of Chromaticity Features, Local Binary Pattern and Discrete Cosine Transform in Colour Texture Analysis

Experimentation on the use of Chromaticity Features, Local Binary Pattern and Discrete Cosine Transform in Colour Texture Analysis Experimentation on the use of Chromaticity Features, Local Binary Pattern and Discrete Cosine Transform in Colour Texture Analysis N.Padmapriya, Ovidiu Ghita, and Paul.F.Whelan Vision Systems Laboratory,

More information

Segmentation. (template) matching

Segmentation. (template) matching Segmentation. (template) matching 1 Announcements Midterm March 3 2010, in class Duration: 45 min Weight: 20% of final course mark Closed book, closed notes Calculator allowed Practice midterm posted.

More information

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction CoE4TN4 Image Processing Chapter 5 Image Restoration and Reconstruction Image Restoration Similar to image enhancement, the ultimate goal of restoration techniques is to improve an image Restoration: a

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Statistical image models

Statistical image models Chapter 4 Statistical image models 4. Introduction 4.. Visual worlds Figure 4. shows images that belong to different visual worlds. The first world (fig. 4..a) is the world of white noise. It is the world

More information

Report: Reducing the error rate of a Cat classifier

Report: Reducing the error rate of a Cat classifier Report: Reducing the error rate of a Cat classifier Raphael Sznitman 6 August, 2007 Abstract The following report discusses my work at the IDIAP from 06.2007 to 08.2007. This work had for objective to

More information

Texture April 14 th, 2015

Texture April 14 th, 2015 Texture April 14 th, 2015 Yong Jae Lee UC Davis Announcements PS1 out today due 4/29 th, 11:59 pm start early! 2 Review: last time Edge detection: Filter for gradient Threshold gradient magnitude, thin

More information

Image Processing. David Kauchak cs160 Fall Empirical Evaluation of Dissimilarity Measures for Color and Texture

Image Processing. David Kauchak cs160 Fall Empirical Evaluation of Dissimilarity Measures for Color and Texture Image Processing Empirical Evaluation of Dissimilarity Measures for Color and Texture Jan Puzicha, Joachim M. Buhmann, Yossi Rubner & Carlo Tomasi David Kauchak cs160 Fall 2009 Administrative 11/4 class

More information