Wavelet-based Texture Segmentation: Two Case Studies


1 Introduction (last edited 02/15/2004)

In this set of notes, we illustrate wavelet-based texture segmentation on images from the Brodatz Textures Database [1]. We perform two sets of experiments on groups of four texture images each, as shown in Figures 1 and 2, respectively. Each individual texture image has dimensions 320 × 320. For each set of textures, we use the top halves of Figures 1 and 2 for training wavelet-based models of texture appearance, while all testing is conducted on the bottom halves.

Figure 1: Texture set #1.

Figure 2: Texture set #2.

2 Approach

In this section, we describe our approach to texture segmentation as well as our training of statistical texture-appearance models.

2.1 Training data

In our approach to classifying and segmenting texture images, we choose to analyze 16 × 16 pixel neighborhoods W. Therefore, for each texture class ω ∈ {1, ..., C}, we extract 16 × 16 subimages W_j^ω, j ∈ {1, ..., N}, where the origin of each is offset from its nearest neighbor by 4 pixels in both the x and y directions. For example, a 64 × 64 image generates 13 × 13 = 169 subimages, as illustrated in Figure 3 for a small sample texture image. Note that for training images sized 160 × 320 (the top halves of the individual texture images in Figures 1 and 2), we extract N = 37 × 77 = 2849 individual windows W_j^ω for each texture ω.
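As a concrete illustration of this windowing scheme, the short Python sketch below extracts all 16 × 16 subimages whose origins lie on a 4-pixel grid; the function name and the use of NumPy are our own choices, not part of the original notes.

```python
import numpy as np

def extract_windows(image, size=16, step=4):
    """Return all size x size subimages whose origins lie on a step x step grid."""
    rows, cols = image.shape
    return np.array([image[r:r + size, c:c + size]
                     for r in range(0, rows - size + 1, step)
                     for c in range(0, cols - size + 1, step)])

# Sanity check against the counts quoted above: a 64 x 64 image yields
# 13 * 13 = 169 windows, and a 160 x 320 training half yields 37 * 77 = 2849.
assert len(extract_windows(np.zeros((64, 64)))) == 169
assert len(extract_windows(np.zeros((160, 320)))) == 2849
```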

Figure 3: All 16 × 16 subimages W (offset by 4 pixels) for a 64 × 64 sample image.

Now, each subimage W_j^ω gives rise to a set of six feature vectors x_{ij}^ω, i ∈ {1, ..., 6}. As depicted in Figure 4, we first perform a 2-level, 2D Discrete Wavelet Transform (DWT) on W_j^ω. Next, we take the magnitude of the resulting wavelet coefficients. Then, we define x_{ij}^ω as the ith band of the magnitude DWT, where Figure 4 specifies which i corresponds to which band (i.e. LH1, HL1, HH1, LH2, HL2 and HH2). Note that we discard intensity information in the original image by not including the LL2 band, since average intensities are not, in general, discriminating for texture images of the type shown in Figures 1 and 2. Finally, we observe that feature vectors x_{1j}^ω, x_{2j}^ω and x_{3j}^ω are of length 8 × 8 = 64, while feature vectors x_{4j}^ω, x_{5j}^ω and x_{6j}^ω are of length 4 × 4 = 16.

2.2 Statistical texture models

Given the feature vectors x_{ij}^ω, j ∈ {1, ..., N}, i ∈ {1, ..., 6}, ω ∈ {1, ..., C}, we now want to build histogram probability models for each band i and texture class ω. This requires that we first construct common quantizations of the feature spaces i across all texture classes ω. We derive these feature-specific quantizations using vector quantization; see [2] for specifics on vector quantization. Let X_i^ω = {x_{ij}^ω} denote all training feature vectors for band i and texture class ω, and let X_i = {X_i^ω} denote all training feature vectors for band i across all texture classes. Then,

    Z_i^L = VQ(X_i, L) = {µ_{ik}}, k ∈ {1, ..., L}    (1)

denotes the L-level VQ codebook of prototype vectors {µ_{ik}} trained on X_i, where L is user-defined. Given the six VQ codebooks Z_i^L, i ∈ {1, ..., 6}, we can now assign an integer label l_{ij}^ω to every training feature vector x_{ij}^ω:

    l_{ij}^ω = arg min_k d(x_{ij}^ω, µ_{ik})    (2)

where d(a, b) denotes the Euclidean distance between vectors a and b. Furthermore, let n_{ik}^ω denote the number of vectors in X_i^ω with label l_{ij}^ω = k, and let n_i^ω denote the total number of vectors in X_i^ω. Then, we assign the probability of prototype vector µ_{ik} for texture class ω as:

    P_i^ω(k) = n_{ik}^ω / n_i^ω.    (3)

The probabilities H_i^ω = {P_i^ω(k)}, k ∈ {1, ..., L}, define the histogram probability model for feature space (i.e. band) i and texture class ω over quantization Z_i^L.
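To make Sections 2.1 and 2.2 concrete, below is a minimal sketch of the feature extraction and model training, assuming PyWavelets for the 2-level DWT and scikit-learn's k-means as a stand-in for the VQ codebook design referenced in [2]. The notes do not specify the mother wavelet, so Haar is assumed here (it yields exactly the 8 × 8 and 4 × 4 band sizes quoted above), and the function names are ours. The LH/HL labeling of PyWavelets' detail bands depends on its orientation convention, but since the method treats the six bands symmetrically, the ordering below is only illustrative.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def band_features(window):
    """Six magnitude-DWT feature vectors x_i for one 16 x 16 window.
    Bands 1-3 are the level-1 details (length 64), bands 4-6 the
    level-2 details (length 16)."""
    # wavedec2 returns [LL2, (level-2 details), (level-1 details)]
    ll2, details2, details1 = pywt.wavedec2(window, wavelet="haar", level=2)
    bands = list(details1) + list(details2)      # LL2 (average intensity) discarded
    return [np.abs(b).ravel() for b in bands]

def train_band_models(windows, classes, L=16, C=4):
    """Train one VQ codebook per band (equation (1)) and one histogram per
    band and class (equation (3)).
    windows: iterable of 16 x 16 training subimages, all classes pooled.
    classes: array of texture labels in {0, ..., C-1}, one per window."""
    classes = np.asarray(classes)
    feats = [band_features(w) for w in windows]
    codebooks, histograms = [], []
    for i in range(6):                            # one feature space per band i
        X_i = np.stack([f[i] for f in feats])     # pooled over all texture classes
        km = KMeans(n_clusters=L, n_init=10, random_state=0).fit(X_i)
        labels_i = km.predict(X_i)                # integer labels l, equation (2)
        H_i = np.zeros((C, L))
        for w in range(C):
            counts = np.bincount(labels_i[classes == w], minlength=L)
            H_i[w] = counts / counts.sum()        # P_i^w(k) = n_ik^w / n_i^w
        codebooks.append(km)
        histograms.append(H_i)
    return codebooks, histograms
```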

Figure 4: Extraction of feature vectors x_{ij}^ω for subimage W_j^ω.

2.3 Classification

Given the VQ codebooks Z_i^L and histogram models H_i^ω, we are now in a position to classify an unknown 16 × 16 subimage W_t. Let x_i^t, i ∈ {1, ..., 6}, denote the six feature vectors corresponding to subimage W_t (as in Figure 4), and let l_i^t denote the integer label for x_i^t under VQ codebook Z_i^L, as in equation (2) above. Then, we classify subimage W_t as texture class ω*, where

    ω* = arg max_ω ∏_{i=1}^{6} P_i^ω(l_i^t).    (4)

(A code sketch applying this rule window by window is given below, at the end of Section 3.1.)

3 Experiments

In this section, we summarize texture segmentation results for the texture image sets in Figures 1 and 2. Much more detailed results can be found at [3].

3.1 Texture set #1

This set of experiments relates to the C = 4 texture images shown in Figure 1; numbering of the textures (from 1 to 4) goes from left to right. The top halves of the texture images were used for training the VQ codebooks Z_i^L and histogram models H_i^ω; the bottom halves were used for testing segmentation performance with the resulting models. Values of L ∈ {2, 4, 8, 16, 32, 64} were tried; roughly equivalent segmentation performance was achieved for L ∈ {16, 32, 64}. Therefore, here, we show results only for L = 16; additional results are available at [3].

Figure 5 plots the histograms H_i^ω, along with the VQ codebooks Z_i^16 (below each histogram). Figure 6 illustrates overall segmentation results over the test textures with the classification rule of equation (4). Averaged correct segmentation is approximately 92%; most of the error occurs for texture class ω = 1, in a region that is atypical of the texture image as a whole. Finally, Figure 7 illustrates segmentation results for individual bands i. Note that the segmentation performance in Figure 6, when combining probabilities from all bands, is far superior to segmentation results based on individual bands i only.
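Continuing the sketch from Section 2.2 (it reuses band_features and the codebooks and histograms returned by train_band_models), the following applies the decision rule of equation (4) to a single test window. Equation (4) is read here as a product of the six per-band probabilities, implemented as a sum of logarithms; the small eps term, which guards against empty histogram bins, is an implementation detail of this sketch and not part of the notes.

```python
import numpy as np

def classify_window(window, codebooks, histograms, eps=1e-12):
    """Assign a texture class to one 16 x 16 test window (equation (4))."""
    feats = band_features(window)                 # from the sketch in Section 2.2
    C = histograms[0].shape[0]
    log_score = np.zeros(C)
    for i, x_i in enumerate(feats):
        k = codebooks[i].predict(x_i[None, :])[0]         # nearest prototype, eq. (2)
        log_score += np.log(histograms[i][:, k] + eps)    # sum of logs = log of product
    return int(np.argmax(log_score))              # omega* = arg max over classes
```

Sliding this classifier over the test images on the same 4-pixel grid used in training would yield segmentation maps of the kind shown in Figures 6 and 9.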

Figure 5: Set #1: VQ codebooks Z_i^16 and histogram models H_i^ω.

Figure 6: Set #1: Segmentation of test textures. Correct classification is approximately 92%.

Figure 7: Set #1: Segmentation of test textures based on individual bands i; none of the individual-band segmentations is nearly as good across all texture classes as the results for all bands combined in Figure 6.

3.2 Texture set #2

This set of experiments relates to the C = 4 texture images shown in Figure 2; numbering of the textures (from 1 to 4) goes from left to right. The top halves of the texture images were used for training the VQ codebooks Z_i^L and histogram models H_i^ω; the bottom halves were used for testing segmentation performance with the resulting models. Values of L ∈ {2, 4, 8, 16, 32, 64} were tried; roughly equivalent segmentation performance was achieved for L ∈ {16, 32, 64}. Therefore, here, we show results only for L = 16; additional results are available at [3].

Figure 8 plots the histograms H_i^ω, along with the VQ codebooks Z_i^16 (below each histogram). Figure 9 illustrates overall segmentation results over the test textures with the classification rule of equation (4). Averaged correct segmentation is approximately 90%; most of the error occurs because of the difficulty of distinguishing the very similar textures ω = 2 and ω = 3 (two different wood grains). Finally, Figure 10 illustrates segmentation results for individual bands i. Note again that the segmentation performance in Figure 9, when combining probabilities from all bands, is far superior to segmentation results based on individual bands i only.

References

[1] Brodatz Textures Database, http://www.ux.his.no/~tranden/brodatz.html, February 2004.

[2] EEL6825, http://mil.ufl.edu/~nechyba/eel6825.f2003/course_materials.html, Fall 2003.

[3] EEL6562, http://mil.ufl.edu/~nechyba/eel6562/course_materials.html, Spring 2004.

Figure 8: Set #2: VQ codebooks Z_i^16 and histogram models H_i^ω.

Figure 9: Set #2: Segmentation of test textures. Correct classification is approximately 90%.

Figure 10: Set #2: Segmentation of test textures based on individual bands i; none of the individual-band segmentations is nearly as good across all texture classes as the results for all bands combined in Figure 9.