ECE 176 Digital Image Processing Handout #14 Pamela Cosman 4/29/05 TEXTURE ANALYSIS


Texture analysis is covered very briefly in Gonzalez and Woods, pages 665-671. This handout is intended to supplement that discussion.

Applications: Texture is hugely important in the analysis of remote sensing images (determination of land use, crop yields, bug infestations, weather prediction, ice fractures, etc.), in manufacturing and inspection (semiconductor wafers, metal fatigue, etc.), and in medical imaging (e.g., cirrhosis of the liver), among other areas.

Attempts at definitions of "texture":

1. Local variation in brightness.
2. If brightness is interpreted as elevation, then texture corresponds to the roughness of the surface (another ill-defined term).
3. Some local order is repeated over a region which is large compared to the order's size; the order consists in the nonrandom arrangement of elementary parts; the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region.
4. Texture is used to describe 2-D arrays of variations... the elements and rules of spacing or arrangement may be arbitrarily manipulated, provided a characteristic repetitiveness remains.

These definitions of texture do not lead to quantitative texture measures, in the sense that the description of an edge discontinuity leads to a quantitative description of an edge in terms of its location, slope angle, orientation, and height.

Often, texture can be decomposed into two basic dimensions: a description of the tonal primitives out of which the texture is composed, and a description of the spatial dependence or interaction between the primitives. Tonal primitives are regions with tonal (amplitude) properties, such as average intensity, minimum, and maximum. The dependence may be structural, probabilistic, etc. Small primitives result in microtexture (e.g., sand, grass). Larger primitives result in macrotexture (e.g., brick wall, coins).
Road map of the texture analysis methods which we will cover:

1) Statistical methods: histogram properties, Hurst operator, gray-level co-occurrence matrices (GLCM), autocorrelation methods
2) Structural methods: microstructure, run lengths, generalized co-occurrence matrices, edges/extrema per unit area
3) Spectral methods

Statistical approaches to describing texture:

1. Histogram properties: Consider a given pixel and a neighborhood surrounding that pixel (for example, a square neighborhood of 11 x 11 or 31 x 31 pixels). We put all the pixels in the neighborhood into a histogram.

Range = difference between the max and min values in the histogram.

Histogram moments: the n-th moment about the mean is

    mu_n = sum_i (z_i - m)^n p(z_i)

where m is the mean intensity and p(z_i) is the probability of getting intensity z_i, estimated as the histogram bin count for intensity z_i divided by the number of pixels in the neighborhood. Note that mu_0 = 1 and mu_1 = 0; mu_2 is the variance. These moments have the limitation that they carry no information regarding the relative position of pixels.

If you type help stats you will find that the Statistics Toolbox in Matlab lists a number of different descriptive statistics which can be computed from a histogram (here are just a few of them):

    geomean  - Geometric mean.
    harmmean - Harmonic mean.
    iqr      - Interquartile range.
    mad      - Median Absolute Deviation.
    prctile  - Percentiles.
    range    - Range.
    trimmean - Trimmed mean.
    var      - Variance.
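The moment computation above can be sketched in a few lines of NumPy (the 3 x 3 window values below are invented for illustration):

```python
# Sketch of histogram moments for a pixel neighborhood.
# The window contents are made-up example values.
import numpy as np

def central_moment(window, n):
    """n-th moment of the window's intensities about their mean."""
    z = window.astype(float).ravel()
    p = 1.0 / z.size              # each pixel contributes probability 1/N
    m = z.mean()
    return np.sum(p * (z - m) ** n)

window = np.array([[1, 2, 2],
                   [2, 3, 2],
                   [2, 2, 1]], dtype=float)

mu0 = central_moment(window, 0)   # always 1
mu1 = central_moment(window, 1)   # always 0 (up to rounding)
mu2 = central_moment(window, 2)   # the variance of the 9 values
```

Note that mu2 equals np.var(window), i.e., the variance of the 9 numbers themselves, consistent with the terminology point below.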

As a point of terminology: when I refer to computing some descriptive statistic such as variance from a histogram of pixel values for a window of size 3 x 3, this means computing the variance of the 9 numbers in the window. It does not mean to generate a histogram of length 256 for the 9 values, and from that to compute the variance of the histogram entries (in which case you would be computing the variance of 256 numbers, of which 247 or more have the value zero).

2. The Hurst operator: Consider a pixel a and its neighbors at increasing distances:

          h g h
        f e d e f
      h e c b c e h
      g d b a b d g
      h e c b c e h
        f e d e f
          h g h

We form a table showing the range (max value minus min value) over all pixels at each distance, up to and including that distance. The range values depend on the image data:

    Pixel class   Number   Distance
    a                1       0
    b                4       1
    c                4       1.4
    d                4       2
    e                8       2.24
    f                4       2.83
    g                4       3
    h                8       3.16

A Hurst plot is constructed showing log(range) vs. log(distance). A line is fit to the points (not including point a), and the slope and intercept of the line can both be useful texture features.
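The Hurst computation described above can be sketched as follows, assuming the 7 x 7 diamond neighborhood shown (the test image is invented; a non-constant neighborhood is assumed so the log of the range is defined):

```python
# Sketch of the Hurst operator: cumulative range vs. distance, log-log fit.
import numpy as np

# (row, col) offsets for each distance class b..h of the diamond neighborhood
CLASSES = {
    1.0:     [(0, 1), (0, -1), (1, 0), (-1, 0)],                 # b
    2**0.5:  [(1, 1), (1, -1), (-1, 1), (-1, -1)],               # c
    2.0:     [(0, 2), (0, -2), (2, 0), (-2, 0)],                 # d
    5**0.5:  [(1, 2), (2, 1), (1, -2), (2, -1),
              (-1, 2), (-2, 1), (-1, -2), (-2, -1)],             # e
    8**0.5:  [(2, 2), (2, -2), (-2, 2), (-2, -2)],               # f
    3.0:     [(0, 3), (0, -3), (3, 0), (-3, 0)],                 # g
    10**0.5: [(1, 3), (3, 1), (1, -3), (3, -1),
              (-1, 3), (-3, 1), (-1, -3), (-3, -1)],             # h
}

def hurst_features(img, r, c):
    """Slope and intercept of log(range) vs. log(distance) at pixel (r, c)."""
    vals = [img[r, c]]            # start from the center pixel (class a)
    log_d, log_range = [], []
    for d in sorted(CLASSES):     # increasing distance
        vals += [img[r + dr, c + dc] for dr, dc in CLASSES[d]]
        log_d.append(np.log(d))
        log_range.append(np.log(max(vals) - min(vals)))
    slope, intercept = np.polyfit(log_d, log_range, 1)
    return slope, intercept
```

On a smoothly varying patch (e.g., an intensity ramp) the cumulative range grows with distance, so the fitted slope is positive.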

3. Gray-level co-occurrence matrix (GLCM), also called the spatial gray-level-dependence (SGLD) matrix: Consider a small example image and the position operator "1 pixel to the right and 1 pixel down"; applying the operator yields the gray-level co-occurrence matrix, where an entry c(i, j) is a count of the number of times that gray level j appears in the given position relative to gray level i. For example, the first entry comes from the fact that 4 times a 0 appears below and to the right of another 0. The factor 1/16 is because there are 16 pairs entering into this matrix, so this normalizes the matrix entries to be estimates of the co-occurrence probabilities. [The example image and its matrix are not reproduced here.]

For statistical confidence in the estimation of the joint probability distribution, the matrix must contain a reasonably large average occupancy level. This is achieved either by (a) restricting the number of amplitude quantization levels (which causes loss of accuracy for low-amplitude texture), or (b) using a large measurement window (which causes errors if the texture changes over the large window). Typical compromise: 16 gray levels and a window of 30 or 50 pixels on each side.

Now we can analyze c(i, j):

Maximum probability: the largest entry, max over (i, j) of c(i, j).

Element difference moment of order k:

    sum_i sum_j (i - j)^k c(i, j)

This descriptor has relatively low values when the high values of c are near the main diagonal. For this position operator, high values near the main diagonal would indicate that bands of constant intensity running 1 pixel to the right and 1 down are likely. When k = 2, it is called the contrast:

    Contrast = sum_i sum_j (i - j)^2 c(i, j)

Entropy:

    - sum_i sum_j c(i, j) log c(i, j)

This is a measure of randomness, having its highest value when the elements of c are all equal. In the case of a checkerboard, the entropy would be low.

Energy:

    sum_i sum_j c(i, j)^2

Homogeneity:

    sum_i sum_j c(i, j) / (1 + |i - j|)
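As a sketch of the construction and the descriptors above (the handout's original example image was lost in transcription, so the 4 x 4 image below is invented; 4 gray levels are used to keep the matrix small):

```python
# Sketch: normalized GLCM for the position operator "1 right, 1 down",
# plus the descriptors defined in the text. Image values are invented.
import numpy as np

def glcm(img, levels):
    """Co-occurrence counts for the offset (+1 row, +1 col), normalized."""
    C = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - 1):
        for c in range(cols - 1):
            C[img[r, c], img[r + 1, c + 1]] += 1
    return C / C.sum()            # entries estimate co-occurrence probabilities

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
C = glcm(img, 4)

max_prob    = C.max()
contrast    = sum((i - j) ** 2 * C[i, j]
                  for i in range(4) for j in range(4))
energy      = (C ** 2).sum()
entropy     = -sum(C[i, j] * np.log2(C[i, j])
                   for i in range(4) for j in range(4) if C[i, j] > 0)
homogeneity = sum(C[i, j] / (1 + abs(i - j))
                  for i in range(4) for j in range(4))
```

Here the 4 x 4 image yields 3 x 3 = 9 pixel pairs, so the normalization divides by 9 rather than 16.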

Problems associated with the co-occurrence matrix methods:
1. They require a lot of computation (many matrices to be computed).
2. The features are not invariant to rotation or scale changes in the texture.

Sample question on GLCMs: Here are 4 different texture patches of size 96 x 96 pixels. All the pixels in each patch (quantized to 16 levels) were used to form the GLCMs shown below. The position operator was one down and one to the right. Decide which texture patch gave rise to each GLCM. Note that 3 of the plots show perspective views of the GLCM from the vantage point of the (0, 0) position. However, one of the plots has the (0, 0) matrix coordinate position placed in the upper left corner, since that provides a better view. So check the axis labels.

[The four texture patches and the four GLCM surface plots are not reproduced here.]

Autocorrelation methods: The autocorrelation function measured at a point (j, k) over a window W is defined as

    A(j, k; dx, dy) = [ sum_{(m,n) in W} f(m, n) f(m - dx, n - dy) ] / [ sum_{(m,n) in W} f(m, n)^2 ]

for pixel lags (dx, dy). Presumably, a region of coarse texture will exhibit a higher correlation for a fixed shift than a region of fine texture. Thus texture coarseness should be proportional to the spread of the autocorrelation function.

How to measure the spread of the ACF? The general form is

    S(u, v) = sum_{dx} sum_{dy} (dx)^u (dy)^v A(j, k; dx, dy)

The computation is carried out over only one half of the ACF because of its symmetry. Features of potential interest include the profile spreads S(2, 0) and S(0, 2), the cross-relation S(1, 1), and the second-degree spread S(2, 2). The ACF width is a good measure of subjective coarseness. (High correlations have been found between subjective rankings of texture coarseness and the ACF width, measured by the distance to the 1/e point for an ACF assumed to be circularly symmetric.)
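A minimal sketch of these definitions (the window contents and lag limit below are invented; only non-negative lags are computed, using the ACF's symmetry):

```python
# Sketch: windowed autocorrelation function and the spread measure S(u, v).
import numpy as np

def acf(win, max_lag):
    """A(dx, dy) = sum f(m, n) f(m - dx, n - dy), normalized by A(0, 0)."""
    H, W = win.shape
    A = np.zeros((max_lag + 1, max_lag + 1))
    for dx in range(max_lag + 1):
        for dy in range(max_lag + 1):
            A[dx, dy] = np.sum(win[dx:, dy:] * win[:H - dx, :W - dy])
    return A / A[0, 0]

def spread(A, u, v):
    """S(u, v) = sum over lags of dx^u * dy^v * A(dx, dy)."""
    return sum((dx ** u) * (dy ** v) * A[dx, dy]
               for dx in range(A.shape[0]) for dy in range(A.shape[1]))
```

A smooth (coarse) window keeps A close to 1 at small lags, so its spread S(0, 0) is larger than that of a rapidly varying (fine) window, matching the coarseness argument above.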

Structural approaches to measuring texture:

1. Microstructure approaches: The general concept of microstructure methods is to use many little filters to look for many different kinds of little patterns, and to measure the energy for each; that is, to see how much the local texture is like each little pattern. One can compute various microstructure arrays by convolving the image with characteristic little masks. These 9 operators (called the Laws operators) form a basis set that can be generated from all outer product combinations of the three 1 x 3 vectors:

    L3 = ( 1  2  1 )
    E3 = (-1  0  1 )
    S3 = (-1  2 -1 )

After convolving with each of the masks, we measure the energy in the microstructure arrays, perhaps by forming a moving-window standard deviation over a window that contains a few cycles of the repetitive texture. Or, for example, the texture feature at each pixel in the image is the average of the absolute values of the microstructure array in a local window around the pixel. This is termed a texture energy transform and is analogous to the Fourier power spectrum.
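A minimal sketch of the Laws texture-energy transform (the flat test image is invented; a plain loop is used for the convolution so the sketch needs nothing beyond NumPy):

```python
# Sketch: the 9 Laws masks from outer products of L3, E3, S3, and a
# texture-energy feature as the mean absolute filter response.
import numpy as np

L3 = np.array([1.0, 2.0, 1.0])    # level (local average)
E3 = np.array([-1.0, 0.0, 1.0])   # edge
S3 = np.array([-1.0, 2.0, -1.0])  # spot

# all 9 outer-product combinations -> 3x3 masks
masks = [np.outer(a, b) for a in (L3, E3, S3) for b in (L3, E3, S3)]

def convolve_valid(img, mask):
    """Plain 'valid' 2-D correlation with a 3x3 mask."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for r in range(H - 2):
        for c in range(W - 2):
            out[r, c] = np.sum(img[r:r + 3, c:c + 3] * mask)
    return out

def texture_energy(img, mask):
    """Mean absolute response: one texture-energy feature for the patch."""
    return np.abs(convolve_valid(img, mask)).mean()
```

On a perfectly flat patch, only the L3L3 mask (which sums to 16) responds; every mask containing E3 or S3 cancels to zero, which is why those masks isolate edge-like and spot-like microstructure.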

2. Run lengths: A gray-level run-length primitive is a maximal collinear connected set of pixels all having the same gray tone. Gray-level runs can be characterized by the gray tone of the run, the length of the run, and the direction of the run. For example, we might quantize the directions to 4 (horizontal, vertical, diagonal up, diagonal down), and quantize the gray tones to G levels. Let L be the number of different run lengths that occur; in fact, L is taken equal to the longest run that occurs. Let p(g, l) be the number of times there is a run of length l having gray tone g. The statistics we will define make use of a normalization factor N, which is the total number of runs:

    N = sum_g sum_l p(g, l)

For each of the 4 directions, we compute some statistics. For example:

(a) Short run emphasis:       (1/N) sum_g sum_l p(g, l) / l^2
(b) Long run emphasis:        (1/N) sum_g sum_l l^2 p(g, l)
(c) Gray level non-uniformity: (1/N) sum_g [ sum_l p(g, l) ]^2
(d) Run length non-uniformity: (1/N) sum_l [ sum_g p(g, l) ]^2

In 2-dimensional form, the gray-level run-length primitive is a maximal connected set of pixels all having the same gray level. These maximal homogeneous sets have properties such as
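The four statistics above can be sketched for the horizontal direction as follows (the tiny 2-row image is invented; for a real image each of the 4 directions would get its own run-length matrix):

```python
# Sketch: horizontal run-length matrix p(g, l) and the four run-length
# statistics defined in the text.
from collections import defaultdict
from itertools import groupby

def run_length_matrix(img):
    """p[(g, l)] = number of horizontal runs of length l with gray tone g."""
    p = defaultdict(int)
    for row in img:
        for g, run in groupby(row):
            p[(g, len(list(run)))] += 1
    return p

def rl_stats(p):
    N = sum(p.values())                                  # total number of runs
    sre = sum(n / l**2 for (g, l), n in p.items()) / N   # short run emphasis
    lre = sum(n * l**2 for (g, l), n in p.items()) / N   # long run emphasis
    grays = {g for g, l in p}
    lens = {l for g, l in p}
    gln = sum(sum(n for (g, l), n in p.items() if g == g0) ** 2
              for g0 in grays) / N                       # gray level non-uniformity
    rln = sum(sum(n for (g, l), n in p.items() if l == l0) ** 2
              for l0 in lens) / N                        # run length non-uniformity
    return sre, lre, gln, rln

img = [[0, 0, 1, 1, 1],
       [2, 2, 2, 2, 2]]
p = run_length_matrix(img)      # three runs: (0, len 2), (1, len 3), (2, len 5)
sre, lre, gln, rln = rl_stats(p)
```

Short run emphasis is large for fine textures (many short runs), long run emphasis for coarse ones; the two non-uniformity measures are smallest when runs are spread evenly across gray tones or lengths.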

the number of pixels, the maximum or minimum diameters, the gray level, and the angular orientation of the max or min diameter.

3. Edge per unit area: The gradient can be calculated by any edge operator. For a specified window centered on a pixel, the distribution of gradient magnitudes is determined, and the mean value is taken. The mean of the distribution is the amount of edge per unit area associated with the given pixel. The image in which each pixel's value is the edge per unit area is in effect a defocused gradient image.

4. Extrema per unit area: The number of pixels, per unit area, that are larger than or smaller than all of their 4-neighbors.

5. Generalized co-occurrence matrices (GCM): As texture primitives become larger, analysis using statistics computed from regular co-occurrence matrices will depend more on the intensity transitions within a primitive, and less on the structural organization of the primitives. The GCM concept is to replace the texture by another image which indicates the positions of certain local properties, e.g., of edges, in the original texture image. This second image can be obtained by convolving the original image with a set of masks and finding the peaks. Each point corresponding to a local maximum has some description attached to it. Example: we code the orientations of the local maxima of some gradient operation, producing an image of labels

    H = horizontal, V = vertical, L = left diagonal, R = right diagonal, blank = no local maximum there.

[The example image of orientation codes is not reproduced here.]

Now we select a spatial constraint predicate F; for example, F(i, j) is TRUE iff the (city-block) distance between pixels i and j is less than d. Suppose we choose d = 3. The (R, H) position of the generalized co-occurrence matrix is then equal to the number of horizontal points within city-block distance 2 of right-diagonal points. For the example image, this results in the following GCM (several entries were lost in transcription):

         H   V   L   R
    H   22  11
    V   11   6   1   4
    L    1       3
    R    4   3       4

More generally, we define a GCM as follows. Let Q = {(x_k, d_k)}, where x_k is the location and d_k the description of the k-th local maximum; we assume the descriptions have been quantized in some way. F is a spatial constraint predicate, e.g., F(x_k, x_l) is TRUE if x_l is the nearest neighbor of x_k. Then the (s, t) entry of the GCM is a count of the number of pairs (k, l) for which F(x_k, x_l) is TRUE and the descriptions of x_k and x_l are s and t, respectively.

Spectral approaches to describing texture, for periodic or almost periodic 2-D texture patterns:

- Prominent peaks in the spectrum give the principal directions of the texture patterns.
- The locations of these peaks in the frequency plane indicate the fundamental spatial periods.
- Eliminating these peaks via filtering leaves the nonperiodic image elements, which we can then attempt to describe using statistical techniques.
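The spectral idea can be sketched on a synthetic texture (the vertical-stripe pattern below is invented; a real texture would show several peaks):

```python
# Sketch: read off a periodic texture's direction and spatial period from
# the location of the dominant peak of its 2-D spectrum.
import numpy as np

N = 64
x = np.arange(N)
# vertical stripes with spatial period 8 pixels
img = np.sin(2 * np.pi * 8 * x / N)[None, :] * np.ones((N, 1))

F = np.abs(np.fft.fft2(img))
F[0, 0] = 0                       # ignore the DC term
ky, kx = np.unravel_index(np.argmax(F), F.shape)
kx = min(kx, N - kx)              # fold the conjugate-symmetric half back
ky = min(ky, N - ky)

period = N / max(kx, ky)          # fundamental spatial period, in pixels
angle = np.degrees(np.arctan2(ky, kx))   # principal direction of variation
```

For this pattern the peak lands at (ky, kx) = (0, 8), giving a period of 8 pixels and a direction of 0 degrees, i.e., intensity varies along the horizontal axis. Notch-filtering such peaks and inverse-transforming would leave the nonperiodic residue mentioned above.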