Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image.


Texture. Structural approach: a texture is a set of texels in some regular or repeated pattern.

Problem with Structural Approach How do you decide what is a texel? Ideas?

Natural Textures from VisTex: grass, leaves. What/where are the texels?

The Case for Statistical Texture Segmenting out texels is difficult or impossible in real images. Numeric quantities or statistics that describe a texture can be computed from the gray tones (or colors) alone. This approach is less intuitive, but is computationally efficient. It can be used for both classification and segmentation.

Some Simple Statistical Texture Measures. 1. Edge Density and Direction. Use an edge detector as the first step in texture analysis. The number of edge pixels in a fixed-size region tells us how busy that region is. The directions of the edges also help characterize the texture.

Two Edge-based Texture Measures.
1. Edgeness per unit area: F_edgeness = |{p : gradient_magnitude(p) ≥ threshold}| / N, where N is the size of the unit area.
2. Edge magnitude and direction histograms: F_magdir = (H_magnitude, H_direction), where these are the normalized histograms of gradient magnitudes and gradient directions, respectively.
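
These two measures can be sketched in a few lines of NumPy. This is a minimal sketch: the gradient here is a simple central difference rather than a full edge detector, and `threshold` and `n_bins` are illustrative parameters, not values from the slides.

```python
import numpy as np

def edge_features(img, threshold, n_bins=4):
    """Edgeness per unit area plus normalized gradient magnitude/direction
    histograms, computed over the whole array (the "unit area")."""
    img = img.astype(float)
    gy, gx = np.gradient(img)          # central-difference gradients
    mag = np.hypot(gx, gy)             # gradient magnitude per pixel
    direc = np.arctan2(gy, gx)         # gradient direction in [-pi, pi]

    # F_edgeness: fraction of pixels whose gradient magnitude meets the threshold
    f_edgeness = np.count_nonzero(mag >= threshold) / mag.size

    # F_magdir: normalized histograms of magnitudes and directions
    h_mag, _ = np.histogram(mag, bins=n_bins)
    h_dir, _ = np.histogram(direc, bins=n_bins, range=(-np.pi, np.pi))
    return f_edgeness, h_mag / h_mag.sum(), h_dir / h_dir.sum()
```

On a synthetic image containing a single vertical step edge, F_edgeness is simply the fraction of pixels lying on the step.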

Example: original image, Frei-Chen edge image, and thresholded edge image.

Local Binary Pattern Measure. For each pixel p, create an 8-bit number b1 b2 b3 b4 b5 b6 b7 b8, where bi = 0 if neighbor i has value less than or equal to p's value and 1 otherwise. Neighbors are numbered 1 through 8 clockwise starting from the upper left. Represent the texture in the image (or a region) by the histogram of these numbers. Example 3x3 window (center 50):
100 101 103
 40  50  80
 50  60  90
gives the pattern 1 1 1 1 1 1 0 0.
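
A minimal sketch of the LBP computation, using the slide's clockwise neighbor numbering; the window below is the slide's own example. Border pixels are simply skipped, which is one common but not the only convention.

```python
import numpy as np

# Clockwise neighbor offsets matching the slide's numbering:
#   1 2 3
#   8 . 4
#   7 6 5
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(window):
    """8-bit LBP code for a 3x3 window: bit i is 1 iff neighbor i is
    strictly greater than the center pixel."""
    center = window[1, 1]
    return ''.join('1' if window[1 + dr, 1 + dc] > center else '0'
                   for dr, dc in OFFSETS)

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels."""
    img = np.asarray(img)
    hist = np.zeros(256, dtype=int)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            hist[int(lbp_code(img[r-1:r+2, c-1:c+2]), 2)] += 1
    return hist

# The slide's example window: pattern 11111100
w = np.array([[100, 101, 103],
              [ 40,  50,  80],
              [ 50,  60,  90]])
print(lbp_code(w))  # -> 11111100
```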

Example: FIDS (Flexible Image Database System) retrieves images similar to the query image, using LBP as the texture measure and comparing LBP histograms.

Example: low-level measures don't always find semantically similar images.

Co-occurrence Matrix Features. A co-occurrence matrix is a 2D array C in which both the rows and the columns represent a set of possible image values. C_d(i,j) indicates how many times value i co-occurs with value j in a particular spatial relationship d. The spatial relationship is specified by a vector d = (dr, dc).

Co-occurrence Example. Gray-tone image:
1 1 0 0
1 1 0 0
0 0 2 2
0 0 2 2
0 0 2 2
0 0 2 2
With d = (3,1), the co-occurrence matrix C_d is:
     j=0 j=1 j=2
i=0:  1   0   3
i=1:  2   0   2
i=2:  0   0   1
From C_d we can compute N_d, the normalized co-occurrence matrix, where each value is divided by the sum of all the values.
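
The example above can be reproduced with a brute-force sketch (assuming, for simplicity, non-negative offsets in d):

```python
import numpy as np

def cooccurrence(img, d, levels):
    """Count C_d(i,j): how often value i at (r,c) co-occurs with value j
    at (r+dr, c+dc), for non-negative offsets (dr, dc)."""
    dr, dc = d
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            C[img[r, c], img[r + dr, c + dc]] += 1
    return C

img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 2, 2],
                [0, 0, 2, 2],
                [0, 0, 2, 2],
                [0, 0, 2, 2]])
C = cooccurrence(img, (3, 1), levels=3)
print(C)          # -> [[1 0 3] [2 0 2] [0 0 1]]
N = C / C.sum()   # normalized co-occurrence matrix N_d
```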

Co-occurrence Features. What do these measure? The standard features are sums over the normalized matrix N_d; for example, energy, Σ_i Σ_j N_d(i,j)², measures the uniformity of the normalized matrix.

But how do you choose d? This is actually a critical question with all the statistical texture methods. Are the texels tiny, medium, large, all three? Not really a solved problem. Zucker and Terzopoulos suggested using a χ² statistical test to select the value(s) of d that have the most structure for a given class of images.

Example

Laws Texture Energy Features. Signal-processing-based algorithms apply texture filters to the image to create filtered images from which texture features are computed. The Laws algorithm:
1. Filter the input image using texture filters.
2. Compute texture energy by summing the absolute value of the filtering results in local neighborhoods around each pixel.
3. Combine features to achieve rotational invariance.

Laws' texture masks (1)

Laws' texture masks (2). Creation of 2D masks: each 2D mask is the outer product of two 1D masks; for example, E5 and L5 combine to give E5L5.

9D feature vector for a pixel:
1. Subtract the mean neighborhood intensity from the (center) pixel.
2. Apply the 16 5x5 masks to get 16 filtered images F_k, k = 1 to 16.
3. Produce 16 texture energy maps using 15x15 windows: E_k[r,c] = sum of |F_k[i,j]| over the 15x15 window centered at (r,c).
4. Obtain the 9 features by combining symmetric pairs of energy maps (e.g., E5L5 with L5E5).
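
The filtering pipeline can be sketched as follows. The 1D masks are the standard Laws masks; the global mean subtraction is a crude stand-in for the local mean-neighborhood step, and 'valid'-mode correlation is used instead of padded filtering for simplicity.

```python
import numpy as np

# Standard 1D Laws masks: level, edge, spot, ripple
MASKS_1D = {'L5': np.array([ 1.,  4., 6.,  4.,  1.]),
            'E5': np.array([-1., -2., 0.,  2.,  1.]),
            'S5': np.array([-1.,  0., 2.,  0., -1.]),
            'R5': np.array([ 1., -4., 6., -4.,  1.])}

def correlate_valid(img, mask):
    """'valid'-mode 2D correlation via a sliding window (no padding)."""
    win = np.lib.stride_tricks.sliding_window_view(img, mask.shape)
    return np.einsum('rcij,ij->rc', win, mask)

def laws_energy_maps(img, window=15):
    """Apply the 16 outer-product masks (e.g. E5L5), then sum absolute
    responses over a window x window neighborhood to get energy maps.
    Symmetric pairs (E5L5 with L5E5, etc.) would then be averaged to
    produce the 9 rotation-invariant features."""
    img = img.astype(float) - img.mean()   # stand-in for local mean removal
    box = np.ones((window, window))
    maps = {}
    for a, ma in MASKS_1D.items():
        for b, mb in MASKS_1D.items():
            F = correlate_valid(img, np.outer(ma, mb))
            maps[a + b] = correlate_valid(np.abs(F), box)
    return maps
```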

Laws Filters

Laws Process

Example: Using Laws features to cluster. Image categories: water, tiger, fence, flag, grass, small flowers, big flowers. Is there a neighborhood size problem with Laws?

Features from sample images

Gabor Filters. Similar approach to Laws: wavelets at different frequencies and different orientations.
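
A sketch of a small Gabor bank. The kernel is the usual Gaussian envelope times an oriented sinusoid; the kernel size, wavelengths, and orientations below are illustrative choices, not values from the slides.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real (cosine) Gabor kernel: Gaussian envelope of width sigma and
    aspect ratio gamma, modulating a sinusoid of wavelength lam oriented
    at angle theta, with phase offset psi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr =  x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

# A small bank: 4 orientations x 2 wavelengths (i.e. 2 frequencies)
bank = [gabor_kernel(21, sigma=4.0, theta=t, lam=l)
        for t in np.arange(4) * np.pi / 4
        for l in (6.0, 12.0)]
```

Texture features are then obtained, as with Laws, by filtering the image with each kernel in the bank and pooling the response energies locally.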

Gabor Filters

Gabor Filters

Segmentation with Color and Gabor-Filter Texture (Smeulders)

A classical texture measure: the autocorrelation function. The autocorrelation function can detect repetitive patterns of texels, and it also characterizes the fineness/coarseness of the texture. Compare the dot product (energy) of the unshifted image with a shifted copy of the image.

Interpreting autocorrelation:
Coarse texture: the function drops off slowly.
Fine texture: the function drops off rapidly.
It can drop off differently for r and c.
Regular textures: the function has peaks and valleys; peaks can repeat far away from [0, 0].
Random textures: only a peak at [0, 0]; the breadth of the peak gives the size of the texture.
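
The shifted dot product can be sketched directly. This minimal version only uses the overlap region and does not renormalize per overlap, so values shrink somewhat with the shift; the stripe pattern is an illustrative example.

```python
import numpy as np

def autocorr(img, dr, dc):
    """Normalized autocorrelation: dot product of the image with a copy
    shifted by (dr, dc), divided by the image's energy (its dot product
    with itself)."""
    img = img.astype(float)
    a = img[:img.shape[0] - dr, :img.shape[1] - dc]
    b = img[dr:, dc:]
    return float((a * b).sum() / (img * img).sum())

# Regular vertical stripes with period 4 along the columns: the function
# drops at small shifts but peaks again near the period.
stripes = np.tile([0.0, 0.0, 1.0, 1.0], (8, 4))   # shape (8, 16)
```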

Fourier power spectrum:
High-frequency power → fine texture.
Concentrated power → regularity.
Directionality → directional texture.
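
As a toy check of the first point, one can compare the share of spectral power above a frequency cutoff for a fine and a coarse synthetic texture; the cutoff and the two patterns below are illustrative assumptions.

```python
import numpy as np

def highfreq_fraction(img, cutoff):
    """Fraction of spectral power at radial frequencies above `cutoff`
    (in cycles/pixel); the DC term is removed by mean subtraction."""
    F = np.fft.fft2(img - img.mean())
    power = np.abs(F) ** 2
    fr, fc = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                         np.fft.fftfreq(img.shape[1]), indexing='ij')
    radius = np.hypot(fr, fc)
    return float(power[radius > cutoff].sum() / power.sum())

fine   = np.indices((16, 16)).sum(0) % 2          # 1-pixel checkerboard
coarse = (np.indices((16, 16)).sum(0) // 4) % 2   # diagonal stripes, period 8
```

The fine checkerboard puts essentially all its power at the highest frequencies, while the coarse stripes concentrate power at the fundamental of their period.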

Blobworld Texture Features. Choose the best scale instead of using fixed scale(s). Used successfully in color/texture segmentation in Berkeley's Blobworld project.

Feature Extraction. Input: an image. Output: pixel features (color features, texture features, position features). Algorithm: select an appropriate scale for each pixel and extract features for that pixel at the selected scale. The texture pixel features are polarity, anisotropy, and texture contrast.

Texture Scale Texture is a local neighborhood property. Texture features computed at a wrong scale can lead to confusion. Texture features should be computed at a scale which is appropriate to the local structure being described. The white rectangles show some sample texture scales from the image.

Scale Selection Terminology.
Gradient of the L* component (assuming the image is in the L*a*b* color space): ∇I.
Symmetric Gaussian: G_σ(x, y) = G_σ(x) G_σ(y).
Second moment matrix: M_σ(x, y) = G_σ(x, y) * (∇I)(∇I)^T, i.e., the Gaussian-smoothed 2x2 matrix of gradient products
[ I_x²     I_x I_y ]
[ I_x I_y  I_y²    ]
Notes: G_σ(x, y) is a separable approximation to a Gaussian. σ is the standard deviation of the Gaussian, taken from {0, 0.5, 1, …, 3.5}. σ controls the size of the window around each pixel (window sizes 1, 2, 5, 10, 17, 26, 37, 50). M_σ(x, y) is a 2x2 matrix computed at the different scales defined by σ.
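
The second moment matrix can be sketched with separable Gaussian smoothing of the gradient products. This is a plain-NumPy sketch: the kernel radius of 3σ + 1 and the zero-padded 'same' convolution are implementation assumptions, not choices from the slides.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of the given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing: G_sigma(x, y) = G_sigma(x) G_sigma(y)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma) + 1)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def second_moment_matrix(L, sigma):
    """M_sigma = G_sigma * (grad I)(grad I)^T, returned as the three
    distinct entries (Ix*Ix, Ix*Iy, Iy*Iy) smoothed at scale sigma."""
    Iy, Ix = np.gradient(L.astype(float))
    return smooth(Ix * Ix, sigma), smooth(Ix * Iy, sigma), smooth(Iy * Iy, sigma)
```

For a pure intensity ramp down the rows, the smoothed matrix at an interior pixel reduces to I_y² = 1 with zero off-diagonal terms, as expected.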

Scale Selection (continued). Use polarity (a measure of the extent to which the gradient vectors in a neighborhood all point in the same direction) to select the scale at which M_σ is computed:
Edge: polarity is close to 1 for all scales σ.
Texture: polarity varies with σ.
Uniform region: polarity takes on arbitrary values.

Scale Selection (continued). Polarity: p_σ = |E₊ − E₋| / (E₊ + E₋), where n̂ is a unit vector perpendicular to the dominant orientation in the window. The notation [x]₊ means x if x > 0, else 0; [x]₋ means x if x < 0, else 0. E₊ and E₋ are the Gaussian-weighted sums of the positive and negative parts of ∇I · n̂ over the window. We can think of E₊ and E₋ as measures of how many gradient vectors in the window are on the positive side and how many are on the negative side of the dominant orientation.

Scale Selection (continued). Texture scale selection is based on the derivative of the polarity with respect to scale σ. Algorithm:
1. Compute polarity at every pixel in the image for σ_k = k/2, k = 0, 1, …, 7.
2. Convolve each polarity image with a Gaussian with standard deviation 2σ_k to obtain a smoothed polarity image.
3. For each pixel, the selected scale σ* is the first value of σ for which the difference between the polarity values at successive scales is less than 2 percent.

Texture Feature Extraction. Extract the texture features at the selected scale σ*:
Polarity: p = p_σ*.
Anisotropy: a = 1 − λ₂/λ₁, where λ₁ ≥ λ₂ are the eigenvalues of M_σ*. The ratio λ₂/λ₁ measures the degree of orientation: when λ₁ is large compared to λ₂, the local neighborhood possesses a dominant orientation; when they are close, there is no dominant orientation; when both are small, the local neighborhood is approximately constant.
Local contrast: c = 2(λ₁ + λ₂)^(3/2).
A pixel is considered homogeneous if λ₁ + λ₂ is less than a local threshold.
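
For a single pixel, the eigenvalues of the 2x2 second moment matrix and the features above have a closed form. This is a scalar sketch; the 3/2 exponent for contrast follows the slide as written.

```python
import numpy as np

def texture_features(mxx, mxy, myy):
    """Eigenvalues l1 >= l2 of the 2x2 matrix [[mxx, mxy], [mxy, myy]],
    plus the anisotropy a = 1 - l2/l1 and contrast c = 2*(l1+l2)**1.5
    built from them (scalar version, for one pixel)."""
    tr, det = mxx + myy, mxx * myy - mxy * mxy
    disc = np.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    a = 1.0 - l2 / l1 if l1 > 0 else 0.0   # degree of orientation
    c = 2.0 * (l1 + l2) ** 1.5             # texture contrast
    return l1, l2, a, c
```

A strongly oriented neighborhood (one large eigenvalue) gives anisotropy near 1; an isotropic one (equal eigenvalues) gives 0.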

Blobworld Segmentation Using Color and Texture

Application to Protein Crystal Images. K-means clustering result (number of clusters equal to 10; similarity measure is Euclidean distance). Different colors represent different textures. The original image is in PGM (Portable Gray Map) format.


References
C. Carson, S. Belongie, H. Greenspan, and J. Malik, "Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 1026-1038, 2002.
W. Förstner, "A Framework for Low Level Feature Extraction," Proc. European Conference on Computer Vision, pp. 383-394, 1994.