Texture. Frequency Descriptors

Texture

The most fundamental question is: how can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual pixels. Since the repetitive local arrangement of intensities determines the texture, we have to analyze neighborhoods of pixels to measure texture properties.

One possible approach is to perform local Fourier transforms of the image. From these we can derive information on the contribution of different spatial frequencies and the dominant orientation(s) in the local texture. For both kinds of information, only the power (magnitude) spectrum needs to be analyzed.

Prior to the Fourier transform, multiply the image with a Gaussian function to avoid horizontal and vertical phantom lines. In the power spectrum, use ring filters of different radii to extract the contributions of the different frequency bands. Also in the power spectrum, apply wedge filters at different angles to obtain information on the dominant orientation of edges in the texture.

The resulting frequency and orientation data can be normalized, for example, so that the sum across frequency or orientation bands is 1. This effectively turns them into histograms that are less affected by monotonic gray-level changes caused by shading etc. However, it is recommended to combine frequency-based approaches with space-based approaches.
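The steps above (Gaussian windowing, power spectrum, ring and wedge binning) can be sketched as follows. This is a minimal illustration, not a reference implementation; the function name, the Gaussian width, and the bin counts n_rings and n_wedges are arbitrary choices for the example.

```python
import numpy as np

def fourier_texture_features(patch, n_rings=4, n_wedges=4):
    """Ring (frequency-band) and wedge (orientation) energies of a
    patch's power spectrum, each normalized to sum to 1."""
    p = np.asarray(patch, dtype=float)
    p = p - p.mean()                       # remove DC so ring 0 reflects texture, not brightness
    h, w = p.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Gaussian window to avoid horizontal and vertical phantom lines
    sigma = min(h, w) / 4.0
    window = np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)))
    power = np.abs(np.fft.fftshift(np.fft.fft2(p * window))) ** 2
    # polar coordinates of every sample in the shifted spectrum
    fy, fx = yy - cy, xx - cx
    radius = np.hypot(fy, fx)
    angle = np.mod(np.arctan2(fy, fx), np.pi)  # orientations fold over at 180 degrees
    # ring filters: bin the power by radius; wedge filters: bin it by angle
    ring_idx = np.minimum((radius / radius.max() * n_rings).astype(int), n_rings - 1)
    wedge_idx = np.minimum((angle / np.pi * n_wedges).astype(int), n_wedges - 1)
    rings = np.bincount(ring_idx.ravel(), weights=power.ravel(), minlength=n_rings)
    wedges = np.bincount(wedge_idx.ravel(), weights=power.ravel(), minlength=n_wedges)
    return rings / rings.sum(), wedges / wedges.sum()
```

A fine stripe pattern should place most of its ring energy in a higher-frequency band than a coarse one, which is exactly the kind of distinction these descriptors are meant to capture.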

A simple and popular method for this kind of analysis is the computation of gray-level co-occurrence matrices. To compute such a matrix, we first separate the intensity in the image into a small number of different levels. For example, by dividing the usual brightness values ranging from 0 to 255 by 64, we create the levels 0, 1, 2, and 3. Then we choose a displacement vector d = [di, dj]. The gray-level co-occurrence matrix P(a, b) is then obtained by counting all pairs of pixels separated by d that have gray levels a and b. Afterwards, to normalize the matrix, we determine the sum across all entries and divide each entry by this sum. This co-occurrence matrix contains important information about the texture in the examined area of the image.

[The slide shows an example with 2 gray levels: a local texture patch, a displacement vector d, and the resulting normalized co-occurrence matrix.]

It is often a good idea to use more than one displacement vector, resulting in multiple co-occurrence matrices. The more similar the matrices of two textures are, the more similar the textures themselves usually are. This means that the difference between corresponding elements of these matrices can be taken as a similarity measure for textures. Based on such measures we can use texture information to enhance the detection of regions and contours in images.

For a given co-occurrence matrix P(a, b), we can compute six important characteristics. [The slide lists their formulas.]
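The counting and normalization procedure described above can be sketched directly. This is an illustrative implementation, not the course's code; the straightforward double loop mirrors the verbal description ("count all pairs of pixels separated by d"), and energy is shown as one commonly used co-occurrence characteristic.

```python
import numpy as np

def cooccurrence_matrix(image, d, levels=4):
    """Normalized gray-level co-occurrence matrix for displacement d = (di, dj).
    Quantizes 0..255 intensities into `levels` bins (levels=4 corresponds to
    dividing the brightness values by 64, as in the text)."""
    q = (np.asarray(image, dtype=int) * levels) // 256  # gray levels 0..levels-1
    di, dj = d
    h, w = q.shape
    P = np.zeros((levels, levels))
    # count all pixel pairs separated by the displacement vector d
    for i in range(h):
        for j in range(w):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < h and 0 <= j2 < w:
                P[q[i, j], q[i2, j2]] += 1
    return P / P.sum()  # normalize so the entries sum to 1

def energy(P):
    """One commonly used co-occurrence characteristic: the sum of squared entries."""
    return float((P ** 2).sum())
```

In practice one would compute such matrices for several displacement vectors and compare the corresponding entries as a texture similarity measure, as described above.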

Classification Performance

You should compute these six characteristics for multiple displacement vectors, including different directions. The maximum length of your displacement vectors depends on the size of the texture elements.

Laws' measures use a set of convolution filters to assess gray level, edges, spots, ripples, and waves in textures. This method starts with three basic filters:

averaging: L3 = (1, 2, 1)
first derivative (edges): E3 = (-1, 0, 1)
second derivative (curvature): S3 = (-1, 2, -1)

Convolving these filters with themselves and each other results in five new filters:

L5 = (1, 4, 6, 4, 1)
E5 = (-1, -2, 0, 2, 1)
S5 = (-1, 0, 2, 0, -1)
R5 = (1, -4, 6, -4, 1)
W5 = (-1, 2, 0, -2, 1)

Now we can multiply any two of these vectors, using the first one as a column vector and the second one as a row vector, resulting in 25 Laws masks of size 5×5. [The slide shows an example mask.]

Now you can apply the resulting 25 convolution filters to a given image. The 25 resulting values at each position in the image are useful descriptors of the local texture. Laws' texture energy measures are easy to apply, efficient, and give good results for most texture types. However, co-occurrence matrices are more flexible; for example, they can be scaled to account for coarse-grained textures.
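The construction of the 25 masks is a few lines of numpy: each mask is the outer product of two of the five 1-D vectors, with the first vector used as a column and the second as a row. This is only a sketch of the mask construction; applying the masks to an image would additionally require a 2-D convolution.

```python
import numpy as np

# Laws' five 1-D filter vectors (obtained by convolving the 3-element
# filters L3, E3, S3 with themselves and each other)
L5 = np.array([1, 4, 6, 4, 1])    # level (averaging)
E5 = np.array([-1, -2, 0, 2, 1])  # edge
S5 = np.array([-1, 0, 2, 0, -1])  # spot
R5 = np.array([1, -4, 6, -4, 1])  # ripple
W5 = np.array([-1, 2, 0, -2, 1])  # wave

def laws_masks():
    """All 25 Laws masks of size 5x5: outer products of the 1-D vectors,
    first vector as column, second as row."""
    vecs = [L5, E5, S5, R5, W5]
    return [np.outer(a, b) for a in vecs for b in vecs]
```

As a sanity check, convolving L3 = (1, 2, 1) with itself indeed yields L5, and convolving E3 = (-1, 0, 1) with L3 yields E5, matching the construction described in the text.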

Clearly, local intensity gradients are important in describing textures. However, the orientation of these gradients may be more important than their magnitude (steepness). After all, changing the contrast of a picture does not fundamentally change the texture, while it modifies the magnitude of all gradients.

The technique of local binary patterns (LBP) uses histograms to represent the relative frequencies of different gradients. Here, the gradient at a given pixel p is computed by comparing its intensity with that of a number of neighboring pixels. Those could be immediate neighbors or pixels located at larger distances. Such neighborhoods are characterized by the number of neighbors and their distance from p (the "radius"). For example, a popular choice is an (8, 2) neighborhood, which uses 8 neighbors at a radius of 2 pixels.

When characterizing gradients, we are typically only interested in uniform binary patterns. These are the patterns that contain at most two transitions from 0 to 1 or vice versa when we read them along the full circle. The first example pattern on the slide is uniform, because it has only two transitions, between positions 1 and 2 and between positions 6 and 7 in the binary string. The second example pattern has four transitions and is thus not uniform.

The total number of uniform 8-bit patterns is 58. We build a histogram with 59 bins, with each of the first 58 bins indicating how often one of the uniform patterns was found within the image area P that we want to analyze. The 59th bin shows how many non-uniform patterns were encountered. We normalize the histogram by dividing each of its 59 entries by the number of pixels in P. The resulting 59-dimensional vector is the LBP descriptor of the texture patch.
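The 59-bin descriptor described above can be sketched as follows. For simplicity this sketch uses the 8 immediate neighbors (radius 1) rather than the (8, 2) neighborhood mentioned in the text, which would sample at distance 2 instead; the function and table names are illustrative.

```python
import numpy as np

# offsets of the 8 immediate neighbors, read along the circle
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def is_uniform(bits):
    """True if the circular bit string has at most two 0/1 transitions."""
    return sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits))) <= 2

# map each uniform 8-bit pattern to one of the first 58 bins;
# bin 58 (the 59th) collects all non-uniform patterns
UNIFORM_BIN = {}
for code in range(256):
    if is_uniform([(code >> k) & 1 for k in range(8)]):
        UNIFORM_BIN[code] = len(UNIFORM_BIN)

def lbp_histogram(image):
    """Normalized 59-bin LBP descriptor of a gray-level patch."""
    img = np.asarray(image, dtype=float)
    rows, cols = img.shape
    hist = np.zeros(59)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            code = 0
            # compare the center pixel with each neighbor to form the bit pattern
            for k, (di, dj) in enumerate(OFFSETS):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << k
            hist[UNIFORM_BIN.get(code, 58)] += 1
    return hist / hist.sum()  # normalize by the number of pixels analyzed
```

Note that the table construction confirms the count stated in the text: exactly 58 of the 256 possible 8-bit patterns are uniform.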

Texture Segmentation Benchmarks

[The slide shows classification performance results and a benchmark image for texture segmentation; an ideal segmentation algorithm would divide this image into five segments.] For example, a texture-descriptor-based variant of split-and-merge may be able to achieve good results.