Multiscale Image Analysis with Stochastic Texture Differences

Multiscale Image Analysis with Stochastic Texture Differences. Nicolas Brodu, Hussein Yahia. INRIA Bordeaux Sud-Ouest, 16/01/2014.

The Idea. Around the current pixel, take a left neighborhood and a right neighborhood: how do the textures in the two neighborhoods compare? Can we define a kind of texture gradient, sensitive to texture changes instead of gray-level changes?

Texture Representation. A probability distribution H governs which pixels of the neighborhood are visited: a half-Gaussian of deviation λ, the length scale of the neighborhood. A random path is then drawn: the start point is drawn according to H, and the transitions are drawn so that H is the limit distribution of the implied Markov chain. Average spatial extent = λ; average path length = m.
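For concreteness, here is a minimal Python sketch of such a path sampler. The slide does not give the exact transition kernel, so a plain Metropolis random walk targeting a Gaussian of scale λ stands in for it, and the restriction of paths to one half-neighborhood (e.g. left or right of the current pixel) is left out; the name sample_path and all defaults are illustrative.

```python
import numpy as np

def sample_path(image, x, y, lam=1.5, m=13, rng=None):
    # Illustrative sketch only: draw a length-m random path around pixel
    # (x, y) whose stationary distribution approximates a (half-)Gaussian
    # of deviation lam centred on (x, y), using a Metropolis random walk.
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]

    def log_H(px, py):
        # Visiting distribution H: Gaussian of deviation lam around (x, y)
        return -((px - x) ** 2 + (py - y) ** 2) / (2.0 * lam ** 2)

    # Start point drawn (approximately) according to H
    cx = int(np.clip(round(x + lam * rng.standard_normal()), 0, w - 1))
    cy = int(np.clip(round(y + lam * rng.standard_normal()), 0, h - 1))

    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    values = []
    for _ in range(m):
        values.append(image[cy, cx])
        # Propose a move to a 4-neighbour; accept it with the Metropolis rule
        # so that H is the limit distribution of the implied Markov chain.
        dx, dy = steps[rng.integers(4)]
        nx = int(np.clip(cx + dx, 0, w - 1))
        ny = int(np.clip(cy + dy, 0, h - 1))
        if np.log(rng.uniform()) < log_H(nx, ny) - log_H(cx, cy):
            cx, cy = nx, ny
    return np.array(values)  # one observed series s in V^m
```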

Texture Representation. Pixel values v ∈ V (e.g. [0..255]). Each path yields a series of pixel values s ∈ V^m; for example, with m = 13: s = (233, 194, 120, 120, 134, 191, 170, 191, 134, 133, 133, 108, 159).

Texture Representation. A texture is a probability distribution over V^m, i.e. a distribution of sequences: collectively, these sequences characterize how pixel values evolve in the texture. The observed series are samples of that distribution. Collect n observed series and build an estimator for the probability distribution using a Reproducing Kernel Hilbert Space ℋ and a characteristic kernel k such that k(s, ·) ∈ ℋ. Empirical estimator: P ≈ (1/n) Σ_{i=1}^{n} k(s_i, ·).

Comparing textures. Texture on the left = P, texture on the right = Q. Use the RKHS norm: d(P,Q) = ||P − Q||_ℋ, with ||P − Q||²_ℋ = ⟨P − Q, P − Q⟩_ℋ, estimated as d²(P,Q) ≈ (Σ_{i,j} k(s_i,s_j) + Σ_{i,j} k(t_i,t_j) − 2 Σ_{i,j} k(s_i,t_j)) / n², with {s} and {t} samples from P and Q. This is the MMD test (Gretton et al., 2012). It beats χ² or Kolmogorov-Smirnov, especially for small n. The error is in O(n^(−1/2)) and does not depend on dim(V^m), just on n. Valid for any kernel, not limited to gray scale: vector data (RGB, hyperspectral), strings, graphs.
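A direct transcription of this estimator into Python might look as follows; it is only a sketch of the biased MMD² estimator quoted on the slide, assuming both textures contribute the same number n of observed series.

```python
def mmd_squared(S, T, kernel):
    # Biased empirical MMD^2 between two textures, each given as n observed
    # series (the elements of S and T), following the slide:
    # d^2(P,Q) ~ ( sum_ij k(s_i,s_j) + sum_ij k(t_i,t_j) - 2 sum_ij k(s_i,t_j) ) / n^2
    n = len(S)
    k_ss = sum(kernel(si, sj) for si in S for sj in S)
    k_tt = sum(kernel(ti, tj) for ti in T for tj in T)
    k_st = sum(kernel(si, tj) for si in S for tj in T)
    return (k_ss + k_tt - 2.0 * k_st) / n ** 2
```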

Scalar pixels (e.g. gray scale). Scale the data: s_i' = s_i / κ, where κ is the characteristic data scale (e.g. a gray-level difference) at which the texture is best described. Small κ: sensitive to small variations, but large gradients give similar k(s,t) values. Large κ: distinguishes large gray-level gradients, but small variations (e.g. noise) are ignored. The inverse quadratic kernel k(s,t) = 1 / (1 + ||s' − t'||² / m), using the norm in V^m, is characteristic and faster than the Gaussian kernel; normalizing by m keeps kernel values comparable.
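A sketch of this kernel, under the reading that "normalize by m" means dividing the squared norm in V^m by the path length m (consistent with the colour kernel later on); the default κ=0.36 is the value quoted on a later slide for pixel values in [0, 1].

```python
import numpy as np

def inverse_quadratic_kernel(s, t, kappa=0.36):
    # k(s,t) = 1 / (1 + ||s' - t'||^2 / m) with s' = s / kappa, t' = t / kappa.
    # kappa is the characteristic data scale; dividing by m = len(s) keeps
    # kernel values comparable across path lengths.
    s_scaled = np.asarray(s, dtype=float) / kappa
    t_scaled = np.asarray(t, dtype=float) / kappa
    m = len(s_scaled)
    return 1.0 / (1.0 + np.sum((s_scaled - t_scaled) ** 2) / m)
```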

Combining directions. For the diagonals, use the same scale λ and adapt the Markov chain. Combine the directions in the product space ℋ_L/R ⊗ ℋ_U/D ⊗ ℋ_DL/UR ⊗ ℋ_DR/UL, whose norm gives, for each pixel x, d(x)² = d_L/R(x)² + d_U/D(x)² + d_DL/UR(x)² + d_DR/UL(x)² (L, R, U, D = left, right, up, down, plus the diagonals). Easy extensions if need be: any angle θ, voxels / higher dimensions, anisotropy.
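Putting the pieces together, a slow but readable sketch of the per-pixel texture difference d(x) could look like this. It reuses sample_path, inverse_quadratic_kernel and mmd_squared from the sketches above; the λ=1.5, κ=0.36 defaults are those quoted on the next slide, and shifting the path origin by one pixel per direction is a simplification of the half-neighborhood construction, not the authors' exact scheme.

```python
import numpy as np

def texture_edge_map(image, lam=1.5, kappa=0.36, m=13, n=32, seed=0):
    # For every pixel, compare the textures on the two sides of each of the
    # four axes (L/R, U/D and the two diagonals) with the MMD, then combine
    # them as d(x)^2 = sum of the per-axis squared texture differences.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    kernel = lambda s, t: inverse_quadratic_kernel(s, t, kappa)
    axes = [((-1, 0), (1, 0)), ((0, -1), (0, 1)),
            ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    d = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d2 = 0.0
            for (ax1, ay1), (ax2, ay2) in axes:
                S = [sample_path(image, x + ax1, y + ay1, lam, m, rng) for _ in range(n)]
                T = [sample_path(image, x + ax2, y + ay2, lam, m, rng) for _ in range(n)]
                d2 += max(mmd_squared(S, T, kernel), 0.0)
            d[y, x] = np.sqrt(d2)
    return d  # rendering min/max d(x) as white/black gives the edge map
```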

Result: an edge detector. Original gray-scale image and d(x) for each pixel x, with λ=1.5 and κ=0.36 (for v in [0..1]); white/black is min/max d(x).

...that can analyze at different scales. The original image contains JPEG quantization artifacts, large contrast differences (coat, tripod, handle) and a fine grass texture with white / mid-grey contrast; it is compared with analyses at λ=3, κ=1/256; λ=1, κ=1; and λ=3, κ=1. Edges at small λ: sharp, but some JPEG blocks are still visible, and λ is too small for the grass texture. Edges at medium λ: smoother grass texture once λ matches it. Edges at small κ: sensitive to small gray differences (JPEG artifacts, texture within the coat) and ignore medium/large differences (no grass, no coat edges, no tripod poles). Edges at large κ: ignore the JPEG artifacts and highlight the large contrasts.

Application to remote sensing. Here the spatial (λ) and data (κ) scales are known a priori. Small κ is not necessarily noise: e.g. large trends may be sensor drift while small variations carry information. For over- or sub-sampled signals, λ should match the characteristic physical scale, not the sampling rate. Example: sea surface temperature. Left: 8-day composite MODIS data, blue = -1.2 °C to red = 31.5 °C, black = land masses (no data). Right: analysis at λ ≈ 75 km (at center) and κ ≈ 1 °C, typical oceanic current scales.

Detecting characteristic scales: finding λ and κ without a priori information. Analyse the image for a given pair of scales (λ, κ), retain the 20% most discriminative points, reconstruct the image from these points, and compare with the original. Hypothesis: IF the points carry most of the information in the image, THEN the reconstruction will be good. Reconstruction accuracy thus serves as a proxy for good (λ, κ), measured with the Peak Signal-to-Noise Ratio (PSNR) and the Structural SIMilarity index (SSIM).
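A hedged sketch of this scale search: texture_edge_map is the earlier sketch, the "most discriminative points" are taken to be those with the largest d(x), and since the slide does not specify the reconstruction scheme, nearest-neighbour interpolation from the retained pixels stands in for it. Pixel values are assumed to lie in [0, 1].

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_scales(image, lam_values, kappa_values, keep=0.20):
    # For each (lambda, kappa): keep the 20% most discriminative pixels,
    # rebuild the image from them, and score the reconstruction with PSNR
    # and SSIM; high scores flag candidate characteristic scales.
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    scores = {}
    for lam in lam_values:
        for kappa in kappa_values:
            d = texture_edge_map(image, lam, kappa)
            mask = d >= np.quantile(d, 1.0 - keep)   # 20% largest d(x)
            recon = griddata((ys[mask], xs[mask]), image[mask],
                             (ys, xs), method='nearest')
            scores[(lam, kappa)] = (
                peak_signal_noise_ratio(image, recon, data_range=1.0),
                structural_similarity(image, recon, data_range=1.0),
            )
    return scores  # inspect for zones of high PSNR / SSIM and common maxima
```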

Multiscale analysis: PSNR. Definite zones of high accuracy indicate the best (λ, κ). Local maxima (cameraman, house) correspond to objects in the image with their own properties. Zones at low κ (s.s.t., house) have high PSNR but match noise: for the house, the brick texture; for the sea surface temperature (s.s.t.), noise at 0.1 °C.

Multiscale analysis: SSIM. The same general transitions appear, so the (λ, κ) regions below them are irrelevant. Zones of local maxima: SSIM and PSNR form a Pareto front for the best (λ, κ); are the common maxima the best (λ, κ)? Something special shows up for Barbara at λ ≈ 2.5 and large κ, in both PSNR and SSIM.

Barbara, λ ≈ 2.5, large gradient? Quite obvious, but a nice validation of the method.

Reconstruction from 20% of the points: λ=1.5, κ=0.12 gives PSNR=17.6, SSIM=0.73; λ=3.5, κ=0.64 gives PSNR=17.7, SSIM=0.67. Texture edges are preserved; details smaller than λ are smoothed out!

Colored Textures. The method is valid for any kernel acting on V^m, especially for vector data: color spaces, hyperspectral, etc. For RGB triplets v = {r,g,b} ∈ V, convert to Lab space with the D65 white point: ℓ(v) ∈ ℒ. Use one of two operators: δ1(v,w) = ||ℓ(v) − ℓ(w)||_ℒ (Lab is perceptually uniform, so the norm in Lab is presumably a sensitivity to color difference), or δ2(v,w) = ΔE(ℓ(v), ℓ(w)), the CIE DE 2000 updated formula for better perceptual uniformity. Then apply the updated kernel k_{1,2}(s,t) = 1 / (1 + (1/m) Σ_i (δ_{1,2}(v_i, w_i)/κ)²).
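A sketch of this colour variant with scikit-image (rgb2lab uses the D65 white point by default, and deltaE_ciede2000 implements the CIE DE 2000 formula); the series are assumed to be (m, 3) arrays of RGB values in [0, 1], and the κ default is purely illustrative, not a value from the slides.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def color_kernel(s_rgb, t_rgb, kappa=10.0, use_ciede2000=True):
    # k(s,t) = 1 / (1 + (1/m) * sum_i (delta(v_i, w_i) / kappa)^2), where
    # delta is either the Euclidean norm in Lab (delta_1 on the slide) or
    # the CIE DE 2000 difference (delta_2).
    s_lab = rgb2lab(np.asarray(s_rgb, dtype=float)[:, None, :])[:, 0, :]
    t_lab = rgb2lab(np.asarray(t_rgb, dtype=float)[:, None, :])[:, 0, :]
    if use_ciede2000:
        delta = deltaE_ciede2000(s_lab, t_lab)
    else:
        delta = np.linalg.norm(s_lab - t_lab, axis=-1)
    m = len(delta)
    return 1.0 / (1.0 + np.sum((delta / kappa) ** 2) / m)
```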

Color Results

Summary. Multiscale image analysis: find or use the characteristic spatial (λ) and data-value (κ) scales within the image. With stochastic texture differences: a statistical description of the texture, where the norm of a difference (in Hilbert space) behaves like the norm of a gradient (= difference), but for textures! And for vector data: shown with RGB, but anything with a kernel works. Reference: N. Brodu, H. Yahia, "Multiscale image analysis with stochastic texture differences", to appear online in 2015.