Image Intensity and Point Operations
Dr. Edmund Lam
Department of Electrical and Electronic Engineering, The University of Hong Kong
ELEC4245: Digital Image Processing (Second Semester, 2015-16)
http://www.eee.hku.hk/elec4245

Motivation

A digital image is a matrix of numbers, each corresponding to a certain brightness. The imaging chain runs: sensor -> A/D converter -> digital image -> D/A converter -> display. The sensor and the display each have a finite dynamic range, and the digital image has a finite representation.

Gray Levels

These numbers are called intensities, or gray levels. They
- must be nonnegative,
- must fall within a range of discrete values (the dynamic range),
- are measured by the number of bits.

How many gray levels are enough? Often, 8 bits: 2^8 = 256 levels. Your computer likes it: 8 bits = 1 byte, so an image of size X x Y can be stored with XY bytes (each pixel needs 1 byte to store its intensity). In reality we need much less, due to compression. But other bit depths exist:
- Printing: we may only have 1 bit (ink or no ink at a specific location).
- High dynamic range (HDR) imaging: with better sensors and displays, we may record and show a wider range.

What is the limit? Our eyes can see about 14 orders of magnitude!
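As a quick sanity check on the bit-depth arithmetic above, here is a minimal Python sketch; the helper names `gray_levels` and `storage_bytes` are illustrative, not part of the course material:

```python
def gray_levels(bits):
    """Number of distinct gray levels representable with a given bit depth."""
    return 2 ** bits

def storage_bytes(width, height, bits=8):
    """Uncompressed storage for a width x height single-channel image."""
    return width * height * bits // 8

print(gray_levels(8))           # 256 levels for an 8-bit image
print(gray_levels(1))           # 2 levels: the 1-bit "ink or no ink" case
print(storage_bytes(640, 480))  # 307200 bytes: one byte per pixel
```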

Gray Levels

[Figure: the same image quantized to 8, 7, 6, 5, 4, 3, 2, and 1 bits.]

Gray-level mapping

We will focus on gray-level images, as the notation and concepts are much easier to understand. For color images, we can always perform such operations on the luminance channel (more later).

Let the input image be represented by I_in(x, y). We process the image, and the output is represented by I_out(x, y). The simplest kind of processing is a point-wise operation:

    I_out(x, y) = T{ I_in(x, y) }

where T
- can be a one-to-one mapping (reversible),
- can be a many-to-one mapping (irreversible),
- cannot be a one-to-many mapping.

For every pixel, we change the intensity from an input value to an output value. The algorithm can be represented by an input-output plot of output intensity against input intensity, and it can usually be implemented as a look-up table (LUT) for maximum efficiency.
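The LUT idea can be sketched in a few lines of Python: precompute T once for all 256 possible inputs, then map every pixel through the table. The images here are small nested lists purely for illustration:

```python
def apply_lut(image, lut):
    """Apply a 256-entry look-up table to every pixel of a gray-level image."""
    return [[lut[p] for p in row] for row in image]

# Example T: the negative mapping, built once as a table.
negative_lut = [255 - r for r in range(256)]

image = [[0, 100], [200, 255]]
print(apply_lut(image, negative_lut))  # [[255, 155], [55, 0]]
```

However T is defined, the per-pixel cost is a single table lookup, which is why LUTs are the implementation of choice.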

Gray-level mapping

A LUT is the most flexible, but conceptually, let's consider formulas:

    I_out(x, y) = 0 if I_in(x, y) < T, and 255 if I_in(x, y) >= T    Threshold (1)
    I_out(x, y) = 255 - I_in(x, y)                                   Negative (2)
    I_out(x, y) = c log[1 + I_in(x, y)]                              Logarithm (3)
    I_out(x, y) = c [I_in(x, y)]^gamma                               Power-law (4)

Pick c and gamma so that I_out(x, y) stays within [0, 255].

Threshold

The output is a binary image. T can be set at the mid-point of the intensity range (i.e., 128), but any other number is also fine. Theoretically, we lose 7/8 of the total information! But surprisingly, we retain most of the useful information. Thresholding is often used as part of a computer vision process, e.g., in pattern recognition or defect detection.
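The four mappings (1)-(4) can be written directly as per-pixel functions. This is a minimal sketch for 8-bit intensities, choosing c in each case so that an input of 255 maps to 255; the function names and default parameters are illustrative:

```python
import math

def threshold(r, t=128):
    """Eq. (1): binarize around threshold t."""
    return 0 if r < t else 255

def negative(r):
    """Eq. (2): invert the intensity."""
    return 255 - r

def log_map(r):
    """Eq. (3): logarithmic mapping, scaled so 255 -> 255."""
    c = 255 / math.log(1 + 255)
    return round(c * math.log(1 + r))

def power_law(r, gamma=1.5):
    """Eq. (4): power-law (gamma) mapping, scaled so 255 -> 255."""
    c = 255 / (255 ** gamma)
    return round(c * r ** gamma)

print(threshold(100), negative(100))   # 0 155
print(log_map(255), power_law(255))    # 255 255
```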

Negative

Not used often: for ordinary images it would look funny.

Logarithm

More useful for images we don't normally see, such as medical images.

Power-law

Example (gamma = 1.5).

Bit-plane slicing

Represent each pixel value in binary, and then create a binary image for each bit. Each such image is called a bit-plane. For example, reading planes 8 down to 1:

    180 = 1 0 1 1 0 1 0 0
     53 = 0 0 1 1 0 1 0 1

Plane 8 is the most significant, while plane 1 is the least significant.
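Extracting a bit-plane is a shift and a mask per pixel. A small sketch, using the slide's example values 180 and 53 (the helper name `bit_plane` is illustrative):

```python
def bit_plane(image, k):
    """Binary image containing bit k of every pixel (k = 1 least
    significant ... 8 most significant)."""
    return [[(p >> (k - 1)) & 1 for p in row] for row in image]

image = [[180, 53]]
for k in range(8, 0, -1):
    print(k, bit_plane(image, k)[0])
# Reading the first column top to bottom gives 10110100 (= 180),
# and the second column gives 00110101 (= 53).
```

Summing bit_plane(image, k) * 2^(k-1) over all k reconstructs the original pixel values exactly.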

Bit-plane slicing

[Figure: the eight bit-planes of an image, bit 8 down to bit 1.]

Multiple bit-planes

[Figure: reconstructions from bit 8 alone, bits 8-7, bits 8-6, and so on down to bits 8-1.]

Application: Watermarking

Replace bit-plane 1 with another binary image as a digital watermark. This is one form of digital watermarking: hiding information digitally. It is often used for authentication, for example to show that a certain picture is owned by you. A fancy word for it is steganography: the art or practice of concealing a message, image, or file within another message, image, or file.

The method using bit-plane slicing is simple and easy to implement, and the watermark is easy to detect. Drawback: the watermark is not robust; it can easily be destroyed or replaced. There are much more sophisticated schemes.
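The bit-plane-1 watermark amounts to clearing the least significant bit of each host pixel and writing a watermark bit in its place. A minimal sketch with made-up 2x2 images (`embed_watermark` and `extract_watermark` are illustrative names):

```python
def embed_watermark(image, mark):
    """Set bit 1 of each pixel to the corresponding watermark bit (0 or 1)."""
    return [[(p & ~1) | b for p, b in zip(row, mrow)]
            for row, mrow in zip(image, mark)]

def extract_watermark(image):
    """Read bit-plane 1 back out as the watermark."""
    return [[p & 1 for p in row] for row in image]

host = [[180, 53], [200, 201]]
mark = [[1, 0], [0, 1]]
marked = embed_watermark(host, mark)
print(marked)                      # [[181, 52], [200, 201]]
print(extract_watermark(marked))   # [[1, 0], [0, 1]]
```

Each pixel changes by at most 1 gray level, which is why the watermark is invisible; it is also why any re-quantization or noise destroys it.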

Histogram

Each pixel has a value (intensity). By collecting all the pixels together, we can form a histogram. The spatial information is lost! The histogram can give us a vague idea of the intensity concentrations.

[Figure: histograms of an image that is too dark, one that is too bright, and an equalized one.]

The histogram can be helpful to provide the curve for gray-level mapping. Histogram equalization: the output image has (roughly) the same number of pixels at each gray level (hence "equalized").
- Good thing: it makes use of all available gray levels to the maximum extent.
- Reality: this is only approximate, because we are not allowed a one-to-many mapping (see the next example).
- Conceptually (for 8-bit): the lowest 1/256 of all pixel intensities map to intensity 0, the next 1/256 map to intensity 1, the next to 2, and so on.
- It mainly works when the illumination conditions have problems.
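Forming the histogram is just counting: h[r] is the number of pixels at intensity r, and the pixel positions play no role. A minimal sketch on a tiny 2x3 image (the `histogram` helper is illustrative):

```python
def histogram(image, levels=256):
    """Count how many pixels take each intensity value 0..levels-1."""
    h = [0] * levels
    for row in image:
        for p in row:
            h[p] += 1
    return h

image = [[0, 0, 1], [2, 1, 0]]
h = histogram(image, levels=4)
print(h)  # [3, 2, 1, 0]: three pixels at level 0, two at 1, one at 2
```

Note that the counts always sum to the number of pixels, and that any rearrangement of the pixels gives the same histogram, which is exactly the loss of spatial information mentioned above.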

Example: 3-bit image, 64 x 64 pixels. Assume the following:

    gray level    number of pixels
    0             790
    1             1023
    2             850
    3             656
    4             329
    5             245
    6             122
    7             81

1. Gray levels: [0, ..., 7], total 4096 pixels.
2. Proportion of input pixels at level 0: 790/4096 = 0.19. We need to fill the entire range of 0 to 7, so such pixels should map to 0.19 x 7 = 1.33. Rounding to the nearest integer, we map them to 1.
3. Proportion of input pixels at levels 0 and 1: (790 + 1023)/4096 = 0.44. Level 1 should map to 0.44 x 7 = 3.08, so 3.
4. Proportion of input pixels at levels 0 to 2: (790 + 1023 + 850)/4096 = 0.65. Level 2 should map to 0.65 x 7 = 4.55, so 5.
5. Similarly: level 3 -> 6, level 4 -> 6, level 5 -> 7, level 6 -> 7, level 7 -> 7.

Generally, assume L levels, j = 0, ..., L-1, and an image of size M x N. Let n_j denote the number of pixels at level j. We compute, for each k,

    s_k = (L - 1)/(MN) * sum_{j=0}^{k} n_j,    k = 0, 1, ..., L-1    (5)

so each s_k is the ideal output level for an input level k. We are limited to integer output levels, so we quantize s_k.

[Figure: the input histogram, the resulting input-to-output intensity mapping, and the output histogram.]

Note that the output histogram is roughly flat, but not strictly.
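Eq. (5) with rounding can be coded directly, and it reproduces the worked example above. A minimal sketch (the `equalize_mapping` name is illustrative):

```python
def equalize_mapping(counts, L):
    """Histogram-equalization mapping per Eq. (5): s_k is the cumulative
    proportion of pixels up to level k, scaled to [0, L-1] and rounded."""
    total = sum(counts)          # MN, the number of pixels
    mapping, cumulative = [], 0
    for n in counts:
        cumulative += n
        mapping.append(round((L - 1) * cumulative / total))
    return mapping

counts = [790, 1023, 850, 656, 329, 245, 122, 81]
print(equalize_mapping(counts, L=8))  # [1, 3, 5, 6, 6, 7, 7, 7]
```

The output [1, 3, 5, 6, 6, 7, 7, 7] matches steps 2-5 of the example; levels 3 and 4 collide at output 6, which is the many-to-one behaviour that keeps the result only approximately flat.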

Adaptive histogram equalization

Modification: adaptive histogram equalization, based on a portion of the image, e.g., every non-overlapping 16 x 16 block (tile).
- Limit contrast expansion in flat regions by clipping values.
- Smooth blending (bilinear interpolation) between neighboring tiles.

Research question: what is undesirable, and how can the algorithm be improved?

[Figure: global equalization versus adaptive equalization.]

Pointwise operations

We can perform point-by-point (also known as pointwise) operations to combine several images. Assume the images are of the same size:
- addition: I(x, y) = a(x, y) + b(x, y)
- subtraction: I(x, y) = a(x, y) - b(x, y)
- multiplication: I(x, y) = a(x, y) b(x, y)
- division: I(x, y) = a(x, y) / b(x, y)
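All four pointwise combinations share the same structure: walk the two images in lockstep and apply one binary operation per pixel pair. A minimal sketch capturing that in a single helper (`pointwise` is an illustrative name):

```python
def pointwise(a, b, op):
    """Combine two same-size images pixel by pixel with operation op."""
    return [[op(pa, pb) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

a = [[10, 20], [30, 40]]
b = [[1, 2], [3, 4]]
print(pointwise(a, b, lambda x, y: x + y))  # [[11, 22], [33, 44]]
print(pointwise(a, b, lambda x, y: x - y))  # [[9, 18], [27, 36]]
print(pointwise(a, b, lambda x, y: x * y))  # [[10, 40], [90, 160]]
```

Division works the same way, with the usual caveat that b(x, y) must be nonzero.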

Addition and averaging

Assume each image is corrupted by additive white Gaussian noise:

    f_i(x, y) = g(x, y) + n_i(x, y)    (6)

where g(x, y) is the ideal noise-free image, f_i(x, y) is what we capture (the subscript i denotes the i-th one), and n_i(x, y) is the noise. Every pixel of the noise follows a Gaussian distribution with mean zero and the same standard deviation sigma. The standard deviation (or variance sigma^2) of the noise indicates how severely the image is corrupted. Using the expected value E,

    E[n_i(x, y)] = 0           (7)
    E[n_i^2(x, y)] = sigma^2   (8)

[Figure: averaging 1, 8, and 32 noisy images to reduce noise, for noise levels sigma^2 = 0.001 x 255^2, 0.01 x 255^2, and 0.1 x 255^2.]

Assume we now have K images, f_1(x, y), ..., f_K(x, y). Their average is

    f~(x, y) = (1/K) * sum_{i=1}^{K} f_i(x, y)    (9)

Noise in one image:

    E[(f_1(x, y) - g(x, y))^2] = E[n_1^2(x, y)] = sigma^2
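The variance reduction from averaging can be checked numerically. A small simulation using only the standard library, where the flat "scene" value 128, sigma = 20, and K = 16 images are all made-up illustrative numbers:

```python
import random

def average_images(images):
    """Pixel-wise average of K same-size images (1-D pixel lists here)."""
    K = len(images)
    return [sum(pixels) / K for pixels in zip(*images)]

def mse(f, g):
    """Mean squared error between two images: an estimate of the noise variance."""
    return sum((a - b) ** 2 for a, b in zip(f, g)) / len(g)

random.seed(0)
g = [128.0] * 10000                 # ideal noise-free image, per Eq. (6)
sigma = 20.0
noisy = [[p + random.gauss(0, sigma) for p in g] for _ in range(16)]

print(mse(noisy[0], g))             # close to sigma^2 = 400
print(mse(average_images(noisy), g))  # close to sigma^2 / 16 = 25
```

The measured errors track sigma^2 for a single image and sigma^2/K after averaging, which is exactly the derivation in Eqs. (10)-(13).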

Addition and averaging

Noise in the averaged image:

    E[(f~(x, y) - g(x, y))^2] = E[((1/K) sum_{i=1}^{K} f_i(x, y) - g(x, y))^2]    (10)
                              = E[((1/K) sum_{i=1}^{K} n_i(x, y))^2]              (11)
                              = (1/K^2) sum_{i=1}^{K} E[n_i^2(x, y)]              (12)
                              = (1/K^2) * K sigma^2 = sigma^2 / K                 (13)

Eq. (12) is valid provided E[n_i n_j] = 0 when i != j.

Subtraction

Spot the difference: an image with no defect, f_1(x, y), and one with a defect, f_2(x, y). Take the difference f_1(x, y) - f_2(x, y). [Figure: the raw difference with no alignment, versus properly aligned and thresholded.] Research question: how to align?

Multiplication

We can think about how an image is formed (the imaging process):

    f(x, y) = i(x, y) r(x, y)    (14)

where i(x, y) is the illumination source, with 0 < i(x, y), and r(x, y) is the reflectance, with 0 < r(x, y) < 1. Some images are formed by transmission (e.g., x-ray); then r(x, y) is the transmissivity. The values f(x, y) are confined to the available dynamic range when captured by a detector.
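The subtract-then-threshold defect check can be sketched in a few lines. This assumes the two images are already aligned; the image values and the threshold of 30 are illustrative:

```python
def defect_mask(reference, test, t=30):
    """Flag pixels where the aligned test image differs from the
    defect-free reference by at least t gray levels."""
    return [[1 if abs(pr - pt) >= t else 0 for pr, pt in zip(rr, rt)]
            for rr, rt in zip(reference, test)]

reference = [[100, 100, 100], [100, 100, 100]]
test      = [[100, 100, 180], [100,  99, 100]]  # one defect plus tiny noise
print(defect_mask(reference, test))  # [[0, 0, 1], [0, 0, 0]]
```

The threshold absorbs small noise differences (the 99 versus 100 pixel) while keeping the genuine defect, which is why subtraction is usually paired with thresholding as in the slide.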

Other combinations

High dynamic range (HDR) imaging: combining images from different exposures.

Removing occlusion (source: Herley, "Automatic occlusion removal from minimum number of images," ICIP 2005).

Summary

We looked at image enhancement with one or more images as input, considering each pixel location as unrelated to its neighbors. Next: image processing that involves the neighboring pixels.