

THE UNIVERSITY of TEXAS HEALTH SCIENCE CENTER AT HOUSTON
SCHOOL of HEALTH INFORMATION SCIENCES

Image Operations I
For students of HI 5323 Image Processing
Willy Wriggers, Ph.D.
School of Health Information Sciences
http://biomachina.org/courses/processing/03.html

1. Introduction
There are a variety of ways to classify and characterize image operations. Reasons for doing so:
- to understand what type of results we might expect to achieve with a given type of operation
- to understand what the computational burden associated with a given operation might be

Types of Image Operations
The types of operations that can be applied to digital images to transform an input image a[m,n] into an output image b[m,n] (or another representation) can be classified into three categories:
- Point: the output value at a specific coordinate depends only on the input value at that same coordinate. Generic complexity per pixel: constant.
- Local: the output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate. Generic complexity per pixel: P².
- Global: the output value at a specific coordinate depends on all the values in the input image. Generic complexity per pixel: N².
(Image size = N x N; neighborhood size = P x P)
www.ph.tn.tudelft.nl/courses/fip/noframes/fip-characte

Types of Image Operations
[Figure: an input image a[m,n] mapped to an output image b[m,n] (or another representation)]
www.ph.tn.tudelft.nl/courses/fip/noframes/fip-characte

Types of Neighborhoods
Many of the things we'll do involve using neighboring samples. Who is my neighbor? Common approaches:
- 4-connected (N, S, E, W)
- 8-connected (add NE, SE, SW, NW)

Neighborhoods
Neighborhood operations play a key role in modern digital image processing. It is therefore important to understand how images can be sampled and how that relates to the various neighborhoods that can be used to process an image.
- Rectangular sampling: in most cases, images are sampled by laying a rectangular grid over an image
- Hexagonal sampling: an alternative sampling scheme
Both sampling schemes have been studied extensively, and both represent a possible periodic tiling of the continuous image space.

Types of Neighborhoods
Local operations produce an output pixel value b[m=m₀, n=n₀] based upon the pixel values in the neighborhood of a[m=m₀, n=n₀]. Some of the most common neighborhoods are the 4-connected and 8-connected neighborhoods in the case of rectangular sampling, and the 6-connected neighborhood in the case of hexagonal sampling.
www.ph.tn.tudelft.nl/courses/fip/noframes/fip-characte
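
The 4- and 8-connected neighborhoods above can be sketched as coordinate offsets (a minimal illustration; the offset lists and the `neighbors` helper are my own names, not from the lecture):

```python
# 4-connected neighborhood: N, S, W, E offsets from a center pixel.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
# 8-connected neighborhood: add the diagonals NW, NE, SW, SE.
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbors(m, n, rows, cols, offsets):
    """Yield the in-bounds neighbor coordinates of pixel (m, n)."""
    for dm, dn in offsets:
        mm, nn = m + dm, n + dn
        if 0 <= mm < rows and 0 <= nn < cols:
            yield (mm, nn)
```

An interior pixel has 4 or 8 neighbors; pixels on the image border have fewer, which is why the bounds check matters for local operations.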

2. Point-Based Image Arithmetic
Image-image operations: C[x, y] = f(A[x, y], B[x, y])
- Operates on each corresponding point from two (or more) images
- (Usually) requires that both images have the same dimensions, both in their domain and range
web.engr.oregonstate.edu/~enm/cs519

Image Addition
Used to create double exposures: C[x, y] = A[x, y] + B[x, y]
web.engr.oregonstate.edu/~enm/cs519

Image Averaging
Average multiple images (frames) of the same scene together. Useful for removing noise.
web.engr.oregonstate.edu/~enm/cs519
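
A minimal numpy sketch of frame averaging (the scene values and noise level here are made up for illustration): averaging K frames of the same scene reduces the standard deviation of zero-mean noise by roughly a factor of sqrt(K).

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)                  # ideal (noise-free) scene
frames = [clean + rng.normal(0, 10, clean.shape)  # 16 noisy exposures
          for _ in range(16)]

# Pixel-wise mean across the stack of frames.
average = np.mean(frames, axis=0)
```

With 16 frames the residual noise standard deviation is about 10/sqrt(16) = 2.5 gray levels, versus 10 in any single frame.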

Image Subtraction
Useful for finding changes between two images of (basically) the same scene: C[x, y] = A[x, y] − B[x, y]
It is often more useful to use the absolute difference: C[x, y] = |A[x, y] − B[x, y]|
web.engr.oregonstate.edu/~enm/cs519

What's changed?
Background subtraction (absolute difference)
web.engr.oregonstate.edu/~enm/cs519

Motion
Use differencing to identify motion in an otherwise unchanging scene, i.e. object motion, not camera motion.
web.engr.oregonstate.edu/~enm/cs519

Digital Subtraction Angiography
Medical imaging technique used to see blood vessels:
- Take one X-ray
- Inject a contrast agent
- Take another X-ray (and hope the patient hasn't moved, or even breathed too much)
- Subtract the first (background) from the second (background + vessels)
web.engr.oregonstate.edu/~enm/cs519

Multiplication
Useful for masking and alpha blending: C[x, y] = A[x, y] · B[x, y]
web.engr.oregonstate.edu/~enm/cs519

Alpha Blending
Addition of two images, each with fractional (0..1) masking weights. Useful for transparency, compositing, etc. Color images are often stored as RGBα (or RGBA).
web.engr.oregonstate.edu/~enm/cs519

3. Single Image Point Operations
Simplest kind of image enhancement. Also called level operations. Process each point independently of the others. Remaps each sample value: g' = f(g), where
- g is the input value (gray level)
- g' is the new (processed) result
- f is a level operation

Adding a Constant
Simplest level operation: f(g) = g + b for some constant (bias) b
- b > 0: brighter image
- b < 0: darker image

Amplification (Gain)
Another simple level operation is amplification (multiplication): f(g) = ag for some constant gain (amplification) a
- a > 1: amplifies the signal (louder, more contrast)
- a < 1: diminishes the signal (softer, less contrast)

Linear Level Operators
Linear operators combine gain (multiplication) and offset (addition): f(g) = ag + b, where
- a is the gain
- b is the bias
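
The linear level operator f(g) = ag + b can be sketched in numpy as follows. The clipping to the 8-bit range [0, 255] is my addition (the slide states only the formula), but it is needed to keep the output a valid gray-level image:

```python
import numpy as np

def linear_op(img, gain, bias):
    """Apply f(g) = gain*g + bias, clipped to the 8-bit range."""
    out = gain * img.astype(np.float64) + bias
    return np.clip(out, 0, 255).astype(np.uint8)

g = np.array([[0, 64, 128, 255]], dtype=np.uint8)
brighter = linear_op(g, 1.0, 40)   # bias b > 0: brighter image
contrast = linear_op(g, 2.0, 0)    # gain a > 1: more contrast
```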

Negative
Computing the negative of the signal/image: f(g) = −g
Or, to keep the range positive: f(g) = g_max − g, where g ∈ [0, g_max]
This is simply a line with slope = −1.

Thresholding
Thresholding a signal:
f(g) = 0 if g < T, 1 otherwise
for some intensity threshold T

Quantization
Quantization is choosing different finite values to represent each value of a (possibly analog) input signal. Quantization is usually monotonic: g1 ≤ g2 implies f(g1) ≤ f(g2). It can be thought of as multi-level thresholding:
f(g) = q1 if g_min ≤ g < T1
       q2 if T1 ≤ g < T2
       q3 if T2 ≤ g < T3
       ...
       qn if T(n−1) ≤ g ≤ g_max
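
The multi-level thresholding view of quantization can be sketched with `np.digitize`, which finds the interval each sample falls into; the thresholds T1..T3 and levels q1..q4 below are example values of my own choosing:

```python
import numpy as np

thresholds = [64, 128, 192]             # T1, T2, T3
levels = np.array([32, 96, 160, 224])   # q1..q4, one per interval

g = np.array([0, 63, 64, 127, 200, 255])
# np.digitize returns the interval index i with T(i-1) <= g < T(i).
quantized = levels[np.digitize(g, thresholds)]
```

Because the input here is sorted, the quantized output is non-decreasing, illustrating the monotonicity property g1 ≤ g2 implies f(g1) ≤ f(g2).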

Logarithm
Used to consider relative changes g1/g2 instead of absolute ones g1 − g2: f(g) = log(g)
Useful when the dynamic range is large. Examples: apparent brightness, the Richter scale, human vision.

Exponential
Can be used to undo logarithmic processing: f(g) = e^g

Contrast Enhancement
Any time we use level operations to make one level more distinguishable from another, we call it contrast enhancement. If the number of levels stays fixed, contrast enhancement trades off decreased contrast in one part of the signal range for increased contrast in a range we're interested in. If we plot the level operation as a function:
- All sections where the slope is greater than one increase the contrast in that intensity range
- All sections where the slope is less than one diminish the contrast in that intensity range

Windowing
Windowing is contrast enhancement of one part of the signal range. Example: mapping [0, 4095] input to [0, 255] display. The simplest mapping is:
f(g) = (256/4096) g = g/64

Windowing (cont.)
Suppose we're interested mainly in the range [500, 1200]. A better mapping:
f(g) = 0 if g < 500
       255(g − 500)/(1200 − 500) if 500 ≤ g ≤ 1200
       255 if g > 1200
Windowing is often a continuous piecewise-linear mapping. See also histogram equalization (a nonlinear mapping)!
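
The piecewise-linear window mapping above, with the window [500, 1200] from the slide, can be sketched in numpy; the clip-based formulation is my own, and `np.interp(g, [500, 1200], [0, 255])` gives the same continuous piecewise-linear result:

```python
import numpy as np

def window(g, lo=500, hi=1200):
    """Map [lo, hi] linearly to [0, 255]; clamp values outside the window."""
    g = np.asarray(g, dtype=np.float64)
    out = 255.0 * (g - lo) / (hi - lo)
    return np.clip(out, 0, 255)
```

Inside the window the slope is 255/700 per gray level, i.e. far more display contrast is spent on [500, 1200] than the simple g/64 mapping would give; everything outside the window is flattened to 0 or 255.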

4. Segmentation
A technique that is used to find the objects of interest. Segmentation can divide the image into regions, or segment foreground from background.
Why?
- Industrial inspection
- Character recognition
- Tracking objects
- Geographical applications
- Medical analyses
What?
- Histogram-based: thresholding
- Edge-based: edge detection
- Region-based: region growing
There is no universally applicable segmentation technique that will work for all images, and no segmentation technique is perfect.

Thresholding
Thresholding is used to segment an image by setting all pixels whose intensity values are above a threshold to a foreground value and all the remaining pixels to a background value.
If we are interested in light objects on a dark background:
if a[m,n] ≥ T: a[m,n] = object = 1, else a[m,n] = background = 0
If we are interested in dark objects on a light background:
if a[m,n] < T: a[m,n] = object = 0, else a[m,n] = background = 1
where a[m,n] is an image and T is called the gray-level threshold. Thresholding is the simplest and most widely used method to improve the result of segmentation.
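
The light-objects-on-dark-background case can be sketched in one line of numpy (the function name and the toy image are my own):

```python
import numpy as np

def threshold(img, T):
    """Light objects on a dark background: pixel >= T -> object (1),
    otherwise background (0)."""
    return (np.asarray(img) >= T).astype(np.uint8)

a = np.array([[10, 200],
              [90, 180]])
mask = threshold(a, 120)   # binary object mask
```

The dark-objects case is simply the complement, `1 - threshold(img, T)` with a strict `<` comparison.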

Thresholding Value
Thresholding can be viewed as an operation that involves tests against a function T of the form: T = T(x, y, p(x,y), f(x,y)), where the thresholding value can be a function of the position (x, y), the local neighborhood p(x,y), and the pixel value f(x,y).
- Global thresholding: T = T(f(x,y))
- Adaptive (dynamic) thresholding: T = T(x, y, p(x,y), f(x,y))
Intensity histograms are a tool which simplifies the selection of thresholds.

Global Thresholding
Global threshold (constant): partition the image histogram by using a single threshold T. Apply to an image whose intensity histogram has distinctive peaks.
How to estimate the threshold:
- Automatic: the thresholding value is calculated from the histogram (assuming a bimodal histogram)
- Manual: a fixed thresholding value, e.g. T = 128
For an image of a simple object on a contrasting background, placing the threshold at the dip of the bimodal histogram minimizes the sensitivity of the measured area to threshold variations.

Global Thresholding Example
[Figure: input image; output after global thresholding with T = 120]
www.dai.ed.ac.uk/hipr2/threshld

Adaptive Thresholding
Change the threshold dynamically over the image. Two ways to find the threshold value:
- Region-oriented thresholding
- Local thresholding
The assumption behind both methods is that smaller image regions are more likely to have approximately uniform illumination, thus being more suitable for thresholding.

Region-Oriented Thresholding
Divide an image into an array of overlapping subimages, and then find the optimum threshold for each subimage by investigating its histogram. The threshold for each single pixel is found by interpolating the results of the subimages. Tradeoff: it is computationally expensive.

Local Thresholding
Statistically examine the intensity values of the local neighborhood of each pixel. The functions include:
- the mean of the local intensity distribution: T = mean
- the median value: T = median
- the mean of the minimum and maximum values: T = (minimum + maximum)/2
The size of the neighborhood has to be large enough to cover sufficient foreground and background pixels, but not too large.
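
The T = mean variant can be sketched in plain numpy (a minimal, deliberately slow sketch; the function name is my own, border pixels use their partial neighborhood, and the window includes the center pixel):

```python
import numpy as np

def local_threshold(img, size=7):
    """Threshold each pixel against the mean of its size x size neighborhood."""
    img = np.asarray(img, dtype=np.float64)
    r = size // 2
    rows, cols = img.shape
    out = np.zeros(img.shape, dtype=np.uint8)
    for m in range(rows):
        for n in range(cols):
            # Partial neighborhood at the borders, full size x size elsewhere.
            block = img[max(0, m - r):m + r + 1, max(0, n - r):n + r + 1]
            out[m, n] = 1 if img[m, n] >= block.mean() else 0
    return out
```

In practice the local mean would be computed for all pixels at once with a uniform (box) filter; the double loop here just makes the per-pixel test explicit.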

Local Thresholding Example
[Figure: original image; global thresholding with T = 87; local thresholding by the mean of a 7x7 neighborhood]
www.dai.ed.ac.uk/hipr2/threshld

Optimal Threshold Selection
This method is based on approximating the histogram of an image using a weighted sum of two or more probability densities with normal distributions. The threshold is set as the gray level closest to the minimum probability between the maxima of the normal distributions, which results in minimum-error segmentation.
ct.radiology.uiowa.edu/~jiangm/courses/dip/html/node121

Threshold Selection
The chances of selecting a good threshold are increased if the histogram peaks are:
- tall
- narrow
- symmetric
- separated by deep valleys
One way to improve the shape of histograms is to consider only those pixels that lie on or near the boundary between objects and the background.

Resources
Textbooks:
- Kenneth R. Castleman, Digital Image Processing, Chapters 6, 7, 18
- John C. Russ, The Image Processing Handbook, Chapters 3, 4, 6, 7

Reading Assignment
Textbooks:
- Kenneth R. Castleman, Digital Image Processing, Chapters 8, 9
- John C. Russ, The Image Processing Handbook, Chapter 5