Digital Image Processing Lectures 1 & 2

Lectures 1 & 2, Professor, Department of Electrical and Computer Engineering, Colorado State University, Spring 2013

Introduction to DIP

The primary interest in transmitting and handling images in digital form goes back to the 1920s. However, for lack of adequate computer systems and because of vast storage requirements, the area was not fully explored until the mid-1960s, after the successful Apollo mission and other space programs. Serious work in digital image processing (DIP) began at JPL, where the need to process lunar images was felt after the Apollo mission. Originally established to analyze and improve lunar images, DIP has grown rapidly into a wealth of new applications, thanks to the enormous progress made in both algorithm development and computer engineering. At present there is hardly a technical area that is not affected in one way or another by DIP. The most important fields of growth appear to be medical image processing (e.g., surgical simulators and tele-surgery), data communication and compression (e.g., 3D TV, mobile devices), remote sensing (e.g., meteorological, environmental, and military), and computer vision (e.g., robotics, autonomous systems, and UAVs).

Currently the emphasis is shifting toward real-time intelligent DIP systems. For the years ahead, trends in computer engineering, especially in parallel/pipeline processing technologies, coupled with newly emerging applications, suggest no limit to the horizon of the DIP area.

Applications:
1. Medical: automatic detection/classification of tumors in medical images (e.g., X-ray, MRI, CT scan, and ultrasound), chromosome identification, etc.
2. Computer Vision: identification of parts on assembly lines, robotics, tele-operation, autonomous systems, etc.
3. Remote Sensing: meteorology and climatology, tracking of earth resources, geographical mapping, prediction of agricultural crops, urban growth, and weather, flood and fire control, etc.
4. Radar and Sonar: detection and recognition of various targets, aircraft or missile guidance and maneuvering, etc.
5. Image Transmission: 3D TV and mobile systems, teleconferencing, communications over computer networks, sensor networks, space and satellite links, etc.
6. Office Automation: document (text and image) storage, retrieval, and reproduction.
7. Identification Systems: facial, iris, and fingerprint-based ID systems, airport security systems, etc.

Digital Image: a sampled and quantized version of a 2-D function that has been acquired by optical or other means, sampled on an equally spaced rectangular grid, and quantized in equal intervals of amplitude. A typical image processing system is shown below. DIP involves handling, transmitting, enhancing, and analyzing digital images with the aid of digital computers, which calls for the manipulation of 2-D signals. Three types of processing are generally applied to an image, namely low-level, intermediate-level, and high-level processing, each producing a modified version of that image.

1. Image Representation and Modeling: An image can be represented in either the spatial or the transform domain. An important consideration here is the fidelity or intelligibility criterion used to measure image quality, e.g., contrast (gray-level diversity), spatial frequencies, edge sharpness, etc. Images represented in the spatial domain relate directly to the type and physical nature of the imaging sensor: luminance of objects in a scene for pictures taken by a camera; absorption characteristics of body tissue for X-ray images; radar cross-section of a target for radar imaging; temperature profile of an object for infrared imaging; and the gravitational field in geophysical imaging. Linear statistical models are also used to represent images in the spatial domain; these models allow the development of algorithms useful for an ensemble of images rather than a single image. Alternatively, 2-D unitary transforms are applied to extract attributes such as spectral content, power spectra, energy, texture, or other salient features prior to filtering, data compression, and object recognition.
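As a concrete illustration of a 2-D unitary transform, the following sketch builds an orthonormal DCT basis matrix and applies it separably to a small image. The function names and the 8×8 test image are illustrative, not from the lecture; the point is that a unitary transform preserves energy (Parseval), which is why spectral and energy attributes can be extracted without loss of information.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, one common 2-D unitary transform."""
    k = np.arange(n).reshape(-1, 1)   # frequency index (rows)
    i = np.arange(n).reshape(1, -1)   # spatial index (columns)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = 1.0 / np.sqrt(n)        # DC row rescaled for orthonormality
    return C

def dct2(image):
    """Separable 2-D DCT of a square image: transform rows, then columns."""
    C = dct_matrix(image.shape[0])
    return C @ image @ C.T

img = np.outer(np.arange(8.0), np.ones(8))   # an 8x8 vertical ramp
coeffs = dct2(img)
# Unitary => coefficient energy equals pixel energy (Parseval)
assert np.allclose((coeffs ** 2).sum(), (img ** 2).sum())
```

For a smooth image like this ramp, most of the coefficient energy piles up in the low-frequency corner, which is exactly the energy-compaction property exploited later for compression.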

2. Image Enhancement (low-level): No imaging system generates images of perfect quality. In image enhancement the aim is to manipulate an image in order to improve its quality, which ultimately requires an intelligent human evaluator to recognize and extract useful information from the image. Since human subjective judgment may be either wise or fickle, certain difficulties may arise; a careful study therefore requires subjective testing with a group of human viewers, and the psychophysical aspect should always be considered in this process. Examples of image enhancement are:
Contrast and gray-scale improvement
Spatial frequency enhancement
Pseudo-coloring
Noise removal and filtering
Edge sharpening
Magnification and zooming
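A minimal sketch of one of the enhancement operations listed above, contrast stretching. The function name and the sample values are illustrative; practical systems often use histogram equalization instead of a simple linear map.

```python
import numpy as np

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Linearly map the image's observed min..max range onto [lo, hi]
    (a minimal gray-scale improvement)."""
    fmin, fmax = img.min(), img.max()
    if fmax == fmin:                      # flat image: nothing to stretch
        return np.full_like(img, lo, dtype=np.float64)
    return lo + (img - fmin) * (hi - lo) / (fmax - fmin)

# a low-contrast patch whose gray levels occupy only [100, 130]
low_contrast = np.array([[100.0, 110.0], [120.0, 130.0]])
enhanced = stretch_contrast(low_contrast)
# the darkest pixel maps to 0, the brightest to 255
```

The relative ordering of gray levels is preserved; only their spread (the contrast) changes, which is why this is enhancement rather than restoration.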

3. Image Restoration (low-level): As in image enhancement, the ultimate goal is to improve the quality of an image in some sense, but image restoration involves recovering or estimating an image that has been degraded by some deterministic and/or stochastic phenomenon. Blur is a deterministic phenomenon caused by atmospheric turbulence (satellite imaging), relative motion between the camera and the object, defocusing, etc. Noise, on the other hand, is a stochastic phenomenon that corrupts images additively or multiplicatively. Sources of additive noise include sensor imperfections, thermal noise, and channel noise. Examples of multiplicative noise are speckle in coherent imaging systems such as synthetic aperture radar/sonar (SAR/SAS), laser, and ultrasound images, as well as film-grain noise. Image restoration techniques aim at modeling the degradation and applying an appropriate process to recover the original image.
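The degradation model described above, deterministic blur followed by additive stochastic noise, can be sketched as follows. The box-blur kernel and the noise level are arbitrary illustrative choices, and circular (wrap-around) convolution via the 2-D DFT is used purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(f, h, noise_sigma):
    """Simulate g(x,y) = (h * f)(x,y) + n(x,y): deterministic blur h
    followed by additive Gaussian noise n."""
    # circular convolution via the 2-D DFT, for simplicity
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
    return blurred + rng.normal(0.0, noise_sigma, f.shape)

f = np.zeros((16, 16)); f[8, 8] = 1.0   # ideal point source
h = np.full((3, 3), 1.0 / 9.0)          # 3x3 box blur standing in for defocus
g = degrade(f, h, noise_sigma=0.01)     # observed degraded image
```

The point source is spread over a 3×3 neighborhood (its peak drops from 1 to about 1/9) and every pixel picks up noise; restoration methods try to invert exactly this process.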

Among typical restoration methods are:
Image estimation and noise smoothing
Deblurring
Inverse filtering
2-D Wiener and Kalman filters

Image reconstruction can also be viewed as a special class of restoration in which two- or higher-dimensional objects are reconstructed from several projections, e.g., in medical CT scanning, astronomy, radar imaging, and NDE. Typical methods are:
Radon transform
Projection theorem
Reconstruction algorithms
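A hedged sketch of frequency-domain deblurring in the Wiener style, assuming circular convolution and a constant k standing in for the noise-to-signal power ratio (the names and test setup are illustrative; plain inverse filtering is the unstable special case k = 0).

```python
import numpy as np

def wiener_deblur(g, h, k):
    """Wiener-style restoration in the frequency domain:
    F_hat = conj(H) / (|H|^2 + k) * G,
    where k approximates the noise-to-signal power ratio."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

# noiseless demo: blur a point source circularly, then restore it
f = np.zeros((16, 16)); f[8, 8] = 1.0
h = np.full((3, 3), 1.0 / 9.0)
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
restored = wiener_deblur(g, h, k=1e-4)
```

The regularizing constant k keeps the filter bounded at frequencies where |H| is small, which is exactly where pure inverse filtering would amplify noise without limit.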

4. Image Transforms (intermediate-level): This process maps digital images into a transform domain using unitary transforms such as the 2-D DFT, the 2-D discrete cosine transform (DCT), the 2-D discrete wavelet transform (DWT), and the 2-D KL transform. In the transform domain, certain useful image characteristics that typically cannot be ascertained in the spatial domain are revealed. Image transformation performs both feature extraction and dimensionality reduction, which are crucial for various applications.

5. Image Data Compression and Coding: In many applications one needs to significantly reduce the number of bits used to represent images before transmission or storage, since the raw bit count is tremendous. For example, NOAA's geostationary GOES-15 satellite has five spectral channels (1 visible and 4 IR), each capturing a 10-bit 3K×3K-pixel image every half hour. Storing and/or transmitting such a huge amount of data requires capacity and/or bandwidth that could not otherwise be afforded.
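The GOES-15 figures above imply the following back-of-the-envelope data volume, assuming all five channels deliver one full image set every half hour (48 sets per day) with no compression:

```python
# Rough uncompressed data volume for the GOES-15 example
channels = 5
bits_per_pixel = 10
pixels = 3000 * 3000        # one 3K x 3K image
sets_per_day = 48           # one set every half hour
bits_per_day = channels * bits_per_pixel * pixels * sets_per_day
gigabytes_per_day = bits_per_day / 8 / 1e9
print(gigabytes_per_day)    # 2.7 GB/day uncompressed
```

At roughly 2.7 GB per day from a single satellite, the motivation for reducing the representation to 1–2 bits/pixel is clear.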

Straight digitization requires 8 bits per pixel. Using a frame-coding technique such as DPCM or transform coding, one can reduce this to 1-2 bits/pixel while preserving image quality. For video, motion-compensated coding detects and estimates motion parameters from image sequences, and only the motion-compensated frame differences are transmitted. An area of active research is stereo video sequence compression and coding for 3D TV, wearable devices, and virtual reality applications. Some typical schemes are:
Pixel-by-pixel coding
Predictive coding
Transform coding
Hybrid coding
Frame-to-frame coding
Vector quantization
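A toy sketch of predictive (DPCM-style) coding on a single image row. Real DPCM quantizes the prediction residuals, trading a small loss for fewer bits; this illustration keeps the residuals exact to show the core idea, namely that neighboring pixels are correlated so the differences are small and cheap to entropy-code.

```python
import numpy as np

def dpcm_encode(row):
    """Transmit the first pixel plus successive differences
    (previous-pixel prediction, lossless variant)."""
    row = np.asarray(row, dtype=int)
    return np.concatenate(([row[0]], np.diff(row)))

def dpcm_decode(codes):
    """Invert: a cumulative sum reconstructs the row exactly."""
    return np.cumsum(codes)

row = [120, 121, 121, 123, 122, 124]
codes = dpcm_encode(row)            # [120, 1, 0, 2, -1, 2]
assert list(dpcm_decode(codes)) == row
```

The residuals after the first sample all fit in a couple of bits, whereas the raw samples need 8; that gap is what the entropy coder exploits.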

6. Image Analysis and Computer Vision (high-level): This involves (a) segmentation, (b) feature extraction, and (c) classification/recognition. Segmentation isolates the desired objects from the scene so that features can easily be extracted, e.g., separating targets from the background. A representative set of features is then extracted from the segmented objects, and quantitative evaluation of these features allows classification and description of each isolated object. The overall goal is to build an automatic recognition system that can derive symbolic descriptions from a scene. Pattern recognition can be regarded as the inverse of computer graphics: it starts with a picture or scene and transforms it into an abstract description, i.e., a set of numbers, a string of symbols, or a graph. The differences and similarities of computer graphics, image processing, and pattern recognition are shown above.
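A minimal segmentation sketch in the spirit of step (a) above, isolating bright "targets" from a darker background by thresholding. The scene values and the threshold are made up for illustration; practical systems pick the threshold automatically, e.g., with Otsu's method.

```python
import numpy as np

def threshold_segment(img, t):
    """Binary segmentation: True where a pixel is brighter than t,
    separating bright objects from the darker background."""
    return img > t

scene = np.array([[10,  12, 200],
                  [11, 210, 205],
                  [ 9,  13,  11]])
mask = threshold_segment(scene, 100)
# the three bright pixels are isolated from the background
```

Features (area, centroid, shape) would then be computed from the connected regions of this mask and passed to the classifier.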

A complete image processing or computer vision system is shown below.
Sensor: captures/acquires the image data.
Preprocessor: improves contrast, removes noise, sharpens edges, etc.
Segmentation: isolates objects of interest from the background and clutter.
Feature Extraction: extracts a representative set of features from the segmented objects.
Classifier: categorizes each object or region and extracts high-level attributes from them.
Structural Analyzer: determines the relationships among the classified primitives; its output is a description of the original scene.
World Model: guides each stage of the system, and the results of each stage can in turn be used to refine the world model. The world model is constructed by incorporating sufficient a priori information about the scene before the image is analyzed.

Typical operations applied in image analysis are:
Edge extraction
Line extraction
Texture discrimination
Shape recognition
Object recognition

A more complete image analysis system is shown below:

The common questions underlying these areas are:
1. How do we describe or characterize images?
2. What mathematical methods do we want to use on an image?
3. How do we implement these algorithms?
4. How do we evaluate the quality of the processed images?

Goals of this course:
Fundamental concepts
Image processing mathematics
Basic algorithms
State of the art
Applications