EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems, Spring 2012, TTh 17:30-18:45, WRI C225. Lecture 02, 130124. http://www.ee.unlv.edu/~b1morris/ecg795/

2 Outline: Basics, Image Formation, Image Processing

3 Intelligent Systems. Intelligence: the capacity to acquire knowledge; the faculty of thought and reason. System: a group of interacting, interrelated, or interdependent elements forming a complex whole. This class uses computer vision to give a system intelligence; the systems should perceive, reason, learn, and act intelligently.

4 Vision. Signal-to-symbol transformation. Input: signals. Output: symbols.

5 Image Processing. Manipulation of images. Input: image. Output: image. Examples: Photoshopping, image enhancement, noise filtering, image compression.

6 IP Examples

7 Pattern Recognition. Assignment of a label to an input value. Input: measurement vector. Output: label (classification). Examples: classification (1/0), regression (real valued), labeling (multi-label).

8 PR Examples

9 Computer Graphics. Create realistic images (the forward problem). Input: mathematical model of objects and events. Output: images (synthesized). Examples: simulation (flight, driving), virtual tours, video games, movies.

10 CG Examples

11 Computer Vision. Interpretation and understanding of images. Input: (1) image-derived measurements, (2) models (prior knowledge). Output: recognition of objects and events embedded in images and video (semantic-level classification). Examples: object recognition, face recognition, lane detection, activity analysis.

12 Geometric Primitives. Building blocks for the description of 3D shapes: points, lines, planes, curves, surfaces, volumes.

13 Points. 2D pixel coordinates in an image: x = (x, y)^T. 3D point in the real world: X = (X, Y, Z)^T. Homogeneous coordinates: concatenate an extra term (usually 1) to the point, so x = (x, y, 1)^T for 2D.
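
A minimal Matlab sketch of the homogeneous lift (the coordinate values are arbitrary examples, not from the slides):

x = [3; 4];                   % 2D pixel coordinate as a column vector
x_h = [x; 1];                 % homogeneous version: concatenate a 1
x_back = x_h(1:2) / x_h(3);   % convert back by dividing out the last element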

14 2D Lines and 3D Planes. 2D line: l = (a, b, c). Normalized line: l = (n_x, n_y, d) = (n, d) with ||n|| = 1, where n is the normal vector to the line and d is the distance to the origin. Line equation: x · l = ax + by + c = 0, with x in homogeneous coordinates. A 3D plane is the extension of a 2D line: m = (a, b, c, d).
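
As a small illustration of the line equation above (coefficients chosen arbitrarily), a homogeneous point lies on a line exactly when the dot product is zero:

l = [1; -1; 0];                    % the line x - y = 0
p = [2; 2; 1];                     % homogeneous point (2, 2)
on_line = abs(dot(p, l)) < 1e-10;  % true, since 1*2 + (-1)*2 + 0*1 = 0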

15 2D Transformations. Translation: x' = [I t] x, where I is the 2x2 identity matrix and t the translation vector. Rotation + translation: x' = [R t] x, with R = [cos(θ) -sin(θ); sin(θ) cos(θ)].
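
A short Matlab sketch of the rotation + translation form applied to a homogeneous point (the angle and translation are made-up values):

theta = pi/6;                    % example rotation angle
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];    % 2D rotation matrix
t = [5; 2];                      % example translation vector
x_h = [1; 0; 1];                 % homogeneous 2D point (1, 0)
x_prime = [R t] * x_h;           % x' = [R t] x, a 2x1 result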

16 Image Formation. Incoming light energy is focused and collected onto an image plane.

17 Pinhole Camera Model. A simple idealized model of image formation that gives the mathematical relationship between 3D world coordinates and 2D image plane coordinates.
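
As a sketch of that relationship, the standard pinhole projection divides by depth; the focal length below is an assumed example value, not something given on the slide:

f = 500;                  % assumed focal length (in pixels)
X = [2; 1; 10];           % example 3D point [X; Y; Z] in camera coordinates
x_img = f * X(1) / X(3);  % projected image x coordinate
y_img = f * X(2) / X(3);  % projected image y coordinate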

18 Image as Function. Think of an image as a function f that maps R^2 to R: 0 ≤ f(x, y) < ∞ is the intensity at the point (x, y). In reality, an image is defined over a rectangle with a finite range of values: f: [a, b] × [c, d] → [0, 1]. Computationally the [0, 1] range is convenient, but usually we have an 8-bit quantized representation: 0 ≤ f(x, y) ≤ 255. A color image is just three separate functions pasted together: f(x, y) = [r(x, y); g(x, y); b(x, y)].
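
A brief Matlab sketch of these conventions; the file name is hypothetical, and a color image plus im2double (Image Processing Toolbox) are assumed:

I = imread('example.png');   % 8-bit image, values in [0, 255]
I01 = im2double(I);          % rescale to the convenient [0, 1] range
r = I01(:, :, 1);            % a color image is three functions pasted together
g = I01(:, :, 2);
b = I01(:, :, 3);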

19 Image as Function. Multiple equivalent representations: surface, image, matrix.

20 Matrix Notation. Mathematical notation starts with f(0, 0): x indexes columns from 0 to N-1 and y indexes rows from 0 to M-1. Matlab notation starts with I(1, 1) and has no zero indexing; indices are given as (row, column), so the figure's example point (3, 4) corresponds to I(4, 3).

21 Sampling and Quantization. Sampling gives the fixed grid of the image; quantization gives the number of output levels L.

22 Quantization. L = number of output levels; k = number of bits per pixel. Output range of the image: [0, L - 1] = [0, 2^k - 1]. Image storage size: b = M × N × k, the number of bits to store an image with dimensions M × N.
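
A quick worked check of the storage formula, using an assumed 1024 x 1024 image at 8 bits per pixel:

M = 1024; N = 1024; k = 8;   % assumed dimensions and bit depth
L = 2^k;                     % 256 output levels
b = M * N * k;               % 8,388,608 bits = 1,048,576 bytes = 1 MB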

23 Resolution. Spatial resolution is the smallest discernible detail in an image; this is controlled by the sampling factor (the size M × N of the CMOS sensor). Gray-level resolution is the smallest discernible change in gray level, based on the number of bits used for the representation.

24 Pixel Neighborhood. The pixel neighborhood corresponds to nearby pixels. 4-neighbors: the horizontal and vertical neighbors. 8-neighbors: the 4-neighbors plus the diagonal pixels.

25 Connectivity. Two pixels are connected if a path of neighboring pixels exists between them: 4-connected (via 4-neighbors) or 8-connected (via 8-neighbors).
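
A small sketch contrasting the two connectivities on a toy binary image, assuming the Image Processing Toolbox's bwlabel:

BW = [1 0 0;
      0 1 0;
      0 0 1];          % three pixels along the diagonal
L4 = bwlabel(BW, 4);   % 3 components: diagonal pixels are not 4-connected
L8 = bwlabel(BW, 8);   % 1 component: diagonal pixels are 8-connected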

26 Image Processing. Usually the first stage of computer vision applications: pre-process an image to ensure it is in a suitable form for further analysis. Typical operations include exposure correction, color balancing, reduction of image noise, increasing sharpness, and rotating an image to straighten it. Digital Image Processing by Gonzalez and Woods is a great book to learn more.

27 2D Signal Processing. Image processing is an extension of signal processing to two independent variables. General system: input signal x, system f, output signal y. Image processing: input image f(x, y), operator w, output image g(x, y).

28 Point Operators/Processes. The output pixel value depends only on the corresponding input pixel value. Oftentimes we will see operations like dividing one image by another. This is not matrix division; the operation is carried out between corresponding pixels in the two images. It is the element-by-element (dot) operation in Matlab: >> I3 = I1./I2, where I1 and I2 are the same size.

29 Pixel Transforms. Gain and bias (multiplication by and addition of a constant): g(x, y) = a(x, y) f(x, y) + b(x, y), where a (gain) controls contrast and b (bias) controls brightness. Notice the parameters can vary spatially (think gradients). Linear blend: g(x) = (1 - α) f_0(x) + α f_1(x). We will see this used later for motion detection in video processing.
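
A hedged Matlab sketch of the gain/bias transform and the linear blend, assuming same-size images f, f0, and f1 already loaded as doubles in [0, 1]:

a = 1.2;  b = 0.05;                      % example constant gain and bias
g = a * f + b;                           % g(x,y) = a f(x,y) + b
g = min(max(g, 0), 1);                   % clip back to the valid range
alpha = 0.3;                             % example blend weight
blend = (1 - alpha) * f0 + alpha * f1;   % linear blend of f0 and f1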

30 Color Transforms. Usually we think of a color image as three images concatenated together, with red, green, and blue slices corresponding to the notion of primary colors. Manipulations of these color channels may not correspond directly to the desired perceptual response; adding a bias to all channels may actually change the apparent color instead of increasing brightness. We need other representations of color for mathematical manipulation. We will see more about color later.
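
A sketch of why another representation helps, using Matlab's built-in rgb2hsv/hsv2rgb (the file name is hypothetical): adjusting only the value channel raises brightness without shifting hue, unlike adding a bias to all RGB channels.

rgb = im2double(imread('example.png'));     % hypothetical color image
naive = min(rgb + 0.2, 1);                  % bias on all channels; may shift apparent color
hsv = rgb2hsv(rgb);
hsv(:, :, 3) = min(hsv(:, :, 3) + 0.2, 1);  % adjust only the value (brightness) channel
better = hsv2rgb(hsv);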

31 Compositing and Matting. Techniques to remove an object and place it in a new scene, e.g., blue screen. Matting: extracting an object from an original image. Compositing: inserting the object into another image (without visible artifacts). A fourth alpha channel is added to an RGB image; α describes the opacity (the opposite of transparency) of a pixel. Over operator: C = (1 - α) B + α F.
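
A minimal sketch of the over operator, assuming a foreground F, background B, and single-channel alpha matte of matching size with values in [0, 1]:

alpha3 = repmat(alpha, [1 1 3]);       % replicate the matte across the RGB channels
C = (1 - alpha3) .* B + alpha3 .* F;   % per-pixel over operator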