Computer Vision. The image formation process


Computer Vision. The image formation process. Filippo Bergamasco (filippo.bergamasco@unive.it) http://www.dais.unive.it/~bergamasco DAIS, Ca' Foscari University of Venice. Academic year 2016/2017

The image formation. Almost all the computer vision topics we'll discuss in this course rely on the concept of an image. We now discuss how a real-world scene is imaged through a digital camera, specifically: how it is acquired and stored (sampling/quantization), how digital images are represented (2D functions, matrices), and the relationships between pixels (neighbourhood, connectivity, etc.)

All starts with light... To be able to see any 3D scene, our eyes or a digital camera need to capture the light radiation reflected from scene surfaces. To produce an image, the scene must be illuminated by one or more light sources.

The nature of light. Light has a dual nature: 1. It can behave like a particle (photon): a. it travels in a straight line; b. it can reflect off a mirror and cast shadows; c. it consists of tiny bits of energy behaving like discrete packets.

The nature of light. Light has a dual nature: 2. It can behave like a wave: a. refraction; b. diffraction; c. it has a wavelength and an amplitude.

Visible light is part of the electromagnetic spectrum. The electromagnetic spectrum can be expressed in terms of wavelength (λ) or frequency (ν).


Light quantities. The colors that humans perceive in an object are determined by the nature of the light reflected from the object. A white object reflects light uniformly across all visible wavelengths; green objects reflect light primarily in the 500-570 nm range. Quantities usually used to describe a light source: 1. Radiance 2. Luminance 3. Brightness

Light quantities. Luminance, measured in lumens (lm), is the amount of energy an observer perceives from a light source. Radiance is the total amount of energy that flows from the light source, measured in watts (W).

Brightness. Brightness is a subjective descriptor of light intensity and is one of the key factors in describing color sensation. The luminance of the inner squares is exactly the same, but their perceived brightness is different.

The BRDF. When light hits a surface it is scattered and reflected. The most general way to model this interaction is through a 5-dimensional function called the BRDF (Bidirectional Reflectance Distribution Function).

The BRDF. Properties: it is always non-negative; it satisfies Helmholtz reciprocity (the incident direction i and the reflected direction r can be exchanged); it conserves energy.
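Writing the BRDF as f_r(ω_i, ω_r) for incident direction ω_i and reflected direction ω_r (notation assumed here; the slide gives the properties only in words), these three properties read:

```latex
% Non-negativity
f_r(\omega_i, \omega_r) \ge 0
% Helmholtz reciprocity: incident and reflected directions can be exchanged
f_r(\omega_i, \omega_r) = f_r(\omega_r, \omega_i)
% Energy conservation: a surface cannot reflect more energy than it receives
\int_{\Omega} f_r(\omega_i, \omega_r) \cos\theta_r \, d\omega_r \le 1
```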

The BRDF. The BRDF of a given surface can be obtained through: physical modeling, heuristic modeling, or empirical observation.

The BRDF. In practice, most of the time we do not account for the full BRDF, but use a simple combination of diffuse and specular reflection models.

Diffuse reflection. Diffuse reflection scatters light uniformly in all directions. The amount of light observed depends on the angle between the incident light direction and the surface normal: the surface area exposed to a given amount of light becomes larger at oblique angles, so less light is received at each point.

Specular reflection. Specular reflection depends strongly on the direction of the outgoing light (the relative angle between the viewing direction and the specular, i.e. mirror, direction).
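As a concrete sketch of this diffuse + specular combination, here is a small Phong-style shading function (the coefficients kd, ks and the shininess exponent are illustrative parameters, not values from the slides):

```python
import numpy as np

def normalize(v):
    """Return v scaled to unit length."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def phong_intensity(n, l, v, kd=0.7, ks=0.3, shininess=32):
    """Diffuse + specular intensity at a surface point.

    n: surface normal, l: direction towards the light,
    v: direction towards the viewer.
    """
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = kd * max(np.dot(n, l), 0.0)           # Lambert's cosine law
    r = 2.0 * np.dot(n, l) * n - l                  # mirror (specular) direction
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular
```

When the light and the viewer are both aligned with the normal the two terms simply add; at grazing incidence both vanish.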

Image sensing. Incoming light radiation reflected from a 3D scene is transformed into a voltage by an imaging sensor that is sensitive to a specific type of energy (wavelength). Usually, sensors are arranged in linear or 2D arrays.

CCD vs CMOS. A/D conversion from voltage to a digital signal can happen in 2 ways: at the end of each row/column (CCD) or directly at each sensing cell (CMOS). As a rule of thumb, CMOS sensors are in general cheaper and faster, while CCDs traditionally work better in low light.

Imaging process. The sensor array, coincident with the focal plane of the lens, produces outputs proportional to the integral of the light received at each sensing element over a certain exposure time.

Imaging function. An image can be modelled as a function f(x, y). The domain is a (usually rectangular) subset of the real image plane, and f(x, y) is proportional to the amount of light energy collected at the image plane at coordinates (x, y). This amount of energy is called intensity.
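In symbols (the notation is assumed here; the slide describes the function only in words), the imaging function maps image-plane coordinates to a non-negative intensity:

```latex
f : \Omega \subset \mathbb{R}^2 \to \mathbb{R}_{\ge 0},
\qquad (x, y) \mapsto f(x, y)
```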

Sampling and Quantization. A continuous real image is converted into a digital one through a process of sampling and quantization. Sampling: reduces the image domain to a finite set of spatial coordinates. Quantization: reduces the sensor response (the function's codomain) to a finite set of values.

Sampling. To sample the continuous image domain into a finite set of values, we usually consider an equispaced grid of points in a given area (a matrix). This reflects the regular arrangement of cells in a CMOS or CCD sensor. Each sample is called a pixel.

Quantization. The intensity (output) of the function must also be discretized (quantized) into a finite set of values to be digitized and used in a computer. The image codomain is divided into a set of values and each f(x, y) is rounded to the closest one.
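The two steps can be sketched together as follows; f is any continuous image defined on a rectangle with values in [0, 1], and the grid size and level count below are illustrative choices:

```python
import numpy as np

def sample_and_quantize(f, width, height, M, N, L):
    """Digitize a continuous image f(x, y) defined on [0, width] x [0, height].

    Sampling: evaluate f on an equispaced M x N grid (one sample per pixel).
    Quantization: round each sample, assumed in [0, 1], to one of L levels.
    """
    xs = (np.arange(N) + 0.5) * width / N             # pixel-centre x coordinates
    ys = (np.arange(M) + 0.5) * height / M            # pixel-centre y coordinates
    samples = np.array([[f(x, y) for x in xs] for y in ys])
    levels = np.round(samples * (L - 1)).astype(int)  # nearest of L levels
    return levels

# a smooth horizontal ramp as the "continuous" image
img = sample_and_quantize(lambda x, y: x, width=1.0, height=1.0, M=4, N=4, L=4)
```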


Digital images. The continuous image function is converted to a digital image after sampling and quantization. We can represent a digital image in the following ways: as a surface, as a matrix, or as a visual intensity array.

Digital Images. The intensity-array and matrix representations are the most commonly used in practice. The digitization process requires choosing the matrix size M, N and the number L (usually a power of 2) of quantized intensity levels. M, N are related to the spatial resolution of an image; L is related to the intensity resolution.

Spatial resolution. Spatial resolution is a measure of the smallest discernible detail in an image, usually defined as dots (pixels) per unit distance. NOTE: to be meaningful, measures of spatial resolution must be stated with respect to spatial units. Saying that an image is 1024x1024 pixels is not meaningful without stating the spatial dimensions encompassed by the image (the size of each pixel in the 3D world).
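A minimal sketch of the point about units: the same pixel count yields very different spatial resolutions depending on the physical size it covers (the numbers below are hypothetical):

```python
def dots_per_inch(pixels, physical_inches):
    """Spatial resolution as pixels per unit distance (here: per inch)."""
    return pixels / physical_inches

# the same 1024-pixel-wide image printed 8 inches wide vs. 32 inches wide
small_print = dots_per_inch(1024, 8.0)   # 128 dpi
large_print = dots_per_inch(1024, 32.0)  # 32 dpi
```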

Spatial resolution. The number of pixels of an image directly affects spatial resolution only with comparable lenses and if the subjects are taken at the same distance.

Spatial resolution. The same image printed at 1250 dpi, 300 dpi and 72 dpi.

Intensity resolution. Intensity resolution refers to the smallest discernible change in intensity level. Usually it is a power of 2 (8 bits is the most common). The same image quantized to 256, 16, 4 and 2 levels.

Dynamic Range/Contrast. We define the dynamic range of an imaging system as the ratio of the maximum measurable intensity to the minimum detectable intensity level. It depends on the number of bits we use and establishes the lowest and highest intensity levels that a system can represent. Similarly, we define the contrast as the difference between the highest and the lowest intensity levels of an image.

Dynamic Range/Contrast. The human eye has a dynamic range of about 10^9! Even if we use techniques to acquire images at high dynamic range, we probably won't be able to display them on a computer screen... Compromises have to be made in the acquisition process: either discard dark details or saturate heavily illuminated areas.
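A small sketch of the two definitions on a toy intensity matrix (the values are hypothetical):

```python
import numpy as np

def dynamic_range(img):
    """Ratio of the maximum to the minimum nonzero intensity in img."""
    positive = img[img > 0]
    return positive.max() / positive.min()

def contrast(img):
    """Difference between the highest and the lowest intensity in img."""
    return img.max() - img.min()

# a toy 2x2 image with intensities spanning an 8-bit range
img = np.array([[1.0, 8.0],
                [64.0, 255.0]])
```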


Relationships between pixels. A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbours, called the 4-neighbours of p and denoted N4(p).

Relationships between pixels. A pixel p at coordinates (x, y) also has 4 diagonal neighbours, denoted ND(p).

Relationships between pixels. Together, these pixels are called the 8-neighbours of p, denoted N8(p).
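The three neighbourhoods can be enumerated directly from the coordinates of p (a minimal sketch; coordinates outside the image border would still need to be filtered out):

```python
def neighbours_4(x, y):
    """The 4-neighbours of p = (x, y): horizontal and vertical."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def neighbours_diag(x, y):
    """The 4 diagonal neighbours of p = (x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def neighbours_8(x, y):
    """The 8-neighbours of p: union of the two sets above."""
    return neighbours_4(x, y) + neighbours_diag(x, y)
```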

Adjacency. Let V be the set of intensity values used to define adjacency. 4-adjacency: p, q with values from V are 4-adjacent if q is in N4(p). 8-adjacency: p, q with values from V are 8-adjacent if q is in N8(p). m-adjacency: p, q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels with values from V.

Adjacency. Example comparing 8-adjacency and m-adjacency.

Path. A path from a pixel p = (x, y) to a pixel q = (s, t) is a sequence of pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.

Connected Component. Two pixels p and q are connected if there exists a path between them. Given a pixel p, the set of all pixels connected to p is called a connected component. Let R be a subset of pixels in an image: R is called a region if it contains only one connected component.
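A connected component can be extracted with a simple breadth-first flood fill; this sketch uses 4-connectivity and a small hypothetical binary image with V = {1}:

```python
from collections import deque

def connected_component(img, seed, V=frozenset({1})):
    """Set of pixels 4-connected to `seed` whose values lie in V.

    img: 2D list of intensities; seed: a (row, col) pair whose value is in V.
    """
    rows, cols = len(img), len(img[0])
    component, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):  # 4-neighbours
            if (0 <= nr < rows and 0 <= nc < cols
                    and img[nr][nc] in V and (nr, nc) not in component):
                component.add((nr, nc))
                frontier.append((nr, nc))
    return component

# two separate components: an L-shape at the top-left, one pixel at the bottom-right
img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
```

Running the flood fill from (0, 0) returns only the L-shaped component; the isolated pixel at (2, 2) belongs to a different connected component.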