Coding and Modulation in Cameras
1 Coding and Modulation in Cameras. Ramesh Raskar, with Ashok Veeraraghavan, Amit Agrawal, Jack Tumblin, and Ankit Mohan. Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, 2007.
2 Overview. Coding in time: coded exposure for motion deblurring. Coding in space: coded aperture for digital refocusing and extended depth of field, and optical heterodyning for light field capture (4D-to-2D mapping).
3 Motion Blurred Input Photo
4 Approximate rectified crop of the photo. Image deblurred by solving a linear system; no post-processing.
5 Flutter Shutter Camera. Raskar, Agrawal, Tumblin [SIGGRAPH 2006]. A ferroelectric LCD shutter in front of the lens is switched opaque or transparent in a rapid binary sequence.
6 Coded Exposure Photography: Assisting Motion Deblurring Using Fluttered Shutter. Captured single photo and deblurred result for each strategy: the short exposure gives a dark, noisy image; the traditional and MURA shutters give results with banding artifacts and lost spatial frequencies; the coded shutter's decoded image is as good as an image of a static scene.
7 Exposure choices for capturing fast-moving objects. Traditional camera: keep the shutter open for the entire exposure duration; the moving object creates smear. Coded exposure camera: flutter the shutter open and closed in a pseudo-random binary sequence within the exposure duration to encode the blur. Short exposure: keep the shutter open for a very short duration; this avoids blur, but the image is dark and suffers from noise.
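The difference between these exposure strategies shows up in the frequency response of the blur kernel. A minimal numpy sketch, using an illustrative pseudo-random chop sequence rather than the actual code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

chops = 52  # number of shutter chops within the exposure (illustrative count)
box = np.ones(chops)                            # traditional: shutter open throughout
code = rng.integers(0, 2, chops).astype(float)  # coded: pseudo-random open/closed chops
code[0] = code[-1] = 1.0                        # keep the first and last chop open

# Magnitude of each blur kernel's frequency response (zero-padded FFT);
# nfft is a multiple of the kernel length so the box filter's sinc nulls
# land exactly on sampled frequencies.
nfft = 10 * chops
mtf_box = np.abs(np.fft.rfft(box, nfft))
mtf_code = np.abs(np.fft.rfft(code, nfft))

# The box kernel has deep spectral nulls (lost frequencies); the coded
# kernel's response stays bounded away from zero, so deblurring is stable.
print(mtf_box.min(), mtf_code.min())
```

The box filter's minimum is numerically zero, while the coded sequence keeps every frequency above a usable floor, which is the whole point of fluttering the shutter.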
8
9
10 Coded Motion Blur: preserves high spatial frequencies; the deconvolution filter's frequency response is nearly flat; deconvolution becomes a well-posed problem. [Plots: log magnitude vs. frequency (π/5 through π) of the encoding (blurring) filter and the decoding (deblurring) filter, flat response vs. ours.]
11
12 Defocus Blurred Photo
13 Digital Refocusing Refocused Image on Person
14 Coded exposure vs. coded aperture: a temporal 1-D broadband code vs. a spatial 2-D broadband code.
15 Digital Refocusing
16 Coded Aperture Camera. The aperture of a 100 mm lens is modified by inserting a coded mask with a chosen binary pattern; the rest of the camera is unmodified.
17
18
19
20 Comparison with a traditional camera: captured blurred photo and deblurred image for the encoded blur camera vs. the traditional camera.
21 Comparison with a small-aperture image: small-aperture image vs. captured blurred image and deblurred image.
22 Digital Refocusing Captured Blurred Image
23 Digital Refocusing Refocused Image on Person
24 Can we capture more information by inserting a mask?
25
26 [Diagrams: Encoded Blur Camera (lens, mask, sensor) and Heterodyne Light Field Camera (scanner lens, bellows, mask, sensor).]
27 Lenslet-based Light Field camera
28 Stanford Plenoptic Camera [Ng et al. 2005]. Contax medium-format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array with 125 μm square-sided microlenses; sensor pixels ÷ lenses = pixels per lens.
29
30 Digital Refocusing (Lenslet array)
31 Heterodyne Light Field Camera: lens, mask, sensor (scanner lens with bellows). A Canon flatbed scanner serves as the sensor; the mask is ~1 cm in front of the sensor.
32 Capturing the 4D light field: 2D sensor image, with a zoom-in showing the light field encoding.
33
34
35 Movie: Digital Refocusing by taking 2D Projections of 4D Light Field
36 Movie: Different Views obtained from 2D Slices of 4D light field
37 How to Capture a 4D Signal with a 2D Sensor? Mask-based modulation, then software demodulation. [Diagram: the photographic signal (light field) from the object passes through the main lens and is modulated by the mask, a high-frequency carrier, before hitting the sensor; the light field is recovered in software using the reference carrier.]
38 [Diagram: rays from a bright background object and a dark foreground object pass through the main lens and the reference plane (x-plane) to the sensor; each ray is a point (x, θ) in the primal-domain light field, with distances v and d marked.]
39 [Same diagram, repeated.]
40 [Diagram: the primal-domain light field and, via the Fourier transform, the Fourier-domain light field.]
41 [Diagram: the sensor records a 1-D projection of the primal-domain light field; equivalently, the inverse Fourier transform of a 1-D slice of the Fourier-domain light field.]
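The slide's statement that the sensor image is a 1-D projection of the light field, and that its FFT equals a 1-D slice of the light field's FFT, is the projection-slice theorem, which can be checked numerically on a toy 2-D light field (hypothetical random data):

```python
import numpy as np

# Toy 2-D light field L(x, theta); a conventional sensor integrates over theta.
L = np.random.default_rng(2).random((32, 32))
sensor = L.sum(axis=1)                      # 1-D projection along the angular axis

# Projection-slice theorem: the FFT of the projection equals the
# f_theta = 0 slice of the 2-D FFT of the light field.
slice_of_2d_fft = np.fft.fft2(L)[:, 0]
fft_of_projection = np.fft.fft(sensor)

print(np.abs(slice_of_2d_fft - fft_of_projection).max())  # ~ 0
```

This is why an unmodified sensor only ever sees one slice of the light field spectrum, and why some modulation is needed to reach the rest.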
42 [Diagram: the mask between the main lens and the sensor multiplies the incident light field by the light field of the mask; in the Fourier domain, this is a convolution with the Fourier transform of the mask.]
43 [Diagram repeated: convolution with the Fourier transform of the mask in the Fourier-domain light field.]
44 [Diagram repeated.]
45 [Diagram repeated.]
46 [Diagram repeated.]
47 [Diagram repeated.]
48 [Diagrams: Encoded Blur Camera (lens, mask, sensor) and Heterodyne Light Field Camera (scanner lens, bellows, mask, sensor).]
49 How to Capture a 4D Light Field with a 2D Sensor? [Plot: Fourier light field space (f_x, f_θ); the band-limited light field occupies |f_x| ≤ f_x0, |f_θ| ≤ f_θ0, but the sensor captures only the f_θ = 0 slice.]
50 Extra sensor bandwidth cannot capture the extra dimension of the light field. [Plot: extending the sensor slice beyond f_x0 adds bandwidth along f_x only.]
51 [Plot: how can the sensor slice recover the content at f_θ ≠ 0?]
52 Solution: the modulation theorem. Make spectral copies of the 2D light field. [Plot: the modulation function places copies of the light field spectrum along the sensor slice.]
53 Solution: the modulation theorem. Make spectral copies of the 2D light field. The modulation function is a sum of cosines, realized as a mask with tile period 1/f_0.
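The modulation theorem at work in 1-D: multiplying a band-limited signal by a cosine carrier produces spectral copies at the carrier frequency, which is how the cosine mask folds angular content into the sensor's band. Frequencies here are illustrative integer bins:

```python
import numpy as np

n = 256
t = np.arange(n)

signal = np.cos(2 * np.pi * 3 * t / n)    # band-limited signal at bin 3
carrier = np.cos(2 * np.pi * 40 * t / n)  # cosine carrier (the mask) at bin 40

spectrum = np.abs(np.fft.fft(signal * carrier))
peaks = sorted(np.flatnonzero(spectrum > spectrum.max() / 2))

# cos(a)cos(b) = [cos(a+b) + cos(a-b)] / 2: spectral copies appear at
# bins 40 +/- 3 (and their mirror images at n - 43 and n - 37).
print(peaks)
```

The printed peaks sit at 37, 43, 213, and 219: the original bin-3 content has been copied to either side of the carrier, exactly the spectral-replica picture on the slide.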
54 How to Capture a 4D Signal with a 2D Sensor? Mask-based modulation, then software demodulation. [Diagram: the photographic signal (light field) from the object passes through the main lens and is modulated by the mask, a high-frequency carrier, before hitting the sensor; the light field is recovered in software using the reference carrier.]
55 [Photos: scanner lens, bellows, the scanner's 1D sensor, and the mask on top of the scanner.] Mask = 4 cosines = 9 impulses in the Fourier transform = 9 angular samples.
56 The sensor slice captures the entire light field. [Plot: the modulation function creates spectral copies of the light field that all intersect the sensor slice.]
57 Computing the 4D light field: 2D sensor image (1629×2052) → 2D FFT → 2D Fourier transform (1629×2052), containing 9×9 = 81 tiles → reshape the 2D tiles into 4D planes (181×228×9×9) → 4D IFFT → 4D light field (181×228×9×9).
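A shape-level sketch of that decoding pipeline in numpy, on random placeholder data (a real decoder would also need fftshift-style centering and per-tile weighting, omitted here). Note that the slide's dimensions are consistent: 1629 = 9 × 181 and 2052 = 9 × 228.

```python
import numpy as np

sensor = np.random.default_rng(3).random((1629, 2052))  # 2-D sensor image

F = np.fft.fft2(sensor)                                 # 2-D FFT, 1629 x 2052

# The spectrum holds 9 x 9 = 81 spectral copies (tiles) of 181 x 228 each.
tiles = F.reshape(9, 181, 9, 228)                       # (copy_i, f_x, copy_j, f_y)
lf_spectrum = tiles.transpose(1, 3, 0, 2)               # (f_x, f_y, f_theta_x, f_theta_y)

light_field = np.fft.ifftn(lf_spectrum)                 # 4-D inverse FFT
print(light_field.shape)                                # (181, 228, 9, 9)
```

The reshape-and-transpose step is the "reshape 2D tiles into 4D planes" box on the slide: each of the 81 tiles becomes one angular sample of the 181×228 spatial light field.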
58 To recover the light field from the sensor slice, reshape the Fourier tiles. [Plot: recovered light field in (f_x, f_θ).]
59 Coded Masks for Cameras. Encoded Blur Camera (lens, mask, sensor): captures coded blur for full-resolution digital refocusing; a coarse, broadband mask in the aperture; the sharp image is convolved with the mask. Heterodyne Light Field Camera (scanner lens, bellows, mask, sensor): captures the light field for complex refocusing at reduced resolution; a fine, narrowband mask close to the sensor; the incoming light field is modulated by the mask.
60 [Diagrams repeated: Encoded Blur Camera and Heterodyne Light Field Camera.]
61 The Fourier transform of the optimal 1D mask pattern is a set of 1D impulses, so the optimal 1D mask is a sum of cosines; the optimal 2D mask is a sum of cosines in 2D. The number of angular samples along a dimension equals the number of impulses in the Fourier transform of the mask along that dimension.
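A quick numerical check of the "4 cosines = 9 impulses = 9 angular samples" arithmetic from the slides, with illustrative harmonic frequencies:

```python
import numpy as np

n = 512
t = np.arange(n)

harmonics = [8, 16, 24, 32]      # four cosine harmonics (illustrative integer bins)
mask = sum(np.cos(2 * np.pi * f * t / n) for f in harmonics) + 4.0  # constant offset

assert mask.min() >= 0           # a physical mask cannot be negative

spectrum = np.abs(np.fft.fft(mask))
impulses = np.flatnonzero(spectrum > 1e-6 * spectrum.max())

# DC (from the offset) + 4 positive + 4 negative frequencies = 9 impulses
print(len(impulses))
```

The added constant contributes the DC impulse, and each cosine contributes a conjugate pair, giving 9 impulses in total and hence 9 angular samples along that dimension.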
62 [Plot: zoom-in of the cosine mask, tile period 1/f_0.] The mask is the sum of cosines with four harmonics; the plot shows the cosines. Since a physical mask cannot take negative values, a constant must be added.
63 Mask-based Heterodyning. [Diagram: object, main lens, mask at distance d in front of the sensor at distance v; the mask modulation function creates spectral copies at angle α = (d/v)(π/2) in (f_x, f_θ) space.]
64 Implementation. [Photo: light field camera with scanner lens, bellows, mask, and sensor.] We use a Canon flatbed scanner as the sensor with a Nikkor lens in front; the mask is placed about 1 cm in front of the sensor.
20150715 NOVI SAD Regularized Energy Minimization Models in Image Processing Tibor Lukić Faculty of Technical Sciences, University of Novi Sad, Serbia SSIP Szeged, 2015 NOVI SAD NOVI SAD UNIVERSITY OF
More informationA FAMILY OF POWERFUL CAMERAS WITH A CHOICE FOR EVERYONE
-SERIES A FAMILY OF POWERFUL CAMERAS WITH A CHOICE FOR EVERYONE The Canon PowerShot G-Series line-up of cameras packs powerful features and impressive picture quality into advanced, easy-to-use compact
More information3D Shape and Indirect Appearance By Structured Light Transport
3D Shape and Indirect Appearance By Structured Light Transport CVPR 2014 - Best paper honorable mention Matthew O Toole, John Mather, Kiriakos N. Kutulakos Department of Computer Science University of
More informationHTC ONE (M8) DUO CAMERA WHITE PAPERS
HTC ONE (M8) DUO CAMERA WHITE PAPERS 1 HTC ONE (M8) DUO CAMERA WHITE PAPERS INTRODUCTION In a few short years, the smartphone has become the predominant image-capturing device for millions of people. At
More informationBuxton & District U3A Digital Photography Beginners Group Lesson 6: Understanding Exposure. 19 November 2013
U3A Group Lesson 6: Understanding Exposure 19 November 2013 Programme Buxton & District 19 September Exploring your camera 1 October You ve taken some pictures now what? (Viewing pictures; filing on your
More informationx' = c 1 x + c 2 y + c 3 xy + c 4 y' = c 5 x + c 6 y + c 7 xy + c 8
1. Explain about gray level interpolation. The distortion correction equations yield non integer values for x' and y'. Because the distorted image g is digital, its pixel values are defined only at integer
More informationA projector-camera setup for geometry-invariant frequency demultiplexing
A projector-camera setup for geometry-invariant frequency demultiplexing The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation
More informationReal-time scattering compensation for time-of-flight camera
Real-time scattering compensation for time-of-flight camera James Mure-Dubois and Heinz Hügli University of Neuchâtel Institute of Microtechnology, 2000 Neuchâtel, Switzerland Abstract. 3D images from
More informationSpatial Enhancement Definition
Spatial Enhancement Nickolas Faust The Electro- Optics, Environment, and Materials Laboratory Georgia Tech Research Institute Georgia Institute of Technology Definition Spectral enhancement relies on changing
More information