Non-line-of-sight imaging
- Arabella Miles
1 Non-line-of-sight imaging, 15-463, 15-663, 15-862, Computational Photography, Fall 2017, Lecture 25
2 Course announcements Homework 6 will be posted tonight. - Will be due Sunday the 10th. - Almost no coding, only data capture and using existing code. - You will need high-sensitivity cameras, so use the DSLRs you have picked up. Final project report deadline moved to December 15th. - It was originally December 11th.
3 Overview of today's lecture The non-line-of-sight (NLOS) imaging problem. Active NLOS imaging using time-of-flight imaging. Active NLOS imaging using WiFi. Passive NLOS imaging using accidental pinholes. Passive NLOS imaging using accidental reflectors. Passive NLOS imaging using corners.
4 Slide credits Many of these slides were directly adapted from: Shree Nayar (Columbia). Fadel Adib (MIT). Katie Bouman (MIT).
5 The non-line-of-sight (NLOS) imaging problem
6 Time-of-flight (ToF) imaging
7 Looking around the corner
8 Looking around the corner
9 Active NLOS imaging using time-of-flight imaging
10 Processing a single-pixel transient VD VS light source single-pixel transient camera intensity time
11 Processing a single-pixel transient VD VS light source single-pixel transient camera intensity t1 = 10 ps time
12 Processing a single-pixel transient VD VS l_in l_out light source single-pixel transient camera intensity l_nlos = t1·c - l_in - l_out. Where in the world can we find points creating this pathlength? t1 = 10 ps time
13 Processing a single-pixel transient VD VS l_in l_out light source Ellipse with foci on the virtual source and detector and pathlength l_nlos single-pixel transient camera intensity l_nlos = t1·c - l_in - l_out t1 = 10 ps time
14 Processing a single-pixel transient VD VS l_in l_out light source single-pixel transient camera Ellipse with foci on the virtual source and detector and pathlength l_nlos intensity l_nlos = t2·c - l_in - l_out t2 = 20 ps time
15 Processing another single-pixel transient VD VS l_in l_out light source single-pixel transient camera intensity What assumption are we making here? Is there any other information we are not using? t2 = 20 ps time
16 Processing another single-pixel transient VD VS l_in Photons only interact with the NLOS scene once (3-bounce paths) l_out light source single-pixel transient camera intensity What assumption are we making here? Is there any other information we are not using? t2 = 20 ps time
17 Processing another single-pixel transient VD VS l_in Photons only interact with the NLOS scene once (3-bounce paths) l_out light source single-pixel transient camera What assumption are we making here? intensity Is there any other information we are not using? t2 = 20 ps We also need to account for intensity time
18 Elliptic backprojection VD VS l_in l_out light source single-pixel transient camera
19 Elliptic backprojection
20 Elliptic backprojection
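As a concrete illustration of elliptic backprojection, here is a minimal 2D sketch (the grid, geometry, and all names are my own toy choices; the slides describe the method only at this level of detail): each transient time bin votes for the grid points whose three-bounce pathlength via the virtual source and detector matches its time of flight.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def backproject(transient, dt, vs, vd, grid_x, grid_z):
    """Smear each time bin of the transient over the ellipse of hidden
    points whose three-bounce pathlength l_nlos = t*c matches it
    (l_in and l_out are assumed to have been subtracted already)."""
    X, Z = np.meshgrid(grid_x, grid_z)
    # Pathlength from each candidate hidden point via virtual source/detector.
    path = np.hypot(X - vs[0], Z - vs[1]) + np.hypot(X - vd[0], Z - vd[1])
    tol = C * dt / 2  # half a time bin, expressed as pathlength
    heatmap = np.zeros_like(X)
    for i, val in enumerate(transient):
        heatmap += val * (np.abs(path - i * dt * C) < tol)
    return heatmap

# Toy example: one hidden point at (0, 1) m produces a single transient spike.
vs, vd = (-0.1, 0.0), (0.1, 0.0)
dt = 1e-10  # 100 ps bins
path = np.hypot(0 - vs[0], 1 - vs[1]) + np.hypot(0 - vd[0], 1 - vd[1])
transient = np.zeros(200)
transient[int(round(path / (C * dt)))] = 1.0
heatmap = backproject(transient, dt, vs, vd,
                      np.linspace(-0.5, 0.5, 101), np.linspace(0.5, 1.5, 101))
print(heatmap[50, 50] > 0)  # True: the hidden point lies on the recovered ellipse
```

With several virtual source/detector pairs, the ellipses intersect only at true scene points, which is what the backprojected images on the following slides show.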
21 Reconstructing Hidden Rooms Visible wall Invisible walls
22 Reconstructing Hidden Rooms: One plane at a time Visible wall Invisible walls
23-27 Reconstructing Hidden Rooms: One plane at a time (repeated animation frames; same labels as above)
28 Reconstructing Rectangular Rooms Original room Reconstructed room Average AICP error for all the walls is TBD mm (TBD %), normalized by an average room length of 1.1 m
29 Reconstructing Complex Shape and Reflectance wall hidden object camera what a regular camera sees what shape our camera sees wall hidden object camera what a regular camera sees what depth our camera sees
30 Active NLOS imaging using WiFi
31 Imaging through occlusions
32 Imaging through occlusions using radio frequencies
33 Key Idea
34 Challenges Wall reflection is 10,000x stronger than reflections coming from behind the wall Tracking people from their reflections
35 Wi-Vi: Small, Low-Power, Wi-Fi Eliminate the wall's reflection Track people from reflections Gesture-based interface Implemented on software radios
36 How Can We Eliminate the Wall's Reflection?
37 Idea: transmit two waves that cancel each other when they reflect off static objects but not off moving objects. The wall is static, so it disappears; people tend to move, so they remain detectable.
38 Eliminating the Wall's Reflection Receive Antenna; Transmit Antennas send x and αx
39 Eliminating the Wall's Reflection Receive Antenna: y = h1·x + h2·αx; choosing α = -h1/h2 drives y to 0
40 Eliminating All Static Reflections
41 Eliminating All Static Reflections Transmit x and αx over channels h1 and h2: y = h1·x + h2·αx. Static objects (wall, furniture, etc.) have constant channels, so y = h1·x + h2·(-h1/h2)·x = 0. People move, therefore their channels change, and the same combination y = h1·x + h2·(-h1/h2)·x is no longer zero.
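The cancellation argument on this slide can be checked numerically. A minimal sketch with made-up complex baseband channel values (h1, h2, and the motion perturbation dh are illustrative, not from the paper):

```python
import numpy as np

# Complex baseband channels to the receive antenna (illustrative values).
h1, h2 = 0.8 + 0.3j, 0.5 - 0.2j
alpha = -h1 / h2                      # chosen so that h1*x + h2*alpha*x = 0

def received(x, g1, g2):
    """Superposition at the receive antenna for channels g1, g2."""
    return g1 * x + g2 * alpha * x

x = 1.0 + 0.0j                        # transmitted symbol
static = received(x, h1, h2)          # wall/furniture reflection: cancels
print(abs(static))                    # ~0

# A moving person perturbs their channel, so the cancellation breaks.
dh = 0.05 * np.exp(1j * 0.7)          # small channel change caused by motion
moving = received(x, h1 + dh, h2)
print(abs(moving))                    # ~0.05: residual equal to the perturbation
```

Because alpha is fixed from the static channels, anything whose channel stays constant nulls out exactly, while any channel change passes straight through to the receiver.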
42 How Can We Track Using Reflections?
43 RF source Tracking Motion θ Direction of reflection Antenna Array
44 Tracking Motion Direction of motion θ At any point in time, we have a single measurement Antenna Array
45 Tracking Motion Direction of motion θ θ Direction of motion Antenna Array
46 Tracking Motion Direction of motion θ Antenna Array Direction of motion Human motion emulates antenna array θ
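The idea that measurements taken at successive positions act like an antenna array can be sketched as a steering-vector scan. This is a generic angle-of-arrival toy, not Wi-Vi's actual algorithm; the array size, spacing, and wavelength are assumed values:

```python
import numpy as np

wavelength = 0.125        # roughly a 2.4 GHz Wi-Fi carrier (m)
d = wavelength / 2        # spacing between successive measurement positions
n = np.arange(8)          # 8 positions (antennas, or snapshots of a moving target)

def steering(theta):
    """Phase pattern a plane wave from angle theta induces across the array."""
    return np.exp(2j * np.pi * d * n * np.sin(theta) / wavelength)

true_theta = np.deg2rad(25)
measurements = steering(true_theta)   # noiseless snapshot across the positions

# Scan candidate angles and pick the best-matching steering vector.
cands_deg = np.linspace(-90, 90, 361)
scores = [abs(np.vdot(steering(np.deg2rad(a)), measurements)) for a in cands_deg]
est = float(cands_deg[int(np.argmax(scores))])
print(est)  # 25.0
```

Half-wavelength spacing keeps the match unambiguous over the full ±90° range, which is why the slides can treat the person's own motion as a virtual array.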
47
48 A Through-Wall Gesture Interface Sending Commands with Gestures Two simple gestures represent bit 0 and bit 1; a sequence of gestures can be combined to convey a longer message
49 Gesture Encoding Bit 0: step forward followed by step backward. Bit 1: step backward followed by step forward. Step Forward Step Backward
50 Gesture Decoding Matched Filter Peak Detector Bit 0 Bit 1 A gesture interface that works through walls and non-line-of-sight
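A toy version of the matched-filter decoder: bit 1's trace is the mirror image of bit 0's, so the sign of the correlation with a single template separates them. The template shape is a guess at a "distance to person over time" profile, not the paper's actual filter:

```python
import numpy as np

def gesture_template(n=40):
    """Distance-vs-time profile for 'step forward then backward' (bit 0)."""
    half = np.linspace(0, 1, n // 2)
    return np.concatenate([-half, -half[::-1]])   # move closer, then back away

def decode(trace):
    """Matched filter against the bit-0 template; bit 1 is its mirror image."""
    tpl = gesture_template(len(trace))
    score = float(np.dot(trace, tpl))             # correlation at zero lag
    return 0 if score > 0 else 1

fwd_back = gesture_template(40)                   # a clean bit-0 gesture
back_fwd = -fwd_back                              # bit 1: backward then forward
print(decode(fwd_back), decode(back_fwd))         # 0 1
```

A real decoder would slide the filter over a continuous trace and use the peak detector the slide mentions to find where each gesture starts.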
51 Imaging through occlusions using radio frequencies Our output Camera image Head Left Arm Chest Right Arm Our RF sensor Lower Left Arm Lower Right Arm Feet
52 Traditional Imaging vs RF Imaging: (1) cannot image through occlusions like walls vs walls are transparent, so RF can image through them; (2) forms 2D images using lenses vs no lenses at these frequencies
53 Imaging with RF No lens at these frequencies Antenna cannot distinguish bounces from different directions Antenna
54 Imaging with RF Beamforming: Use multiple antennas to scan reflections within a specific beam Antenna Array Extend to 3D with time-of-flight measurements by repeating this at every depth
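Delay-and-sum beamforming as described on this slide can be sketched in a few lines for a narrowband linear array (the array geometry and wavelength are illustrative). Repeating the scan at every depth, as the slide notes, extends this to 3D:

```python
import numpy as np

wavelength = 0.1                                # illustrative carrier wavelength (m)
positions = wavelength / 2 * np.arange(16)      # uniform 16-antenna linear array

def beam_power(signals, theta):
    """Delay-and-sum: phase-align every antenna toward angle theta, then sum."""
    weights = np.exp(-2j * np.pi * positions * np.sin(theta) / wavelength)
    return abs(np.sum(signals * weights)) ** 2

# Simulate the snapshot a far-field reflector at +30 degrees would produce.
theta_r = np.deg2rad(30)
signals = np.exp(2j * np.pi * positions * np.sin(theta_r) / wavelength)

angles_deg = np.linspace(-90, 90, 181)
powers = [beam_power(signals, np.deg2rad(a)) for a in angles_deg]
est = float(angles_deg[int(np.argmax(powers))])
print(est)  # 30.0: the beam scan peaks at the reflector's direction
```

The per-angle weights simply undo the phase delay a wave from that direction would accumulate across the array, so energy adds coherently only in the reflector's direction.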
55 Coarse-to-fine Scan Larger aperture (more antennas) means finer resolution Used antennas
56 Coarse-to-fine Scan Reflectors Used antennas
57 Coarse-to-fine Scan Scanned region Used antennas
58 Coarse-to-fine Scan Scanned regions Used antennas
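The payoff of the coarse-to-fine scan follows from the rule of thumb that angular resolution scales like λ/D for aperture D: a small sub-array needs only a handful of wide beams to cover the scene, and the full array is then pointed only into sectors that showed power. A back-of-the-envelope sketch (all numbers illustrative):

```python
import numpy as np

wavelength = 0.1  # illustrative carrier wavelength (m)

def beamwidth_deg(n_antennas, spacing=None):
    """Approximate beam width of a uniform linear array: ~lambda / aperture."""
    if spacing is None:
        spacing = wavelength / 2
    aperture = n_antennas * spacing
    return np.rad2deg(wavelength / aperture)

# Coarse pass: small sub-array, wide beams, few directions to cover 180 degrees.
coarse = 180 / beamwidth_deg(4)
# Fine pass: full array, but only inside sectors where the coarse pass saw power.
fine_per_sector = beamwidth_deg(4) / beamwidth_deg(64)
print(coarse, fine_per_sector)
```

Here a 4-antenna coarse pass covers the scene in about 6 beams, and each occupied sector then needs only 16 fine beams from the 64-antenna array, far fewer than scanning every fine direction everywhere.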
59 Traditional Imaging vs RF Imaging: (1) cannot image through occlusions like walls vs walls are transparent, so RF can image through them; (2) forms 2D images using lenses vs no lenses at these frequencies; (3) gets a reflection from all points, so it can image the whole body vs no reflections from most points: all reflections are specular
60 Challenge: Don't get reflections from most points in RF Output of 3D RF Scan Blobs of reflection power
61 Challenge: Don't get reflections from most points in RF At frequencies that traverse walls, human body parts are specular (pure mirrors)
62 Antennas Challenge: Don't get reflections from most points in RF At frequencies that traverse walls, human body parts are specular (pure mirrors) Cannot Capture Reflection At every point in time, get reflections from only a subset of body parts
63 Solution Idea: Exploit Human Motion and Aggregate over Time RF Imaging Array
64 Solution Idea: Exploit Human Motion and Aggregate over Time RF Imaging Array
65 Human Walks toward Sensor 3m 2.5m 2m Chest (Largest Convex Reflector) Use it as a pivot: for motion compensation and segmentation
66 Human Walks toward Sensor 3m 2.5m 2m Right arm Head Left arm Feet Lower Torso Chest (Largest Convex Reflector) Use it as a pivot: for motion compensation and segmentation Combine the various snapshots
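The aggregation step can be sketched as follows: treat the strongest reflector in each snapshot as the chest, shift every snapshot so that pivot lands at a common location, and keep the per-pixel maximum so body parts visible in different snapshots accumulate. This is a toy 2D version with hypothetical heatmaps, not RF-Capture's actual motion compensation:

```python
import numpy as np

def align_and_aggregate(snapshots):
    """Use the strongest reflector (the chest) in each snapshot as a pivot:
    shift every snapshot so its peak lands at a common location, then
    combine with a per-pixel max to accumulate sporadic specular returns."""
    h, w = snapshots[0].shape
    ref = (h // 2, w // 2)                      # common pivot location
    out = np.zeros((h, w))
    for snap in snapshots:
        peak = np.unravel_index(np.argmax(snap), snap.shape)
        shifted = np.roll(snap, (ref[0] - peak[0], ref[1] - peak[1]), axis=(0, 1))
        out = np.maximum(out, shifted)
    return out

# Two toy snapshots: the chest (value 1.0) plus a different limb each time.
a = np.zeros((9, 9)); a[2, 2] = 1.0; a[4, 2] = 0.4   # chest + one arm
b = np.zeros((9, 9)); b[6, 6] = 1.0; b[8, 8] = 0.3   # chest moved + a leg
agg = align_and_aggregate([a, b])
print(agg[4, 4], agg[6, 4], agg[6, 6])               # 1.0 0.4 0.3
```

After alignment the chest, arm, and leg all appear in one figure even though no single snapshot contained them all, which is the point of exploiting motion over time.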
67 Human Walks toward Sensor
68 Implementation Hardware: 2D antenna array, custom-built RF circuit at 1/1,000 the power of WiFi, USB connection to PC (Rx, Tx). Software: coarse-to-fine algorithm implemented on the GPU to generate reflection snapshots in real time
69 Evaluation Sensor Setup: RF-Capture sensor placed behind the wall. 15 participants. Kinect used as a baseline when needed
70 Sample Captured Figures through Walls
71 Sample Captured Figures through Walls [figure: four panels, x- and y-axes in meters]
72 Tracking result
73 Writing in the air Kinect (in red) Median Accuracy is 2cm
74 Passive NLOS imaging using accidental pinholes
75 What does this image say about the world outside?
76 Accidental pinhole camera
77 Accidental pinhole camera: the window acts as an aperture; a window with a smaller gap projects a pattern on the wall that is an upside-down view of the outside
78 Accidental pinspeck camera
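The pinspeck idea can be verified with a 1D toy: the wall image behind a large aperture is the inverted scene summed over every open aperture cell (a blur); blocking one cell and subtracting leaves exactly the single-pinhole, inverted view. Scene values and aperture size below are arbitrary:

```python
import numpy as np

scene = np.array([0., 1., 3., 2., 0., 5., 1., 0.])   # hidden scene radiance

def wall_image(aperture):
    """Wall irradiance: each open aperture cell projects a shifted, inverted
    copy of the scene; the wall sees the sum over all open cells (a blur)."""
    flipped = scene[::-1]
    img = np.zeros(len(scene) + len(aperture) - 1)
    for i, open_cell in enumerate(aperture):
        if open_cell:
            img[i:i + len(scene)] += flipped
    return img

full = wall_image([1, 1, 1, 1, 1])         # unobstructed window: blurry wall
occluded = wall_image([1, 1, 0, 1, 1])     # small occluder blocks one cell
pinspeck = full - occluded                 # difference = one pinhole's view
print(np.allclose(pinspeck[2:2 + len(scene)], scene[::-1]))   # True
```

This is why an "anti-pinhole" works: subtracting the occluded frame from a reference frame isolates the contribution of the blocked cell, i.e. an inverted pinhole image of the hidden scene.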
79 Passive NLOS imaging using accidental reflectors
80 Ophthalmology Anatomy, physiology [Helmholtz 09; ] Computer Vision Iris recognition [Daugman 93; ] HCI Gaze as an interface [Bolt 82; ] Computer Graphics Vision-realistic Rendering [Barsky et al. 02; ]
81 Corneal Imaging System
82 Geometric Model of the Cornea [diagram: iris, pupil, sclera, cornea; R = 7.6 mm, 2.18 mm; eccentricity = mm]
83 Self-calibration: 3D Coordinates, 3D Orientation
84 Viewpoint Loci camera pupil X cornea Z
85 Viewpoint Loci camera pupil X cornea Z
86 Viewpoint Loci camera pupil cornea
87 Resolution and Field of View gaze direction X cornea Z
88 Resolution and Field of View resolution gaze direction X FOV cornea Z
89 Resolution and Field of View resolution gaze direction 27 person's FOV 45 X cornea Z FOV
90
91 Environment Map from an Eye
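The geometric core of reading an environment map off the cornea is the mirror-reflection law: the camera ray is reflected about the local corneal normal, and a small change in surface normal deflects the reflected ray by twice that angle, which is why the cornea's few-millimeter extent covers such a wide field of view. A minimal sketch (the 20-degree tilt is an arbitrary example, not a value from the paper):

```python
import numpy as np

def reflect(d, n):
    """Mirror-reflect incoming direction d about unit surface normal n."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

# Camera looks along +z at a corneal point whose normal tilts 20 degrees:
d = np.array([0.0, 0.0, 1.0])
n = np.array([np.sin(np.deg2rad(20)), 0.0, -np.cos(np.deg2rad(20))])
r = reflect(d, n)

# Angle between the reflected ray and the reverse camera axis: twice the tilt.
angle = float(np.rad2deg(np.arccos(np.dot(r, np.array([0.0, 0.0, -1.0])))))
print(angle)  # ~40
```

Tracing one such reflected ray per pixel of the eye region, using the ellipsoidal cornea model from the earlier slide for the normals, yields the environment map.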
92 What Exactly You are Looking At Eye Image: Computed Retinal Image:
93
94
95 Corneal Stereo System right cornea left cornea epipolar plane camera pupil epipolar curve
96 From Two Eyes in an Image [figure: X, Y, Z axes; epipolar curves; reconstructed structure, frontal and side views]
97 Eyes Reveal Where the person is What the person is looking at The structure of objects
98 Implications Human Affect Studies: Social Networks Security: Human Localization Advanced Interfaces: Robots, Computers Computer Graphics: Relighting [SIGGRAPH 04]
99 Dynamic Illumination in a Video
100 Point Source Direction from the Eye intensity
101 Point Source Trajectory from the Eye [plot: trajectory angles in degrees]
102 Inserting a Virtual Object
103 Sampling Appearance using Eyes
104 Sampling Appearance using Eyes
105 Computed Point Source Trajectory intensity
106 Fitting a Reflectance Model
107 3D Model Reconstruction
108 Relighting under Novel Illumination
109 VisualEyes™ with Akira Yanagawa
110 Passive NLOS imaging using corners
111-130 (image-only slides; no transcribed text)
131 References
Basic reading:
- Kirmani et al., Looking around the corner using ultrafast transient imaging, ICCV 2009 and IJCV.
- Velten et al., Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging, Nature Communications.
(the two papers showing how ToF imaging can be used for looking around the corner)
- Adib and Katabi, See Through Walls with Wi-Fi!, SIGCOMM.
- Adib et al., Capturing the Human Figure Through a Wall, SIGGRAPH Asia.
(the two papers showing that WiFi can be used to see through walls)
- Torralba and Freeman, Accidental Pinhole and Pinspeck Cameras, CVPR.
(the paper discussing passive NLOS imaging using accidental pinholes)
- Nishino and Nayar, Corneal Imaging System: Environment from Eyes, IJCV.
(the paper discussing passive NLOS imaging using accidental reflectors)
- Bouman et al., Turning corners into cameras: Principles and Methods, ICCV.
(the paper discussing passive NLOS imaging using corners)
Additional reading:
- Pediredla et al., Reconstructing rooms using photon echoes: A plane based model and reconstruction algorithm for looking around the corner, ICCP.
(the paper on NLOS room reconstruction using ToF imaging)
- Nishino and Nayar, Eyes for relighting, SIGGRAPH.
(a follow-up to the corneal imaging paper, showing how similar ideas can be used for relighting and other image-based rendering tasks)
More informationLight Field Occlusion Removal
Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationStructured light 3D reconstruction
Structured light 3D reconstruction Reconstruction pipeline and industrial applications rodola@dsi.unive.it 11/05/2010 3D Reconstruction 3D reconstruction is the process of capturing the shape and appearance
More informationIntroduction to 3D Machine Vision
Introduction to 3D Machine Vision 1 Many methods for 3D machine vision Use Triangulation (Geometry) to Determine the Depth of an Object By Different Methods: Single Line Laser Scan Stereo Triangulation
More informationBIL Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
More informationCOMP371 COMPUTER GRAPHICS
COMP371 COMPUTER GRAPHICS SESSION 15 RAY TRACING 1 Announcements Programming Assignment 3 out today - overview @ end of the class Ray Tracing 2 Lecture Overview Review of last class Ray Tracing 3 Local
More informationFundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision
Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching
More informationPhys 102 Lecture 17 Introduction to ray optics
Phys 102 Lecture 17 Introduction to ray optics 1 Physics 102 lectures on light Light as a wave Lecture 15 EM waves Lecture 16 Polarization Lecture 22 & 23 Interference & diffraction Light as a ray Lecture
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More information3D scanning. 3D scanning is a family of technologies created as a means of automatic measurement of geometric properties of objects.
Acquiring 3D shape 3D scanning 3D scanning is a family of technologies created as a means of automatic measurement of geometric properties of objects. The produced digital model is formed by geometric
More informationLaser Eye a new 3D sensor for active vision
Laser Eye a new 3D sensor for active vision Piotr Jasiobedzki1, Michael Jenkin2, Evangelos Milios2' Brian Down1, John Tsotsos1, Todd Campbell3 1 Dept. of Computer Science, University of Toronto Toronto,
More informationAdvanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation
Advanced Vision Guided Robotics David Bruce Engineering Manager FANUC America Corporation Traditional Vision vs. Vision based Robot Guidance Traditional Machine Vision Determine if a product passes or
More informationExperimental Humanities II. Eye-Tracking Methodology
Experimental Humanities II Eye-Tracking Methodology Course outline 22.3. Introduction to Eye-Tracking + Lab 1 (DEMO & stimuli creation) 29.3. Setup and Calibration for Quality Data Collection + Lab 2 (Calibration
More informationCS354 Computer Graphics Ray Tracing. Qixing Huang Januray 24th 2017
CS354 Computer Graphics Ray Tracing Qixing Huang Januray 24th 2017 Graphics Pipeline Elements of rendering Object Light Material Camera Geometric optics Modern theories of light treat it as both a wave
More informationBasic distinctions. Definitions. Epstein (1965) familiar size experiment. Distance, depth, and 3D shape cues. Distance, depth, and 3D shape cues
Distance, depth, and 3D shape cues Pictorial depth cues: familiar size, relative size, brightness, occlusion, shading and shadows, aerial/ atmospheric perspective, linear perspective, height within image,
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More informationLight, Photons, and MRI
Light, Photons, and MRI When light hits an object, some of it will be reflected. The reflected light can form an image. We usually want to be able to characterize the image given what we know about the
More informationCS 563 Advanced Topics in Computer Graphics Camera Models. by Kevin Kardian
CS 563 Advanced Topics in Computer Graphics Camera Models by Kevin Kardian Introduction Pinhole camera is insufficient Everything in perfect focus Less realistic Different camera models are possible Create
More informationVisual Perception Sensors
G. Glaser Visual Perception Sensors 1 / 27 MIN Faculty Department of Informatics Visual Perception Sensors Depth Determination Gerrit Glaser University of Hamburg Faculty of Mathematics, Informatics and
More informationActive Wearable Vision Sensor: Recognition of Human Activities and Environments
Active Wearable Vision Sensor: Recognition of Human Activities and Environments Kazuhiko Sumi Graduate School of Informatics Kyoto University Kyoto 606-8501, Japan sumi@vision.kuee.kyoto-u.ac.jp Masato
More informationCSE 4392/5369. Dr. Gian Luca Mariottini, Ph.D.
University of Texas at Arlington CSE 4392/5369 Introduction to Vision Sensing Dr. Gian Luca Mariottini, Ph.D. Department of Computer Science and Engineering University of Texas at Arlington WEB : http://ranger.uta.edu/~gianluca
More information(A) Electromagnetic. B) Mechanical. (C) Longitudinal. (D) None of these.
Downloaded from LIGHT 1.Light is a form of radiation. (A) Electromagnetic. B) Mechanical. (C) Longitudinal. 2.The wavelength of visible light is in the range: (A) 4 10-7 m to 8 10-7 m. (B) 4 10 7
More informationNew Sony DepthSense TM ToF Technology
ADVANCED MATERIAL HANDLING WITH New Sony DepthSense TM ToF Technology Jenson Chang Product Marketing November 7, 2018 1 3D SENSING APPLICATIONS Pick and Place Drones Collision Detection People Counting
More information