Range Imaging Through Triangulation


Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical axis and thereby moves the vertical line of light across the scene. Since only the horizontal positions of points vary and give us depth information, the vertical order of points is preserved. This allows us to compute the depth of each point along the vertical line without ambiguities.

Although this method is faster, it still requires a complete horizontal scan before a depth image is complete. Maybe we should use a pattern of many vertical lines that only needs to be shifted by the distance between neighboring lines? The disadvantage of this idea is that we could confuse points in different vertical lines, i.e., associate points with incorrect projection angles. However, we can overcome this problem by taking multiple images of the same scene with the pattern in the same position. In each picture, a different subset of lines is projected. Each line can then be uniquely identified by its pattern of presence/absence across the images. For example, for 7 vertical lines we need a series of 3 images to do this encoding:

          Line #1  Line #2  Line #3  Line #4  Line #5  Line #6  Line #7
Image a   off      off      off      on       on       on       on
Image b   off      on       on       off      off      on       on
Image c   on       off      on       off      on       off      on

Obviously, with this technique we can encode up to n - 1 lines using log2(n) images. Therefore, this method is more efficient than the single-line scanning technique.
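To make this coding scheme concrete, here is a minimal sketch in Python (not part of the original lecture; the function names are illustrative). It generates the on/off pattern of each projected image and decodes a pixel's observed on/off sequence back into the number of the line that illuminated it.

```python
import numpy as np

def line_patterns(num_images):
    """For each of the 2**num_images - 1 lines, decide in which images it is 'on'.

    Line k (1-based) is encoded by the binary representation of k,
    one bit per projected image (most significant bit = first image).
    """
    num_lines = 2 ** num_images - 1
    patterns = np.zeros((num_images, num_lines), dtype=bool)
    for line in range(1, num_lines + 1):
        for img in range(num_images):
            bit = (line >> (num_images - 1 - img)) & 1
            patterns[img, line - 1] = bool(bit)
    return patterns

def decode_line(on_off_sequence):
    """Recover the line index (1-based) from a pixel's on/off observations."""
    line = 0
    for observed_on in on_off_sequence:
        line = (line << 1) | int(observed_on)
    return line  # 0 means the pixel was never illuminated by any line

# Example: 3 images encode 7 lines, reproducing the table above.
patterns = line_patterns(3)
print(patterns.astype(int))              # rows = images a, b, c; columns = lines 1..7
print(decode_line([True, False, True]))  # -> 5 (on in images a and c)
```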

The Microsoft Kinect V1 System

The Microsoft Kinect V2 System

The Kinect V2 uses a time-of-flight camera to measure depth. It contains a broad IR illuminator whose light is intensity modulated. The reflected light is projected through a lens onto an array of IR light sensors. For each sensor element, the phase shift between the reflected light and the currently emitted light is computed.

The phase shift is proportional to the distance that the light traveled and can thus be used to compute depth across the received image. Note that the measurable depth range is limited by the modulation wavelength: the system cannot decide, for example, whether 0.3 or 1.3 cycles have passed. The advantage of time-of-flight cameras is that they do not need an offset between illuminator and camera or any stereo-matching algorithm. A short sketch of this phase-to-depth computation is given after the motion analysis overview below.

Now we will talk about Motion Analysis.

Motion analysis

Motion analysis deals with three main groups of motion-related problems:
- Motion detection
- Moving object detection and location
- Derivation of 3D object properties

Motion analysis and object tracking combine two separate but inter-related components:
- Localization and representation of the object of interest (target)
- Trajectory filtering and data association
One or the other may be more important depending on the nature of the motion application.
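Returning to the time-of-flight principle described above, the following is a minimal sketch (in Python; the 80 MHz modulation frequency and the function names are illustrative assumptions, not the actual Kinect V2 parameters) of how a measured phase shift maps to depth, and why the measurable range is limited by the modulation wavelength.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def phase_to_depth(phase_shift, modulation_freq_hz):
    """Convert a measured phase shift (radians) into depth (meters).

    The emitted intensity is modulated with frequency f. Light travelling to the
    object and back covers 2*d, so the phase shift is phi = 2*pi*f * (2*d) / c,
    which gives d = c * phi / (4*pi*f).
    """
    return C * phase_shift / (4.0 * np.pi * modulation_freq_hz)

def unambiguous_range(modulation_freq_hz):
    """Maximum depth before the phase wraps around (phi = 2*pi)."""
    return C / (2.0 * modulation_freq_hz)

# Example: an 80 MHz modulation gives an unambiguous range of about 1.87 m,
# so 0.3 cycles and 1.3 cycles of phase produce the same sensor reading.
print(unambiguous_range(80e6))                 # ~1.87 m
print(phase_to_depth(0.3 * 2 * np.pi, 80e6))   # ~0.56 m
```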

A simple method for motion detection is the subtraction of two or more images in a given image sequence. Usually, this method results in a difference image d(i, j), in which non-zero values indicate areas with motion. For given images f_1 and f_2, d(i, j) can be computed as follows:

d(i, j) = 0 if |f_1(i, j) - f_2(i, j)| ≤ ε, and d(i, j) = 1 otherwise,

where ε is a positive threshold on the intensity difference.

Another example of a difference picture that indicates the motion of objects (ε = 25).

Applying a size filter (size 10) to remove noise from a difference picture.

The differential method can rather easily be tricked. Here, the indicated changes were induced by changes in the illumination instead of object or camera motion (again, ε = 25).

In order to determine the direction of motion, we can compute the cumulative difference image for a sequence f_1, ..., f_n of more than two images:

d_cum(i, j) = sum over k = 2, ..., n of a_k * |f_1(i, j) - f_k(i, j)|

Here, f_1 is used as the reference image, and the weight coefficients a_k can be used to give greater weight to more recent frames and thereby highlight the current object positions.
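A minimal sketch of both computations (Python/NumPy; the threshold value is illustrative, and the code is not from the lecture):

```python
import numpy as np

def difference_picture(f1, f2, eps=25):
    """Binary difference picture: 1 wherever the intensity change exceeds eps."""
    return (np.abs(f1.astype(int) - f2.astype(int)) > eps).astype(np.uint8)

def cumulative_difference(frames, weights):
    """Weighted cumulative difference image d_cum = sum_k a_k * |f_1 - f_k|.

    frames[0] is the reference image f_1; weights[k-2] is the coefficient a_k
    for frame f_k (k = 2..n). Larger weights for later frames highlight the
    most recent object positions.
    """
    ref = frames[0].astype(int)
    d_cum = np.zeros(ref.shape, dtype=int)
    for frame, a_k in zip(frames[1:], weights):
        d_cum += a_k * np.abs(ref - frame.astype(int))
    return d_cum
```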

Cumulative Difference Image Example

A sequence of four images is combined with the weight coefficients a_2 = 1, a_3 = 2, and a_4 = 4. In the resulting cumulative difference image, a pixel that differs from the reference image f_1 in all three later frames accumulates the value 1 + 2 + 4 = 7, a pixel that differs from f_1 only in f_3 and f_4 accumulates 6, and a pixel that differs only in f_4 accumulates 4.

Generally speaking, while differential motion analysis is well-suited for motion detection, it is not ideal for the analysis of motion characteristics.

Optical Flow

Optical flow reflects the image changes due to motion during a time interval dt, which must be short enough to guarantee small inter-frame motion changes. The optical flow field is the velocity field that represents the three-dimensional motion of object points across a two-dimensional image.

Optical flow computation is based on two assumptions:
- The observed brightness of any object point is constant over time.
- Nearby points in the image plane move in a similar manner (the velocity smoothness constraint).

The basic idea underlying most algorithms for optical flow computation is to regard the image sequence as a three-dimensional (x, y, t) space and to determine the x- and y-slopes of equal-brightness pixels along the t-axis. The computation of actual 3D gradients is usually quite complex and requires substantial computational power for real-time applications.

Let us consider the two-dimensional case (one spatial dimension x and the temporal dimension t), with an object moving to the right: in the resulting (x, t) diagram, pixels of equal brightness trace slanted lines whose slope corresponds to the object's velocity.
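To make the brightness constancy assumption concrete, here is a deliberately simplified one-dimensional sketch (Python/NumPy; an illustrative toy example rather than one of the gradient or line-fitting methods discussed in the lecture). It recovers a single velocity from the spatial and temporal derivatives via the relation I_t ≈ -v * I_x, and returns zero where the flow is undefined because the region is homogeneous.

```python
import numpy as np

def estimate_velocity_1d(frame_prev, frame_next, dt=1.0):
    """Estimate a single 1D velocity from two consecutive scanlines.

    Brightness constancy in 1D, I(x, t) = I(x - v*t), gives I_t = -v * I_x,
    so v is recovered as a least-squares fit of -I_t against I_x.
    """
    ix = np.gradient((frame_prev + frame_next) / 2.0)  # spatial derivative
    it = (frame_next - frame_prev) / dt                # temporal derivative
    denom = np.sum(ix * ix)
    if denom == 0:
        return 0.0  # homogeneous region: flow is undefined
    return -np.sum(ix * it) / denom

# Example: a smooth intensity bump shifted right by 2 pixels between frames.
x = np.arange(100, dtype=float)
frame0 = np.exp(-((x - 40) ** 2) / 50.0)
frame1 = np.exp(-((x - 42) ** 2) / 50.0)
print(estimate_velocity_1d(frame0, frame1))  # roughly +2 pixels per frame
```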

Optical Flow

Instead of using the gradient methods, one can simply determine those straight lines in the (x, t) diagram with a minimum of variation (standard deviation) in intensity along them; in homogeneous areas, where no such lines can be distinguished, the flow remains undefined.

Optical flow computation will be in error if the constant brightness and velocity smoothness assumptions are violated. In real imagery, their violation is quite common. Typically, the optical flow changes dramatically in highly textured regions, around moving boundaries, at depth discontinuities, etc. Resulting errors propagate across the entire optical flow solution.

Global error propagation is the biggest problem of global optical flow computation schemes, and local optical flow estimation helps overcome the difficulties. However, local flow estimation can introduce large errors in homogeneous areas, i.e., regions of constant intensity. One solution to this problem is to assign confidence values to local flow estimates and take them into account when integrating local and global optical flow.

Block-Matching Motion Estimation

A simple method for deriving optical flow vectors is block matching (a minimal sketch is given at the end of this section). The current video frame is divided into a large number of squares (blocks). For each block, we find the same-sized area in the following frame(s) with the greatest intensity correlation to it. The spatiotemporal offset between the original block and its best match in the following frame(s) indicates a likely motion vector. The search can be limited by imposing a maximum velocity threshold.

Feature Point Correspondence

Feature point correspondence is another method for motion field construction. Velocity vectors are determined only for corresponding feature points. Object motion parameters can then be derived from the computed motion field vectors. Motion assumptions can help to localize moving objects. Frequently used assumptions, as discussed above, include:
- Maximum velocity
- Small acceleration
- Common motion

The idea is to find significant points (interest points, feature points) in all images of the sequence, i.e., points least similar to their surroundings, representing object corners, borders, or any other characteristic features that can be tracked over time. Basically the same measures as for stereo matching can be used. Point detection is followed by a matching procedure, which looks for correspondences between these points over time. The main difference to stereo matching is that we cannot simply search along an epipolar line; instead, the search area is defined by our motion assumptions. The process results in a sparse velocity field. Motion detection based on correspondence works even for relatively long inter-frame time intervals.
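As an illustration of the block-matching scheme described above, here is a minimal sketch (Python/NumPy; it uses the sum of absolute differences as a simple stand-in for the intensity-correlation measure mentioned in the lecture, and the block size and search range are illustrative).

```python
import numpy as np

def block_matching(frame1, frame2, block=8, max_disp=7):
    """Estimate one motion vector per block of frame1 by searching frame2.

    For every block in frame1, the best-matching same-sized area in frame2 is
    found within +/- max_disp pixels (the 'maximum velocity' constraint).
    Sum of absolute differences (SAD) is used as the matching cost; a smaller
    SAD means a better match.
    """
    h, w = frame1.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = frame1[y:y + block, x:x + block].astype(int)
            best = (None, 0, 0)
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = frame2[yy:yy + block, xx:xx + block].astype(int)
                    sad = np.abs(ref - cand).sum()
                    if best[0] is None or sad < best[0]:
                        best = (sad, dy, dx)
            flow[by, bx] = (best[1], best[2])
    return flow  # flow[by, bx] = (dy, dx) motion vector for that block
```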