User Interface Engineering HS 2013


1 User Interface Engineering HS 2013 Augmented Reality Part I Introduction, Definitions, Application Areas ETH Zürich Departement Computer Science User Interface Engineering HS 2013 Prof. Dr. Otmar Hilliges

2 Outline Introduction to Augmented Reality Definition and brief history of AR and VR Mobile Augmented Reality Camera and object tracking Marker based Natural feature tracking Visual SLAM Dense Reconstruction

3 What is Augmented Reality?

4

5 Augmented Reality Definition Defining Characteristics [Azuma 97] Combines Real and Virtual Information Both can be perceived at the same time Interactive in real-time The virtual content can be interacted with Registered in 3D Virtual objects appear fixed in space

6 Milgram's Reality-Virtuality (RV) Continuum: Real Environment → Augmented Reality (AR) → Augmented Virtuality (AV) → Virtual Environment. Everything between the two extremes is Mixed Reality.

7 A Brief History of AR and VR

8 The Sword of Damocles, Ivan Sutherland. "The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would kill." - The Ultimate Display, Ivan Sutherland

9

10 Virtual Reality Immersive VR Head mounted display, gloves Separation from the real world

11 VR Today $3-5 Billion VR business (+ > $150 Billion Graphics Industry) Visualization, simulation, gaming, CAD/CAE, multimedia, graphics arts

12 AR vs VR Virtual Reality: Replaces Reality Scene Generation: requires realistic images Display Device: fully immersive, wide FOV Tracking and Sensing: low accuracy is okay Augmented Reality: Enhances Reality Scene Generation: minimal rendering okay Display Device: non-immersive, small FOV Tracking and Sensing: high accuracy needed

13 A Brief History of AR (1) 1960s-80s: US Air Force Super Cockpit (T. Furness)

14 A Brief History of AR (2) Early 1990s: Boeing coined the term AR. Wire harness assembly application began (T. Caudell, D. Mizell).

15 A Brief History of AR (3) 1994: Motion stabilized display [Azuma] 1995: Fiducial tracking in video see-through [Bajura / Neumann] 1996: UNC hybrid magnetic-vision tracker

16 A Brief History of AR (4) 1996: MIT Wearable Computing efforts 1998: Dedicated conferences begin Late 90s: Collaboration, outdoor, interaction Late 90s: Augmented sports broadcasts

17 Spatial Augmented Reality: Merging Real and Virtual Worlds. O. Bimber, R. Raskar Augmented Reality - Technologies

18 Mobile Augmented Reality

19 NaviCam (Rekimoto, 1995) Information is registered to real-world context Handheld AR displays Interaction Manipulation of a window into information space Applications Context-aware information displays

20 Backpack/Wearable AR 1997 Backpack AR: Feiner's Touring Machine, AR Quake (Thomas), Tinmith (Piekarski), MCAR (Reitmayr). Bulky, HMD based

21 Mobile AR: Touring Machine (1997) Columbia University Feiner, MacIntyre, Höllerer, Webster Combines See through head mounted display GPS tracking Orientation sensor Backpack PC (custom) Tablet input

22 MARS View Virtual tags overlaid on the real world Information in place

23 1997: Philippe Kahn invents the camera phone. 1999: First commercial camera phone. Sharp J-SH04

24 Millions of Camera Phones [chart: DSC vs. camera-phone sales]

25 Mobile AR by Weight Backpack+HMD: 5-8kg. Scale it down: Vesp'R [Kruijff, ISMAR'07]: Sony UMPC, 1.1GHz, 1.5kg, still >$5K. Scale it down more: smartphone: $500, all-in-one, 0.1kg, billions of units

26 Hardware Sensors Camera (8MP, 30Hz): marker-based/markerless tracking, video overlay. GPS (1-10m, 1-2Hz): outdoor location. Compass: indoor/outdoor orientation. Accelerometer: motion sensing, relative tilt.

27

28 Augmented Reality Main Challenges Tracking Visual Coherency Scene Reconstruction

29 Tracking: Pose estimation & Object Recognition

30 Tracking is Estimating the device's pose (position and orientation) Strictly in real time (30Hz) With high spatial precision (1cm, 1 degree) Robustly for operation by human user No unrealistic assumptions about HW Leaving enough power to other tasks (interaction, graphics)

31 ARToolKit Tracking (Kato) ARToolKit - Computer vision based marker tracking libraries

32 Marker Tracking Overview: Camera Image → Fiducial Detection → Contours → Rectangle Fitting → Rectangles → Lens Undistortion → Undistorted Corners → Pattern Checking → Identified Markers → Pose Estimation → Estimated Poses

33 Marker Tracking Fiducial Detection Threshold the whole image to black and white Search scanline by scanline for edges (white to black) Follow the edge until either back at the starting pixel or at the image border Check for size: reject candidates early that are too small (or too large)
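A minimal Python sketch of this detection stage, covering the scanline search and the early size rejection (function names, threshold values, and the 4-connected flood fill are my own, not ARToolKit's implementation):

```python
def detect_black_regions(img, threshold=128, min_size=4, max_size=10_000):
    """Binarize, then collect connected black regions found via scanline
    white->black transitions; reject regions that are too small or too large."""
    h, w = len(img), len(img[0])
    binary = [[1 if px >= threshold else 0 for px in row] for row in img]  # 1 = white
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(1, w):
            # a white-to-black edge on this scanline starts a candidate region
            if binary[y][x - 1] == 1 and binary[y][x] == 0 and not seen[y][x]:
                stack, region = [(y, x)], []
                seen[y][x] = True
                while stack:  # flood-fill the black blob
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and binary[ny][nx] == 0:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_size <= len(region) <= max_size:  # early size rejection
                    regions.append(region)
    return regions
```

The flood fill stands in for the edge-following step of the slide; both yield the candidate blob whose contour is then passed to rectangle fitting.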

34 Marker Tracking Rectangle Fitting Start with an arbitrary point x on the contour The point with maximum distance from x must be a corner c 0 Draw a line through c 0 and the center Find points c 1 & c 2 with maximum distance left and right of this diagonal New diagonal from c 1 to c 2 Find point c 3 right of the diagonal with maximum distance
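The corner-finding steps above can be written down directly; a pure-Python illustration, where the contour is a list of (x, y) points (helper names are mine, not from the lecture):

```python
import math

def fit_rectangle(contour):
    """Find the 4 corners of a roughly rectangular contour (see slide steps)."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    center = (cx, cy)
    x0 = contour[0]  # arbitrary start point
    # c0: point with maximum distance from the start point must be a corner
    c0 = max(contour, key=lambda p: (p[0]-x0[0])**2 + (p[1]-x0[1])**2)

    def side(a, b, p):  # signed area: which side of line a->b point p lies on
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

    def dist_to_line(a, b, p):
        return abs(side(a, b, p)) / math.hypot(b[0]-a[0], b[1]-a[1])

    left = [p for p in contour if side(c0, center, p) > 0]
    right = [p for p in contour if side(c0, center, p) < 0]
    # c1, c2: maximum distance left and right of the diagonal c0-center
    c1 = max(left, key=lambda p: dist_to_line(c0, center, p))
    c2 = max(right, key=lambda p: dist_to_line(c0, center, p))
    # c3: maximum distance from the new diagonal c1-c2, on the side opposite c0
    s0 = side(c1, c2, c0)
    opposite = [p for p in contour if side(c1, c2, p) * s0 < 0]
    c3 = max(opposite, key=lambda p: dist_to_line(c1, c2, p))
    return c0, c1, c2, c3
```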

35 Marker Tracking Pattern Checking Calculate the homography using the 4 corner points (Direct Linear Transform algorithm) Maps normalized coordinates to marker coordinates (simple perspective projection, no camera model) Extract the pattern by sampling Check the pattern: Id (implicit encoding) or template (normalized cross correlation)
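The homography step is a four-point Direct Linear Transform. A self-contained sketch of the inhomogeneous form with h33 fixed to 1 (which assumes h33 ≠ 0; no camera model, matching the slide; names are illustrative):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_dlt(src, dst):
    """4-point DLT: two equations per correspondence, 8 unknowns (h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    """Map a point through the homography (perspective divide included)."""
    x, y = p
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return ((H[0][0]*x + H[0][1]*y + H[0][2]) / w,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / w)
```

Sampling the marker pattern then amounts to mapping a grid of normalized coordinates through `apply_h` and reading the image at the resulting positions.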

36 Marker tracking Pose estimation Calculates marker position and rotation relative to the camera Initial estimate directly from the homography Very fast, but coarse Jitters a lot Refinement via Gauss-Newton iteration or Levenberg-Marquardt 6 parameters (3 for position, 3 for rotation) to refine At each iteration Calculate the re-projection error ε Calculate the Jacobian matrix J (matrix of all first-order partial derivatives) Solve the equation (JᵀJ + λD)Δ = Jᵀε for Δ (e.g. using Cholesky factorization) Add Δ to the pose Quit if accurate enough or if max. steps reached
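The refinement loop above as a generic Levenberg-Marquardt sketch with a numeric Jacobian. It is shown on a toy two-parameter fit rather than a full 6-DOF pose, since the update has the same structure; all names are illustrative, and the residual here is defined as predicted minus observed (which flips the sign on Jᵀε relative to the slide):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (stand-in for Cholesky)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lm_refine(residual, params, n_iter=50, lam=1e-3, h=1e-6):
    """Levenberg-Marquardt with a forward-difference Jacobian."""
    p = list(params)
    for _ in range(n_iter):
        eps = residual(p)                       # re-projection error vector
        J = []                                  # J[j][k] = d eps_k / d p_j
        for j in range(len(p)):
            q = p[:]; q[j] += h
            J.append([(a - b) / h for a, b in zip(residual(q), eps)])
        n = len(p)
        # normal equations: (J^T J + lambda*I) delta = -J^T eps
        JTJ = [[sum(J[i][k] * J[j][k] for k in range(len(eps)))
                + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
        JTe = [-sum(J[i][k] * e for k, e in enumerate(eps)) for i in range(n)]
        delta = solve(JTJ, JTe)
        p = [pi + di for pi, di in zip(p, delta)]  # add delta to the pose
        if max(abs(d) for d in delta) < 1e-12:     # accurate enough: quit
            break
    return p
```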

37 Marker Tracking Pipeline Goal: Do all this in less than 20 milliseconds on a mobile phone

38

39 Natural Feature Tracking

40 Markerless Tracking

41 Natural feature tracking Tracking from features of the surrounding environment Corners, edges, blobs,... Generally more difficult than marker tracking Markers are designed for their purpose The natural environment is not Less well-established methods Every year new ideas are proposed Usually much slower than marker tracking

42 Tracking by detection This is what most trackers do Targets are detected every frame Popular because tracking and detection are solved simultaneously Pipeline: Camera Image → Keypoint detection → Descriptor creation and matching (recognition) → Outlier removal → Pose estimation and refinement → Pose

43 Natural feature tracking What is a keypoint? It depends on the detector you use! For high performance use the FAST corner detector Apply FAST to all pixels of your image to obtain a set of keypoints Reduce the number of corners using non-maximum suppression Describe the keypoints E. Rosten and T. Drummond (May 2006). "Machine learning for high-speed corner detection".
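A simplified segment-test version of FAST with non-maximum suppression, for illustration only: the published detector uses a machine-learned decision tree plus a high-speed test, and the threshold values here are arbitrary:

```python
# 16-pixel Bresenham circle of radius 3 around the candidate pixel
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """Corner if n contiguous circle pixels are all brighter than c+t
    or all darker than c-t (wraparound handled by doubling the ring)."""
    c = img[y][x]
    labels = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        labels.append(1 if p > c + t else (-1 if p < c - t else 0))
    ring = labels + labels
    for sign in (1, -1):
        run = 0
        for l in ring:
            run = run + 1 if l == sign else 0
            if run >= n:
                return True
    return False

def fast_score(img, x, y, t=20):
    """Simple corner score: sum of absolute differences on the ring."""
    c = img[y][x]
    return sum(abs(img[y + dy][x + dx] - c) for dx, dy in CIRCLE)

def detect(img, t=20, n=9):
    h, w = len(img), len(img[0])
    kps = [(x, y) for y in range(3, h - 3) for x in range(3, w - 3)
           if is_fast_corner(img, x, y, t, n)]
    # non-maximum suppression: keep a keypoint only if its score is at least
    # that of every detected neighbour in its 3x3 window
    scores = {p: fast_score(img, *p, t) for p in kps}
    return [p for p in kps
            if all(scores[p] >= scores.get((p[0] + dx, p[1] + dy), -1)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))]
```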

44 FAST Corner detector

45 Natural feature tracking Descriptors Again depends on your choice of descriptor! Can use SIFT Estimate the dominant keypoint orientation using gradients Compensate for the detected orientation Describe the keypoint in terms of the gradients surrounding it Wagner D., Reitmayr G., Mulloni A., Drummond T., Schmalstieg D., Real-Time Detection and Tracking for Augmented Reality on Mobile Phones. IEEE Transactions on Visualization and Computer Graphics, May/June, 2010

46 NFT Database creation Offline step Searching for corners in a static image For robustness look at corners on multiple scales Some corners are more descriptive at larger or smaller scales We don't know how far users will be from our image Build a database file with all descriptors and their position on the original image

47 NFT Real-time tracking Search for keypoints in the video image Create the descriptors Match the descriptors from the live video against those in the database Remove the keypoints that are outliers Use the remaining keypoints to calculate the pose of the camera Pipeline: Camera Image → Keypoint detection → Descriptor creation and matching (recognition) → Outlier removal → Pose estimation and refinement → Pose
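The outlier-removal step is typically done with RANSAC. A minimal sketch using a translation-only motion model for clarity; a real NFT pipeline fits a homography or full 6-DOF pose from each sample, and all names here are mine:

```python
import random

def ransac_translation(matches, iters=200, inlier_thresh=3.0, seed=0):
    """matches: list of ((ax, ay), (bx, by)) keypoint correspondences.
    Returns the largest consensus set under a pure-translation model."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (ax, ay), (bx, by) = rng.choice(matches)   # minimal sample: 1 match
        tx, ty = bx - ax, by - ay                  # hypothesised translation
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) < inlier_thresh
                   and abs(m[1][1] - m[0][1] - ty) < inlier_thresh]
        if len(inliers) > len(best_inliers):       # keep the best hypothesis
            best_inliers = inliers
    return best_inliers
```

The surviving inliers are then handed to the pose estimation and refinement stage.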

48 NFT Results Wagner D., Reitmayr G., Mulloni A., Drummond T., Schmalstieg D., Real-Time Detection and Tracking for Augmented Reality on Mobile Phones. IEEE Transactions on Visualization and Computer Graphics, May/June, 2010

49 15 Minute Break

50 User Interface Engineering HS 2013 Augmented Reality Part II Modern Approaches to Augmented Reality

51 Outline Introduction to Augmented Reality Definition and brief history Mobile Augmented Reality Camera and object tracking Marker based Natural feature tracking Visual SLAM Dense Reconstruction

52 Informed vs. uninformed tracking Informed tracking Requires knowing the environment Requires storing a large database of information Users can point the phone at anything described in the database Allows for adding semantic information to the database E.g., where is the ground plane? Uninformed tracking Works also for unknown environments Requires creating a database of keypoints on the fly Prone to drift Prone to corruption of the database User must move smoothly to build the database incrementally

53 NFT in unknown environments We can also build a 3D database of keypoints Georg Klein and David Murray Parallel Tracking and Mapping for Small AR Workspaces In Proc. International Symposium on Mixed and Augmented Reality (ISMAR'07)

54 Frame by Frame SLAM Why is SLAM fundamentally hard? Each frame must: find features → update camera pose and entire map (many DOF) → draw graphics

55 Frame by Frame SLAM Updating the entire map every frame is expensive! "Needs sparse map of high-quality features" - A. Davison Proposed approach: Use a dense map (of low quality features) Don't update the map every frame: keyframes Split the tracking and mapping into two threads

56 Parallel Tracking and Mapping Thread #1 (Tracking), once per frame: find features → update camera pose only (fast & robust) → draw graphics. Thread #2 (Mapping), in the background: update map.

57 Parallel Tracking and Mapping Tracking thread: Responsible for estimating the camera pose and rendering augmented graphics Must run at 30 Hz Make as robust and accurate as possible Mapping thread: Responsible for providing the map Can take lots of time per key frame Make as rich and accurate as possible

58 Tracking thread: Pre-process frame → project points (from the map) → match points → update camera pose (coarse stage) → project points → match points → update camera pose (fine stage) → draw graphics

59 Pre-process frame Image pyramid with four levels, coarse to fine: 640x480, 320x240, 160x120, 80x60

60 Pre-process frame Image pyramid with four levels, coarse to fine: 640x480, 320x240, 160x120, 80x60 Detect FAST corners E. Rosten et al (ECCV 2006)
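Building the four-level pyramid is repeated 2x downsampling; a sketch using plain 2x2 averaging (PTAM's exact filtering may differ):

```python
def half(img):
    """Downsample by 2 in each dimension via 2x2 block averaging."""
    return [[(img[2*y][2*x] + img[2*y][2*x + 1] +
              img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def pyramid(img, levels=4):
    """Four-level coarse-to-fine pyramid: full, 1/2, 1/4, 1/8 resolution."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(half(pyr[-1]))
    return pyr
```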

61 Project Points Use a motion model to update the camera pose Constant velocity model: V_t = (P_t - P_(t-1)) / Δt, P_(t+1) = P_t + Δt · V_t, estimating the current pose P_(t+1) from the previous poses P_t and P_(t-1)
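The constant-velocity prediction, directly in code; positions are (x, y, z) tuples here, while the real tracker predicts a full 6-DOF pose with rotation handled analogously:

```python
def predict(p_t, p_prev, dt):
    """Constant-velocity motion model: extrapolate the next position."""
    v = tuple((a - b) / dt for a, b in zip(p_t, p_prev))  # V_t = (P_t - P_(t-1)) / dt
    return tuple(a + dt * vi for a, vi in zip(p_t, v))    # P_(t+1) = P_t + dt * V_t
```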

62 Project Points Choose a subset to measure: ~50 features for the coarse stage (highest score), 1000 randomly selected for the fine stage

63 Match Points Generate an 8x8 matching template (warped from the source key-frame map) Search a fixed radius around the projected position Use zero-mean SSD Only search at FAST corner points
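Zero-mean SSD between the 8x8 template and a candidate patch, as a pure-Python sketch (helper names are mine; subtracting each patch's mean makes the score invariant to a brightness offset):

```python
def zssd(template, patch):
    """Zero-mean sum of squared differences between two equal-size patches."""
    n = len(template) * len(template[0])
    mt = sum(sum(r) for r in template) / n
    mp = sum(sum(r) for r in patch) / n
    return sum(((template[y][x] - mt) - (patch[y][x] - mp)) ** 2
               for y in range(len(template)) for x in range(len(template[0])))

def best_match(img, template, candidates):
    """Pick the candidate corner (x, y) whose 8x8 patch minimises ZSSD."""
    def patch_at(x, y):
        return [row[x:x + 8] for row in img[y:y + 8]]
    return min(candidates, key=lambda c: zssd(template, patch_at(*c)))
```

In the tracker, `candidates` would be the FAST corners within the fixed search radius around the projected map point.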

64 Update camera pose 6-DOF problem Obtain by SFM (Three-point algorithm)?

65 Draw graphics What can we draw in an unknown scene? Assume single plane visible at start Run VR simulation on the plane

66 Mapping thread: Stereo initialization → wait for new key frame (supplied by the tracker) → add new map points → optimize map → map maintenance

67 Stereo Initialization Use the five-point pose algorithm [D. Nistér et al.] Requires a pair of frames and feature correspondences Provides the initial map User input required: Two clicks for two key-frames Smooth motion for feature correspondence

68 Wait for new key frame Key frames are only added if: There is a sufficient baseline to the other key frame Tracking quality is good When a key frame is added: The mapping thread stops whatever it is doing All points in the map are measured in the key frame New map points are found and added to the map

69 Add new map points Want as many map points as possible Check all FAST corners in the key frame: Check score Check if already in map Epipolar search in a neighboring key frame Triangulate matches and add to map Repeat in four image pyramid levels
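Triangulating one accepted match can be sketched as a linear (DLT-style) solve from the two 3x4 camera projection matrices; an illustrative inhomogeneous version solved with Cramer's rule (names are mine, not PTAM's code):

```python
def solve3(M, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    out = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = b[r]
        out.append(det(Mi) / D)
    return out

def triangulate(P1, P2, pt1, pt2):
    """Linear triangulation: each view gives two rows u*P[2] - P[0], v*P[2] - P[1];
    solve the over-determined system in least squares via normal equations."""
    rows = []
    for P, (u, v) in ((P1, pt1), (P2, pt2)):
        rows.append([u * P[2][j] - P[0][j] for j in range(4)])
        rows.append([v * P[2][j] - P[1][j] for j in range(4)])
    A = [r[:3] for r in rows]           # unknowns X, Y, Z (W fixed to 1)
    b = [-r[3] for r in rows]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(4)) for j in range(3)]
           for i in range(3)]
    Atb = [sum(A[k][i] * b[k] for k in range(4)) for i in range(3)]
    return solve3(AtA, Atb)
```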

70 Optimize map Use batch SFM method: Bundle Adjustment Adjusts map point positions and key frame poses Minimize reprojection error of all points in all keyframes (or use only last N key frames)

71 Map maintenance When the camera is not exploring, the mapping thread has idle time Data association in bundle adjustment is reversible Re-attempt outlier measurements Try to measure new map features in all old key frames

72 Mapping Thread Summary

73 Examples & Limitations

74 [Izadi, Kim, Hilliges, Molyneaux, Newcombe, Kohli, Shotton, Hodges, Freeman, Davison, Fitzgibbon. UIST 11] [Newcombe, Izadi, Hilliges, Molyneaux, Kim, Davison, Kohli, Shotton, Hodges, Fitzgibbon. ISMAR 11 (Best Paper Award)]

75 Core Components

76 ICP for pose estimation At time k: find a single 6DOF transform T that aligns the points V_k and normals N_k with the previous frame (V_(k-1), N_(k-1)) Projective data association: project the current oriented points (V_k, N_k) using the global transform T_(k-1) Correspondences along the ray Euclidean and normal compatibility Minimise the point-plane error metric: argmin_T Σ_(x,y) ((T v_k(x,y) - v_(k-1)(x,y)) · n_(k-1)(x,y))²
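One linearised solver step for the point-plane metric: for small motion, T ≈ (I + [ω]×, t), which turns the metric into a linear least-squares problem in the six unknowns (ω, t). This is a common formulation of point-to-plane ICP; the names and the normal-equation solver below are illustrative, not KinectFusion's GPU code:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def icp_point_plane_step(ps, qs, ns):
    """Minimise sum(((p + omega x p + t - q) . n)^2) over x = (omega, t):
    each correspondence contributes the row [(p x n), n] and rhs -(p - q).n."""
    AtA = [[0.0] * 6 for _ in range(6)]
    Atb = [0.0] * 6
    for p, q, n in zip(ps, qs, ns):
        row = list(cross(p, n)) + list(n)
        b = -dot((p[0] - q[0], p[1] - q[1], p[2] - q[2]), n)
        for i in range(6):
            Atb[i] += row[i] * b
            for j in range(6):
                AtA[i][j] += row[i] * row[j]
    return gauss_solve(AtA, Atb)   # (wx, wy, wz, tx, ty, tz)
```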

77 Tracking from the dense model

78 GPU Volumetric representation Voxel grid of truncated signed distance measurements (Curless & Levoy, SIGGRAPH '96), capturing the uncertainty of the range data Projective update: each voxel's global point p is projected into the depth map; the TSDF value is the signed difference between the measured depth at that projection and the voxel's distance from the camera centre, truncated Weighted average across frames Raycast extracts the implicit surface
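The projective TSDF update for a single voxel, sketched under the convention that the signed distance is positive in free space. This is a simplification of the KinectFusion volume update (reduced to one camera ray; the truncation distance mu is illustrative):

```python
MU = 0.1  # truncation distance (illustrative value)

def update_voxel(tsdf, weight, dist_to_camera, measured_depth, mu=MU):
    """Fold one depth observation into a voxel's running weighted TSDF average."""
    sdf = measured_depth - dist_to_camera   # +ve in free space, -ve behind surface
    if sdf < -mu:
        return tsdf, weight                 # too far behind the surface: no update
    f = min(1.0, sdf / mu)                  # truncate to [-1, 1]
    new_w = weight + 1.0                    # simple per-observation weight of 1
    return (tsdf * weight + f) / new_w, new_w
```

Raycasting then walks each view ray through the grid and reports the zero crossing of the stored values as the surface.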

79

80

81

82

83 Geometry-aware AR

84 Occlusion handling

85

86 ProjectorFusion [Augmented Projectors, Pervasive '12]

87

88 Next Week Lecture Recap and Latest Research Highlights (20 minutes max) Final project presentation and demo 10 minutes for each group (use full time allotment) Present your final project including Technical detail on implementation Difficulties encountered (and overcome) Assignment of tasks to group members Live demos in the exercise slot

89 Reading Suggestions R. Azuma, "A Survey of Augmented Reality", SIGGRAPH '95 Course Notes, ACM SIGGRAPH. Kato, H., Billinghurst, M., "Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System", in Proc. 2nd International Workshop on Augmented Reality (IWAR '99). Wagner D., Reitmayr G., Mulloni A., Drummond T., Schmalstieg D., "Real-Time Detection and Tracking for Augmented Reality on Mobile Phones", IEEE Transactions on Visualization and Computer Graphics, 2010. G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces", in Proc. ISMAR '07. S. Izadi, D. Kim, O. Hilliges, et al., "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera", in ACM UIST '11.


More information

Image correspondences and structure from motion

Image correspondences and structure from motion Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.

More information

Hybrids Mixed Approaches

Hybrids Mixed Approaches Hybrids Mixed Approaches Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline Why mixing? Parallel Tracking and Mapping Benefits

More information

Stable Vision-Aided Navigation for Large-Area Augmented Reality

Stable Vision-Aided Navigation for Large-Area Augmented Reality Stable Vision-Aided Navigation for Large-Area Augmented Reality Taragay Oskiper, Han-Pang Chiu, Zhiwei Zhu Supun Samarasekera, Rakesh Teddy Kumar Vision and Robotics Laboratory SRI-International Sarnoff,

More information

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller 3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Computational Optical Imaging - Optique Numerique. -- Single and Multiple View Geometry, Stereo matching --

Computational Optical Imaging - Optique Numerique. -- Single and Multiple View Geometry, Stereo matching -- Computational Optical Imaging - Optique Numerique -- Single and Multiple View Geometry, Stereo matching -- Autumn 2015 Ivo Ihrke with slides by Thorsten Thormaehlen Reminder: Feature Detection and Matching

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

Towards a visual perception system for LNG pipe inspection

Towards a visual perception system for LNG pipe inspection Towards a visual perception system for LNG pipe inspection LPV Project Team: Brett Browning (PI), Peter Rander (co PI), Peter Hansen Hatem Alismail, Mohamed Mustafa, Joey Gannon Qri8 Lab A Brief Overview

More information

Project Updates Short lecture Volumetric Modeling +2 papers

Project Updates Short lecture Volumetric Modeling +2 papers Volumetric Modeling Schedule (tentative) Feb 20 Feb 27 Mar 5 Introduction Lecture: Geometry, Camera Model, Calibration Lecture: Features, Tracking/Matching Mar 12 Mar 19 Mar 26 Apr 2 Apr 9 Apr 16 Apr 23

More information

Lecture 10 Dense 3D Reconstruction

Lecture 10 Dense 3D Reconstruction Institute of Informatics Institute of Neuroinformatics Lecture 10 Dense 3D Reconstruction Davide Scaramuzza 1 REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time M. Pizzoli, C. Forster,

More information

/10/$ IEEE 4048

/10/$ IEEE 4048 21 IEEE International onference on Robotics and Automation Anchorage onvention District May 3-8, 21, Anchorage, Alaska, USA 978-1-4244-54-4/1/$26. 21 IEEE 448 Fig. 2: Example keyframes of the teabox object.

More information

Dense Tracking and Mapping for Autonomous Quadrocopters. Jürgen Sturm

Dense Tracking and Mapping for Autonomous Quadrocopters. Jürgen Sturm Computer Vision Group Prof. Daniel Cremers Dense Tracking and Mapping for Autonomous Quadrocopters Jürgen Sturm Joint work with Frank Steinbrücker, Jakob Engel, Christian Kerl, Erik Bylow, and Daniel Cremers

More information

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The

More information

Monocular Visual Odometry

Monocular Visual Odometry Elective in Robotics coordinator: Prof. Giuseppe Oriolo Monocular Visual Odometry (slides prepared by Luca Ricci) Monocular vs. Stereo: eamples from Nature Predator Predators eyes face forward. The field

More information

AUGMENTED REALITY. Antonino Furnari

AUGMENTED REALITY. Antonino Furnari IPLab - Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli Studi di Catania http://iplab.dmi.unict.it AUGMENTED REALITY Antonino Furnari furnari@dmi.unict.it http://dmi.unict.it/~furnari

More information

Visualization of Temperature Change using RGB-D Camera and Thermal Camera

Visualization of Temperature Change using RGB-D Camera and Thermal Camera 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 Visualization of Temperature

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

A Stable and Accurate Marker-less Augmented Reality Registration Method

A Stable and Accurate Marker-less Augmented Reality Registration Method A Stable and Accurate Marker-less Augmented Reality Registration Method Qing Hong Gao College of Electronic and Information Xian Polytechnic University, Xian, China Long Chen Faculty of Science and Technology

More information

Vision-Based Hand Detection for Registration of Virtual Objects in Augmented Reality

Vision-Based Hand Detection for Registration of Virtual Objects in Augmented Reality International Journal of Future Computer and Communication, Vol. 2, No. 5, October 213 Vision-Based Hand Detection for Registration of Virtual Objects in Augmented Reality Kah Pin Ng, Guat Yew Tan, and

More information

Homographies and RANSAC

Homographies and RANSAC Homographies and RANSAC Computer vision 6.869 Bill Freeman and Antonio Torralba March 30, 2011 Homographies and RANSAC Homographies RANSAC Building panoramas Phototourism 2 Depth-based ambiguity of position

More information

A Systems View of Large- Scale 3D Reconstruction

A Systems View of Large- Scale 3D Reconstruction Lecture 23: A Systems View of Large- Scale 3D Reconstruction Visual Computing Systems Goals and motivation Construct a detailed 3D model of the world from unstructured photographs (e.g., Flickr, Facebook)

More information

Local features and image matching. Prof. Xin Yang HUST

Local features and image matching. Prof. Xin Yang HUST Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source

More information

CSE 527: Introduction to Computer Vision

CSE 527: Introduction to Computer Vision CSE 527: Introduction to Computer Vision Week 10 Class 2: Visual Odometry November 2nd, 2017 Today Visual Odometry Intro Algorithm SLAM Visual Odometry Input Output Images, Video Camera trajectory, motion

More information

StereoScan: Dense 3D Reconstruction in Real-time

StereoScan: Dense 3D Reconstruction in Real-time STANFORD UNIVERSITY, COMPUTER SCIENCE, STANFORD CS231A SPRING 2016 StereoScan: Dense 3D Reconstruction in Real-time Peirong Ji, pji@stanford.edu June 7, 2016 1 INTRODUCTION In this project, I am trying

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

COMPM076 / GV07 - Introduction to Virtual Environments: Mixed Reality

COMPM076 / GV07 - Introduction to Virtual Environments: Mixed Reality COMPM076 / GV07 - Introduction to Virtual Environments: Mixed Reality Simon Julier Department of Computer Science University College London http://www.cs.ucl.ac.uk/teaching/ve Structure Introduction Display

More information

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013 Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth

More information

EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline

EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes

Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes Yuo Uematsu and Hideo Saito Keio University, Dept. of Information and Computer Science, Yoohama, Japan {yu-o,

More information

Lecture 10 Multi-view Stereo (3D Dense Reconstruction) Davide Scaramuzza

Lecture 10 Multi-view Stereo (3D Dense Reconstruction) Davide Scaramuzza Lecture 10 Multi-view Stereo (3D Dense Reconstruction) Davide Scaramuzza REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time, ICRA 14, by Pizzoli, Forster, Scaramuzza [M. Pizzoli, C. Forster,

More information

Autonomous Navigation for Flying Robots

Autonomous Navigation for Flying Robots Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

Stereo vision. Many slides adapted from Steve Seitz

Stereo vision. Many slides adapted from Steve Seitz Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Depth from two cameras: stereopsis

Depth from two cameras: stereopsis Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture

More information

Visual Tracking (1) Pixel-intensity-based methods

Visual Tracking (1) Pixel-intensity-based methods Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Semi-Dense Direct SLAM

Semi-Dense Direct SLAM Computer Vision Group Technical University of Munich Jakob Engel Jakob Engel, Daniel Cremers David Caruso, Thomas Schöps, Lukas von Stumberg, Vladyslav Usenko, Jörg Stückler, Jürgen Sturm Technical University

More information

Object Recognition with Invariant Features

Object Recognition with Invariant Features Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Static Scene Reconstruction

Static Scene Reconstruction GPU supported Real-Time Scene Reconstruction with a Single Camera Jan-Michael Frahm, 3D Computer Vision group, University of North Carolina at Chapel Hill Static Scene Reconstruction 1 Capture on campus

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

3D Scene Reconstruction with a Mobile Camera

3D Scene Reconstruction with a Mobile Camera 3D Scene Reconstruction with a Mobile Camera 1 Introduction Robert Carrera and Rohan Khanna Stanford University: CS 231A Autonomous supernumerary arms, or "third arms", while still unconventional, hold

More information

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO Stefan Krauß, Juliane Hüttl SE, SoSe 2011, HU-Berlin PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO 1 Uses of Motion/Performance Capture movies games, virtual environments biomechanics, sports science,

More information

On-line Document Registering and Retrieving System for AR Annotation Overlay

On-line Document Registering and Retrieving System for AR Annotation Overlay On-line Document Registering and Retrieving System for AR Annotation Overlay Hideaki Uchiyama, Julien Pilet and Hideo Saito Keio University 3-14-1 Hiyoshi, Kohoku-ku Yokohama, Japan {uchiyama,julien,saito}@hvrl.ics.keio.ac.jp

More information

Augmented and Mixed Reality

Augmented and Mixed Reality Augmented and Mixed Reality Uma Mudenagudi Dept. of Computer Science and Engineering, Indian Institute of Technology Delhi Outline Introduction to Augmented Reality(AR) and Mixed Reality(MR) A Typical

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,

More information

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe Structured Light Tobias Nöll tobias.noell@dfki.de Thanks to Marc Pollefeys, David Nister and David Lowe Introduction Previous lecture: Dense reconstruction Dense matching of non-feature pixels Patch-based

More information

A System of Image Matching and 3D Reconstruction

A System of Image Matching and 3D Reconstruction A System of Image Matching and 3D Reconstruction CS231A Project Report 1. Introduction Xianfeng Rui Given thousands of unordered images of photos with a variety of scenes in your gallery, you will find

More information

Large Scale 3D Reconstruction by Structure from Motion

Large Scale 3D Reconstruction by Structure from Motion Large Scale 3D Reconstruction by Structure from Motion Devin Guillory Ziang Xie CS 331B 7 October 2013 Overview Rome wasn t built in a day Overview of SfM Building Rome in a Day Building Rome on a Cloudless

More information

Visual Tracking (1) Feature Point Tracking and Block Matching

Visual Tracking (1) Feature Point Tracking and Block Matching Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Local Features: Detection, Description & Matching

Local Features: Detection, Description & Matching Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British

More information

BIL Computer Vision Apr 16, 2014

BIL Computer Vision Apr 16, 2014 BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm

More information

Millennium 3 Engineering

Millennium 3 Engineering Millennium 3 Engineering Millennium 3 Engineering Augmented Reality Product Offerings ISMAR 06 Industrial AR Workshop www.mill3eng.com www.artag.net Contact: Mark Fiala mark.fiala@nrc-cnrc.gc.ca mark.fiala@gmail.com

More information

CS231A Midterm Review. Friday 5/6/2016

CS231A Midterm Review. Friday 5/6/2016 CS231A Midterm Review Friday 5/6/2016 Outline General Logistics Camera Models Non-perspective cameras Calibration Single View Metrology Epipolar Geometry Structure from Motion Active Stereo and Volumetric

More information

Local features: detection and description May 12 th, 2015

Local features: detection and description May 12 th, 2015 Local features: detection and description May 12 th, 2015 Yong Jae Lee UC Davis Announcements PS1 grades up on SmartSite PS1 stats: Mean: 83.26 Standard Dev: 28.51 PS2 deadline extended to Saturday, 11:59

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing

More information

Virtualized Reality Using Depth Camera Point Clouds

Virtualized Reality Using Depth Camera Point Clouds Virtualized Reality Using Depth Camera Point Clouds Jordan Cazamias Stanford University jaycaz@stanford.edu Abhilash Sunder Raj Stanford University abhisr@stanford.edu Abstract We explored various ways

More information

3D Photography: Active Ranging, Structured Light, ICP

3D Photography: Active Ranging, Structured Light, ICP 3D Photography: Active Ranging, Structured Light, ICP Kalin Kolev, Marc Pollefeys Spring 2013 http://cvg.ethz.ch/teaching/2013spring/3dphoto/ Schedule (tentative) Feb 18 Feb 25 Mar 4 Mar 11 Mar 18 Mar

More information

Dense 3D Reconstruction from Autonomous Quadrocopters

Dense 3D Reconstruction from Autonomous Quadrocopters Dense 3D Reconstruction from Autonomous Quadrocopters Computer Science & Mathematics TU Munich Martin Oswald, Jakob Engel, Christian Kerl, Frank Steinbrücker, Jan Stühmer & Jürgen Sturm Autonomous Quadrocopters

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow

More information

Fundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F

Fundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental

More information

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images MECATRONICS - REM 2016 June 15-17, 2016 High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images Shinta Nozaki and Masashi Kimura School of Science and Engineering

More information

Rectification and Disparity

Rectification and Disparity Rectification and Disparity Nassir Navab Slides prepared by Christian Unger What is Stereo Vision? Introduction A technique aimed at inferring dense depth measurements efficiently using two cameras. Wide

More information

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview Human Body Recognition and Tracking: How the Kinect Works Kinect RGB-D Camera Microsoft Kinect (Nov. 2010) Color video camera + laser-projected IR dot pattern + IR camera $120 (April 2012) Kinect 1.5 due

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Memory Management Method for 3D Scanner Using GPGPU

Memory Management Method for 3D Scanner Using GPGPU GPGPU 3D 1 2 KinectFusion GPGPU 3D., GPU., GPGPU Octree. GPU,,. Memory Management Method for 3D Scanner Using GPGPU TATSUYA MATSUMOTO 1 SATORU FUJITA 2 This paper proposes an efficient memory management

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Extrinsic camera calibration method and its performance evaluation Jacek Komorowski 1 and Przemyslaw Rokita 2 arxiv:1809.11073v1 [cs.cv] 28 Sep 2018 1 Maria Curie Sklodowska University Lublin, Poland jacek.komorowski@gmail.com

More information

Mesh from Depth Images Using GR 2 T

Mesh from Depth Images Using GR 2 T Mesh from Depth Images Using GR 2 T Mairead Grogan & Rozenn Dahyot School of Computer Science and Statistics Trinity College Dublin Dublin, Ireland mgrogan@tcd.ie, Rozenn.Dahyot@tcd.ie www.scss.tcd.ie/

More information