Virtual Endoscopy: Modeling the Navigation in 3D Brain Volumes
ACBME-137

Aly A. Farag and Charles B. Sites
Computer Vision and Image Processing Laboratory, University of Louisville, KY

Stephen Hushek and Thomas Moriarty
Department of Neurological Surgery, University of Louisville, KY

Abstract

Minimally invasive neuroendoscopy involves matching the two-dimensional images seen through the endoscope to the three-dimensional reality of a patient that is summarized pre-operatively in CT and/or MRI data sets. To remedy the inaccuracies and difficulties associated with classical endoscopy, a robust real-time computer-assisted navigation capability is required for neuroendoscopy. In this article, we propose a computational geometry-based approach for virtual endoscopy. The approach is also being implemented in hardware.

I. Introduction

The ultimate goal of image-guided minimally invasive endoscopic technology is to enable the neurosurgeon to navigate through the brain, i.e., to locate and visualize where the surgical tool is in the brain at any time during the surgical procedure. Below, we describe two approaches: one for virtual endoscopy based on computational geometry and shape description, the other a practical implementation of these concepts.

The overall idea of simulating an endoscope is as follows. Given a stack of MRI (or MRA) data, a 3D model is generated. Using the scanner parameters, the 3D volume produced by the segmentation process can be calibrated with respect to the actual dimensions of the human head. Endoscopy tries to map the projections (2D images) that the surgeon sees through the endoscope during the surgical procedure into the 3D volume generated from the MRI or MRA stack (i.e., a 2D-to-3D mapping). Virtual endoscopy performs the reverse operation; i.e., a virtual endoscope (a "3D mouse") is used to display portions of the 3D data in the computer.
The projection of the 3D data in the field of view of the 3D mouse onto a particular plane will correspond to the 2D image that the surgeon sees through a real endoscope. The problem now becomes the following: given the 3D volume seen by the 3D mouse, and the stack of MRI (or MRA) data that generated that volume, can we use the data in the stack to estimate the projections of the volume seen by the mouse? Or, conversely, can we use the projections of the volume seen by the 3D mouse, together with the location of the optical center (and focal point) of the virtual endoscope (the 3D mouse), to estimate the 2D images (segments of the slices forming the MRI stack) that were involved in generating that 3D volume?
If the above is possible, then real-time endoscopic navigation can be simulated as follows:

1. Generate a calibrated 3D volume.
2. Generate the projections of a 3D segment seen by a 3D mouse that navigates through the 3D volume.
3. Identify the set of slices of the MRI data that were used to generate the segment seen by the 3D mouse.
4. Using the projections of the 3D segments and the location of the 3D mouse (i.e., its optical center), identify the portions of the MRI slices that most closely correspond to the projections.
5. Simulate real-time navigation through the 3D volume in software: show the 3D segments, their projections, and the corresponding 2D images from the raw MRI (or MRA) data.

Continually changing the location of the 3D mouse and displaying the portions of the 2D raw data that generated the volume in its field of view provides the navigational component that corresponds to mapping the 2D image seen by an endoscope into the 3D volume of the brain. The mathematics behind these concepts is being formulated using shape analysis and computational geometry. Below, we show a proof of concept based on simulations.

II. Virtual Endoscopy Simulations

We use the "3D mouse" simulator for input of the endoscope's position (X, Y, Z) and orientation (rho, theta, and phi) as well as its extension (length). Several panels are displayed simultaneously. We currently use an MRA stack of 256x256x116 images. The main panel shows the orthogonal projections at the tip of the endoscope. An optional "reslice" image (the plane normal to the endoscope's vector) can be displayed. A second window panel shows the orthogonal sections individually, with the current endoscope tip position marked by cross hairs. A third panel shows the "reslice" image plane normal to the endoscope vector. The GUI panel also shows a 3D volume rendering of the MRA stack and the endoscope position relative to it.
The last image is the endoscope view. This is created by moving the volume-rendering camera to the location of the endoscope tip and modifying its view angle to match that of the endoscope; currently that is 90 degrees, modeled after the endoscope we have in the lab. We currently cannot model the depth of field of the endoscope, so the simulation uses a depth of field from 0 to infinity (this is a computational-speed issue). For the most part, however, we should be able to see the similarities we need. Fig. 1.A was the first step: it shows the 3D-mouse panel and an arbitrary "reslice" plane normal to the mouse vector. Fig. 1.B simulates an endoscope in operation; the simulated mouse controls are on the right, and the endoscope view is in the bottom pane of the third window. Fig. 1.C shows the main window panel with a Y-axis plane and the reslice plane. Fig. 1.D shows the orthogonal views with the cross hairs marking the position of the endoscope (3D mouse) tip.
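As a hypothetical sketch of how such a "reslice" plane might be extracted, the following samples the plane through the endoscope tip, normal to its viewing vector, by trilinear interpolation. The function name, its parameters, and the example volume are illustrative, not the simulator's actual code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reslice_plane(volume, tip, direction, size=64, spacing=1.0):
    """Sample the plane through `tip` normal to `direction` from a 3D volume.

    volume    : 3D numpy array (the MRI/MRA stack)
    tip       : (3,) endoscope tip position in voxel coordinates
    direction : (3,) endoscope viewing vector (need not be unit length)
    Returns a (size, size) image sampled by trilinear interpolation.
    """
    n = np.asarray(direction, float)
    n /= np.linalg.norm(n)
    # Build two in-plane axes orthogonal to the viewing vector.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n[0]) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    # Regular grid of sample points on the plane, centred at the tip.
    r = (np.arange(size) - size / 2) * spacing
    gu, gv = np.meshgrid(r, r)
    pts = (np.asarray(tip, float)[:, None, None]
           + u[:, None, None] * gu + v[:, None, None] * gv)
    # Trilinear interpolation (order=1); points outside the volume read 0.
    return map_coordinates(volume, pts, order=1, cval=0.0)

# Toy check: a constant-intensity volume yields a constant reslice image.
vol = np.full((32, 32, 32), 7.0)
img = reslice_plane(vol, tip=(16, 16, 16), direction=(1, 2, 3), size=16)
print(img.shape)  # (16, 16)
```

The same machinery, applied slice by slice, would identify which raw MRI slices intersect the plane in the mouse's field of view.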
Fig. 1.A: The 3D-mouse panel and an arbitrary "reslice" plane normal to the mouse vector.
Fig. 1.B: An endoscope simulation in operation.
Fig. 1.C: The main window panel with a Y-axis plane and the reslice plane.
Fig. 1.D: Orthogonal views with the cross hairs marking the position of the endoscope (3D mouse) tip.
Fig. 1.E: The reslice projection window.
Fig. 1.F: The volume-render display with the mouse.
The above results are only the first step towards the proof of concept. Follow-up work is in progress to study the software's accuracy, speed, and validation.

III. Rigid Endoscopy

Endoscopes generally come in two types: rigid and non-rigid. In this paper, we focus on modeling and calibration of rigid endoscopes. The images captured by the endoscope camera can be used to map the imaged region of the patient's brain to its 3D model. Such a mapping requires modeling the relationship between the 2D images and the 3D world; camera calibration is the process that models this relationship. There are two aspects to camera calibration: calibration of the internal parameters of the camera (intrinsic parameters) and estimation of the pose of the camera relative to a 3D world reference system (extrinsic parameters). As the endoscope moves, the extrinsic parameters change and therefore need re-calibration. On the other hand, adjusting the zoom and focus of the endoscope camera modifies the intrinsic calibration parameters of the camera and the parameters of the camera positioning system. In addition, the use of variable focal lengths (especially small ones) introduces geometric distortion for objects that are not close to the optical axis. It is therefore mandatory to develop methods that automatically recalibrate the visual sensor. In the following, we describe a model of the endoscope camera and then provide techniques for its calibration in the case of a moving camera and in the case of variable focus and zoom.

A. Camera Model

The result of camera calibration is an explicit transformation that maps a 3D world point M = (X, Y, Z, 1)^T into the 2D pixel m = (u, v, 1)^T.
In the pinhole camera model, the relationship between M and m is given by

    s m = A [R  t] M,                                (1)

        | α_u   c    u_0 |
    A = |  0   α_v   v_0 |
        |  0    0     1  |

where s is an arbitrary scale factor; [R t], called the extrinsic parameters, are respectively the rotation (in terms of three angles R_x, R_y and R_z) and translation (t = (t_x, t_y, t_z)^T) components of the camera transformation; A is called the camera intrinsic matrix; (u_0, v_0) are the coordinates of the principal point; α_u and α_v are the scale factors along the image u and v axes; and c is the parameter describing the skewness of the two image axes. The camera mapping can be represented by a 3 x 4 projection matrix, P, that encompasses all these parameters. This camera model ignores lens distortion, which is often accounted for by adding distortion parameters to the model [6]. However, these parameters can be estimated from the captured images by a precalibration process [10]; all images can then be undistorted before calibration proceeds. Decoupling the distortion parameters from the others allows us to maintain the simple relation in (1), making the model easier to use.
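As a concrete illustration of (1), the following minimal numpy sketch projects one world point through a pinhole camera; all numeric values are made up for illustration.

```python
import numpy as np

# Intrinsic matrix A: scale factors alpha_u, alpha_v, skew c,
# and principal point (u0, v0). Values are illustrative only.
alpha_u, alpha_v, c, u0, v0 = 800.0, 810.0, 0.0, 320.0, 240.0
A = np.array([[alpha_u, c,       u0],
              [0.0,     alpha_v, v0],
              [0.0,     0.0,     1.0]])

# Extrinsic parameters: rotation R (identity here) and translation t.
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])   # scene 5 units in front of the camera

P = A @ np.hstack([R, t])             # 3x4 projection matrix

M = np.array([1.0, 2.0, 0.0, 1.0])    # homogeneous world point (X, Y, Z, 1)
sm = P @ M                            # s*m, defined up to the scale s
m = sm / sm[2]                        # normalized pixel (u, v, 1)
print(m[:2])                          # [480. 564.]
```

Dividing out the third homogeneous coordinate is exactly the role of the arbitrary scale factor s in (1).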
Given a sufficient number N of reference world points, M_i = (X_i, Y_i, Z_i, 1)^T, as well as their corresponding pixel positions, m_i = (u_i, v_i, 1)^T, the camera calibration problem is to estimate the 11 camera parameters, in other words the projection matrix P, that minimize

    E = Σ_{i=1}^{N} || P M_i − m_i ||²               (2)

Generally, the 2D image pixels m_i are extracted from a captured image of a calibration pattern.

B. Lens Distortion Calibration

The endoscope camera typically has a small focal length, which results in considerable lens distortion. The pinhole camera model described above does not consider lens distortion, which is often accounted for by adding distortion coefficients to the camera model [6]. However, we believe it is better to estimate the distortion coefficients from the captured images and then correct for the distortion effects by an independent process performed before calibration proceeds. By separating the estimation of the lens distortion coefficients from the calibration process, the effect of the correlation between the lens distortion coefficients and the other camera model parameters is minimized [8]. The method we propose for correcting lens distortion is based on the observation that lens distortion causes straight lines in the scene to appear as curves in the image [11]. The algorithm finds the distortion parameters that map the image curves back to straight lines. Once the distortion parameters are calibrated, captured images can be undistorted before camera calibration and other vision tasks.

C. Camera Model Calibration

The existing techniques to solve this problem can be broadly classified into two main categories:

Linear techniques: In this category, camera parameters are computed directly through a non-iterative algorithm based on a closed-form solution (e.g., [15]). While these techniques have the advantages of speed and simplicity, they provide less accurate results.
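A closed-form linear estimate of P of the kind just described can be sketched with the direct linear transform (DLT); this is a generic textbook construction, not the implementation in [15], and the camera values below are made up for the self-check.

```python
import numpy as np

def dlt_projection_matrix(M, m):
    """Linear (DLT) estimate of the 3x4 projection matrix P from N world
    points M (shape (N, 3)) and their pixel positions m (shape (N, 2))."""
    rows = []
    for (X, Y, Z), (u, v) in zip(M, m):
        # Each correspondence yields two equations, linear in the 12
        # entries of P, obtained by clearing the homogeneous scale.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # P (up to scale) is the right singular vector associated with the
    # smallest singular value of the 2N x 12 design matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)

# Sanity check on synthetic, noise-free data (made-up camera).
rng = np.random.default_rng(0)
P_true = np.array([[700.0,   0.0, 320.0, 10.0],
                   [  0.0, 700.0, 240.0, 20.0],
                   [  0.0,   0.0,   1.0,  5.0]])
M = rng.uniform(-1.0, 1.0, size=(10, 3))
Mh = np.hstack([M, np.ones((10, 1))])
proj = (P_true @ Mh.T).T
m = proj[:, :2] / proj[:, 2:3]

P_est = dlt_projection_matrix(M, m)
P_est *= P_true[2, 3] / P_est[2, 3]   # fix the arbitrary scale for comparison
print(np.allclose(P_est, P_true, atol=1e-4))
```

With noisy correspondences the recovered P only approximately minimizes (2), which is why such linear solutions are typically refined by the nonlinear methods below.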
Nonlinear minimization: With this type of scheme, an iterative minimization algorithm is employed to solve for the camera parameters (e.g., [6]). This approach can achieve high accuracy and easily accommodates more complex imaging models. However, since the algorithm is iterative, the procedure may end up with a far-from-optimal solution unless a good enough initial guess is available. To find such initial solutions, the linear techniques are often employed. Other methods combine a direct closed-form solution for most of the calibration parameters with an iterative solution for the rest; among these, Tsai's method [7] may be the most popular. We have recently proposed a new solution to the camera calibration problem based on a multilayer feedforward network (MLFN) [9], which falls into the second category of nonlinear minimization techniques. This approach relaxes the requirement to start the
non-linear calibration procedure with a good initial guess, which other nonlinear camera calibration techniques require. This property is very useful when no such initial solution is available, or when the starting point is rather far from the optimal solution. Our neural network-based approach (called neurocalibration) calibrates the camera implicitly, by providing the camera projection matrix, and explicitly, by specifying the camera intrinsic and extrinsic parameters [10].

D. Calibration of a Moving Endoscope

If the camera is stationary, we do not have to re-calibrate. Yet in all endoscope applications the camera will be moving; this changes the camera extrinsic parameters, while the intrinsic parameters can safely be assumed constant. It therefore requires recalculation of the perspective projection matrix. Because the camera is mounted rigidly on the digitizer arm, its location in 3D space can be measured. The arm provides the transformation (a 4 x 4 matrix), denoted H, that relates the new position and orientation of the camera to the position and orientation at which the camera was initially calibrated. This transformation is used to update the camera extrinsic parameters by post-multiplying the extrinsic matrix [R t] by H. The new camera perspective projection matrix is then found.

E. Calibration of a Zoom-adjustable Endoscope Camera

Adjusting the endoscope camera's zoom and/or focus helps bring some imaged parts into greater detail and sharper focus. However, it complicates the camera model, since it changes the camera intrinsic and extrinsic parameters. Calibrating cameras with changing zoom and focus raises several challenges [12]. The calibration problem becomes one of characterizing how the parameters of the fixed camera model vary with the lens zoom and focus settings. The calibration approach generally involves first calibrating a conventional static camera model at a number of lens settings spanning the lens's control space. To model how the terms of the static camera model vary with lens setting, partial lookup tables with interpolation, or multi-variable polynomial fits [12], can be used. One way to view the task of zoom-lens camera calibration is as a combination of fixed-parameter camera calibration and function interpolation over a large collection of data.

The features of our neurocalibration approach enable us to present an all-neural framework for zoom-lens calibration [10]. This framework consists of a number of MLFNs learning concurrently, independently, and cooperatively to capture the variations of the model parameters across optical lens settings. Compared with other techniques (e.g., [14], [13], [12]), our approach has the following key features:

- It is general; it can consider any number and combination of lens control parameters, e.g., zoom, focus and/or aperture.
- It can capture complex variations in the model parameters across the control space.
- All of the parameters are fitted to the calibration data at the same time, whereas in other approaches one parameter is fitted at a time and the final level of error generally depends on the order in which the models are fit to the data.
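The polynomial-fit variant of zoom-lens modeling mentioned above can be sketched for a single intrinsic parameter; the zoom settings and α_u values below are hypothetical, and a real system would fit every model parameter (ideally jointly, as in the neural framework).

```python
import numpy as np

# Hypothetical static calibrations at a few zoom settings: for each lens
# position we assume a separately calibrated focal scale alpha_u (pixels).
zoom_settings = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
alpha_u_samples = np.array([800.0, 950.0, 1150.0, 1420.0, 1780.0])

# Fit a low-order polynomial modeling alpha_u as a function of zoom --
# one of the options mentioned above (the other being partial lookup
# tables with interpolation).
coeffs = np.polyfit(zoom_settings, alpha_u_samples, deg=2)
alpha_u_model = np.poly1d(coeffs)

# Query the fitted model at an unseen lens setting.
print(round(float(alpha_u_model(0.6)), 1))
```

Fitting one parameter at a time like this is exactly the per-parameter approach whose final error depends on fitting order; the all-neural framework instead fits all parameters to the calibration data simultaneously.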
IV. Experimental Results

The calibration approach has been tested with real data. An image of a calibration pattern (see Fig. 2), whose points are known in 3D space, is used. The 3D points and their corresponding 2D points extracted from the image are used for calibration. To test the quality of the calibration, an image of a model of a patient's head is acquired by the endoscope camera with the endoscope rigidly mounted on a digitizer arm. Another image is then acquired after the camera has been moved. The motion matrix provided by the arm is used to compute the new model of the camera, as shown in Fig. 3. Since the camera is calibrated in both positions, the epipolar geometry between the two acquired images can be recovered. Corresponding points between the two images should lie on corresponding epipolar lines, so the distance of a point from its corresponding epipolar line measures the calibration accuracy. The root mean square distance of the marked points from their epipolar lines is 0.2, which shows the high calibration accuracy of the endoscope camera.

Fig. 2: Calibration pattern.
Fig. 3: The recovered epipolar geometry shows the accuracy of the calibrated camera.

Acknowledgments

This project has been partially funded by a Whitaker Foundation Research Grant and Norton Healthcare Organization grants.
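The epipolar-distance accuracy measure used above can be sketched on synthetic cameras. The relation F = [e2]_x P2 P1^+ is the standard way to recover the fundamental matrix from two known projection matrices; the cameras below are made up, not the paper's calibration data.

```python
import numpy as np

def fundamental_from_projections(P1, P2):
    """Fundamental matrix of two calibrated views: F = [e2]_x P2 P1^+,
    where e2 = P2 C1 is the epipole (image of camera-1 centre in view 2)."""
    _, _, Vt = np.linalg.svd(P1)
    C1 = Vt[-1]                      # camera-1 centre: null vector of P1
    e2 = P2 @ C1
    e2x = np.array([[0.0, -e2[2], e2[1]],
                    [e2[2], 0.0, -e2[0]],
                    [-e2[1], e2[0], 0.0]])
    return e2x @ P2 @ np.linalg.pinv(P1)

def point_line_distance(F, m1, m2):
    """Distance of pixel m2 (homogeneous) from the epipolar line F m1."""
    l = F @ m1
    return abs(l @ m2) / np.hypot(l[0], l[1])

# Synthetic check: a point projected into both views must lie on its
# epipolar line, so the distance should be (numerically) zero.
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = A @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = A @ np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
M = np.array([0.3, -0.2, 4.0, 1.0])
m1 = P1 @ M; m1 /= m1[2]
m2 = P2 @ M; m2 /= m2[2]
d = point_line_distance(fundamental_from_projections(P1, P2), m1, m2)
print(d)
```

With real marked points, averaging such distances in quadrature gives exactly the RMS figure reported above.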
References

1. A. Eldeib, A. Farag, P. Larson, and T. Moriarty, "A Stereovision Technique for Image Guided Neurosurgery," Proc. International Conference on Computer-Assisted Radiology (CAR-2000), Los Angeles, CA, Jan. 2000.
2. A. Eldeib, S. Yamany, A. Farag, and T. Moriarty, "Volume Registration by Surface Point Signature and Mutual Information Maximization with Applications in Intra-Operative MRI Surgeries," Proc. IEEE International Conference on Image Processing (ICIP 2000), Vancouver, BC, Canada, Sept. 2000.
3. C. Sites, A. Farag, S. Hushek, and T. Moriarty, "A Fast Automatic Method for 3D Volume Segmentation of the Human Cerebrovascular," Journal of Computer-Assisted Surgery (in preparation).
4. M. N. Ahmed and A. A. Farag, "Two-stage Neural Network for Volume Segmentation of Medical Images," Pattern Recognition Letters, Vol. 18, No. 11-13, Nov. 1997.
5. M. N. Ahmed, S. M. Yamany, N. A. Mohamed, and A. A. Farag, "A Modified Fuzzy C-Means Algorithm for MRI Bias-Field Estimation and Adaptive Segmentation," Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '99), Cambridge, England, Sept. 1999.
6. J. Weng, P. Cohen, and M. Herniou, "Camera calibration with distortion models and accuracy evaluation," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 14, No. 10, Oct. 1992.
7. R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, Aug. 1987.
8. S. Shih, Y. Hung, and W. Lin, "Accuracy analysis on the estimation of camera parameters for active vision systems," Proc. International Conference on Pattern Recognition, Vienna, Austria, Vol. 1, Aug. 1996.
9. M. Ahmed, E. Hemayed, and A. Farag, "A Neural Network That Can Tell Camera Calibration Parameters," Proc. IEEE International Conference on Computer Vision, Greece, June.
10. M. Ahmed and A. Farag, "A Neural Optimization Framework for Zoom-lens Camera," Proc. IEEE Conference on Computer Vision and Pattern Recognition, SC, June 2000.
11. M. Ahmed and A. Farag, "Nonmetric Calibration of Camera Lens Distortion," Proc. IEEE International Conference on Image Processing, Greece, Oct. (to appear).
12. R. G. Wilson, "Modeling and calibration of automated zoom lenses," PhD dissertation, Dept. of Electrical and Computer Engineering, Carnegie Mellon University.
13. A. Wiley and K. Wong, "Geometric calibration of zoom lenses for computer vision metrology," Photogrammetric Engineering and Remote Sensing, Vol. 61, No. 1, Jan. 1995.
14. K. Tarabanis, R. Tsai, and D. Goodman, "Calibration of a computer controlled robotic vision sensor with a zoom lens," CVGIP: Image Understanding, Vol. 59, No. 2, Jan. 1994.
15. O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, 1993.
More informationAdaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision
Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China
More informationCamera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah
Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu VisualFunHouse.com 3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com) 3D
More informationComputed Photography - Final Project Endoscope Exploration on Knee Surface
15-862 Computed Photography - Final Project Endoscope Exploration on Knee Surface Chenyu Wu Robotics Institute, Nov. 2005 Abstract Endoscope is widely used in the minimally invasive surgery. However the
More informationArm coordinate system. View 1. View 1 View 2. View 2 R, T R, T R, T R, T. 12 t 1. u_ 1 u_ 2. Coordinate system of a robot
Czech Technical University, Prague The Center for Machine Perception Camera Calibration and Euclidean Reconstruction from Known Translations Tomas Pajdla and Vaclav Hlavac Computer Vision Laboratory Czech
More informationEpipolar Geometry in Stereo, Motion and Object Recognition
Epipolar Geometry in Stereo, Motion and Object Recognition A Unified Approach by GangXu Department of Computer Science, Ritsumeikan University, Kusatsu, Japan and Zhengyou Zhang INRIA Sophia-Antipolis,
More informationA 3-D Scanner Capturing Range and Color for the Robotics Applications
J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,
More informationFactorization Method Using Interpolated Feature Tracking via Projective Geometry
Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationCHAPTER 3. Single-view Geometry. 1. Consequences of Projection
CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.
More informationPrecise Omnidirectional Camera Calibration
Precise Omnidirectional Camera Calibration Dennis Strelow, Jeffrey Mishler, David Koes, and Sanjiv Singh Carnegie Mellon University {dstrelow, jmishler, dkoes, ssingh}@cs.cmu.edu Abstract Recent omnidirectional
More informationStability Study of Camera Calibration Methods. J. Isern González, J. Cabrera Gámez, C. Guerra Artal, A.M. Naranjo Cabrera
Stability Study of Camera Calibration Methods J. Isern González, J. Cabrera Gámez, C. Guerra Artal, A.M. Naranjo Cabrera Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería
More informationCS201 Computer Vision Camera Geometry
CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the
More informationDepth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences
Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3 1 Pattern Recognition Lab, Friedrich-Alexander-Universität
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationCamera model and multiple view geometry
Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then
More informationL16. Scan Matching and Image Formation
EECS568 Mobile Robotics: Methods and Principles Prof. Edwin Olson L16. Scan Matching and Image Formation Scan Matching Before After 2 Scan Matching Before After 2 Map matching has to be fast 14 robots
More informationSynchronized Ego-Motion Recovery of Two Face-to-Face Cameras
Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn
More informationMultichannel Camera Calibration
Multichannel Camera Calibration Wei Li and Julie Klein Institute of Imaging and Computer Vision, RWTH Aachen University D-52056 Aachen, Germany ABSTRACT For the latest computer vision applications, it
More informationAutomatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera
Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera Wolfgang Niem, Jochen Wingbermühle Universität Hannover Institut für Theoretische Nachrichtentechnik und Informationsverarbeitung
More informationDD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication
DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:
More informationA Family of Simplified Geometric Distortion Models for Camera Calibration
A Family of Simplified Geometric Distortion Models for Camera Calibration Lili Ma, Student Member, IEEE, YangQuan Chen and Kevin L. Moore, Senior Members, IEEE Center for Self-Organizing and Intelligent
More information3D HAND LOCALIZATION BY LOW COST WEBCAMS
3D HAND LOCALIZATION BY LOW COST WEBCAMS Cheng-Yuan Ko, Chung-Te Li, Chen-Han Chung, and Liang-Gee Chen DSP/IC Design Lab, Graduated Institute of Electronics Engineering National Taiwan University, Taiwan,
More informationCALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS. Cha Zhang and Zhengyou Zhang
CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS Cha Zhang and Zhengyou Zhang Communication and Collaboration Systems Group, Microsoft Research {chazhang, zhang}@microsoft.com ABSTRACT
More informationLecture 9: Epipolar Geometry
Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2
More informationEasy to Use Calibration of Multiple Camera Setups
Easy to Use Calibration of Multiple Camera Setups Ferenc Kahlesz, Cornelius Lilge, and Reinhard Klein University of Bonn, Institute of Computer Science II, Computer Graphics Group Römerstrasse 164, D-53117
More informationA Method for Tracking the Camera Motion of Real Endoscope by Epipolar Geometry Analysis and Virtual Endoscopy System
A Method for Tracking the Camera Motion of Real Endoscope by Epipolar Geometry Analysis and Virtual Endoscopy System Kensaku Mori 1,2, Daisuke Deguchi 2, Jun-ichi Hasegawa 3, Yasuhito Suenaga 2, Jun-ichiro
More informationSashi Kumar Penta COMP Final Project Report Department of Computer Science, UNC at Chapel Hill 13 Dec, 2006
Computer vision framework for adding CG Simulations Sashi Kumar Penta sashi@cs.unc.edu COMP 790-072 Final Project Report Department of Computer Science, UNC at Chapel Hill 13 Dec, 2006 Figure 1: (i) Top
More informationCamera Model and Calibration. Lecture-12
Camera Model and Calibration Lecture-12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationCamera Calibration for a Robust Omni-directional Photogrammetry System
Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,
More informationMERGING POINT CLOUDS FROM MULTIPLE KINECTS. Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia
MERGING POINT CLOUDS FROM MULTIPLE KINECTS Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia Introduction What do we want to do? : Use information (point clouds) from multiple (2+) Kinects
More informationProject 4 Results. Representation. Data. Learning. Zachary, Hung-I, Paul, Emanuel. SIFT and HoG are popular and successful.
Project 4 Results Representation SIFT and HoG are popular and successful. Data Hugely varying results from hard mining. Learning Non-linear classifier usually better. Zachary, Hung-I, Paul, Emanuel Project
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More information1 Projective Geometry
CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and
More informationImage Formation I Chapter 1 (Forsyth&Ponce) Cameras
Image Formation I Chapter 1 (Forsyth&Ponce) Cameras Guido Gerig CS 632 Spring 215 cknowledgements: Slides used from Prof. Trevor Darrell, (http://www.eecs.berkeley.edu/~trevor/cs28.html) Some slides modified
More information(Geometric) Camera Calibration
(Geoetric) Caera Calibration CS635 Spring 217 Daniel G. Aliaga Departent of Coputer Science Purdue University Caera Calibration Caeras and CCDs Aberrations Perspective Projection Calibration Caeras First
More informationMeasurement and Precision Analysis of Exterior Orientation Element Based on Landmark Point Auxiliary Orientation
2016 rd International Conference on Engineering Technology and Application (ICETA 2016) ISBN: 978-1-60595-8-0 Measurement and Precision Analysis of Exterior Orientation Element Based on Landmark Point
More informationUnit 3 Multiple View Geometry
Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover
More information3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY
IJDW Volume 4 Number January-June 202 pp. 45-50 3D FACE RECONSRUCION BASED ON EPIPOLAR GEOMERY aher Khadhraoui, Faouzi Benzarti 2 and Hamid Amiri 3,2,3 Signal, Image Processing and Patterns Recognition
More informationCOSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor
COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The
More informationPART A Three-Dimensional Measurement with iwitness
PART A Three-Dimensional Measurement with iwitness A1. The Basic Process The iwitness software system enables a user to convert two-dimensional (2D) coordinate (x,y) information of feature points on an
More informationA High Speed Face Measurement System
A High Speed Face Measurement System Kazuhide HASEGAWA, Kazuyuki HATTORI and Yukio SATO Department of Electrical and Computer Engineering, Nagoya Institute of Technology Gokiso, Showa, Nagoya, Japan, 466-8555
More informationA COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES
A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES Yuzhu Lu Shana Smith Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames,
More informationPerspective Projection Describes Image Formation Berthold K.P. Horn
Perspective Projection Describes Image Formation Berthold K.P. Horn Wheel Alignment: Camber, Caster, Toe-In, SAI, Camber: angle between axle and horizontal plane. Toe: angle between projection of axle
More informationIntegrating 3D Vision Measurements into Industrial Robot Applications
Integrating 3D Vision Measurements into Industrial Robot Applications by Frank S. Cheng cheng1fs@cmich.edu Engineering and echnology Central Michigan University Xiaoting Chen Graduate Student Engineering
More informationImproved Navigated Spine Surgery Utilizing Augmented Reality Visualization
Improved Navigated Spine Surgery Utilizing Augmented Reality Visualization Zein Salah 1,2, Bernhard Preim 1, Erck Elolf 3, Jörg Franke 4, Georg Rose 2 1Department of Simulation and Graphics, University
More informationImage Formation I Chapter 2 (R. Szelisky)
Image Formation I Chapter 2 (R. Selisky) Guido Gerig CS 632 Spring 22 cknowledgements: Slides used from Prof. Trevor Darrell, (http://www.eecs.berkeley.edu/~trevor/cs28.html) Some slides modified from
More informationPerception and Action using Multilinear Forms
Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract
More information