Measurement of distances between anatomical structures using a translating stage with mounted endoscope


Lueder A. Kahrs *a, Gregoire S. Blachon a, Ramya Balachandran a, J. Michael Fitzpatrick b, Robert F. Labadie a

a Department of Otolaryngology, Vanderbilt University Medical Center, st Avenue South, MCE 10420, South Tower, Nashville, TN USA
b Department of Electrical Engineering and Computer Science, Vanderbilt University, th Avenue South, 363 Jacobs Hall, Nashville, TN USA

ABSTRACT

During endoscopic procedures it is often desirable to determine the distance between anatomical features. One such clinical application is percutaneous cochlear implantation (PCI), a minimally invasive approach to the cochlea via a single, straight drill path that can be achieved accurately using bone-implanted markers and a customized microstereotactic frame. During clinical studies to validate PCI, traditional open-field cochlear implant surgery was performed and, prior to completion of the surgery, a customized microstereotactic frame designed to achieve the desired PCI trajectory was attached to the bone-implanted markers. To determine whether this trajectory would have safely achieved the target, a sham drill bit was passed through the frame to ensure that the drill bit would reach the cochlea without damaging vital structures. Because of limited access within the facial recess, the distances from the bit to anatomical features could not be measured with calipers. We hypothesized that an endoscope mounted on a sliding stage that translates only along the trajectory would provide sufficient triangulation to accurately measure these distances. In this paper, the design, fabrication, and testing of such a system are described. The endoscope is mounted so that its optical axis is approximately aligned with the trajectory. Several images are acquired as the stage is moved, and three-dimensional reconstruction of selected points allows determination of distances.
This concept also has applicability to a large variety of rigid endoscopic interventions including bronchoscopy, laparoscopy, and sinus endoscopy.

Keywords: triangulation, 3D reconstruction, feasibility study, endoscope measurement

1. DESCRIPTION OF PURPOSE

Thousands of rigid endoscopic procedures are performed annually. While the intent of many of these procedures is to assess a potentially pathological condition (e.g. assessment of the airway for tumor growth), accurate measurement of critical distances is lacking. Recently, we have recognized the need for accurate measurement of anatomical features during surgical interventions. Specifically, we need to accurately predict the path of a proposed trajectory defined by a customized microstereotactic frame for a new surgical procedure called percutaneous cochlear implantation (PCI) 1-4, in which image-guided, minimally invasive access to the cochlea is desired. During PCI, fiducial markers are implanted in the temporal bone, following which a computed tomography (CT) scan is acquired. Pertinent anatomy is identified via non-rigid registration of the CT to an atlas 5, and a suitable drilling trajectory is selected 6. A customized microstereotactic frame is designed and constructed to mount on the fiducial markers and constrain the drill along the planned trajectory. Validation studies of the PCI technique are underway in which a traditional, open-field approach to the cochlea is performed (mastoidectomy with facial recess), after which the customized microstereotactic frame is affixed and the trajectory achieved by the frame is verified 7,8. The planned drilling trajectory is chosen such that it targets the cochlea without damaging critical structures such as the facial nerve, which, if damaged, causes ipsilateral facial paralysis, and the chorda tympani, which, if damaged, causes ipsilateral taste insensitivity.
During the validation, we sought to determine how close the surgical trajectory achieved by the customized microstereotactic frame comes to the vital anatomy: the facial nerve and the chorda tympani. These measurements were initially made by passing sham drill bits (shaped like a drill bit but with no cutting surfaces) of varied sizes through the customized microstereotactic frame. Due to limited access within the facial recess (the region bounded by the facial nerve and the chorda tympani), the actual distances from the bit to anatomical features could not be measured directly with calipers. Instead, we estimated these distances post-operatively using photographs acquired during the validation via an endoscope while the drill bit was positioned at the facial recess (Figure 1). The drill bit in the picture was used to provide a scale, but the accuracy of this approach was limited by the projection of the three-dimensional (3D) field onto the two-dimensional (2D) photograph. To overcome this problem, herein we propose a method that allows intra-operative evaluation of distances for ensuring safe margins from the drill bit to critical structures using an endoscope translation stage (ETS), which, when combined with triangulation, allows accurate, repeatable, and real-time distance measurements during PCI validation studies.

* l.a.kahrs@vanderbilt.edu; phone ; fax ;

Figure 1. PCI validation using a sham drill bit. A picture is captured using an endoscope in the operating room during the validation with the sham drill bit at the facial recess. The points on the facial nerve (plus sign) and the chorda tympani (asterisk) that are closest to the drill are manually marked in the picture. Using the drill bit as a reference for scale, the distances between the bit and the critical structures are then computed.

2. MEASUREMENT PRINCIPLE

Triangulation and 3D reconstruction from different camera views are well known in the literature 9,10. A point in 3D is observed from a minimum of two known camera poses, resulting in camera images representing the projections of the 3D point. In this paper, the measurement principle is based on motion of a single camera observing a stable scene. In our scenario, an endoscope is mounted on a linear stage, giving a special case in which the translation between camera poses is roughly coaxial with the optical axis.
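This coaxial imaging geometry can be sketched with a simple pinhole model. The following is a hypothetical illustration, not the authors' software; the focal length and the scene-point coordinates are made up for demonstration:

```python
import numpy as np

def project(point_3d, z_cam, f=25.0):
    """Pinhole projection of a fixed scene point (camera frame, optical
    axis = z) for a camera translated to z_cam along its own optical axis."""
    x, y, z = point_3d
    depth = z - z_cam                     # distance from current optical center
    return np.array([f * x / depth, f * y / depth])

# One scene point observed from three coaxial poses (compare Figure 2):
p = (4.0, 3.0, 60.0)                      # hypothetical 3D point, mm
for z_i in (0.0, 10.0, 20.0):
    u, v = project(p, z_i)
    print(f"z_i = {z_i:5.1f}  ->  (u, v) = ({u:.3f}, {v:.3f})")
```

As the camera advances, (u, v) moves radially outward along a line through the epipole (the image origin in this idealized model); the direction u/v = x/y never changes, only the distance from the epipole grows.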
For simplification of the following explanation, it is assumed that the optical axis (OA) is the same as the axis of the endoscope movement and that both axes include the point E, which is the intersection of the trajectory of the stage and the image plane (equivalent to the epipole in two-camera stereo vision and to the focus of expansion in optic flow). Variation between the camera poses can then be described by changing only one transformation parameter, using different translational positions z_i for capturing images. Figure 2 illustrates three simulated endoscopic images from different coaxial poses z_i visualizing a 3D point on a structure S. Inside the particular images, the corresponding 2D points P(z_i) with their pixel coordinates (u,v) belong to the same 3D point. As shown in the far-right subfigure of Figure 2, all points (e.g. P(z_i)) observed from different camera poses appear to move radially along lines originating at E, and a point closer to E will move less as z_i is varied. All 2D points P(z_i) can be mapped onto 3D rays, R_i (shown in Figure 3), that pass through the optical center of the camera (view point) with an angular difference α_i from the OA. The intersection of two or more rays (R_1, R_2, ..., R_i) from different camera poses leads to the reconstruction of a point P_I, our measured estimate of the above-mentioned 3D point. The subscript I signifies that it is a point of intersection and differentiates it from the ground truth point. Figure 3a-c illustrates the measurement principle by showing a cross section that includes the optical axis and the point P_I. The angle α_i shown in Figure 3a corresponds to the distance (in the sense of the pixel coordinates) between E and P(z_i) in Figure 2.
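Under this idealized coaxial model, the radial image distance r_i of a point from E satisfies r_i = f·R/(Z − z_i), where R is the point's lateral offset from the axis and Z its depth, so two poses suffice to solve for the 3D point. A minimal sketch of this two-pose reconstruction (hypothetical focal length and units, not the authors' implementation):

```python
import numpy as np

F = 25.0  # assumed focal length in pixel-comparable units

def reconstruct(p1, z1, p2, z2, epipole=(0.0, 0.0)):
    """Recover a 3D point from its image positions p1, p2 (relative to
    the epipole) at two coaxial camera poses z1, z2."""
    e = np.asarray(epipole, float)
    v1 = np.asarray(p1, float) - e
    r1, r2 = np.linalg.norm(v1), np.linalg.norm(np.asarray(p2, float) - e)
    Z = (r1 * z1 - r2 * z2) / (r1 - r2)   # depth along the stage axis
    R = r1 * (Z - z1) / F                 # lateral offset from the axis
    direction = v1 / r1                   # radial direction in the image
    return np.array([R * direction[0], R * direction[1], Z])

# Synthetic check with projections of the point (4, 3, 60):
p1 = (25 * 4 / 60, 25 * 3 / 60)           # observed at pose z = 0
p2 = (25 * 4 / 50, 25 * 3 / 50)           # observed at pose z = 10
print(reconstruct(p1, 0.0, p2, 10.0))     # ≈ [4. 3. 60.]
```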

Figure 2. Simulated endoscopic views from different coaxial distances z_i (left panel, z_1; center-left, z_2; center-right, z_3; and far-right, overlay of all three) of a representative structure S and corresponding 2D points P(z_i). The epipole E is the only point which does not move in the image while z_i changes. Corresponding points move along radial lines away from E as z_i changes, with points that are closer to E moving less.

Figure 3. 2D illustration of triangulation principles for a coaxially moving camera (endoscope). (a) A ray R_i passes through the optical center and has an angle α_i from the optical axis (OA). (b) A second ray will intersect at P_I under certain conditions, e.g. both rays lie in a common plane, or the value z_1 - z_2 and the angular difference α_1 - α_2 are in a certain range. (c) Example of three rays that intersect at one point P_I. (d) Inaccuracies, e.g. in segmentation or in the determination of the relative coaxial camera poses, can lead to multiple intersection points P_Iij, for which a mean, calculated intersection point P_C needs to be determined.

Real measurements following this principle usually lead to rays that do not perfectly intersect because of pixelation, image noise, imprecise feature segmentation, inaccurate knowledge of the relative camera movements, etc. Figure 3d shows an example with multiple intersection points P_Iij and the need for the determination of a mean, calculated intersection point P_C. In a real 3D scenario the rays are usually skew lines, and as a result the potential errors must be tackled. Two options are described in the following. One possibility is the determination of the shortest line segment L that joins the skew lines (see Figure 4). The midpoint of this segment is the calculated intersection point P_C, but it is usually not identical with the ground truth point and results in a back-projection error.
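The midpoint P_C of the shortest segment L between two skew rays has a standard closed-form solution. A self-contained sketch (illustrative only, not the authors' code):

```python
import numpy as np

def closest_point_midpoint(p1, d1, p2, d2):
    """Closest points on two (possibly skew) lines p_i + t*d_i; returns the
    midpoint of the shortest connecting segment L (the point P_C) and |L|."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                  # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1, q2 = p1 + t * d1, p2 + s * d2      # closest point on each line
    return (q1 + q2) / 2.0, float(np.linalg.norm(q1 - q2))

# Demo: the x-axis vs. a vertical line through (0, 1, 1) -- skew lines
pc, gap = closest_point_midpoint((0, 0, 0), (1, 0, 0), (0, 1, 1), (0, 0, 1))
print(pc, gap)   # midpoint [0. 0.5 0.], shortest distance 1.0
```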
Another possibility is to search for a minimized length of L (best case: intersecting rays, i.e. length zero) by varying the pixel coordinates of P(z_i). The distance of this variation is called d_i. This is illustrated in Figure 5: the variation of P(z_i) is symbolized by single-headed arrows, and examples are given for P(z_1) and d_1. The best variation of the measurement (P(z_1), grey dot) is towards the (unknown) ground truth (black dot), which would achieve a small back-projection error. According to the literature 9,10, the back-projection error is minimized when L is kept approximately zero and the d_i are minimized in the least-squares sense as well.

Figure 4. (a) 3D visualization of the measurement and triangulation principles for pure coaxial movement between two camera poses (z_1 and z_3; compare Figure 2). The image planes are mirrored around the optical centers for a compact depiction. The thicker rays merge at the ideal (ground truth) intersection point, while the thinner rays illustrate real measurements, which usually lead to skew lines. (b) The line segment L (black line) connects the skew lines at their nearest distance, and the point P_C symbolizes the midpoint of L. The distance (white double-headed arrow) is the back-projection error in this case.

Figure 5. (a) A measured point (P(z_1), grey dot) will usually be unequal to the ideal feature point (black dot) and leads to skew lines during 3D reconstruction. Modifying the pixel coordinates of the endoscopic measurement (arrow) changes the rays and allows the search for a minimized length of the line segment L (best case: rays intersecting at P_I, see subfigures (b) and (c)). (d) Minimizing the distances d_i in the least-squares sense (d_1 shown as an example) while minimizing L (boundary condition) in all endoscopic camera views reduces the back-projection error to a minimum.
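This joint minimization can be posed as a small nonlinear least-squares problem: treat the pixel shifts d_i as unknowns, penalize the residual gap |L| heavily, and keep the shifts small. A sketch using SciPy, where the ray model, focal length, and penalty weight are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

F = 25.0  # assumed focal length (same illustrative units as the pixels)

def ray_gap(p1, z1, p2, z2):
    """Length of the shortest segment L between the rays back-projecting
    pixel measurements p1, p2 from coaxial camera poses z1, z2."""
    o1, o2 = np.array([0.0, 0.0, z1]), np.array([0.0, 0.0, z2])
    d1 = np.array([p1[0], p1[1], F])
    d2 = np.array([p2[0], p2[1], F])
    n = np.cross(d1, d2)
    return abs((o1 - o2) @ n) / np.linalg.norm(n)

def refine(p1, z1, p2, z2, weight=100.0):
    """Shift the pixel measurements by small amounts d_i so that the rays
    (almost) intersect, minimizing the shifts in the least-squares sense."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    def residuals(x):
        q1, q2 = x[:2], x[2:]
        shifts = np.concatenate([q1 - p1, q2 - p2])   # the d_i terms
        return np.append(shifts, weight * ray_gap(q1, z1, q2, z2))
    sol = least_squares(residuals, np.concatenate([p1, p2]))
    return sol.x[:2], sol.x[2:]
```

With a large weight, the optimizer drives |L| toward zero while distributing the correction between the two views, which is the behavior described for Figure 5d.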

3. SYSTEM PROTOTYPE

We designed the ETS for distance measurement by triangulation. For easy integration with our ongoing PCI validation studies and for accurate measurement, the ETS was built as a linear guide system similar to a linear drill guide designed for PCI 4. Figure 6 shows the realized prototype. However, other designs are possible for other endoscopic applications (e.g. laparoscopy). The endoscope is mounted on the slider at the endoscope's cubic portion, in which the light adapter connects to the main portion of the endoscope. We use a GYRUS ACMI explorent endoscope (diameter at tip: 1.7 mm, straight view), although many other models could be used as well. The endoscope is attached via a C-mount endoscope coupler (focal length = 25 mm) to a USB camera (BigCatch EM-310C, 3.2 Mpixel), of which approximately 0.5 Mpixel (image diameter ~800 pixels) are visible. Metal rods of specific lengths were used as distance holders (spacers) between the slide stop and the slider to guarantee accurate determination of the relative coaxial camera movement (the z_i values). As a further enhancement to the ETS, we are able to include a potentiometer, which allows electronic determination of the z_i value. Software integrating these hardware components was written to provide online measurements based on selecting corresponding points in images and a method to determine the ray intersections.

Figure 6. Prototype of the ETS. (a) Adaptor to mount the ETS to the customized microstereotactic frame, (b) main portion (bracket) of the ETS, (c) location for distance holders, (d) slider with clamp for the cubic portion of an endoscope (e), (f) rails for sliding, (g) oblong hole for access to the linear potentiometer and (h) endoscope coupler. The inset (i) shows three spacers with lengths of 2.5, 10, and 20 mm.

4. EVALUATION

We designed and built a tool to measure distances between points of interest (for our application, from the centerline of the planned trajectory to vital structures) during PCI validation studies.
We have recently conducted our first patient trials in order to optimize the intra-operative workflow for this new device (Figure 7a). All parts were sterilizable. Distance holders (inset of Figure 6) were used to allow capturing images of the target, the facial recess, at different depths along the trajectory achieved by the customized microstereotactic frame. For more flexibility, we designed the distance holders such that they rest between the camera and the holder (location c in Figure 6), so that the entire ETS can rotate when coupled to the microstereotactic frame. A long-term solution is electronic measurement of the camera position along the rails using an optical encoder. Figure 7b shows the usage of the ETS during a cadaveric temporal bone study. The upper part shows two endoscopic images with superimposed labeling (black dashed lines and X's) from different measurement positions. Selecting corresponding points of interest in both images allows 3D reconstruction and determination of the distance between those points. Two corresponding points are shown as X's in the upper part and as spheres with their (x,y,z)-values on the black line in the lower part of Figure 7b. The two spheres inside the transparent cylinder are the two points on the OA.
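As a sanity check, the distance reported in Figure 7b follows directly from the two reconstructed (x,y,z)-values shown there (a trivial computation, reproduced here only for illustration):

```python
import numpy as np

# Reconstructed corresponding points from Figure 7b (mm):
a = np.array([-1.24,  0.77, 10.87])
b = np.array([-1.09, -1.11, 11.5])
print(f"{np.linalg.norm(a - b):.2f} mm")   # 1.99 mm
```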

Figure 7. (a) Usage of the ETS in the operating room. (b) Triangulation example on a human cadaver temporal bone: two reconstructed points, (-1.24, 0.77, 10.87) and (-1.09, -1.11, 11.5), separated by 1.99 mm. Images are captured from two different coaxial endoscope poses. Positions of corresponding points in both images can be triangulated, and distances between them can be determined.

5. DISCUSSION, CONCLUSION AND OUTLOOK

Endoscopic procedures are often used to visualize the anatomy and determine pathological conditions. The measurement of distances between anatomical structures during endoscopic evaluation is useful in several applications, including validation of PCI surgery. While others have reported on similar approaches 11-13, to the best of our knowledge this is the first report using a slide mechanism and triangulation to achieve distance measurement from endoscopic images. The most common approaches to measurement with endoscopes reported in the literature use electromagnetic 11 or optical tracking 12 to determine pose changes of the endoscope. An advantage of our ETS in comparison to those tracking methods is that no additional navigation system is needed (cost reduction). A further approach to endoscopic measurement projects laser patterns onto the organ surface and analyzes the pattern distribution by means of triangulation 13. After calculating how far away the organ surface is, virtual rulers are superimposed to measure lateral dimensions on continuous surfaces. These rulers may work only piecewise, and it is unclear whether the algorithms work correctly on non-flat surfaces such as those inside the temporal bone. In this work we show a method to measure distances between anatomical structures via an accurate sliding mechanism for an endoscope (the ETS) and an appropriate triangulation methodology.
The rigid connection of the ETS to the skull via the customized microstereotactic frame and the precise sliding mechanism allow accurate measurements. Automatic 3D reconstruction of outlines of anatomy (such as those illustrated as black lines in Figure 7b) is achievable. Ongoing work includes determination of, and comparison with, additional rays, e.g. the PCI trajectory or the movement axis (epipole), and absolute calibration so that, in addition to the distances (and without an additional registration step), positions can be measured in the patient coordinate system using the rigid relationship that exists between the endoscope, the microstereotactic frame, and the patient CT.

ACKNOWLEDGMENT

This work was supported by Award Number R01DC and R01DC from the National Institute on Deafness and Other Communication Disorders. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Deafness and Other Communication Disorders or the National Institutes of Health.

REFERENCES

[1] Labadie, R. F., Chodhury, P., Cetinkaya, E., Balachandran, R., Haynes, D. S., Fenlon, M. R., Jusczyzck, A. S. and Fitzpatrick, J. M., "Minimally invasive, image-guided, facial-recess approach to the middle ear: demonstration of the concept of percutaneous cochlear access in vitro," Otol. Neurotol. 26(4), (2005).
[2] Warren, F. M., Balachandran, R., Fitzpatrick, J. M. and Labadie, R. F., "Percutaneous cochlear access using bone-mounted, customized drill guides: demonstration of concept in vitro," Otol. Neurotol. 28(3), (2007).
[3] Labadie, R. F., Mitchell, J., Balachandran, R. and Fitzpatrick, J. M., "Customized, rapid-production microstereotactic table for surgical targeting: description of concept and in vitro validation," Int. J. Comput. Assist. Radiol. Surg. 4(3), (2009).
[4] Balachandran, R., Mitchell, J. E., Blachon, G., Noble, J. H., Dawant, B. M., Fitzpatrick, J. M. and Labadie, R. F., "Percutaneous cochlear implant drilling via customized frames: an in vitro study," Otolaryngol. Head Neck Surg. 142(3), (2010).
[5] Noble, J. H., Dawant, B. M., Warren, F. M. and Labadie, R. F., "Automatic identification and 3D rendering of temporal bone anatomy," Otol. Neurotol. 30(4), (2009).
[6] Noble, J. H., Majdani, O., Labadie, R. F., Dawant, B. and Fitzpatrick, J. M., "Automatic determination of optimal linear drilling trajectories for cochlear access accounting for drill-positioning error," Int. J. Med. Robot. 6(3), (2010).
[7] Labadie, R. F., Noble, J. H., Dawant, B. M., Balachandran, R., Majdani, O. and Fitzpatrick, J. M., "Clinical validation of percutaneous cochlear implant surgery: initial report," Laryngoscope 118(6), (2008).
[8] Labadie, R. F., Balachandran, R., Mitchell, J. E., Noble, J. H., Majdani, O., Haynes, D. S., Bennett, M. L., Dawant, B. M. and Fitzpatrick, J. M., "Clinical validation study of percutaneous cochlear access using patient-customized microstereotactic frames," Otol. Neurotol. 31(1), (2010).
[9] Kanatani, K., Sugaya, Y. and Niitsuma, H., "Triangulation from two views revisited: Hartley-Sturm vs. optimal correction," Proc. 19th British Machine Vision Conf., (2008).
[10] Hartley, R. I. and Sturm, P., "Triangulation," Computer Vision and Image Understanding 68(2), (1997).
[11] Koishi, T., Sasaki, M., Nakaguchi, T., Tsumura, N. and Miyake, Y., "Endoscopy System for Length Measurement by Manual Pointing with an Electromagnetic Tracking Sensor," Optical Review 17(2), (2010).
[12] Wang, H., Mirota, D., Hager, G. and Ishii, M., "Anatomical reconstruction from endoscopic images: toward quantitative endoscopy," Am. J. Rhinol. 22(1), (2008).
[13] Nakatani, H., Abe, K., Miyakawa, A. and Terakawa, S., "Three-dimensional measurement endoscope system with virtual rulers," J. Biomed. Opt. 12(5), (2007).


More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

Measurements using three-dimensional product imaging

Measurements using three-dimensional product imaging ARCHIVES of FOUNDRY ENGINEERING Published quarterly as the organ of the Foundry Commission of the Polish Academy of Sciences ISSN (1897-3310) Volume 10 Special Issue 3/2010 41 46 7/3 Measurements using

More information

3D Ultrasound Reconstruction By The 3 Cons: Michael Golden Khayriyyah Munir Omid Nasser Bigdeli

3D Ultrasound Reconstruction By The 3 Cons: Michael Golden Khayriyyah Munir Omid Nasser Bigdeli 3D Ultrasound Reconstruction By The 3 Cons: Michael Golden Khayriyyah Munir Omid Nasser Bigdeli Client Contact: Dr. Joseph McIsaac Hartford Hospital 80 Seymour St. PO Box 5037 Hartford, CT 06102 (860)

More information

Advanced Visual Medicine: Techniques for Visual Exploration & Analysis

Advanced Visual Medicine: Techniques for Visual Exploration & Analysis Advanced Visual Medicine: Techniques for Visual Exploration & Analysis Interactive Visualization of Multimodal Volume Data for Neurosurgical Planning Felix Ritter, MeVis Research Bremen Multimodal Neurosurgical

More information

Image-Based Rendering

Image-Based Rendering Image-Based Rendering COS 526, Fall 2016 Thomas Funkhouser Acknowledgments: Dan Aliaga, Marc Levoy, Szymon Rusinkiewicz What is Image-Based Rendering? Definition 1: the use of photographic imagery to overcome

More information

A High Speed Face Measurement System

A High Speed Face Measurement System A High Speed Face Measurement System Kazuhide HASEGAWA, Kazuyuki HATTORI and Yukio SATO Department of Electrical and Computer Engineering, Nagoya Institute of Technology Gokiso, Showa, Nagoya, Japan, 466-8555

More information

Sensor-aided Milling with a Surgical Robot System

Sensor-aided Milling with a Surgical Robot System 1 Sensor-aided Milling with a Surgical Robot System Dirk Engel, Joerg Raczkowsky, Heinz Woern Institute for Process Control and Robotics (IPR), Universität Karlsruhe (TH) Engler-Bunte-Ring 8, 76131 Karlsruhe

More information

Preliminary Testing of a Compact, Bone-Attached Robot for Otologic Surgery

Preliminary Testing of a Compact, Bone-Attached Robot for Otologic Surgery Preliminary Testing of a Compact, Bone-Attached Robot for Otologic Surgery Neal P. Dillon a, Ramya Balachandran b, Antoine Motte dit Falisse c, George B. Wanna b, Robert F. Labadie b, Thomas J. Withrow

More information

Computational Medical Imaging Analysis Chapter 4: Image Visualization

Computational Medical Imaging Analysis Chapter 4: Image Visualization Computational Medical Imaging Analysis Chapter 4: Image Visualization Jun Zhang Laboratory for Computational Medical Imaging & Data Analysis Department of Computer Science University of Kentucky Lexington,

More information

Creating a distortion characterisation dataset for visual band cameras using fiducial markers.

Creating a distortion characterisation dataset for visual band cameras using fiducial markers. Creating a distortion characterisation dataset for visual band cameras using fiducial markers. Robert Jermy Council for Scientific and Industrial Research Email: rjermy@csir.co.za Jason de Villiers Council

More information

Homework Assignment /655 (CIRCLE ONE) Fall Instructions and Score Sheet (hand in with answers)

Homework Assignment /655 (CIRCLE ONE) Fall Instructions and Score Sheet (hand in with answers) Homework Assignment 3 601.455/655 (CIRCLE ONE) Fall 2017 Instructions and Score Sheet (hand in with answers) Name Email Other contact information (optional) Signature (required) I have followed the rules

More information

CHomework Assignment /655 Fall 2017 (Circle One)

CHomework Assignment /655 Fall 2017 (Circle One) CHomework Assignment 2 600.455/655 Fall 2017 (Circle One) Instructions and Score Sheet (hand in with answers) Name Email Other contact information (optional) Signature (required) I/We have followed the

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric

More information

Two-view geometry Computer Vision Spring 2018, Lecture 10

Two-view geometry Computer Vision Spring 2018, Lecture 10 Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of

More information

3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa

3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa 3D Scanning Qixing Huang Feb. 9 th 2017 Slide Credit: Yasutaka Furukawa Geometry Reconstruction Pipeline This Lecture Depth Sensing ICP for Pair-wise Alignment Next Lecture Global Alignment Pairwise Multiple

More information

Geometry of Multiple views

Geometry of Multiple views 1 Geometry of Multiple views CS 554 Computer Vision Pinar Duygulu Bilkent University 2 Multiple views Despite the wealth of information contained in a a photograph, the depth of a scene point along the

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision

A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision Stephen Karungaru, Atsushi Ishitani, Takuya Shiraishi, and Minoru Fukumi Abstract Recently, robot technology has

More information

ANALYSIS, DESIGN, AND MODELING OF IMAGE-GUIDED ROBOTIC SYSTEMS FOR OTOLOGIC SURGERY. Neal P. Dillon

ANALYSIS, DESIGN, AND MODELING OF IMAGE-GUIDED ROBOTIC SYSTEMS FOR OTOLOGIC SURGERY. Neal P. Dillon ANALYSIS, DESIGN, AND MODELING OF IMAGE-GUIDED ROBOTIC SYSTEMS FOR OTOLOGIC SURGERY By Neal P. Dillon Dissertation Submitted to the Faculty of the Graduate School of Vanderbilt University in partial fullfillment

More information

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: , 3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Real-time self-calibration of a tracked augmented reality display

Real-time self-calibration of a tracked augmented reality display Real-time self-calibration of a tracked augmented reality display Zachary Baum, Andras Lasso, Tamas Ungi, Gabor Fichtinger Laboratory for Percutaneous Surgery, Queen s University, Kingston, Canada ABSTRACT

More information

Generating 3D Meshes from Range Data

Generating 3D Meshes from Range Data Princeton University COS598B Lectures on 3D Modeling Generating 3D Meshes from Range Data Robert Kalnins Robert Osada Overview Range Images Optical Scanners Error sources and solutions Range Surfaces Mesh

More information

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1. Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic

More information

Fully Automatic Endoscope Calibration for Intraoperative Use

Fully Automatic Endoscope Calibration for Intraoperative Use Fully Automatic Endoscope Calibration for Intraoperative Use Christian Wengert, Mireille Reeff, Philippe C. Cattin, Gábor Székely Computer Vision Laboratory, ETH Zurich, 8092 Zurich, Switzerland {wengert,

More information

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO Stefan Krauß, Juliane Hüttl SE, SoSe 2011, HU-Berlin PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO 1 Uses of Motion/Performance Capture movies games, virtual environments biomechanics, sports science,

More information

Cylinders in Vs An optomechanical methodology Yuming Shen Tutorial for Opti521 November, 2006

Cylinders in Vs An optomechanical methodology Yuming Shen Tutorial for Opti521 November, 2006 Cylinders in Vs An optomechanical methodology Yuming Shen Tutorial for Opti521 November, 2006 Introduction For rotationally symmetric optical components, a convenient optomechanical approach which is usually

More information

5-Axis Flex Track Drilling Systems on Complex Contours: Solutions for Position Control

5-Axis Flex Track Drilling Systems on Complex Contours: Solutions for Position Control 5-Axis Flex Track Drilling Systems on Complex Contours: Solutions for Position Control 2013-01-2224 Published 09/17/2013 Joseph R. Malcomb Electroimpact Inc. Copyright 2013 SAE International doi:10.4271/2013-01-2224

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

CS201 Computer Vision Camera Geometry

CS201 Computer Vision Camera Geometry CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the

More information

Achieving Proper Exposure in Surgical Simulation

Achieving Proper Exposure in Surgical Simulation Achieving Proper Exposure in Surgical Simulation Christopher SEWELL a, Dan MORRIS a, Nikolas BLEVINS b, Federico BARBAGLI a, Kenneth SALISBURY a Departments of a Computer Science and b Otolaryngology,

More information

Three-dimensional nondestructive evaluation of cylindrical objects (pipe) using an infrared camera coupled to a 3D scanner

Three-dimensional nondestructive evaluation of cylindrical objects (pipe) using an infrared camera coupled to a 3D scanner Three-dimensional nondestructive evaluation of cylindrical objects (pipe) using an infrared camera coupled to a 3D scanner F. B. Djupkep Dizeu, S. Hesabi, D. Laurendeau, A. Bendada Computer Vision and

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

Miniature faking. In close-up photo, the depth of field is limited.

Miniature faking. In close-up photo, the depth of field is limited. Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg

More information

Camera Calibration. COS 429 Princeton University

Camera Calibration. COS 429 Princeton University Camera Calibration COS 429 Princeton University Point Correspondences What can you figure out from point correspondences? Noah Snavely Point Correspondences X 1 X 4 X 3 X 2 X 5 X 6 X 7 p 1,1 p 1,2 p 1,3

More information

Available online at ScienceDirect. Energy Procedia 69 (2015 )

Available online at   ScienceDirect. Energy Procedia 69 (2015 ) Available online at www.sciencedirect.com ScienceDirect Energy Procedia 69 (2015 ) 1885 1894 International Conference on Concentrating Solar Power and Chemical Energy Systems, SolarPACES 2014 Heliostat

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Character Modeling IAT 343 Lab 6. Lanz Singbeil

Character Modeling IAT 343 Lab 6. Lanz Singbeil Character Modeling IAT 343 Lab 6 Modeling Using Reference Sketches Start by creating a character sketch in a T-Pose (arms outstretched) Separate the sketch into 2 images with the same pixel height. Make

More information

Assessing Accuracy Factors in Deformable 2D/3D Medical Image Registration Using a Statistical Pelvis Model

Assessing Accuracy Factors in Deformable 2D/3D Medical Image Registration Using a Statistical Pelvis Model Assessing Accuracy Factors in Deformable 2D/3D Medical Image Registration Using a Statistical Pelvis Model Jianhua Yao National Institute of Health Bethesda, MD USA jyao@cc.nih.gov Russell Taylor The Johns

More information

Optimum Robot Control for 3D Virtual Fixture in Constrained ENT Surgery

Optimum Robot Control for 3D Virtual Fixture in Constrained ENT Surgery Optimum Robot Control for 3D Virtual Fixture in Constrained EN Surgery Ming Li and Russell H. aylor Department of Computer Science Department NSF Engineering Research Center for Computer Integrated Surgical

More information

Shadow casting. What is the problem? Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING IDEAL DIAGNOSTIC IMAGING STUDY LIMITATIONS

Shadow casting. What is the problem? Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING IDEAL DIAGNOSTIC IMAGING STUDY LIMITATIONS Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING Reveal pathology Reveal the anatomic truth Steven R. Singer, DDS srs2@columbia.edu IDEAL DIAGNOSTIC IMAGING STUDY Provides desired diagnostic

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

Announcements. Motion. Structure-from-Motion (SFM) Motion. Discrete Motion: Some Counting

Announcements. Motion. Structure-from-Motion (SFM) Motion. Discrete Motion: Some Counting Announcements Motion HW 4 due Friday Final Exam: Tuesday, 6/7 at 8:00-11:00 Fill out your CAPES Introduction to Computer Vision CSE 152 Lecture 20 Motion Some problems of motion 1. Correspondence: Where

More information

Epipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz

Epipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz Epipolar Geometry Prof. D. Stricker With slides from A. Zisserman, S. Lazebnik, Seitz 1 Outline 1. Short introduction: points and lines 2. Two views geometry: Epipolar geometry Relation point/line in two

More information

Digital Volume Correlation for Materials Characterization

Digital Volume Correlation for Materials Characterization 19 th World Conference on Non-Destructive Testing 2016 Digital Volume Correlation for Materials Characterization Enrico QUINTANA, Phillip REU, Edward JIMENEZ, Kyle THOMPSON, Sharlotte KRAMER Sandia National

More information

Intramedullary Nail Distal Hole Axis Estimation using Blob Analysis and Hough Transform

Intramedullary Nail Distal Hole Axis Estimation using Blob Analysis and Hough Transform Intramedullary Nail Distal Hole Axis Estimation using Blob Analysis and Hough Transform Chatchai Neatpisarnvanit Department of Electrical Engineering Mahidol University Nakorn Pathom, Thailand egcnp@mahidol.ac.th

More information

Basilio Bona DAUIN Politecnico di Torino

Basilio Bona DAUIN Politecnico di Torino ROBOTICA 03CFIOR DAUIN Politecnico di Torino Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V

More information

Introduction to 3D Concepts

Introduction to 3D Concepts PART I Introduction to 3D Concepts Chapter 1 Scene... 3 Chapter 2 Rendering: OpenGL (OGL) and Adobe Ray Tracer (ART)...19 1 CHAPTER 1 Scene s0010 1.1. The 3D Scene p0010 A typical 3D scene has several

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

Visual Sensor-Based Measurement for Deformable Peg-in-Hole Tasks

Visual Sensor-Based Measurement for Deformable Peg-in-Hole Tasks Proceedings of the 1999 IEEVRSJ International Conference on Intelligent Robots and Srjtems Visual Sensor-Based Measurement for Deformable Peg-in-Hole Tasks J. Y. Kim* and H. S. Cho** * Department of Robot

More information

Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences

Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3 1 Pattern Recognition Lab, Friedrich-Alexander-Universität

More information

What have we leaned so far?

What have we leaned so far? What have we leaned so far? Camera structure Eye structure Project 1: High Dynamic Range Imaging What have we learned so far? Image Filtering Image Warping Camera Projection Model Project 2: Panoramic

More information

Measurement of 3D Foot Shape Deformation in Motion

Measurement of 3D Foot Shape Deformation in Motion Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The

More information

Advanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation

Advanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation Advanced Vision Guided Robotics David Bruce Engineering Manager FANUC America Corporation Traditional Vision vs. Vision based Robot Guidance Traditional Machine Vision Determine if a product passes or

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

Quantifying Risky Behavior in Surgical Simulation

Quantifying Risky Behavior in Surgical Simulation Quantifying Risky Behavior in Surgical Simulation Christopher Sewell 1 Dan Morris 1 Nikolas Blevins 2 Federico Barbagli 1 Kenneth Salisbury 1 Departments of 1 Computer Science and 2 Otolaryngology, Stanford

More information

Rendering-Based Video-CT Registration with Physical Constraints for Image-Guided Endoscopic Sinus Surgery

Rendering-Based Video-CT Registration with Physical Constraints for Image-Guided Endoscopic Sinus Surgery Rendering-Based Video-CT Registration with Physical Constraints for Image-Guided Endoscopic Sinus Surgery Y. Otake a,d, S. Leonard a, A. Reiter a, P. Rajan a, J. H. Siewerdsen b, M. Ishii c, R. H. Taylor

More information