Optimal Planning of Robotically Assisted Heart Surgery: Transfer Precision in the Operating Room


Ève Coste-Manière 1, Louaï Adhami 1, Fabien Mourgues 1, Olivier Bantiche 1, David Le 2, David Hunt 2, Nick Swarup 2, Ken Salisbury 2, and Gary Guthart 2

1 INRIA Sophia-Antipolis, ChIR surgical robotics
2 Intuitive Surgical Inc.

Abstract. The aim of this work is to quantify the errors introduced at the different levels of applying results planned with a computer integrated system (CIS) in the operating room, and to use these errors to adapt the transfer and rethink the planning. In particular, the registration between preoperative imaging and the intraoperative patient model, as well as between the patient and the robot, is addressed. Two different registration methods are used and their accuracies compared. Moreover, augmented reality trials are conducted to assess the difficulty of adapting preoperative data to intraoperative models in order to deliver useful information to the surgeon during the intervention. The experimental work in this paper was conducted on a dog for a coronary bypass intervention using the Da Vinci™ surgical system.

1 Theoretical results

The benefits of robotic assistance have become apparent in many surgical specialties, including cardiac surgery; however, their potential is still far from being fully exploited in the operating room. This gap between use and potential stems largely from the relatively recent introduction of robots into surgery, which implies new settings, new protocols and new methods for performing the intervention. One of the best immediate remedies is a more efficient preparation of the intervention, in which the robotic system, the patient and the needs of the intervention are all taken into account. Unfortunately, such preparation is inevitably coupled with the difficulty of applying the planned results in the operating room, which could invalidate any claimed optimality. As a consequence, being able to measure the errors produced during the transfer enables us to guarantee a level of security on the execution of the planned results.

The work presented in this paper focuses on transferring planned results using a computer integrated approach for robotically assisted minimally invasive surgery (RMIS). A quick overview of the approach is shown in figure 1; this paper concentrates on transferring the planned results into the operating theater.

Fig. 1. General CIS approach; a more detailed description is found in [2]. All the components of this architecture are integrated into a single, modular, interfaced system: STARS (Simulation and Transfer Architecture for Robotic Surgery). The steps are:
- Perception: gather information about the patient, the robot and the environment.
- Port placement: determine the best incision sites (ports) based on the intervention requirements, the patient anatomy and the tool specifications.
- Robot positioning: determine the best relative position of the robot, the patient and the operating theater.
- Verification & validation: automatically validate the planning results under more realistic operating conditions and verify the operating protocol and the corresponding robot logic flow.
- Simulation: rehearse the intervention using patient and robot data in simulated operating conditions.
- Transfer: transfer the results to the operating room through proper registration and positioning.
- Execution / augmented reality: execute the planned results with visual and computational assistance, monitor and predict possible complications.
- Analysis: store the settings and history of the intervention for archiving and subsequent analysis.

The transfer is a delicate operation, for not all factors can be controlled, possibly leading to the loss of any claimed optimality, or even of the utility, of the planned solutions. In an RMIS procedure, the transfer consists of the following intraoperative steps: register the preoperative models to the patient, register the robot to the patient, locate the planned ports on the patient, and give the robot the planned posture. Once the results have been transferred, the intervention is monitored for any predicted or unpredicted complications, and its success is assessed. The effect of the transfer can then be deduced by analyzing the observed complications and comparing the expected vs. obtained success rate of the intervention (e.g. overall visibility, dexterity, etc.).

1.1 Registration

The registration step is twofold: register the preoperative images (on which the planning results have been obtained) to the patient in intraoperative position, then register the robot to the patient. In the following, a fiducial based and a projector based method are presented to carry out the registration.

Fiducial based registration

Fiducial based registration is a simple and efficient registration technique; however, it necessitates the segmentation of fiducial markers in the preoperative data, as well as their localization intraoperatively. By definition, fiducial markers are usually very distinct in preoperative imaging and their delineation can easily be achieved, as depicted in figure 3. Moreover, the configuration of the markers should minimize any potential variations between their preoperative and intraoperative positions, e.g. by privileging stiff placement such as on bony structures.

Once fiducial points have been obtained in both the preoperative images and intraoperative conditions (see figure 3), a rigid transformation relating the two coordinate frames is sought in a least-squares sense: the patient frame, which is that of the preoperative images, and the remote frame, which is that of the robot. Looking for a rigid transformation assumes that the positions of the segmented centroids of the fiducials are accurate (e.g. no shift along the z axis in a CT scan), and that the fiducials did not move between the preoperative acquisition and the intraoperative measurements. The first assumption depends on the equipment used and is usually easy to meet. The second, on the other hand, requires that the fiducials be attached to rigid structures on the patient, which is not always possible, especially when the fiducial markers are glued to the skin. As a consequence, special care should be taken when choosing the location of the fiducials.
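The paper does not spell out this computation, but a least-squares rigid fit between corresponding fiducial sets is commonly obtained in closed form from an SVD of the cross-covariance matrix (Kabsch/Horn style). A minimal sketch under that assumption, with illustrative names, together with the RMS residual used as the error measure in section 2.2:

```python
import numpy as np

def rigid_fit(p_ct, p_robot):
    """Least-squares rigid transform (R, t) mapping CT-frame fiducials
    p_ct (N x 3) onto robot-frame fiducials p_robot (N x 3)."""
    c_ct, c_rb = p_ct.mean(axis=0), p_robot.mean(axis=0)
    H = (p_ct - c_ct).T @ (p_robot - c_rb)          # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # guard against reflections
    t = c_rb - R @ c_ct
    return R, t

def rms_error(p_ct, p_robot, R, t):
    """RMS residual of the fit (the kind of figure quoted in section 2.2)."""
    residuals = p_robot - (p_ct @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

The reflection guard (the diagonal matrix D) keeps the result a proper rotation when the point sets are noisy or nearly coplanar.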

Surface reconstruction based registration

The goal of this method is to register the skin surface segmented from the scan with a 3D reconstruction of the patient in the OR. The reconstruction system is based on a novel structured light method which requires a single image of the scene illuminated by the projector. This results in a small hardware setup (the projector and a digital camera mounted on a tripod, both controlled by a laptop computer) and a simple protocol, since no respiration gating is needed. Details of the method can be found in [4]. Briefly, we use passive correlation based stereoscopic methods, with the advantage that we can choose one image of the stereoscopic pair (the projected image) so that the solution to the correspondence problem is both easy to compute and unique, since the projected image is a random gray-level pattern.

For our experiments, we chose a calibration method that takes as input several views of a single planar grid (a checkerboard) seen at various orientations, as described in [8]. In order to calibrate both the camera and the projector with the same set of images, the random gray-level pattern is projected onto the calibration grid throughout the acquisition sequence. An estimation of the homographic relationships between the original pattern and the pattern seen in the grid pictures allows the positions of the grid corners to be computed in the projector coordinate frame, and thus the projector to be calibrated.

Once the setup is calibrated, the reconstruction is a straightforward step, since standard correlation based stereo algorithms are used to solve the correspondence problem. Rectification is done using the camera and projector calibration data, so that corresponding points lie on the same horizontal scanline in both rectified images (for details see section 6.3 of [6]). The disparity map is obtained by matching pattern and image points using zero-mean normalized cross correlation. The registration of the reconstructed skin surface with the surface segmented from the scan is then done using an Iterative Closest Point (ICP) algorithm (see [9]).
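For illustration only, the zero-mean normalized cross correlation score and the scanline search it drives could look like the following sketch (the window size, search range and image naming are assumptions, not taken from the paper):

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross correlation between two equally sized windows."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_disparity(left, right, row, col, half=7, max_disp=64):
    """Search along the same rectified scanline for the disparity that maximizes
    ZNCC around (row, col); assumes the window lies inside both images."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(zncc(ref, cand))
    return int(np.argmax(scores)) if scores else 0
```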

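Likewise, a bare-bones point-to-point ICP loop for the surface registration step, assuming the reconstructed patch is already roughly aligned with the scan surface and using SciPy's k-d tree for the nearest-neighbour queries:

```python
import numpy as np
from scipy.spatial import cKDTree

def _fit(src, dst):
    """Closed-form least-squares rigid transform mapping src onto dst (both N x 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50, tol=1e-6):
    """Rigidly align 'source' (e.g. the reconstructed skin patch) onto 'target'
    (e.g. the skin surface segmented from the CT scan); both are N x 3 arrays."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        moved = source @ R.T + t
        dist, idx = tree.query(moved)          # closest-point correspondences
        R, t = _fit(source, target[idx])       # re-estimate the rigid transform
        err = np.sqrt((dist ** 2).mean())      # RMS closest-point distance
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t, err
```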
1.2 Port placement

An optimization algorithm is used to determine the best entry points on the segmented anatomy of the dog. The algorithm is based on a semantic description of the intervention, where the main syntactic elements are target points inserted by the surgeon on the anatomical model of the patient to represent the position, orientation and amplitude of the surgical sites. The semantics of the intervention are translated into mathematical criteria that are subsequently used in an exhaustive optimization to yield the most advantageous port combination for the patient, intervention and robotic tools under consideration. The criteria and their mathematical formulation are detailed in [3]. The positions of the ports computed during the planning process are transferred onto the patient's skin either using the robot as an interactive pointer, or through a direct video projection.

Interactive guidance

Once the robot is registered to the patient, the planned ports are expressed in the robot coordinate frame and used to guide the user, who manipulates the end effector as a pointing device, as shown in figure 6. The precision of this approach is limited by that of the registration, as well as by the ability of the operator to move the pointing device (the arm end effector). Moreover, the ports should lie in an area where the result of the registration remains valid; in other words, the placement of the fiducial markers should take into account a desirable proximity to the potential positions of the ports.

Direct projection

Since the patient has been registered to the scan using the stereo setup, it is quite simple to project any given point chosen from the scan directly onto the skin. The projection matrix of the projector, combined with the transformation matrix computed previously, gives a direct relationship between the coordinates of the ports in the scan frame and in the projector image frame.
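As a hypothetical illustration of this relationship (the 3 x 4 projector matrix and the scan-to-projector rigid transform below are assumptions about the data layout, not values from the paper):

```python
import numpy as np

def project_port(port_ct, R, t, P):
    """Map a planned port (3-vector, CT/scan frame) to projector pixel coordinates.

    R, t : rigid transform taking scan-frame coordinates into the projector/stereo
           frame (e.g. the inverse of the ICP result sketched above).
    P    : 3 x 4 projection matrix of the calibrated projector.
    """
    x_proj = R @ port_ct + t                      # port in the projector frame
    u = P @ np.append(x_proj, 1.0)                # homogeneous image coordinates
    return u[:2] / u[2]                           # pixel to illuminate on the skin
```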

1.3 Robot positioning

Planning results are not solely composed of the placement of the ports; they also include an optimal position of the robotic system, which is computed so as to maximize the separation between the robotic arms and between the arms and surrounding obstacles, while at the same time ensuring fixed spatial constraints (the positions of the ports) on the arms (see [1]). The positioning concerns the passive degrees of freedom (dofs) of the robot: in the case of the Da Vinci™ robot these are the position of the base and the set-up joints, which carry the active (tele-operated) parts of the arms. Therefore, to reproduce the computed position of the robot in operating conditions, the base of the robot has to be correctly placed with respect to the patient, and its set-up joints should be fixed at the planned values. Alternatively, if not all of the passive dofs can be controlled (e.g. if it is impossible to move the base of the robot), the optimization is not carried out for those dofs; instead, they are treated as additional constraints on the system.

1.4 Augmented reality

The possibilities offered by augmented reality are vast, especially in minimally invasive surgery, where reduced visibility is a major concern. For example, in TECAB interventions a view of the operating field (3D with the Da Vinci™ system) enhanced with a 3D model of the coronary tree built from angiography acquisitions would make the surgeon's work easier (see [5]). This non-obvious registration problem between the preoperative model and the endoscopic view can be addressed by: using the result of the external registration of the patient with the robot and the preoperative models (see section 1.1) as an initialization of the superimposition; using the displacement of the endoscope tip deduced from the robot articular values; and refining the superimposition with intra-image measurements to deal with the displacement (see section 2.2), deformation and motion of the concerned organs. We propose to solve the registration problem interactively by combining measures based on automatically extracted reliable landmarks with indications given by the surgeon.

The use of the robot kinematics and of the superimposition in the endoscopic images assumes that the motorized stereoscopic endoscope is precisely calibrated (see [7] for details on the developed method); i.e., the following should be computed: the optical parameters of the two cameras, the rigid transform between them, and the transformation between the camera frames and the endoscope tip.
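A rough sketch of how such an overlay initialization chains the different transforms together; the frame naming and the distortion-free pinhole model are assumptions, and the paper's actual calibration procedure is the one of [7]:

```python
import numpy as np

def overlay_points(model_ct, T_robot_from_ct, T_robot_from_tip, T_cam_from_tip, K):
    """Project 3D model points (N x 3, CT frame) into one endoscopic camera.

    T_robot_from_ct  : 4 x 4 patient-to-robot registration (section 1.1).
    T_robot_from_tip : 4 x 4 endoscope-tip pose from the robot forward kinematics.
    T_cam_from_tip   : 4 x 4 transform from the endoscope tip frame to the camera
                       frame, from the endoscope calibration.
    K                : 3 x 3 camera intrinsics (ideal pinhole, no distortion here).
    """
    T_cam_from_ct = T_cam_from_tip @ np.linalg.inv(T_robot_from_tip) @ T_robot_from_ct
    pts = np.c_[model_ct, np.ones(len(model_ct))]     # homogeneous CT-frame points
    cam = (T_cam_from_ct @ pts.T).T[:, :3]            # same points in the camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                     # pixel positions of the overlay
```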

2 Experimental validation and results

The experiments concern the TECAB intervention, which consists of grafting onto a damaged coronary artery another artery (or, alternatively, a vein between the two arteries) to be used as a bypass irrigating the affected portion of the heart. The entire intervention is performed through incisions in the chest wall; carbon dioxide is insufflated and the left lung is collapsed, thus enabling the movement of the instruments. The Da Vinci™ surgical system, operated in a special API mode that enables precise readings of the state of the robot and of the endoscopic images, is used to operate on a dog.

2.1 Preoperative processing & planning

Two sets of CT scans were acquired and processed to obtain surface models of the dog's heart, bones, LIMA (the artery used for the bypass) and skin, as shown in figure 2, on which the surgeon modeled the intervention through target cones on the heart and on the LIMA. The planning was performed as described in [3] and [1] to obtain an optimal placement of the ports and the corresponding collision-free position of the robot (figure 2), achieving a minimum separation of 37.8 mm between the robotic arms.

Fig. 2. Planning the intervention: dog anatomy and optimal port placement (left), optimal robot position (right).

2.2 Registration

Twelve fiducial markers were manually identified on the CT scan and pointed by one of the robotic arms after the dog had been positioned on the operating table, as shown in figure 3. The obtained RMS error was 19 mm, which is considerably higher than the errors previously observed on plastic phantoms, typically lower than one centimeter. Looking back at the assumptions behind using a one-step rigid transformation and at the measurement process, it is unlikely that the CT scan reconstruction caused the above error, given its quality. The same applies to the precision of the robot, which was calibrated before the experiment.

Indeed, this step registers the robot directly to the CT scan model, thus implicitly neglecting any non-rigid transformation between the scan and the intraoperative position of the dog. This relatively high error is the counterpart of the simplicity and non-invasiveness of this marker based registration.

Fig. 3. Fiducial markers are easily distinguished in preoperative CT images (left). Pointing fiducial markers with the robot tool tip (middle and right).

A reconstruction of a skin patch of the dog was performed in order to test the second registration method, and was then registered to the scan with an ICP algorithm, as shown in figure 4. To verify the registration, it had been planned to reproject the fiducials segmented from the scan. Unfortunately, due to constraints on the availability of the lab, the stereo calibration had to be done during the intervention, consuming most of the time granted to this experiment. As a consequence, the reprojection could not be done directly on the dog; instead it was done on the picture from the camera (the marker coordinates were mapped into the camera frame instead of the projector frame). The result is shown in figure 5.

Fig. 4. (left) Stereo reconstructed patch of skin; (middle) patch registered with the scan-segmented skin; (right) skin segmented from the scan.

Error measurements

To better discriminate the different sources of error, a new registration was performed after the insufflation, which introduced a deformation of the rib cage over which the fiducials are distributed. The obtained RMS error was 20.1 mm, which is comparable to the original error before the insufflation, suggesting that the deformation is within the precision of the registration. On the other hand, the surface based registration gave an RMS error of 10.7 mm. This error was obtained by measuring the distances between corresponding fiducials in the scan and in the reconstructed patch after ICP registration. It is to be compared with an RMS error of 6.7 mm, corresponding to the transformation that best matches these markers; this latter error is due to geometric distortion between the scan model and the stereo one (deformation due to breathing, position, calibration and reconstruction error).
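As an illustration of how these two numbers relate, assuming corresponding fiducial centroids are available in both the scan and the reconstructed patch, and reusing the rigid_fit and rms_error helpers sketched in section 1.1:

```python
import numpy as np

def compare_surface_registration(fid_ct, fid_patch, R_icp, t_icp):
    """Mirror the 10.7 mm vs 6.7 mm comparison of section 2.2.

    fid_ct, fid_patch : corresponding fiducial centroids (N x 3) in the scan
                        and in the reconstructed skin patch.
    R_icp, t_icp      : transform returned by the surface-based ICP registration
                        (patch frame -> scan frame).
    Uses rigid_fit() and rms_error() from the sketch in section 1.1.
    """
    err_icp = rms_error(fid_patch, fid_ct, R_icp, t_icp)   # fiducial residual after ICP
    R_best, t_best = rigid_fit(fid_patch, fid_ct)          # best marker-matching transform
    err_best = rms_error(fid_patch, fid_ct, R_best, t_best)
    return err_icp, err_best
```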

Finally, before the end of the intervention, a reference needle inserted in the thorax of the dog and pointed by the instrument was used to investigate the span of validity of the registration, as shown in figure 5. An error of 3.2 cm was measured, which is not explained by the registration error. As a consequence, an alternative solution should be sought to correct this shift, such as intraoperative tracking through visual cues (see section 1.4).

Fig. 5. (left) Virtual needle used to measure the tool tip error. (right) Synthetic projection of the fiducials.

2.3 Port placement

As shown in figure 6, an interactive interface is used to continuously display the error vector between the position of a desired port and one of the arms of the robot. The surgeon uses this information to move the tool tip of the arm to the incision site. The method was judged (clinically) satisfactory in terms of both efficiency and precision. As explained in section 2.2, the locations of the ports could not be reprojected intraoperatively, and thus no error measurement could be done directly on the dog. However, a synthetic reprojection, which by construction should be very close to what the actual reprojection would have been, is shown in figure 6 (right).

Fig. 6. (left) Using the robot as an interactive pointing device for the planned ports: a radar-like X-Y and depth indication of the desired port, with additional error information, is constantly displayed. (right) Synthetic projection of the ports.
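The radar-like display of figure 6 essentially amounts to expressing the planned port in the robot frame and splitting the residual to the current tool tip into in-plane and depth components. A hypothetical sketch (the choice of the view axis for the depth split is an assumption):

```python
import numpy as np

def guidance_error(port_ct, R, t, tool_tip_robot, view_axis=np.array([0.0, 0.0, 1.0])):
    """Error between a planned port and the current tool tip, in the robot frame.

    port_ct        : planned port (3-vector) in the CT/scan frame.
    R, t           : patient-to-robot registration (section 1.1).
    tool_tip_robot : current tool tip position read through the robot API.
    Returns the in-plane (X-Y) error vector and the depth error along view_axis.
    """
    port_robot = R @ port_ct + t
    e = port_robot - tool_tip_robot
    depth = float(e @ view_axis)                  # signed distance along the view axis
    in_plane = e - depth * view_axis              # what the radar-like X-Y display shows
    return in_plane, depth
```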

2.4 Robot positioning

The robot was positioned to match the stationary remote centers of the arms with the ports, and to avoid any singular or colliding configuration, as shown in figure 2. The minimum separation obtained between the robotic arms was 38.7 mm. This optimization concerned the robot set-up joints and the position of the base of the Da Vinci™ system with respect to the patient (9 dofs in total), and took 132 seconds. Subsequently, and in the absence of any absolute positioning system, the base was moved to a position matching the computed optimal position as closely as possible, after which a new registration was performed and another optimization of the position of the robot was carried out, this time without the position of the base. The resulting position is shown in figure 7; for it, no collision-free solution existed. Indeed, an automatic validation, in which the expected movements of the robot are reproduced after the set-up joints have been fixed, showed that a persistent collision occurs with the lower limbs of the dog (maximum of 11 mm), that no collision occurs between the arms, and that a small collision occurs between one of the arms and the operating table (3.2 mm). The conflict with the lower limbs of the dog is not alarming, since mild pressure (1.1 cm) can easily be tolerated. However, a collision with the operating table would lead to a blocking situation. An easy workaround would have been to allow a rotation of the dog in the pose planning, which would have given more flexibility to the relative positioning of the robot and the dog, thus avoiding the colliding state. Nevertheless, it was decided to keep the current position in order to confirm the predicted complications experimentally. Indeed, and despite the high registration error, both collisions were observed at the expected sites. Moreover, an unpredicted collision between the arms also occurred, but this was due to the surgeon exploring the pericardium, a situation that had not been modeled in the planning phase.

Fig. 7. Final ports and positioning of the robot at the beginning of the intervention.

2.5 Augmented reality

The endoscope was calibrated on a data set of about ten pairs of images (as illustrated in figure 8). Using the pose of the endoscope with respect to the grid, the reprojection error was about 0.5 pixels.
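A hypothetical way of computing such a reprojection error through the kinematic chain (the grid pose, the frame naming and the distortion-free pinhole model are assumptions):

```python
import numpy as np

def kinematic_reprojection_error(grid_pts, detected_uv, T_cam_from_tip,
                                 T_robot_from_tip, T_robot_from_grid, K):
    """Mean pixel error when grid corners are reprojected through the kinematic chain.

    grid_pts          : N x 3 corner coordinates in the calibration-grid frame.
    detected_uv       : N x 2 corner positions detected in the endoscopic image.
    T_cam_from_tip    : camera frame relative to the endoscope tip (from calibration).
    T_robot_from_tip  : tip pose from the robot forward kinematics.
    T_robot_from_grid : grid pose in the robot frame.
    K                 : 3 x 3 camera intrinsics.
    """
    T_cam_from_grid = T_cam_from_tip @ np.linalg.inv(T_robot_from_tip) @ T_robot_from_grid
    pts = np.c_[grid_pts, np.ones(len(grid_pts))]
    cam = (T_cam_from_grid @ pts.T).T[:, :3]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return float(np.linalg.norm(uv - detected_uv, axis=1).mean())  # mean error in pixels
```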

After estimation of the transformation between the cameras and the endoscope tip, the reprojection error was about 4.5 pixels using the robot forward kinematics. An equivalent error was found by reprojecting the grid from new positions of the endoscope corresponding to its displacement across the operating volume. As a consequence, the displacement of the endoscope deduced from the forward robot kinematics will have to be refined using intra-image measurements in order to sustain a precise superimposition.

Due to the experimental constraints we could not acquire an angiogram of the dog and test the initialization of the overlay, but section 2.2 gives a rough estimate of the shift that can be expected; it will have to be compensated interactively by the surgeon after the first superimposition. The right part of figure 8 illustrates the overlay, in the endoscopic images, of a coronary tree built off-line from the calibrated endoscopic views and reprojected.

Fig. 8. Left: reprojection of the calibration grid using the forward robot kinematics; right: 3D model of the coronary tree built from calibrated endoscopic views.

3 Conclusions and perspectives

Despite the difficulties faced in terms of time limitations and logistics, the animal experiment was highly rewarding: the operation was successfully carried out according to the transferred planning, and the predicted complications were confirmed. It also gave strong support to many of the assumptions built during previous experiments on a plastic phantom, such as the adequacy of the geometric approximations used for the robot and the surrounding obstacles, and unveiled several subtleties that had not been accounted for, such as the effect of using a rigid transformation for the registration. Moreover, the time requirements of the transfer methods, summing to under 20 min for any combination, were satisfactory.

Efforts in the near future will focus on analyzing the collected data and correlating the observed errors with the transfer steps and assumptions. Furthermore, the effect of the latter on the planned results and on the final outcome of the intervention should be clearly identified. New solutions will also have to be sought to increase intraoperative precision and to deal with the motion of internal organs through tracking techniques for use with augmented reality. The long term goal of this analysis work is to adapt our CIS system for planning surgical robotic interventions to the settings of the operating room and its inherent surgical requirements, in order to achieve maximum patient benefit.

References

1. L. Adhami and È. Coste-Manière. Positioning tele-operated surgical robots for collision-free optimal operation. In Proc. of the 2002 IEEE International Conference on Robotics and Automation, May 2002.
2. L. Adhami and È. Coste-Manière. A versatile system for computer integrated mini-invasive robotic surgery. In Medical Image Computing and Computer Assisted Intervention (MICCAI '02), to appear.
3. L. Adhami, È. Coste-Manière, and J.-D. Boissonnat. Planning and simulation of robotically assisted minimal invasive surgery. In Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI '00), volume 1935 of Lect. Notes in Comp. Sc., Springer, Oct 2000.
4. F. Devernay, O. Bantiche, and È. Coste-Manière. Structured light on dynamic scenes using standard stereoscopy algorithms. Research report, ChIR Medical Robotics group, INRIA Sophia Antipolis, March 2002.
5. F. Devernay, F. Mourgues, and È. Coste-Manière. Towards endoscopic augmented reality for robotically assisted minimally invasive cardiac surgery. In Proc. Intl. Workshop on Medical Imaging and Augmented Reality, pages 16-20. IEEE, 2001.
6. O. Faugeras. Three-Dimensional Computer Vision. MIT Press, 1993.
7. F. Mourgues and È. Coste-Manière. Flexible calibration of robotized stereoscopic endoscope for overlay in robot assisted surgery. In Medical Image Computing and Computer Assisted Intervention (MICCAI '02), to appear.
8. R. Y. Tsai. A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4):323-344, August 1987.
9. Z. Zhang. Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision, 13(2):119-152, 1994.


More information

Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing

Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing Minoru Asada, Takamaro Tanaka, and Koh Hosoda Adaptive Machine Systems Graduate School of Engineering Osaka University, Suita,

More information

ULTRASOUND (US) has become one of the standard medical imaging techniques and is widely used within diagnostic

ULTRASOUND (US) has become one of the standard medical imaging techniques and is widely used within diagnostic Towards -Based Autonomous Robotic US Acquisitions: A First Feasibility Study Christoph Hennersperger*, Bernhard Fuerst*, Salvatore Virga*, Oliver Zettinig, Benjamin Frisch, Thomas Neff, and Nassir Navab

More information

Nonrigid Motion Compensation of Free Breathing Acquired Myocardial Perfusion Data

Nonrigid Motion Compensation of Free Breathing Acquired Myocardial Perfusion Data Nonrigid Motion Compensation of Free Breathing Acquired Myocardial Perfusion Data Gert Wollny 1, Peter Kellman 2, Andrés Santos 1,3, María-Jesus Ledesma 1,3 1 Biomedical Imaging Technologies, Department

More information

Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway

Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway Ankur Kapoor 1, Kai Xu 2, Wei Wei 2 Nabil Simaan 2 and Russell H. Taylor 1 1 ERC-CISST Department of Computer Science

More information

PRELIMINARY RESULTS ON REAL-TIME 3D FEATURE-BASED TRACKER 1. We present some preliminary results on a system for tracking 3D motion using

PRELIMINARY RESULTS ON REAL-TIME 3D FEATURE-BASED TRACKER 1. We present some preliminary results on a system for tracking 3D motion using PRELIMINARY RESULTS ON REAL-TIME 3D FEATURE-BASED TRACKER 1 Tak-keung CHENG derek@cs.mu.oz.au Leslie KITCHEN ljk@cs.mu.oz.au Computer Vision and Pattern Recognition Laboratory, Department of Computer Science,

More information

An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy

An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy Chenyang Xu 1, Siemens Corporate Research, Inc., Princeton, NJ, USA Xiaolei Huang,

More information

4D Motion Modeling of the Coronary Arteries from CT Images for Robotic Assisted Minimally Invasive Surgery

4D Motion Modeling of the Coronary Arteries from CT Images for Robotic Assisted Minimally Invasive Surgery 4D Motion Modeling of the Coronary Arteries from CT Images for Robotic Assisted Minimally Invasive Surgery Dong Ping Zhang 1, Eddie Edwards 1,2, Lin Mei 1,2, Daniel Rueckert 1 1 Department of Computing,

More information

Markerless human motion capture through visual hull and articulated ICP

Markerless human motion capture through visual hull and articulated ICP Markerless human motion capture through visual hull and articulated ICP Lars Mündermann lmuender@stanford.edu Stefano Corazza Stanford, CA 93405 stefanoc@stanford.edu Thomas. P. Andriacchi Bone and Joint

More information