Combining Template Tracking and Laser Peak Detection for 3D Reconstruction and Grasping in Underwater Environments


2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 7-12, 2012, Vilamoura, Algarve, Portugal

Mario Prats, J. Javier Fernández and Pedro J. Sanz
(The authors are with the Computer Science and Engineering Department, University Jaume-I, Castellón, Spain. mprats@uji.es)

Abstract— Autonomous grasping of unknown objects by a robot is a highly challenging skill that has been receiving increasing attention in recent years. The problem becomes even more challenging (and less explored) in underwater environments, with highly unstructured scenarios, limited availability of sensors and, in general, adverse conditions that affect the robot perception and control systems to different degrees. This paper describes an approach for the semi-autonomous grasping and recovery of unknown underwater objects from floating vehicles. A laser stripe emitter is attached to a robot forearm that performs a scan of a target of interest. This scan is captured by a camera that also estimates the motion of the floating vehicle while the scan takes place. The scanned points are triangulated and transformed according to the motion estimate, thus recovering partial 3D structure of the scene with respect to a fixed frame. A user then indicates the part of the object to grab, and the final grasp is automatically planned on that area. The approach presented herein is tested and validated in water tank conditions.

Fig. 1. The laser stripe emitter is mounted on a manipulator forearm that performs a scan of the object to be grasped. The AUV design in this picture corresponds to the future integration with the Girona 500 vehicle [7] for experiments in real scenarios. The manipulator is the Light-weight ARM 5E [8].

I. INTRODUCTION

Exploration of the oceans and shallow waters is attracting the interest of many companies and institutions all around the world: in some cases because of the ocean's valuable resources, in others because of the knowledge it holds for scientists, and also for rescue purposes. Remotely Operated Vehicles (ROVs) are currently the most widespread machines for doing these tasks. In this context, expert pilots remotely control the underwater vehicles from support vessels. However, due to the high costs and control problems involved in ROV-based missions, the trend is to advance towards more autonomous systems, i.e. Autonomous Underwater Vehicles (AUVs). One of the most challenging problems in the migration from tele-operated to autonomous vehicles is automating grasping and manipulation tasks, mainly because of the limited availability of sensors and the continuous motion that naturally affects floating vehicles in the water.

Only a few projects have addressed autonomous underwater manipulation. The earliest achievements date back to the 1990s, when the UNION project [1] validated, in simulation conditions, coordinated control and sensing strategies for increasing the autonomy of intervention ROVs. Simultaneously, the European project AMADEUS [2] demonstrated the coordinated control of two 7 degrees of freedom (DOF) arms submerged in a water tank. The ALIVE project [3] demonstrated the capability to autonomously navigate, dock and operate on an underwater panel. It is also worth mentioning the SAUVIM project [4], which recently demonstrated the autonomous recovery of seafloor objects with a 7 DOF arm, and the CManipulator project [5], [6], which obtained interesting results on autonomous object grasping and connector plugging.
In all these cases, it was assumed that the object to grasp was known in advance in terms of a 3D model or special visual features. In this paper we present our approach for grasping unknown objects underwater, either for recovery purposes or for other applications that require a firm grasp (e.g. fixing the vehicle to an underwater structure).

Grasping objects generally requires knowing at least some partial 3D structure. Different methods exist for recovering the 3D structure of a given scene. According to the sensor used, they can be broadly classified into sonar-based methods, laser rangefinders, vision-based methods and structured light. Sonar-based methods are the most widespread in the underwater robotics community because of the good propagation properties of sound in water; however, they are better suited to long distances than to the short range typically required for manipulation (~1 m). Laser rangefinders are rare underwater, probably because of light absorption and floating particles. Stereo vision is the cheapest alternative, although it is not useful in turbid waters, on untextured floors or in darkness. Structured light systems, finally, are also a cheap alternative: they work on untextured grounds at short distances, can emit at the wavelengths that are least absorbed by water, and offer good accuracy even in darkness, although they need to be combined with a camera for triangulation.

Fig. 2. Scanning of the surface with the laser stripe while tracking a template and estimating the platform motion.

In this paper we combine a laser emitter with a vision system in order to recover the 3D structure of unknown objects lying on the seafloor, after scanning them from a floating vehicle. The reconstructed 3D point cloud is then used for planning a grasp that is executed autonomously by the robot after a simple user indication. With its combination of laser peak detection, target tracking, 3D reconstruction and grasp execution, this is a unique system in the underwater robotics literature.

Laser scanning systems have been widely used in ground applications for 3D reconstruction. For instance, [9] presented a hand-held device composed of a pre-calibrated laser stripe emitter and a CCD camera. Most of the existing work focuses on how to accurately detect the laser stripe in the image [10] and on camera-laser calibration [11], [12]. In underwater environments, laser scanning has normally been used for the inspection of subsea structures [13], [14], [15]. This is the first approach in which a laser scan is performed with an underwater arm and combined with a grasp planner for object recovery purposes.

A. Considered scenario and structure of the paper

The scenario illustrated in Figure 1 is considered. An underwater robotic arm (in our case the CSIP Light-weight ARM 5E [8]) is attached to an AUV. For the experiments described in this paper a 4 DOF vehicle prototype is used (see Figure 7); however, integration with the Girona 500 vehicle [7] has already been accomplished and may be used in the future for experiments in real scenarios. An underwater camera (in our case a Bowtech 550C-AL) is placed near the base of the arm, facing downwards, i.e. looking towards the ground. A laser stripe emitter (in our case the Tritech SeaStripe) is assembled on the forearm of the manipulator, and emits a laser stripe visible in the camera field of view (depending on the arm joint configuration). It is assumed that the AUV has reached a pose where the target of interest is visible in the image, and that this pose is known with respect to a global fixed frame W. The paper then focuses on the laser scan process from a vehicle affected by motion disturbances, the 3D reconstruction of the floor, and the grasping of the target object.

This article is organized as follows: the next section briefly outlines the scanning and vehicle motion estimation process; Section III describes the algorithm used for detecting the laser stripe in the image; Section IV introduces the 3D reconstruction process from the scan; user-guided grasp planning on the 3D point cloud is presented in Section V; finally, results and conclusions are included in Sections VI and VII.

II. LASER SCANNING & MOTION ESTIMATION

The floor is scanned by moving the elbow joint of the manipulator (to which the laser emitter is attached) at a constant velocity. At the same time, two visual processing algorithms run in parallel: the laser peak detector, in charge of segmenting the laser stripe from the rest of the image and computing the 3D points (see the next section for details), and a template tracking algorithm that tracks a patch of the floor and estimates the motion of the floating platform from the patch motion. Indeed, it cannot be assumed that the underwater vehicle stays fixed while doing the scan.
Since all the reconstructed 3D points have to be related to a common frame, it is necessary to estimate the local vehicle motion while doing the scan, and to use that estimation for transforming the 3D points into a common reference system. The visual patch for motion estimation can be initialized on any part of the image, and does not need to be of a specific size. Small sizes allow faster tracking, but are more affected by illumination changes such as those generated by the laser stripe passing through the image. Bigger sizes are more robust, but require more computation time. We make use of Efficient Second Order Minimization template tracking [16], which allows us to reach high tracking rates even with considerable template sizes (see Figure 2 for an example; in this case tracking was running at around 20 fps with 200x200 pixel templates on a Pentium laptop at 1.6 GHz). It is worth noting that the tracking is robust to the illumination changes induced by the green laser stripe.

The tracking process provides, at a given time t, the centroid of the tracked template (in pixels) and its angle with respect to the image horizontal (in radians), i.e. p_t = (x_t, y_t, θ_t). This vector is compared at each iteration with the initial template position and orientation, p_0 = (x_0, y_0, θ_0). The distance to the floor can be easily obtained from the vehicle sensors (e.g. a DVL), and can also be computed by triangulation with the laser stripe. Let Z be this distance, given in meters. The displacement of the vehicle in X, Y and yaw (θ) with respect to its initial pose can then be computed as:

    X = (x_0 - x_t) Z / p_x
    Y = (y_0 - y_t) Z / p_y
    θ = θ_0 - θ_t

where p_x and p_y are the camera intrinsic parameters relating pixels and meters. With these values, a homogeneous transformation matrix relating the current vehicle pose to the initial one is built at each iteration. This homogeneous transformation is computed as:

    ^0M_t = [ R_z(θ)   T(X, Y, Z) ]
            [   0          1      ]

where R_z(θ) is a 3x3 rotation matrix of θ radians around the Z axis, and T(X, Y, Z) is a 3x1 column vector containing the (X, Y, Z) translation.
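As a concrete illustration of the equations above, the following Python sketch (our own, not code from the paper; the numpy dependency and all names are illustrative) builds ^0M_t from the tracker output, assuming a downward-looking camera, a known floor distance Z, and the planar (X, Y, yaw) vehicle motion model used here:

```python
import numpy as np

def vehicle_motion(p0, pt, Z, px, py):
    """Homogeneous transform 0_M_t of the current vehicle pose with
    respect to the initial one, from the tracked template pose.

    p0, pt : (x, y, theta) template centroid (pixels) and angle (rad)
             at the start of the scan and at time t.
    Z      : distance to the floor in meters (e.g. from a DVL, or from
             triangulation with the laser stripe).
    px, py : camera intrinsics relating pixels and meters.
    """
    X = (p0[0] - pt[0]) * Z / px        # vehicle displacement in X (m)
    Y = (p0[1] - pt[1]) * Z / py        # vehicle displacement in Y (m)
    theta = p0[2] - pt[2]               # yaw displacement (rad)

    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]       # R_z(theta)
    M[:3, 3] = [X, Y, Z]                # T(X, Y, Z)
    return M
```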
III. LASER PEAK DETECTION

One of the first problems when dealing with 3D reconstruction based on structured light is to detect the projected light patterns in the image. In the case of a laser stripe, this process is called laser peak detection, and approaches of variable robustness and accuracy have been presented in the literature [17]. The problem is to detect the pixels in the image that best correspond to the projected laser stripe. A state-of-the-art approach is adopted, composed of the following two steps:

1) The image is segmented using a color reference similar to the laser stripe color. The Euclidean distance (in RGB space) between each pixel color and the reference color is computed, and only those pixels with a distance lower than a threshold are kept. Note that underwater vehicles normally operate in dark environments, which is an advantage in this case: segmenting a laser stripe on a dark background is easier than doing it in light conditions. The experiments presented in this paper were done under normal lighting; results should be better in the darkness of a real ocean scenario.

2) For each row in the segmented image, the peak is computed with sub-pixel accuracy as the centroid of the segmented pixels.

Fig. 3. A laser scan of an amphora and a sea urchin (left), and the camera view and user grasp specification (right). The laser stripe is detected in the image and used for the reconstruction of the 3D points.

Figure 3 shows a laser scan of an amphora and a sea urchin in a water tank, and the view from the onboard camera. These are the two application examples that will be used throughout the paper. The recovery of an amphora and other ancient objects can be of interest for underwater archaeology, whereas grasping sea life is of great interest for marine biologists.
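A minimal Python sketch of the two-step peak detector described above (our illustration, not the authors' code; the threshold value and function names are assumptions):

```python
import numpy as np

def detect_laser_peaks(img, ref_color, max_dist=60.0):
    """Two-step laser peak detection on an RGB image.

    img       : H x W x 3 uint8 image (assumed undistorted).
    ref_color : RGB reference similar to the laser stripe color.
    max_dist  : Euclidean RGB distance threshold for segmentation.

    Returns a list of (row, col) peaks, with col in sub-pixel accuracy
    as the centroid of the segmented pixels of each row.
    """
    # Step 1: color segmentation by Euclidean distance in RGB space.
    diff = img.astype(np.float32) - np.asarray(ref_color, np.float32)
    mask = np.linalg.norm(diff, axis=2) < max_dist

    # Step 2: per-row sub-pixel peak as the centroid of segmented pixels.
    cols = np.arange(img.shape[1], dtype=np.float32)
    peaks = []
    for r in range(img.shape[0]):
        row = mask[r]
        if row.any():
            peaks.append((r, cols[row].mean()))
    return peaks
```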

IV. 3D RECONSTRUCTION

For those pixels in the image that correspond to the laser stripe, 3D information can be recovered by triangulation. Assuming that the input image is undistorted, a pixel with row and column coordinates r and c defines a line in projective coordinates given by the column vector

    l = (l_x, l_y, 1) = ( (c - c_0) / p_x , (r - r_0) / p_y , 1 )

where c_0, r_0, p_x and p_y are the camera intrinsic parameters, i.e. the principal point in pixels and the focal length to pixel size ratios. If a pixel in the image belongs, in addition, to the projected laser plane, the intersection of the camera ray with the laser plane gives the 3D coordinates of the point. The line defined by the camera ray can be expressed in parametric form as:

    P = (X, Y, Z) = λ l        (1)

If, in addition, the 3D point P belongs to the laser plane, it holds that:

    (P - P_0)^T n = 0        (2)

where n is the plane normal given in camera coordinates, and P_0 is a 3D point that belongs to the plane. Merging equations (1) and (2) leads to:

    λ = (P_0^T n) / (l^T n)

and the final 3D coordinates of the point are given by P = λ l.

In order to evaluate these expressions it is necessary to know the laser plane equation (given by the point P_0 and the normal n) in camera coordinates. As the laser emitter is attached to the robot arm, the camera-laser relationship is computed from the external calibration of the camera with respect to the arm base, and from the arm kinematics. With ^CM_B = [ ^CR_B  ^Ct_B ] the homogeneous matrix representing the camera frame with respect to the arm base frame, ^BM_F the relationship between the arm base frame and the forearm frame (given by the arm kinematics), and ^FM_L the laser pose relative to the forearm (given in our case by a CAD model), the laser-camera transformation is computed as ^CM_L = ^CM_B ^BM_F ^FM_L. Then, the plane normal and point can be expressed in camera coordinates as:

    n = ^CR_L (1, 0, 0)^T
    P_0 = ^Ct_L

Let us denote by ^tP the homogeneous vector containing the 3D position of a point P taken with the vehicle at pose t, represented by the homogeneous matrix ^0M_t introduced in Section II. The final 3D position of the point with respect to a common frame is then computed as ^0P = ^0M_t ^tP.

Fig. 4. The reconstructed point clouds seen from the side. The one at the top corresponds to the amphora; the bottom one is the sea urchin.

Figure 4 shows the point clouds corresponding to the amphora and the sea urchin scans, taken from a moving vehicle.
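The ray/laser-plane intersection condenses into a few lines. The sketch below (ours; the intrinsics are passed as a plain dict, and the camera is assumed to coincide with the vehicle frame for brevity) back-projects each detected peak with equation (1), solves for λ, and maps the point to the common frame with ^0M_t:

```python
import numpy as np

def triangulate_stripe(peaks, K, n, P0, M_0t):
    """Ray/laser-plane triangulation of the detected peaks.

    peaks : list of (row, col) sub-pixel peaks in an undistorted image.
    K     : intrinsics dict with entries c0, r0, px, py.
    n, P0 : laser plane normal and a point on the plane, both in
            camera coordinates (from the arm kinematics and CAD model).
    M_0t  : 4x4 transform of the current pose w.r.t. the common frame.

    Returns an N x 3 array of points in the common frame.
    """
    pts = []
    for r, c in peaks:
        l = np.array([(c - K['c0']) / K['px'],        # camera ray (eq. 1)
                      (r - K['r0']) / K['py'],
                      1.0])
        lam = np.dot(P0, n) / np.dot(l, n)            # lambda = P0.n / l.n
        P = lam * l                                   # 3D point, camera frame
        pts.append(M_0t @ np.append(P, 1.0))          # 0_P = 0_M_t t_P
    return np.asarray(pts)[:, :3] if pts else np.empty((0, 3))
```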

V. GRASP PLANNING

A supervisory grasping strategy is adopted: the approach is to let a human indicate the most suitable part for grasping, and then automatically plan a grasp on that part. In fact, in some underwater robotic applications such as archaeology, the selection of the part where to grab an object is crucial in order to avoid any damage to the item. Therefore, in our approach it is a human who indicates the most suitable part for grasping, and the robot plans a grasp around that area. The main novelty with respect to existing approaches is that the grasp is later performed autonomously, and not remotely controlled by the human: the user just indicates a grasp region, and the control is performed autonomously.

Fig. 5. The original point cloud is intersected with the grasp area specified by the user, and only the points inside that area are kept. Then, outliers are removed and the point cloud is downsampled. The results are shown for the grasp on the amphora (top) and the sea urchin (bottom). Red pixels indicate more proximity to the camera.

A. User specification & point cloud filtering

The user input is a pair of antipodal grasp points clicked on a 2D view of the object. The projections of those 2D points define two 3D lines that are used to define a 3D volume (with a fixed width). The user-defined 3D volume is intersected with the point cloud, so that only the points lying inside it are kept. The next step is to remove outliers and to build a downsampled cloud. All these operations greatly reduce the size of the point cloud, allowing all posterior computations to be performed faster. As an example, Figure 5 (top) shows a case where a grasp for the amphora is specified (see also the grasp specification in Figure 3, right). The final point cloud contains a small number of points that approximately describe the volume of the object around the area indicated by the user. The bottom part of Figure 5 shows the equivalent process for the sea urchin.

B. Setting the grasp frame

In order to set the position and orientation of the grasp, a cartesian frame, G, is defined (relative to the common frame, W) according to the following steps:

First, the closest and the farthest points of the filtered point cloud relative to the camera are found. Let c = (c_x, c_y, c_z) and f = (f_x, f_y, f_z) be these points. From them, the local object height is computed as h = f_z - c_z.

Let e be the maximum amount (in meters) by which the hand can envelope the object. This depends on the hand geometry, and can generally be set to the maximum distance between the fingertips and the palm. In our case it is set to e = 0.05 m; for larger hands, more enveloping may be desired.

From the object height h and the maximum enveloping distance e, the final grasp depth is computed as the minimum of both, i.e. gd = min(h, e).

After computing the previous parameters, the grasp frame position is set to ^Ct_G = (c_x, c_y, c_z + gd). The grasp frame orientation is set so that the local Z axis corresponds to the camera Z axis, and the Y axis is parallel to the vector that joins the two points indicated by the user; the X axis is then set according to the right-hand rule. Finally, the grasp frame is expressed with respect to a world fixed frame, W, as ^WM_G = ^WM_C ^CM_G, where ^WM_C is the transformation matrix relating the camera frame to the world frame at the moment the laser scan is started.
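A sketch of the grasp-frame construction under the conventions above (our illustration, not the paper's code; it assumes the cloud is expressed in camera coordinates with Z pointing away from the camera, and that the user-defined axis is roughly orthogonal to the camera Z axis):

```python
import numpy as np

def grasp_frame(cloud, grasp_axis, e=0.05):
    """Grasp frame G from the filtered point cloud (camera coordinates).

    cloud      : N x 3 filtered, downsampled points (camera frame,
                 Z pointing away from the camera).
    grasp_axis : vector joining the two user-clicked points (defines Y).
    e          : maximum enveloping distance of the hand (m).
    """
    c = cloud[np.argmin(cloud[:, 2])]       # closest point to the camera
    f = cloud[np.argmax(cloud[:, 2])]       # farthest point
    h = f[2] - c[2]                         # local object height
    gd = min(h, e)                          # grasp depth

    z = np.array([0.0, 0.0, 1.0])           # grasp Z = camera Z
    y = grasp_axis / np.linalg.norm(grasp_axis)
    x = np.cross(y, z)                      # X from the right-hand rule

    M = np.eye(4)
    M[:3, :3] = np.column_stack((x, y, z))
    M[:3, 3] = [c[0], c[1], c[2] + gd]      # C_t_G = (c_x, c_y, c_z + gd)
    return M
```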
C. Inverse kinematics

Once the grasp to perform has been specified, it is necessary to compute a suitable vehicle position on top of the object so that the required grasp is feasible. This can be done by computing the inverse kinematics of the whole arm-vehicle kinematic chain. The obtained vehicle pose is sent to the vehicle navigation system and needs to be reached before starting the grasping action. Our approach is to adopt a classical iterative inverse jacobian method where the jacobian is computed in a specific way in order to exploit the kinematic redundancy of the whole system. We first describe the redundancy management techniques that we consider, and then give more details on the inverse kinematics solver.

1) Management of redundancy: In the context of this work, we consider a 4 DOF vehicle with an attached robotic arm. With an arm of more than two DOF, the whole kinematic system is redundant, thus allowing the robot to reach a given cartesian pose with many different joint configurations. This makes it possible to establish preferences of some configurations over others. In this work we adopt the Resolved Motion Rate Control (RMRC) approach for manipulator control [18], and the well-known gradient projection method (GPM) for joint redundancy management [19]. The general approach is to project a secondary task into the nullspace of the first task, as follows:

    q̇ = J_E^+ ẋ_E + (I - J_E^+ J_E) e_j        (3)

where ẋ_E is a desired arm end-effector velocity, q̇ is the corresponding joint velocity, J_E^+ is the pseudo-inverse of the arm jacobian at the end-effector, (I - J_E^+ J_E) is the nullspace projector, and e_j is a cost vector, normally computed as the gradient of a cost function h(q), i.e. e_j = ∂h/∂q. There are many possible cost functions: increasing manipulability, minimizing energy consumption, etc. [19].

In our case, we define e_j to accomplish the secondary task of keeping the arm posture near an equilibrium configuration that (i) minimizes unbalancing effects on the vehicle, (ii) guarantees that the end-effector is far from the workspace limits, and (iii) guarantees that the end-effector is in the field of view of the vehicle camera (situated in the bottom part of the vehicle and facing downwards, see Figure 1). In general, our cost vector adopts the following expression:

    e_j = λ_j (q_e - q)

where q_e is a desired joint equilibrium configuration, q is the vector of current joint values, and λ_j is the gain of the secondary task.
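Equation (3) with the equilibrium-configuration cost vector is straightforward to implement; a minimal sketch (ours, with an illustrative gain value):

```python
import numpy as np

def rmrc_gpm_step(J_E, xdot_E, q, q_e, lambda_j=0.1):
    """One RMRC step with the gradient projection method (eq. 3).

    The secondary task (staying close to the equilibrium configuration
    q_e) is projected into the nullspace of the end-effector task, so
    it never disturbs the commanded end-effector velocity xdot_E.
    """
    J_pinv = np.linalg.pinv(J_E)                 # J_E^+
    e_j = lambda_j * (q_e - q)                   # cost vector
    N = np.eye(len(q)) - J_pinv @ J_E            # nullspace projector
    return J_pinv @ xdot_E + N @ e_j             # joint velocity q_dot
```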

2) Grasp redundancy: In our approach, the grasping action is specified as moving a hand frame, H (attached to the gripper, and given relative to the end-effector frame, E), towards a desired relative pose with respect to the grasp frame, expressed as ^HM_G. Constrained and free degrees of freedom can be indicated in this relationship. For the constrained DOF, the hand frame must completely reach the desired relative pose with respect to the grasp frame. For the free degrees of freedom, however, there is no particular relative pose used as reference. Instead, the robot controller can choose a suitable relative pose according to different criteria such as manipulability, joint limit avoidance, etc. We refer to these directions as grasp-redundant DOF. A grasp with grasp-redundant DOF is called an under-constrained grasp.

3) Inverse kinematics of a joint and grasp-redundant vehicle-arm system: We assume that the kinematic model of the vehicle-arm system is known and includes n degrees of freedom (DOF), four of which correspond to the underwater vehicle 3D position and yaw angle, and the rest to the arm, i.e. q = (q_v, q_a), with q_v = (x, y, z, α) and q_a = (q_1, q_2, ..., q_{n-4}). This constitutes an n DOF kinematic system for which the forward kinematic model can be computed following standard techniques, leading to x_E = FK_va(q). The inverse kinematics can be computed following an inverse jacobian method [20], leading to q = IK_va(x_E). These methods are based on Newton-Raphson numerical algorithms for solving systems of nonlinear equations. One of their main advantages is that the same RMRC controller (equation (3)) used for real-time control can also be used for computing an inverse kinematics solution on redundant systems. The general approach is to build a virtual joint configuration q̂ (for which an initial estimate is known) and control the corresponding virtual end-effector pose towards the desired cartesian pose, x_E. More concretely, the following equation is used to update the solution until convergence (i.e. Δx_E → 0):

    q̇̂ = J_E^+ Δx_E + (I - J_E^+ J_E) e_j        (4)

where Δx_E = λ_x (x_E - FK_va(q̂)). The number of redundant DOF can be further increased if grasp-redundant DOF are also considered. In the following we extend the previous expression in order to exploit under-constrained grasps. First, as the execution of the grasp is controlled at the hand frame, H, a new jacobian is computed from the end-effector jacobian, as:

    J_H = ^EW_H^(-1) J_E        (5)

where ^EW_H is the twist transformation matrix associated to ^EM_H. A modified jacobian is then computed for exploiting grasp-redundant DOF, as J_rH = S_c J_H, where S_c is a diagonal selection matrix that selects the degrees of freedom necessary for the grasp. This removes grasp-redundant DOF from the main task. Then, equation (4) is transformed into:

    q̇̂ = J_rH^+ Δx_H + (I - J_rH^+ J_rH) e_j        (6)

With this new expression, e_j is projected not only on the joint-redundant DOF, but also on the grasp-redundant ones. Therefore, more DOF are available for the secondary task, allowing the robot to perform the main task while effectively performing auxiliary motion. Secondary tasks acting on preferred configurations for the grasp-redundant DOF could also be defined by projecting them into the joint space.

4) Grasp inverse kinematics: The goal of the grasping action is to match the hand frame, H, with the grasp frame, G. It is therefore possible to apply the inverse kinematic model to the grasp frame origin, ^Wg, given in the absolute frame W, as q = IK_va(^Wg), leading to a suitable q_v and q_a for the given task.
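A compact sketch of the resulting iterative solver (ours, not the paper's implementation): it assumes user-supplied forward kinematics and hand-frame jacobian callables, a minimal vector parameterization of the pose error, and an optional selection matrix S_c for under-constrained grasps:

```python
import numpy as np

def ik_solve(fk, jacobian, q0, x_des, Sc=None, q_e=None,
             lam=0.5, lambda_j=0.05, tol=1e-4, max_iters=500):
    """Iterative inverse kinematics of the vehicle-arm chain (eqs. 4-6).

    fk, jacobian : callables giving the hand-frame pose vector and
                   jacobian of the n-DOF vehicle-arm model.
    Sc           : optional diagonal selection matrix that zeroes the
                   grasp-redundant DOF of the main task (eq. 6).
    q_e          : optional equilibrium configuration (secondary task).
    """
    q = np.array(q0, dtype=float)
    for _ in range(max_iters):
        dx = lam * (x_des - fk(q))               # scaled cartesian error
        J = jacobian(q)
        if Sc is not None:                       # drop grasp-redundant DOF
            J, dx = Sc @ J, Sc @ dx
        J_pinv = np.linalg.pinv(J)
        dq = J_pinv @ dx
        if q_e is not None:                      # nullspace secondary task
            dq += (np.eye(len(q)) - J_pinv @ J) @ (lambda_j * (q_e - q))
        q += dq
        if np.linalg.norm(dx) < tol:             # converged
            return q
    return q
```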

Fig. 6. Autonomous execution of the grasps by the robot.

Fig. 7. The testbed used for the experiments. The arm is attached to a floating platform that is placed in the water.

VI. RESULTS

The two application examples used throughout the paper were executed with a real underwater arm in a water tank. The Light-weight ARM 5E was assembled onto a floating structure and placed in the water (see Figure 7). The objects were manually placed on the floor so that they were in the field of view of the camera and inside the workspace of the arm. For each object, a laser scan was performed by moving the elbow joint until the laser stripe had covered the whole image. Vehicle motion was naturally generated when moving the arm because of the dynamic coupling between the two subsystems. In addition to the vehicle motion generated by the arm motion, we introduced further disturbances by manually moving the floating structure. Figure 8 shows the amount of displacement generated on the floating structure, as measured by the motion estimator presented in Section II.

Fig. 8. Motion disturbances (vehicle trajectory) generated on the floating platform during the amphora recovery experiment (top), and the grasping of the sea urchin (bottom).

Figure 6 shows the final grasp execution on the amphora and the sea urchin with the real arm, according to the user specification, 3D reconstruction, planning and inverse kinematics described in the previous sections of this paper. The grasp poses were reached successfully in both cases, thus validating our approach in a real and practical scenario.

VII. CONCLUSION

This paper has presented a new approach for the semi-autonomous recovery of unknown objects with underwater robots. First, the object 3D structure is recovered by using a laser stripe emitter combined with a camera. The laser emitter is mounted on a robot arm that is in charge of performing the scanning motion from a mobile platform. For this, laser peak detection and template tracking algorithms are combined. A 3D point cloud is generated after the scanning process, and filtered with a grasp region indicated by an operator on a 2D view of the target object. This reduces the number of 3D points and delimits the grasping area. After that, a grasp planner computes a suitable grasp on the reduced point cloud, and an inverse kinematics solver computes a suitable vehicle-arm pose for reaching the object at the desired grasp pose. All the previous methods have been validated in water tank conditions.

VIII. ACKNOWLEDGEMENTS

This research was partly supported by the Spanish Ministry of Research and Innovation DPI C03 (TRITON Project), by the European Commission Seventh Framework Programme FP7/ under Grant agreement (TRIDENT Project), by Foundation Caixa Castelló-Bancaixa PI.1B, and by Generalitat Valenciana ACOMP/2012/252.

REFERENCES

[1] V. Rigaud, E. Coste-Maniere, M.J. Aldon, P. Probert, M. Perrier, P. Rives, D. Simon, D. Lang, J. Kiener, A. Casal, J. Amar, P. Dauchez, and M. Chantler. UNION: underwater intelligent operation and navigation. IEEE Robotics & Automation Magazine, 5(1):25-35, March 1998.
[2] G. Marani, D. Angeletti, G. Cannata, and G. Casalino. On the functional and algorithmic control architecture of the AMADEUS dual arm robotic workcell. In WAC 2000, Hawaii (USA), June 2000.
[3] J. Evans, P. Redmond, C. Plakas, K. Hamilton, and D. Lane. Autonomous docking for intervention-AUVs using sonar and video-based real-time 3D pose estimation. In OCEANS 2003, volume 4, Sept. 2003.
[4] J. Yuh, S.K. Choi, C. Ikehara, G.H. Kim, G. McMurty, M. Ghasemi-Nejhad, N. Sarkar, and K. Sugihara. Design of a semi-autonomous underwater vehicle for intervention missions (SAUVIM). In Proceedings of the International Symposium on Underwater Technology, pages 63-68, April 1998.
[5] Dirk Spenneberg, Jan Albiez, Frank Kirchner, Jochen Kerdels, and Sascha Fechner. C-Manipulator: An autonomous dual manipulator project for underwater inspection and maintenance. ASME Conference Proceedings, 2007(42703), 2007.
[6] M. Hildebrandt, J. Albiez, and F. Kirchner. Computer-based control of deep-sea manipulators. In OCEANS 2008 - MTS/IEEE Kobe Techno-Ocean, pages 1-6, April 2008.
[7] D. Ribas, P. Ridao, Ll. Magí, N. Palomeras, and M. Carreras. The Girona 500, a multipurpose autonomous underwater vehicle. In Proceedings of the Oceans IEEE, Santander, Spain, June 2011.
[8] CSIP. Light-weight ARM 5E datasheet. LightWeight 5E.pdf.
[9] K. H. Strobl, W. Sepp, E. Wahl, T. Bodenmuller, M. Suppa, J. F. Seara, and G. Hirzinger. The DLR multisensory hand-guided device: the laser stripe profiler. In Proc. IEEE Int. Conf. Robotics and Automation ICRA '04, volume 2, 2004.
[10] J. Forest, J. Salvi, E. Cabruja, and C. Pous. Laser stripe peak detector for 3D scanners: a FIR filter approach. In Proc. 17th Int. Conf. Pattern Recognition ICPR 2004, volume 3, 2004.
[11] A. M. McIvor. Calibration of a laser stripe profiler. In Proc. Second Int. 3-D Digital Imaging and Modeling Conf., pages 92-98, 1999.
[12] W. Stocher and G. Biegelbauer. Automated simultaneous calibration of a multi-view laser stripe profiler. In Proc. IEEE Int. Conf. Robotics and Automation ICRA 2005, 2005.
[13] A. Gordon. Use of laser scanning system on mobile underwater platforms. In Proc. Symp. Autonomous Underwater Vehicle Technology AUV '92, 1992.
[14] Chau-Chang Wang, Shiahn-Wern Shyue, and Shi-Her Cheng. Underwater structure inspection with laser light stripes. In Proc. Int. Symp. Underwater Technology UT '00, 2000.
[15] Chau-Chang Wang and Dajun Tang. Seafloor roughness measured by a laser line scanner and a conductivity probe. IEEE Journal of Oceanic Engineering, 34(4), 2009.
[16] S. Benhimane and E. Malis. Real-time image-based tracking of planes using efficient second-order minimization. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004.
[17] Hsin-Hung Chen, Chau-Chang Wang, and Fu-Chuan Hung. Comparative assessments of peak detection algorithms for underwater laser ranging. In Proc. OCEANS '04 MTS/IEEE TECHNO-OCEAN '04, volume 3, 2004.
[18] D. E. Whitney. Resolved motion rate control of manipulators and human prostheses. IEEE Transactions on Man-Machine Systems, 10(2):47-53, 1969.
[19] Bruno Siciliano. Kinematic control of redundant robot manipulators: A tutorial. Journal of Intelligent and Robotic Systems, 3(3):201-212, 1990.
[20] Jorge Angeles. On the numerical solution of the inverse kinematic problem. International Journal of Robotics Research, 4(2):21-37, 1985.


More information

Design of a Three-Axis Rotary Platform

Design of a Three-Axis Rotary Platform Design of a Three-Axis Rotary Platform William Mendez, Yuniesky Rodriguez, Lee Brady, Sabri Tosunoglu Mechanics and Materials Engineering, Florida International University 10555 W Flagler Street, Miami,

More information

An Interactive Technique for Robot Control by Using Image Processing Method

An Interactive Technique for Robot Control by Using Image Processing Method An Interactive Technique for Robot Control by Using Image Processing Method Mr. Raskar D. S 1., Prof. Mrs. Belagali P. P 2 1, E&TC Dept. Dr. JJMCOE., Jaysingpur. Maharashtra., India. 2 Associate Prof.

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

UAV Position and Attitude Sensoring in Indoor Environment Using Cameras

UAV Position and Attitude Sensoring in Indoor Environment Using Cameras UAV Position and Attitude Sensoring in Indoor Environment Using Cameras 1 Peng Xu Abstract There are great advantages of indoor experiment for UAVs. Test flights of UAV in laboratory is more convenient,

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

Lecture «Robot Dynamics»: Kinematic Control

Lecture «Robot Dynamics»: Kinematic Control Lecture «Robot Dynamics»: Kinematic Control 151-0851-00 V lecture: CAB G11 Tuesday 10:15 12:00, every week exercise: HG E1.2 Wednesday 8:15 10:00, according to schedule (about every 2nd week) Marco Hutter,

More information

Research Article Motion Control of Robot by using Kinect Sensor

Research Article Motion Control of Robot by using Kinect Sensor Research Journal of Applied Sciences, Engineering and Technology 8(11): 1384-1388, 2014 DOI:10.19026/rjaset.8.1111 ISSN: 2040-7459; e-issn: 2040-7467 2014 Maxwell Scientific Publication Corp. Submitted:

More information

Planar Robot Kinematics

Planar Robot Kinematics V. Kumar lanar Robot Kinematics The mathematical modeling of spatial linkages is quite involved. t is useful to start with planar robots because the kinematics of planar mechanisms is generally much simpler

More information

EECS 442: Final Project

EECS 442: Final Project EECS 442: Final Project Structure From Motion Kevin Choi Robotics Ismail El Houcheimi Robotics Yih-Jye Jeffrey Hsu Robotics Abstract In this paper, we summarize the method, and results of our projective

More information

Camera Model and Calibration

Camera Model and Calibration Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Mobile Manipulation A Mobile Platform Supporting a Manipulator System for an Autonomous Robot

Mobile Manipulation A Mobile Platform Supporting a Manipulator System for an Autonomous Robot Mobile Manipulation A Mobile Platform Supporting a Manipulator System for an Autonomous Robot U.M. Nassal, M. Damm, T.C. Lueth Institute for Real-Time Computer Systems and Robotics (IPR) University of

More information

The Underwater Swimming Manipulator A Bio-Inspired AUV

The Underwater Swimming Manipulator A Bio-Inspired AUV The Underwater Swimming Manipulator A Bio-Inspired AUV 10.2 J. Sverdrup-Thygeson, E. Kelasidi, K. Y. Pettersen, and J. T. Gravdahl Centre for Autonomous Marine Operations and Systems, Department of Engineering

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

PPGEE Robot Dynamics I

PPGEE Robot Dynamics I PPGEE Electrical Engineering Graduate Program UFMG April 2014 1 Introduction to Robotics 2 3 4 5 What is a Robot? According to RIA Robot Institute of America A Robot is a reprogrammable multifunctional

More information

AUTOMATIC RECTIFICATION OF SIDE-SCAN SONAR IMAGES

AUTOMATIC RECTIFICATION OF SIDE-SCAN SONAR IMAGES Proceedings of the International Conference Underwater Acoustic Measurements: Technologies &Results Heraklion, Crete, Greece, 28 th June 1 st July 2005 AUTOMATIC RECTIFICATION OF SIDE-SCAN SONAR IMAGES

More information

Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor

Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor International Journal of the Korean Society of Precision Engineering Vol. 4, No. 1, January 2003. Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor Jeong-Woo Jeong 1, Hee-Jun

More information

autorob.github.io Inverse Kinematics UM EECS 398/598 - autorob.github.io

autorob.github.io Inverse Kinematics UM EECS 398/598 - autorob.github.io autorob.github.io Inverse Kinematics Objective (revisited) Goal: Given the structure of a robot arm, compute Forward kinematics: predicting the pose of the end-effector, given joint positions. Inverse

More information

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Oliver Cardwell, Ramakrishnan Mukundan Department of Computer Science and Software Engineering University of Canterbury

More information

Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras

Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Proceedings of the 5th IIAE International Conference on Industrial Application Engineering 2017 Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Hui-Yuan Chan *, Ting-Hao

More information

Singularity Handling on Puma in Operational Space Formulation

Singularity Handling on Puma in Operational Space Formulation Singularity Handling on Puma in Operational Space Formulation Denny Oetomo, Marcelo Ang Jr. National University of Singapore Singapore d oetomo@yahoo.com mpeangh@nus.edu.sg Ser Yong Lim Gintic Institute

More information