Visual Tracking of a Hand-eye Robot for a Moving Target Object with Multiple Feature Points: Translational Motion Compensation Approach


Masahide Ito, Masaaki Shibata
Department of Electrical and Mechanical Engineering, Seikei University, Kichijoji-kitamachi, Musashino-shi, Tokyo, JAPAN
E-mail: masahide i@st.seikei.ac.jp

Abstract

In this paper, we propose a visual tracking control method of a hand-eye robot for a moving target object with multiple feature points. The hand-eye robot is composed of a three degrees-of-freedom planar manipulator and a single CCD camera mounted on the manipulator's end-effector. The control objective is to keep all feature points of the target object around their desired coordinates on the image plane. Many conventional visual servo methods assume that the target object is static; consequently, a visual tracking error arises in the case of a moving target object. We have already proposed a visual tracking control system that takes the target object motion into consideration. That method can reduce the visual tracking error, but can only deal with a single feature point. This paper therefore extends the visual tracking control method to multiple feature points. The effectiveness of our control method is evaluated experimentally.

Keywords: visual servoing, moving target object, feature points, eye-in-hand configuration, hand-eye robot

1 INTRODUCTION

Robotic systems need to understand their external environment to behave autonomously. For this purpose, the camera is a very useful sensory device to provide a vision function for robots. An image captured by the camera contains a vast amount of information about the environment, and introducing visual information extracted from the image into the control loop has the potential to increase the flexibility and accuracy of a given task. Against this background, vision-based control methods have attracted the attention of many researchers and engineers.
In particular, feedback control with visual information, so-called visual servoing or visual feedback control, is an important technique for robotic systems [1-6].

Figure 1: Hand-eye robot: a 3-DoF planar manipulator (1st, 2nd, and 3rd joints) with a CCD camera on its end-effector.

Visual servoing is roughly classified into position-based visual servoing (PBVS) and image-based visual servoing (IBVS) approaches. The difference between the two approaches lies in how the visual features of the target object, extracted from the image, are used. In the PBVS approach, the controller is designed using the relative three-dimensional (3D) pose between the camera and the target object, which is estimated from the visual features; this approach also needs an a priori 3D model of the target object for the 3D reconstruction. In the IBVS approach, on the other hand, the controller is designed directly from the visual features, i.e., there is no need for 3D reconstruction. As a consequence, one advantage of the IBVS approach over the PBVS approach is robustness against modeling errors and external disturbances. This paper focuses on the IBVS approach for a robot in the eye-in-hand configuration. In many conventional IBVS methods [1-5], it is assumed that the target object is static. Consequently, a steady-state error between the actual and desired features on the image plane, which we call the visual tracking error, arises in the case of a moving target object. In our previous works, we have proposed a visual tracking control method that takes the target object motion into consideration for a robot with two charge-coupled device (CCD) cameras [6, 7] or a robot with a single CCD camera [8]. Although we have demonstrated experimentally that our proposed method can reduce the visual tracking error, the method can only deal with a single feature point. This paper extends such a visual tracking control method to multiple feature points. The controlled object is a hand-eye robot, which is a typical example of an eye-in-hand system with a single camera.
As depicted in Figure 1, the hand-eye robot considered in this paper is composed of a three degrees-of-freedom (3-DoF) planar manipulator and a single CCD camera mounted on the manipulator's end-effector at a constant tilt angle. In a single-camera system, we need to estimate the depth from the camera to the target object. The structural features of the hand-eye robot used in our study yield a certain relationship between the image information and the depth, through which the depth can be estimated. The rest of the paper is organized as follows. In the next section, we divide the kinematics of a

hand-eye robot into those of the vision and the manipulator to show two kinds of Jacobian matrices. Based on the Jacobian matrices, in Section 3 we present a visual tracking control method for the moving target object with multiple feature points. In Section 4, we show experimental results. Finally, we summarize the main contributions of the paper and discuss future works.

Figure 2: Coordinate frames for the vision system: the world frame Σ_w, the camera frame Σ_c, the standard camera frame Σ_s, and the image plane frame Σ_f; the i-th feature point (u_i, v_i) on the image plane is the image of the i-th marker on the target object.

2 KINEMATICS OF HAND-EYE ROBOT

In this section we divide the kinematics of the hand-eye robot into those of the vision and the manipulator to provide two kinds of Jacobian matrices.

2.1 Vision kinematics

A target object is equipped with k markers. We refer to the images of the markers as feature points on the image plane. The geometric relation between the camera and the i-th marker is depicted in Figure 2, where the coordinate frames Σ_w, Σ_c, Σ_s, and Σ_f represent the world frame, the camera frame, the standard camera frame, and the image plane frame, respectively. The frame Σ_w is located at the base of the manipulator. The frame Σ_s is static on Σ_w at a given time. Let ${}^s p_{oi} := [{}^s x_{oi}, {}^s y_{oi}, {}^s z_{oi}]^\top$, ${}^s p_c := [{}^s x_c, {}^s y_c, {}^s z_c]^\top$, and ${}^c p_{oi} := [{}^c x_{oi}, {}^c y_{oi}, {}^c z_{oi}]^\top$ be the position vectors of the i-th marker on Σ_s, the camera on Σ_s, and the i-th marker on Σ_c, respectively. The coordinates of the i-th feature point and the center on the image plane are denoted as $f_i = [u_i, v_i]^\top$ and $f_0 = [u_0, v_0]^\top$, respectively. When the camera moves at a certain translational and angular velocity, the translational velocity of the i-th marker on Σ_c can be represented as

$$
{}^c\dot{p}_{oi} = {}^s\dot{p}_{oi} - {}^s\dot{p}_c - {}^s\omega_c \times {}^c p_{oi}
= \begin{bmatrix}
-1 & 0 & 0 & 0 & -{}^c z_{oi} & {}^c y_{oi} \\
0 & -1 & 0 & {}^c z_{oi} & 0 & -{}^c x_{oi} \\
0 & 0 & -1 & -{}^c y_{oi} & {}^c x_{oi} & 0
\end{bmatrix}
\begin{bmatrix} {}^s\dot{p}_c \\ {}^s\omega_c \end{bmatrix}
+ {}^s\dot{p}_{oi}
\qquad (1)
$$
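A minimal numeric sketch of Eq. (1) may make the frame bookkeeping concrete; the function and variable names below are ours, not the paper's.

```python
# Illustrative sketch of Eq. (1): the apparent velocity of the i-th marker
# in the camera frame, given the camera twist and the marker's own velocity.

def skew(p):
    """Skew-symmetric matrix S(p) such that S(p) w = p x w (cross product)."""
    x, y, z = p
    return [[0.0, -z, y],
            [z, 0.0, -x],
            [-y, x, 0.0]]

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def marker_velocity_in_camera(p_oi_c, pdot_c, omega_c, pdot_oi):
    """Eq. (1): c_pdot_oi = s_pdot_oi - s_pdot_c - s_omega_c x c_p_oi."""
    w_cross_p = matvec(skew(omega_c), p_oi_c)
    return [pdot_oi[k] - pdot_c[k] - w_cross_p[k] for k in range(3)]

# A static marker one meter ahead of a camera translating along its x-axis
# appears to move in the opposite direction:
v_rel = marker_velocity_in_camera([0.0, 0.0, 1.0],   # c_p_oi
                                  [0.1, 0.0, 0.0],   # s_pdot_c
                                  [0.0, 0.0, 0.0],   # s_omega_c
                                  [0.0, 0.0, 0.0])   # s_pdot_oi
# v_rel == [-0.1, 0.0, 0.0]
```

The matrix in Eq. (1) is exactly `[-I_3, skew(c_p_oi)]` applied to the camera twist, which is what the helper reproduces term by term.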

where ${}^s\omega_c := [{}^s\omega_{cx}, {}^s\omega_{cy}, {}^s\omega_{cz}]^\top$ is the angular velocity of the camera on Σ_s. Using the pinhole camera model with a perspective projection, the image deviations can be represented as

$$
\bar f_i = \begin{bmatrix} \bar u_i \\ \bar v_i \end{bmatrix}
:= \begin{bmatrix} u_i - u_0 \\ v_i - v_0 \end{bmatrix}
= \frac{1}{{}^c z_{oi}} \begin{bmatrix} \lambda_x\, {}^c x_{oi} \\ \lambda_y\, {}^c y_{oi} \end{bmatrix}
\qquad (2)
$$

where λ_x, λ_y are the horizontal and vertical focal lengths, respectively. Differentiating (2) with respect to time and using (1) and (2), we obtain

$$
\dot{\bar f}_i = J_{img}(\bar f_i, {}^c z_{oi})
\begin{bmatrix} {}^s\dot{p}_c \\ {}^s\omega_c \end{bmatrix}
- J^{(1,1)}_{img}(\bar f_i, {}^c z_{oi})\, {}^s\dot{p}_{oi}
\qquad (3)
$$

where

$$
J_{img}(\bar f_i, {}^c z_{oi}) = \begin{bmatrix} J^{(1,1)}_{img}(\bar f_i, {}^c z_{oi}) & J^{(1,2)}_{img}(\bar f_i) \end{bmatrix} \in \mathbb{R}^{2\times 6},
$$

$$
J^{(1,1)}_{img}(\bar f_i, {}^c z_{oi}) :=
\begin{bmatrix}
-\dfrac{\lambda_x}{{}^c z_{oi}} & 0 & \dfrac{\bar u_i}{{}^c z_{oi}} \\[1ex]
0 & -\dfrac{\lambda_y}{{}^c z_{oi}} & \dfrac{\bar v_i}{{}^c z_{oi}}
\end{bmatrix},
\qquad
J^{(1,2)}_{img}(\bar f_i) :=
\begin{bmatrix}
\dfrac{\bar u_i \bar v_i}{\lambda_y} & -\left(\lambda_x + \dfrac{\bar u_i^2}{\lambda_x}\right) & \dfrac{\lambda_x}{\lambda_y}\,\bar v_i \\[1ex]
\lambda_y + \dfrac{\bar v_i^2}{\lambda_y} & -\dfrac{\bar u_i \bar v_i}{\lambda_x} & -\dfrac{\lambda_y}{\lambda_x}\,\bar u_i
\end{bmatrix},
$$

and $[{}^s\dot{p}_c^\top, {}^s\omega_c^\top]^\top$ is called the velocity twist of the camera. Furthermore, summarizing (3) in terms of k feature points, we obtain

$$
\dot{\bar f} = J_{img}(\bar f, {}^c z_o)
\begin{bmatrix} {}^s\dot{p}_c \\ {}^s\omega_c \end{bmatrix}
- J^{(1,1)}_{img}(\bar f, {}^c z_o)\, {}^s\dot{p}_o
\qquad (4)
$$

where $\bar f := [\bar f_1^\top, \ldots, \bar f_k^\top]^\top \in \mathbb{R}^{2k}$, ${}^c z_o := [{}^c z_{o1}, \ldots, {}^c z_{ok}]^\top \in \mathbb{R}^k$, ${}^s\dot{p}_o := [{}^s\dot{p}_{o1}^\top, \ldots, {}^s\dot{p}_{ok}^\top]^\top \in \mathbb{R}^{3k}$,

$$
J_{img}(\bar f, {}^c z_o) :=
\begin{bmatrix}
J_{img}(\bar f_1, {}^c z_{o1}) \\ \vdots \\ J_{img}(\bar f_k, {}^c z_{ok})
\end{bmatrix} \in \mathbb{R}^{2k\times 6},
\qquad
J^{(1,1)}_{img}(\bar f, {}^c z_o) :=
\begin{bmatrix}
J^{(1,1)}_{img}(\bar f_1, {}^c z_{o1}) & & \\
& \ddots & \\
& & J^{(1,1)}_{img}(\bar f_k, {}^c z_{ok})
\end{bmatrix} \in \mathbb{R}^{2k\times 3k}.
$$

The matrix J_img is the so-called image Jacobian matrix or interaction matrix.

2.2 Manipulator kinematics

The hand-eye robot system has five coordinate frames, which consist of the base frame Σ_b, the i-th joint frames Σ_i, i = 1, 2, 3, and the camera frame Σ_c, as shown schematically in Figure 3. Let θ_i, α_i, d_i, and h_i denote the i-th joint relative angle, the relative angle and distance from ${}^{i-1}y$ to ${}^{i}y$, and the distance

from ${}^{i-1}z$ to ${}^{i}z$, respectively.

Figure 3: Joint frames on the hand-eye robot: (a) top view; (b) side view in the case of θ_i = 0, i = 1, 2, 3.

The homogeneous transformation matrices from Σ_i to Σ_{i-1} and from Σ_c to Σ_3 are derived as

$$
{}^{i-1}H_i(\theta_i) =
\begin{bmatrix}
\cos\theta_i & 0 & \sin\theta_i & d_i\sin\theta_i \\
0 & 1 & 0 & h_i \\
-\sin\theta_i & 0 & \cos\theta_i & d_i\cos\theta_i \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
{}^3H_c =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\alpha_3 & -\sin\alpha_3 & 0 \\
0 & \sin\alpha_3 & \cos\alpha_3 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
\qquad (5)
$$

Note that the left superscript i-1 is replaced by b when i = 1. Using (5), the homogeneous transformation matrices from Σ_c to Σ_{i-1}, i = 1, 2, 3, are given by ${}^2H_c(\theta_3) = {}^2H_3(\theta_3)\,{}^3H_c$, ${}^1H_c(\theta_{23}) = {}^1H_2(\theta_2)\,{}^2H_3(\theta_3)\,{}^3H_c$, and ${}^bH_c(\theta) = {}^bH_1(\theta_1)\,{}^1H_2(\theta_2)\,{}^2H_3(\theta_3)\,{}^3H_c$, where $\theta_{23} := [\theta_2, \theta_3]^\top$ and $\theta := [\theta_1, \theta_2, \theta_3]^\top$. Now, regarding ${}^{i-1}H_c$, i = 1, 2, 3, as

$$
{}^{i-1}H_c =
\begin{bmatrix}
{}^{i-1}r_{cx} & {}^{i-1}r_{cy} & {}^{i-1}r_{cz} & {}^{i-1}p_c \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

consisting of the vectors ${}^{i-1}r_{cx}$, ${}^{i-1}r_{cy}$, ${}^{i-1}r_{cz}$, and ${}^{i-1}p_c \in \mathbb{R}^3$, we obtain the following manipulator

Jacobian matrix:

$$
{}^cJ(\theta_{23}) =
\begin{bmatrix}
\langle {}^b p_c \times {}^b r_{cx}, e_2\rangle & \langle {}^1 p_c \times {}^1 r_{cx}, e_2\rangle & \langle {}^2 p_c \times {}^2 r_{cx}, e_2\rangle \\
\langle {}^b p_c \times {}^b r_{cy}, e_2\rangle & \langle {}^1 p_c \times {}^1 r_{cy}, e_2\rangle & \langle {}^2 p_c \times {}^2 r_{cy}, e_2\rangle \\
\langle {}^b p_c \times {}^b r_{cz}, e_2\rangle & \langle {}^1 p_c \times {}^1 r_{cz}, e_2\rangle & \langle {}^2 p_c \times {}^2 r_{cz}, e_2\rangle \\
\langle {}^b r_{cx}, e_2\rangle & \langle {}^1 r_{cx}, e_2\rangle & \langle {}^2 r_{cx}, e_2\rangle \\
\langle {}^b r_{cy}, e_2\rangle & \langle {}^1 r_{cy}, e_2\rangle & \langle {}^2 r_{cy}, e_2\rangle \\
\langle {}^b r_{cz}, e_2\rangle & \langle {}^1 r_{cz}, e_2\rangle & \langle {}^2 r_{cz}, e_2\rangle
\end{bmatrix}
=
\begin{bmatrix}
d_1\cos(\theta_2{+}\theta_3) + d_2\cos\theta_3 + d_3 & d_2\cos\theta_3 + d_3 & d_3 \\
-d_1\sin\alpha_3\sin(\theta_2{+}\theta_3) - d_2\sin\alpha_3\sin\theta_3 & -d_2\sin\alpha_3\sin\theta_3 & 0 \\
d_1\cos\alpha_3\sin(\theta_2{+}\theta_3) + d_2\cos\alpha_3\sin\theta_3 & d_2\cos\alpha_3\sin\theta_3 & 0 \\
0 & 0 & 0 \\
\cos\alpha_3 & \cos\alpha_3 & \cos\alpha_3 \\
-\sin\alpha_3 & -\sin\alpha_3 & -\sin\alpha_3
\end{bmatrix},
\qquad (6)
$$

where $e_2 = [0, 1, 0]^\top$ is the second basis vector in $\mathbb{R}^3$. Thus, using (6), the angular velocity of the joints is related to the velocity twist of the camera by the formula

$$
\begin{bmatrix} {}^s\dot{p}_c \\ {}^s\omega_c \end{bmatrix} = {}^cJ(\theta_{23})\,\dot\theta.
\qquad (7)
$$

3 VISUAL TRACKING FOR MOVING TARGET OBJECT WITH MULTIPLE FEATURE POINTS

This section presents a visual tracking system based on the kinematics of the hand-eye robot. The controller of the 3-DoF planar manipulator consists of a disturbance observer [9] and a PD controller. The desired angle and angular velocity assigned to the PD controller are designed from the kinematics of the hand-eye robot without neglecting the target object velocity.

3.1 Visual tracking system

The manipulator dynamics is linearized and decoupled with the use of a disturbance observer [9], and the controller of the linearized system adopts the PD control law. We obtain the closed-loop system

$$
\ddot\theta(t) = -K_p\left(\theta(t) - \theta^d\right) - K_v\left(\dot\theta(t) - \dot\theta^d\right),
\qquad (8)
$$

where the superscript d refers to the desired value, and K_p and K_v are positive gain matrices. It is well known that a control system based on the disturbance observer is robust against system parameter variations and external disturbances. The control objective is to keep all feature points of the target object around their desired coordinates on the image plane. We design the desired angle θ^d and angular velocity θ̇^d in (8) using visual

information to achieve the control objective. Substituting (7) into (4) yields

$$
\dot{\bar f} = J_{img}(\bar f, {}^c z_o)\, {}^cJ(\theta_{23})\,\dot\theta
- J^{(1,1)}_{img}(\bar f, {}^c z_o)\, {}^s\dot{p}_o.
\qquad (9)
$$

All solutions of (9) are expressed as

$$
\dot\theta = J^+_{vis}(\bar f, {}^c z_o, \theta_{23})
\left\{ \dot{\bar f} + J^{(1,1)}_{img}(\bar f, {}^c z_o)\, {}^s\dot{p}_o \right\}
+ \left( I_3 - J^+_{vis}(\bar f, {}^c z_o, \theta_{23}) J_{vis}(\bar f, {}^c z_o, \theta_{23}) \right)\phi
\qquad (10)
$$

where $J_{vis} := J_{img}\,{}^cJ \in \mathbb{R}^{2k\times 3}$,

$$
J^+_{vis} =
\begin{cases}
J_{vis}^\top \left(J_{vis} J_{vis}^\top\right)^{-1}, & \text{for } k = 1 \text{ and } \operatorname{rank} J_{vis} = 2, \\
\left(J_{vis}^\top J_{vis}\right)^{-1} J_{vis}^\top, & \text{for } k > 1 \text{ and } \operatorname{rank} J_{vis} = 3,
\end{cases}
\qquad (11)
$$

denotes the pseudo-inverse of J_vis, $(I_3 - J^+_{vis}J_{vis}) \in \mathbb{R}^{3\times 3}$ is the orthogonal projection operator onto the null space of J_vis, ker J_vis, and $\phi \in \mathbb{R}^3$ is an arbitrary vector. Setting $\phi \equiv 0_3$, the motion of the target object on the image plane is related to the angular velocity of the joints by the following formula:

$$
\dot\theta = J^+_{vis}(\bar f, {}^c z_o, \theta_{23})
\left\{ \dot{\bar f} + J^{(1,1)}_{img}(\bar f, {}^c z_o)\, {}^s\dot{p}_o \right\}.
\qquad (12)
$$

Hence, based on (12), the desired angular velocity of the joints is given by

$$
\dot\theta^d = J^+_{vis}(\bar f, {}^c z_o, \theta_{23})
\left\{ -K_{img}\left(f - f^d\right) + J^{(1,1)}_{img}(\bar f, {}^c z_o)\, {}^s\dot{p}_o \right\}
\qquad (13)
$$

where $K_{img} = \operatorname{block\,diag}\{K_{img\,1}, \ldots, K_{img\,k}\}$, $K_{img\,i} := \operatorname{diag}\{K^u_{img\,i}, K^v_{img\,i}\} > 0$, i = 1, ..., k, denotes the positive gain matrix, and $f^d = [(f^d_1)^\top, \ldots, (f^d_k)^\top]^\top$, $f^d_i := [u^d_i, v^d_i]^\top$, i = 1, ..., k, denotes the desired coordinates of all feature points on the image plane. The desired angle of the joints θ^d can be calculated by step-by-step integration of θ̇^d. Our proposed method is basically classified as IBVS. In many conventional IBVS methods [1-5], it is assumed that the target object on Σ_s is static, i.e., ${}^s\dot{p}_o \equiv 0_{3k}$, even if the target object is actually moving. Thus, it is usually necessary to increase the value of K_img in the conventional methods when the target object is moving; however, high-gain control may cause instability of the control system itself. In contrast, our proposed method is faithful to the vision kinematics, since it is designed using ${}^s\dot{p}_o$ rather than neglecting it.
Hence our proposed method can keep the value of K_img small. Note, however, that the depth ${}^c z_o$ and the target object velocity ${}^s\dot{p}_o$ cannot be measured directly; we need to estimate them. The details are given in the following subsections.
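To make the structure of the control law (13) concrete, here is an illustrative sketch for a single feature point (k = 1), using the right pseudo-inverse of Eq. (11). The matrices `J_vis` and the compensation term are placeholder inputs that would come from J_img, cJ, and the estimated target velocity; all helper names are ours.

```python
# Resolved-rate sketch of Eq. (13) for k = 1, with J+ = J^T (J J^T)^-1.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def pinv_right(J):
    """J+ = J^T (J J^T)^-1 for a full-row-rank 2x3 matrix (k = 1 in (11))."""
    Jt = transpose(J)
    return matmul(Jt, inv2(matmul(J, Jt)))

def desired_joint_velocity(J_vis, k_img, f, f_d, comp):
    """Eq. (13): theta_dot_d = J_vis+ { -K_img (f - f_d) + comp },
    where comp stands for the J_img^(1,1) * s_pdot_o compensation term."""
    rhs = [-k_img * (f[i] - f_d[i]) + comp[i] for i in range(2)]
    return matvec(pinv_right(J_vis), rhs)

# With an idealized J_vis and no motion compensation, the law simply drives
# the feature error toward zero through the first two joints:
theta_dot = desired_joint_velocity([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                                   0.5, [2.0, 3.0], [0.0, 0.0], [0.0, 0.0])
# theta_dot == [-1.0, -1.5, 0.0]
```

Setting `comp` to the estimated target-motion term is exactly what distinguishes (13) from a conventional IBVS law, which corresponds to `comp = [0, 0]`.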

3.2 Depth estimation

The hand-eye robot considered in this paper is a single-camera system. Hence, the depth of the target object on Σ_c, ${}^c z_{oi}$, cannot be measured directly. For this problem, a number of depth observers have been proposed so far; see, for example, [10-16] and the references therein. These observers can estimate ${}^c z_{oi}$ for a static target object under some conditions, but they are inapplicable to a moving target object. Here we introduce an appropriate assumption to estimate ${}^c z_{oi}$. The hand-eye robot has the following structural features: 1) the motion space of the manipulator's end-effector is planar; and 2) the CCD camera of the hand-eye robot is mounted on the manipulator's end-effector at a constant tilt angle, as shown in Figures 1 and 3 (b). Under the following assumption, these structural features generate a one-to-one correspondence between the depth ${}^c z_{oi}$ and the image coordinate $v_i$.

Assumption 1 The motion of the target object is constrained to a horizontal plane at an appropriate height, so that the height of the target object is lower than the height of the camera.

As shown in Figure 4, the value of the coordinate v increases when the camera approaches the target object, which is static on Σ_w at a given time, along the ${}^3z$-axis. Similarly, the value of v decreases when the camera moves away from the target object along the ${}^3z$-axis. Consequently, by collecting some measured data of the pair $({}^c z_{oi}, v_i)$ in advance, we can estimate the depth ${}^c z_{oi}$ as a function of $v_i$. We call this function ${}^c\hat z_{oi}(v_i)$ the depth estimation function. The concrete depth estimation function used in the experiment is given in Section 4.

3.3 Target object velocity estimation

The vector ${}^s\dot{p}_o$, which we call the target object velocity, consists of the translational velocities of the markers. The markers do not move independently because they are all attached to a single rigid body.
Figure 4: Relationship between v and ${}^c z_{oi}$.

Let us locate the target object frame Σ_o at the first marker, as shown in Figure 5. Differentiating the geometric

relation ${}^s p_{oi} = {}^s p_{o1} + {}^sR_o\,{}^o p_{o_1 o_i}$ with respect to time and using ${}^o\dot{p}_{o_1 o_i} \equiv 0_3$, we obtain

$$
{}^s\dot{p}_{oi} = {}^s\dot{p}_{o1} + {}^s\omega_o \times \left({}^sR_o\,{}^o p_{o_1 o_i}\right),
\qquad (14)
$$

where ${}^o p_{o_1 o_i}$, ${}^sR_o$, and ${}^s\omega_o$ are the position vector from the first marker to the i-th marker on Σ_o, the rotation matrix, and the angular velocity of the target object frame on Σ_s, respectively.

Figure 5: Geometric relation between Σ_s and the target object (the target object frame Σ_o is located at the first marker).

Now, the following assumption is introduced.

Assumption 2 The rotational motion of the target object is small enough, i.e., ${}^s\omega_o \approx 0_3$.

Under this assumption, (14) can be approximated as follows:

$$
{}^s\dot{p}_{oi} \approx {}^s\dot{p}_{o1}.
\qquad (15)
$$

Next, we estimate ${}^s\dot{p}_{o1}$ by a difference approximation of the estimated position ${}^s\hat p_{o1}$. When the depth ${}^c z_{o1}$ is estimated as stated in the previous subsection, the estimate of ${}^s p_{o1}$ can be calculated as

$$
{}^s\hat p_{o1}(v_1) = \left[ \frac{{}^c\hat z_{o1}(v_1)\,\bar u_1}{\lambda_x},\;
\frac{{}^c\hat z_{o1}(v_1)\,\bar v_1}{\lambda_y},\;
{}^c\hat z_{o1}(v_1) \right]^\top.
$$

We estimate the translational velocity ${}^s\dot{p}_{o1}(t)$, $t \in [t_s, t_s + \Delta t)$, as follows:

$$
{}^s\hat{\dot p}_{o1}(t) = \frac{{}^s\hat p_{o1}(t_s) - {}^s p_{o1}(t_s - \Delta t)}{\Delta t},
\qquad (16)
$$

where

$$
\begin{bmatrix} {}^s p_{o1}(t_s - \Delta t) \\ 1 \end{bmatrix}
:= \underbrace{{}^cH_b(\theta(t_s))}_{{}^bH_c^{-1}(\theta(t_s))}
\underbrace{{}^bH_c(\theta(t_s - \Delta t))
\begin{bmatrix} {}^s\hat p_{o1}(t_s - \Delta t) \\ 1 \end{bmatrix}}_{\left[{}^b\hat p_{o1}(t_s - \Delta t)^\top,\; 1\right]^\top},
\qquad (17)
$$

$t_s$ is the time at which the estimate is updated, and Δt is the time interval. The frames Σ_s at $t = t_s - \Delta t$ and at $t = t_s$ are generally different. Accordingly, (17) transforms the estimated position of the first marker on Σ_s at $t = t_s - \Delta t$ into the corresponding quantity on Σ_s at $t = t_s$. In general, the sampling period of the image data is longer than that of the controller, and the appropriate value of Δt is determined by trial and error so as to be longer than the sampling period of the image data.
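The frame correction in (17) followed by the finite difference in (16) can be sketched schematically as follows; the homogeneous transforms are assumed to have been computed from the forward kinematics, and all names here are illustrative.

```python
# Schematic version of Eqs. (16)-(17): re-express the previous position
# estimate in the current standard camera frame before differencing.

def transform_point(H, p):
    """Apply a 4x4 homogeneous transform H to a 3-D point p."""
    ph = p + [1.0]
    return [sum(H[i][j] * ph[j] for j in range(4)) for i in range(3)]

def estimate_target_velocity(p_hat_now, p_hat_prev, H_c_b_now, H_b_c_prev, dt):
    """Eq. (16) with the correction (17): H_c_b_now plays the role of
    cHb(theta(ts)) and H_b_c_prev that of bHc(theta(ts - dt))."""
    p_prev_corrected = transform_point(H_c_b_now,
                                       transform_point(H_b_c_prev, p_hat_prev))
    return [(p_hat_now[k] - p_prev_corrected[k]) / dt for k in range(3)]

I4 = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0],
      [0.0, 0.0, 0.0, 1.0]]

# If the camera has not moved between updates, (17) reduces to a plain
# finite difference of the two position estimates:
v_hat = estimate_target_velocity([0.2, 0.0, 1.0], [0.0, 0.0, 1.0], I4, I4, 0.1)
# v_hat == [2.0, 0.0, 0.0]
```

When the camera does move, the two transforms cancel the camera's own displacement, so only the target's motion survives in the difference quotient.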
Finally, using (13), (16), and the depth estimation function ${}^c\hat z_o = [{}^c\hat z_{o1}(v_1), \ldots, {}^c\hat z_{ok}(v_k)]^\top$, the desired angular velocity of the joints is rewritten as

$$
\dot\theta^d = J^+_{vis}(\bar f, {}^c\hat z_o, \theta_{23})
\left\{ -K_{img}\left(f - f^d\right) + J^{(1,1)}_{img}(\bar f, {}^c\hat z_o)\, {}^s\hat{\dot p}_{o1} \right\}.
\qquad (18)
$$
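The depth estimation function that (18) relies on, together with the back-projection used for the position estimate ${}^s\hat p_{o1}$, can be written as a small sketch. The polynomial coefficients and focal lengths below are hypothetical placeholders, not the calibrated values from the experiment.

```python
# Sketch of the depth estimation function (20) and the back-projection
# consistent with Eq. (2). A_COEF, B_COEF, C_COEF and the focal lengths
# are illustrative stand-ins for the fitted/calibrated values.

A_COEF, B_COEF, C_COEF = 1.0e-5, 2.0e-3, 0.5   # hypothetical fit of (19)
LAMBDA_X, LAMBDA_Y = 800.0, 800.0              # hypothetical focal lengths

def depth_estimate(v):
    """Eq. (20): c_z_hat_oi(v_i) = a v_i^2 + b v_i + c."""
    return A_COEF * v * v + B_COEF * v + C_COEF

def backproject(u_bar, v_bar, v):
    """Position estimate via Eq. (2): (z u_bar / lx, z v_bar / ly, z),
    with the depth taken from the estimation function."""
    z = depth_estimate(v)
    return [z * u_bar / LAMBDA_X, z * v_bar / LAMBDA_Y, z]
```

In practice the coefficients would be obtained offline by least-squares fitting of measured (depth, v) pairs, as described in the experiment section.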

Figure 6: Block diagram of the control system.

Table 1: Physical parameters.

Symbol (Unit)   i = 1   i = 2   i = 3
d_i (m)
h_i (m)
α_i (rad)                       π/6

The block diagram of the whole control system is shown in Figure 6. This control system can reduce the visual tracking error by compensating for the translational motion of the target object, and can therefore achieve precise visual tracking. Our proposed method is basically applicable to an n-DoF hand-eye robot as long as the structural features 1) and 2) and Assumptions 1 and 2 are satisfied. In the next section, we evaluate the effectiveness of the proposed control method experimentally.

4 EXPERIMENT

This section presents an experimental result to show the effectiveness of our proposed method. We here consider the case where the number of feature points is three, i.e., k = 3. Overviews of the hand-eye robot system and the experimental setup are shown in Figures 7 and 8. The target object is equipped with three markers, which lie in the same horizontal plane. The 3-DoF planar manipulator is controlled by a PC running a real-time Linux OS. The rotation angle of each joint is obtained from an encoder attached to each DC motor via a counter board, and the armature currents based on the control laws (8) and (18) are applied to the DC motors by a D/A board through the DC servo drivers. The main physical parameters of the 3-DoF planar manipulator are shown in Table 1. The image resolution and the focal lengths of the CCD camera are pixels, λ_x = 8. pixels, and λ_y = pixels, respectively. The frame rate of the CCD camera is 120 fps. Hence, the image

data is updated every 8.3 ms. On the other hand, the sampling period of the controller is 1 ms. In consideration of the gap between the two periods, we set Δt = 1 ms. The tilt angle of the CCD camera, α_3, is 30 deg. Collecting some measured data of the pair $({}^c z_{o1}, v_1)$ and fitting them to a second-order polynomial function ${}^c z_{o1} = a v_1^2 + b v_1 + c$, we obtain

a = , b = , c = . (19)

Therefore, we adopt the following function with (19) to estimate the depth:

$$
{}^c\hat z_{oi}(v_i) = a v_i^2 + b v_i + c, \quad i = 1, 2, 3.
\qquad (20)
$$

Note that the estimates ${}^c\hat z_{o2}(v_2)$ and ${}^c\hat z_{o3}(v_3)$ are also based on (19) and (20) because all markers of the target object lie in the same horizontal plane. The measured data $({}^c z_{o1}, v_1)$ and the depth estimation function are shown in Figure 9.

Figure 7: Overview of the hand-eye robot system: (a) target object on the cart; (b) hand-eye robot and target object.

Figure 8: Overview of the experimental setup: a PC running a real-time Linux OS executes the visual tracking algorithm; a Windows PC extracts the feature points from the monochrome camera images; the two communicate via shared memory, with counter and D/A boards connecting to the DC servo drivers, DC motors, and encoders of the three joints.

The procedure of the experiment is as follows:

Step 1: We set $\theta^d = [\pi/4, \pi/4, 0]^\top$ and $\dot\theta^d = 0_3$ in (8) to drive the hand-eye robot to the initial configuration, as shown in Figure 10, so that the target object is inside the boundaries of the

image plane and the 3-DoF planar manipulator is not in a singular configuration. As a result of this control, the target object is located on the left side of the image plane with respect to the central axis.

Figure 9: Measured data and the depth estimation function ${}^c\hat z_{o1}(v)$.

Figure 10: Top view of the initial configuration (the camera faces the target object on the cart, which moves forward and backward along the ${}^b x$-axis).

Step 2: The target object starts to move forward. When $u_1$ reaches 338, we change the control mode to the proposed visual tracking based on (8) and (18) with $f^d_1 = [338, 21]^\top$, $f^d_2 = [36, 228]^\top$, $f^d_3 = [354, 249]^\top$.

Step 3: The target object moves parallel to the ${}^b x$-axis at an almost constant speed. After moving forward by 0.23 m, the target object goes backward at the same speed and stops at the start point.

The experimental results are shown in Figures 11 and 12. Figure 11 shows the trajectories of the feature points on the image plane. Figures 12 (a) and (b) show the time responses of $u_i - u^d_i$ and $v_i - v^d_i$, i = 1, 2, 3, respectively, for 6 s in Steps 2 and 3. For comparison, the experimental result for the conventional method, i.e., ${}^s\hat{\dot p}_{o1} \equiv 0_3$ in (18), is shown as well. The experiment in Steps 2 and 3 was carried out with initial conditions $f_1(0) = [338, 21]^\top$, $f_2(0) = [34, 228]^\top$, $f_3(0) = [351, 249]^\top$, $\theta(0) = [\pi/4, \pi/4, 0]^\top$, $\dot\theta(0) = 0_3$ and gain matrices $K_p = \operatorname{diag}\{144, 144, 144\}$, $K_v = \operatorname{diag}\{48, 48, 48\}$, $K_{img} = \operatorname{diag}\{4, 4, 4\}$. The gain of the disturbance observer was 15. If the deviations are stabilized asymptotically, each feature point is kept around its desired one. The target object moves forward and backward in a straight line at a constant speed for t <

and is static for the rest of the time. The visual tracking error arises in the case of the conventional method. On the other hand, visual tracking by the proposed method works well, almost without such error.

Figure 11: Trajectories of the feature points on the image plane (conventional method vs. proposed method, with the desired points of the three features).

5 CONCLUSIONS AND FUTURE WORKS

We proposed a visual tracking control method of a hand-eye robot for a moving target object with multiple feature points. Our proposed method can reduce the visual tracking error by compensating for the target object motion, and hence can achieve precise visual tracking. Using the structural features of the hand-eye robot and introducing an appropriate assumption, the depth from the camera to the target object can be estimated. The validity of the proposed method was demonstrated by an experiment. The main future works are as follows:
- Precision improvement of the target object velocity estimation.
- Development of a dynamic depth estimator for a moving target object.

REFERENCES

[1] S. Hutchinson, G. D. Hager, and P. I. Corke, A tutorial on visual servo control, IEEE Trans. on Robotics and Automation, 12, (1996).

[2] K. Hashimoto, A review on vision-based control of robot manipulators, Advanced Robotics, 17, (2003).

[3] F. Chaumette and S. Hutchinson, Visual servo control, Part I: Basic approaches, IEEE Robotics and Automation Magazine, 13, 82-90 (2006).

Figure 12: Deviations on the image plane: (a) time responses of $u_i - u^d_i$, i = 1, 2, 3; (b) time responses of $v_i - v^d_i$, i = 1, 2, 3; each panel compares the conventional and proposed methods while the target object moves forward and then backward.

[4] F. Chaumette and S. Hutchinson, Visual servo control, Part II: Advanced approaches, IEEE Robotics and Automation Magazine, 14, (2007).

[5] F. Chaumette and S. Hutchinson, Visual servoing and visual tracking, Springer Handbook of Robotics, B. Siciliano and O. Khatib (Eds.), Chap. 24, (2008).

[6] N. Oda, M. Ito, and M. Shibata, Vision-based motion control for robotic systems, IEEJ Trans. on Electrical and Electronic Engineering, 4, (2009).

[7] M. Shibata and N. Kobayashi, Non-delayed visual tracking of a moving object with target speed compensation, in Proc. IEEE Int. Conf. on Mechatronics (ICM'07), Kumamoto, Japan, No. WA1-A-4 (2007).

[8] M. Ito and M. Shibata, Non-delayed visual tracking of hand-eye robot for a moving target object, in Proc. ICROS-SICE Int. Joint Conf. 2009 (ICCAS-SICE 2009), Fukuoka, Japan, (2009).

[9] K. Ohnishi, M. Shibata, and T. Murakami, Motion control for advanced mechatronics, IEEE/ASME Trans. on Mechatronics, 1, (1996).

[10] L. Matthies, T. Kanade, and R. Szeliski, Kalman filter-based algorithms for estimating depth from image sequences, Int. J. of Computer Vision, 3, (1989).

[11] S. Soatto, R. Frezza, and P. Perona, Motion estimation via dynamic vision, IEEE Trans. on Automatic Control, 41, (1996).

[12] X. Chen and H. Kano, A new state observer for perspective systems, IEEE Trans. on Automatic Control, 47, (2002).

[13] W. E. Dixon, Y. Fang, D. M. Dawson, and T. J. Flynn, Range identification for perspective vision systems, IEEE Trans. on Automatic Control, 48, (2003).

[14] A. Astolfi, D. Karagiannis, and R. Ortega, Nonlinear and Adaptive Control with Applications, Communications and Control Engineering Series, Springer-Verlag (2008).

[15] A. De Luca, G. Oriolo, and P. R. Giordano, Feature depth observation for image-based visual servoing: theory and experiments, Int. J. of Robotics Research, 27, (2008).

[16] F. Morbidi and D. Prattichizzo, Range estimation from a moving camera: an Immersion and Invariance approach, in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA'09), Kobe, Japan, (2009).

Keeping features in the camera s field of view: a visual servoing strategy

Keeping features in the camera s field of view: a visual servoing strategy Keeping features in the camera s field of view: a visual servoing strategy G. Chesi, K. Hashimoto,D.Prattichizzo,A.Vicino Department of Information Engineering, University of Siena Via Roma 6, 3 Siena,

More information

6-dof Eye-vergence visual servoing by 1-step GA pose tracking

6-dof Eye-vergence visual servoing by 1-step GA pose tracking International Journal of Applied Electromagnetics and Mechanics 52 (216) 867 873 867 DOI 1.3233/JAE-16225 IOS Press 6-dof Eye-vergence visual servoing by 1-step GA pose tracking Yu Cui, Kenta Nishimura,

More information

Task selection for control of active vision systems

Task selection for control of active vision systems The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October -5, 29 St. Louis, USA Task selection for control of active vision systems Yasushi Iwatani Abstract This paper discusses

More information

Robot Vision Control of robot motion from video. M. Jagersand

Robot Vision Control of robot motion from video. M. Jagersand Robot Vision Control of robot motion from video M. Jagersand Vision-Based Control (Visual Servoing) Initial Image User Desired Image Vision-Based Control (Visual Servoing) : Current Image Features : Desired

More information

A comparison between Position Based and Image Based Visual Servoing on a 3 DOFs translating robot

A comparison between Position Based and Image Based Visual Servoing on a 3 DOFs translating robot A comparison between Position Based and Image Based Visual Servoing on a 3 DOFs translating robot Giacomo Palmieri 1, Matteo Palpacelli 2, Massimiliano Battistelli 2 1 Università degli Studi e-campus,

More information

A NOUVELLE MOTION STATE-FEEDBACK CONTROL SCHEME FOR RIGID ROBOTIC MANIPULATORS

A NOUVELLE MOTION STATE-FEEDBACK CONTROL SCHEME FOR RIGID ROBOTIC MANIPULATORS A NOUVELLE MOTION STATE-FEEDBACK CONTROL SCHEME FOR RIGID ROBOTIC MANIPULATORS Ahmad Manasra, 135037@ppu.edu.ps Department of Mechanical Engineering, Palestine Polytechnic University, Hebron, Palestine

More information

3D Tracking Using Two High-Speed Vision Systems

3D Tracking Using Two High-Speed Vision Systems 3D Tracking Using Two High-Speed Vision Systems Yoshihiro NAKABO 1, Idaku ISHII 2, Masatoshi ISHIKAWA 3 1 University of Tokyo, Tokyo, Japan, nakabo@k2.t.u-tokyo.ac.jp 2 Tokyo University of Agriculture

More information

Robotics 2 Visual servoing
