Visual Servoing Utilizing Zoom Mechanism

IEEE Int. Conf. on Robotics and Automation 1995, pp. 178-183, Nagoya, May 12-16, 1995

Koh HOSODA, Hitoshi MORIYAMA and Minoru ASADA
Dept. of Mechanical Engineering for Computer-Controlled Machinery, Faculty of Engineering, Osaka University, Suita, Osaka 565, Japan

Abstract

A camera used as an artificial vision device in visual servoing control often has a zoom mechanism. A zoom mechanism cannot realize fast motion while an arm can, and it has only one degree of freedom. On the other hand, the arm cannot cover a wide range of image change while the zoom can. In this paper, we propose a complementary visual servoing controller of zoom and arm mechanisms. First we discuss the condition under which the camera position and the zoom setting can be considered redundant. Then a visual servoing controller is proposed that makes use of the complementary characteristics of both mechanisms. Experimental results are shown to demonstrate the effectiveness of the proposed controller.

1 Introduction

Visual information is essential for robot systems realizing given tasks in dynamic environments. Many vision studies have adopted deliberative approaches in order to describe 3-D scene structure; these are very time consuming, and applying them to real robots is therefore hard. Recently, there have been many studies on visual servoing control schemes that utilize visual sensors to increase the capability of robot systems in dynamic environments (a summary can be found in [1]). Many of these studies focus on control schemes which make features on the image planes converge to desired values [2-9].

On the other hand, a camera used as an artificial vision device often has a zoom mechanism. A zoom mechanism is a peculiarity of artificial vision that animals do not have. Utilizing a zoom mechanism can change the sensitivity of image features with respect to manipulator movement, and can compensate for the physical constraints of the manipulator. It can also change the resolvability of the cameras [10]. However, the zoom mechanism has only one degree of freedom, and it cannot realize fast motion because of its auto-focusing function. Therefore, to exploit the merits of the zoom mechanism together with the manipulator, a controller combining the two has to be designed; by utilizing the zoom mechanism in this way, the ability of visual servoing is enhanced. Nelson and Khosla [10] focused only on resolvability, not on the complementary and/or redundant characteristics of the zoom and arm mechanisms.

In this paper, we propose a complementary visual servoing controller of zoom and arm mechanisms. First we discuss the condition under which the camera position and the zoom setting can be considered redundant; this is not strictly true, since the former alters the perspective distortion while the latter does not. Then a visual servoing controller is proposed, making use of the complementary characteristics of both mechanisms. Experimental results are shown to demonstrate the effectiveness of the proposed controller.

2 Equivalence between zoom and arm

When the arm has enough degrees of freedom to make the image features converge to the given desired features, and the cameras have additional zoom mechanisms, the whole system is redundant with respect to the given task. However, positioning by the arm and zooming are not always equivalent (see figure 1). In this section, we derive the condition under which positioning by the arm and zooming are almost equivalent.
[Figure 1: Relation between positioning by arm and zooming (zooming vs. approaching by arm)]

2.1 Visual servoing system with zoom mechanism

A system consisting of an arm and cameras is shown in figure 2. Let $\Sigma_w$, $\Sigma_c$ and $\Sigma_I$ denote the world coordinate frame fixed on the ground, a camera coordinate frame fixed on the camera base, and an image coordinate frame fixed to the image plane, respectively. Let ${}^{w}p$ and ${}^{w}\phi$ denote the position and orientation vectors of $\Sigma_c$ with respect to $\Sigma_w$. Let ${}^{I}x \in \mathbb{R}^m$ and ${}^{I}x_d \in \mathbb{R}^m$ denote an image feature vector, consisting of feature points, area of the image, contour, etc., and a desired image feature vector, respectively. The object is assumed to be fixed to the ground.

[Figure 2: A system consisting of an arm and a camera]

If the system does not have a zoom mechanism, the image feature vector ${}^{I}x$ is a function of ${}^{w}p$ and ${}^{w}\phi$:

$$ {}^{I}x = {}^{I}x({}^{w}p, {}^{w}\phi). \quad (1) $$

By differentiating this equation, a relation between the time derivative of the image feature vector and the velocity vector of the cameras,

$$ {}^{I}\dot{x} = J\,{}^{w}v, \quad (2) $$

can be obtained, where ${}^{w}v \in \mathbb{R}^n$ denotes the velocity vector of $\Sigma_c$ with respect to $\Sigma_w$. The matrix $J \in \mathbb{R}^{m \times n}$ is the image Jacobian.

If the system has zoom mechanisms, the image feature vector ${}^{I}x$ is a function of ${}^{w}p$, ${}^{w}\phi$ and the focal length vector $f \in \mathbb{R}^l$:

$$ {}^{I}x = {}^{I}x({}^{w}p, {}^{w}\phi, f). \quad (3) $$

Therefore, the relation between the time derivative of the image feature vector and the extended velocity vector becomes

$$ {}^{I}\dot{x} = \tilde{J}\tilde{v}, \quad (4) $$

where $\tilde{v} = [{}^{w}v^T \; \dot{f}^T]^T \in \mathbb{R}^{n+l}$ is an extended velocity vector, and $\tilde{J} \in \mathbb{R}^{m \times (n+l)}$ is an extended image Jacobian.
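The paper keeps $\tilde{J}$ general. As a concrete, non-authoritative illustration, the sketch below builds the two rows of $\tilde{J} = [J_v \; J_f]$ contributed by one point feature under a simple pinhole model $u = fX/Z$, $v = fY/Z$, with image coordinates measured from the principal point, the focal length expressed in pixel units, and a purely translating camera (matching the 3-d.o.f. use of the arm in Section 4). The depth and focal-length values are assumptions of the sketch, not numbers from the paper.

```python
import numpy as np

def extended_point_jacobian(u, v, Z, f):
    """Rows of the extended image Jacobian J~ = [J_v | J_f] for one point
    feature, assuming a pinhole model u = f X/Z, v = f Y/Z and a purely
    translating camera.  Columns: camera velocity (vx, vy, vz) in the
    camera frame, then the focal-length rate f_dot."""
    J_v = np.array([[-f / Z, 0.0,    u / Z],
                    [0.0,   -f / Z,  v / Z]])  # sensitivity to camera motion
    J_f = np.array([[u / f],
                    [v / f]])                  # sensitivity to zooming
    return np.hstack([J_v, J_f])

# Two point features stacked give a 4 x 4 extended Jacobian, as in Section 4
# (depth Z and focal length f here are illustrative assumptions).
J_tilde = np.vstack([extended_point_jacobian(-134.0, -36.0, Z=1.5, f=800.0),
                     extended_point_jacobian(  94.0,  16.0, Z=1.5, f=800.0)])
```

Under this model the $J_f$ column scales each feature radially from the principal point ($\dot{u} = (u/f)\dot{f}$), which is why zooming mimics approaching the object when the object is far and the features stay in a small range, and why the equivalence degrades as perspective distortion grows.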

2.2 Equivalence between zoom and arm

As mentioned at the beginning of this section, positioning by the arm and zooming are not always equivalent. In this subsection, this equivalence is discussed. At $t = 0$, the initial image feature vector is assumed to coincide with the desired vector. If the system is not equipped with zoom mechanisms, a realizability condition for the given desired features is

$$ \mathrm{rank}[J(t) \;\; {}^{I}\dot{x}_d(t)] \le n. \quad (5) $$

Even if additional desired image features are appended, the realizability condition (5) may still be satisfied, as long as the desired image features are given consistently with the 3-D constraint (for example, when the desired image features are given by teaching by showing).

If the system is equipped with zoom mechanisms, the condition that zooming and positioning by the arm are equivalent becomes

$$ \mathrm{rank}[\tilde{J} \;\; {}^{I}\dot{x}_d] = \mathrm{rank}[J \;\; {}^{I}\dot{x}_d]. \quad (6) $$

Because zooming is not exactly equivalent to positioning, this equation is not always satisfied. But if the cameras are far from the object and the image features are viewed in a small range, as is often the case in visual servoing, zooming and positioning by the arm are almost equivalent. Consider the case where equality holds in eq. (5). By applying singular-value decomposition to the matrix $[\tilde{J} \;\; {}^{I}\dot{x}_d]$, we can calculate the singular values $\sigma_1, \dots, \sigma_n, \sigma_{n+1}, \dots$ $(\sigma_1 > \cdots > \sigma_n > \sigma_{n+1} > \cdots)$. The condition that zooming and positioning by the arm are almost equivalent becomes

$$ \sigma_{n+1} \ll \sigma_1, \dots, \sigma_n. \quad (7) $$
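Condition (7) is straightforward to test numerically. The following sketch (ours, not code from the paper) stacks the desired feature velocity next to the extended image Jacobian, takes the SVD, and declares near-equivalence when $\sigma_{n+1}$ is negligible relative to $\sigma_n$; the cut-off value is an assumption.

```python
import numpy as np

def zoom_arm_almost_equivalent(J_tilde, xdot_d, n, tol=1e-2):
    """Test condition (7).  J_tilde is the extended image Jacobian,
    xdot_d the desired feature velocity, n the number of arm d.o.f.;
    tol is an assumed threshold on sigma_{n+1} / sigma_n."""
    M = np.hstack([J_tilde, np.reshape(xdot_d, (-1, 1))])
    sigma = np.linalg.svd(M, compute_uv=False)  # descending order
    return sigma[n] / sigma[n - 1] < tol, sigma
```

In Section 4 the same test is applied with the finite feature displacement ${}^{I}x_0 - {}^{I}x_d$ standing in for ${}^{I}\dot{x}_d$.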
3 Visual servoing controller utilizing zoom mechanism

In the previous section, we derived the condition under which zooming is almost equivalent to positioning by the arm. In such a case, the whole system can be considered a redundant system. The arm can realize fast motion, but when the dynamic range of the desired image feature becomes large, the arm alone cannot realize visual servoing because of its physical constraints. The zoom mechanism, on the other hand, cannot realize fast motion, but it can compensate for the physical limits of the arm. In this paper, we utilize the redundancy to satisfy the following equations:

$$ {}^{I}\dot{x} + K_x({}^{I}x - {}^{I}x_d) = 0, \quad (8) $$
$$ \dot{f} + K_f(f - f_d) = 0, \quad (9) $$

where $f_d$ is the desired focal length vector, calculated so as to keep the robot arm in an appropriate posture. By tuning the gain matrices $K_x$ and $K_f$, we can make use of the redundancy: the arm compensates for fast, small motion, and the zoom compensates for slow, large motion. In this sense, the gain $K_x$ is selected rather large, while $K_f$ is selected rather small.

Eq. (4) can be rewritten as

$$ {}^{I}\dot{x} = [J_v \;\; J_f]\begin{bmatrix} {}^{w}v \\ \dot{f} \end{bmatrix}. \quad (10) $$

From eq. (9) we can obtain the input $u_f$ as

$$ u_f = -K_f(f - f_d). \quad (11) $$

From eqs. (10) and (11), we can calculate the input $u_v$:

$$ u_v = J_v^{+}\left(-K_x({}^{I}x - {}^{I}x_d) - J_f\dot{f}\right) = J_v^{+}\left\{-K_x({}^{I}x - {}^{I}x_d) + J_f K_f(f - f_d)\right\}, \quad (12) $$

where $J_v^{+}$ denotes a pseudo-inverse matrix of $J_v$. The structure of the proposed visual servoing scheme is shown in figure 3: eq. (11) drives the camera zoom from $f_d$ and $f$, while eq. (12) drives the arm from ${}^{I}x$, ${}^{I}x_d$ and $f$.

[Figure 3: Visual servoing controller utilizing zoom mechanism]
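Eqs. (11) and (12) together form the whole per-sample control computation. A minimal sketch, assuming the image Jacobian blocks $J_v$ and $J_f$ are known (as the paper assumes; see Section 5) and that the gains are matrices as in eqs. (8)-(9):

```python
import numpy as np

def zoom_servo_inputs(x, x_d, f, f_d, J_v, J_f, K_x, K_f):
    """One control step of eqs. (11)-(12): the zoom input u_f drifts the
    focal length slowly toward f_d, and the arm input u_v both cancels
    the feature error and compensates the image motion induced by the
    zooming."""
    u_f = -K_f @ (f - f_d)                                      # eq. (11)
    u_v = np.linalg.pinv(J_v) @ (-K_x @ (x - x_d) - J_f @ u_f)  # eq. (12)
    return u_v, u_f
```

With $K_x$ large and $K_f$ small, $u_v$ reacts to feature error within a few samples, while the $J_f K_f (f - f_d)$ term makes the arm continuously absorb the image motion that the slowly drifting zoom induces.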

4 Experiments

4.1 Experimental system

In this section, some experimental results are shown to demonstrate the effectiveness of the proposed scheme. Figure 4 shows the camera-manipulator system used for the experiments. The robot manipulator is a PUMA-type 6-d.o.f. manipulator (Js-5, Kawasaki Heavy Industries); in this experiment, it is used as a 3-d.o.f. manipulator. A camera module (Sony EVI-310) is attached at the tip of the manipulator. The structure of the system is shown in figure 5. The video signal from the camera EVI-310 is sent to a tracking module (Fujitsu), which has a block-correlation function for tracking pre-memorized patterns [11]. The tracking module feeds the coordinates of the reference patterns in a 512 x 480 [pixel] video plane to the controller, an MVME167 (CPU: Motorola 68040, 33 MHz). The controller calculates the control input for the robot by the proposed scheme and sends it to the robot controller via a VME-VME bus adapter. It also calculates the control input for the zoom mechanism and sends it via an RS-232C serial port to the camera module EVI-310. A Sun SPARC 2 is used to supervise the VME system via Ethernet. The control program is written in C. The sampling time is 49.4 [ms]. In this experiment, the image features are the image coordinates of two points.

[Figure 4: Robot arm with camera, used for experiment]
[Figure 5: Experimental system (Sony EVI-310/311 camera with IF-51 board, Fujitsu Tracking Vision, MVME167 controller, VME-VME bus adapter, Kawasaki Js-5 robot controller, RS-232C serial port, Ethernet to a Sun SPARC 2 host)]

4.2 Experimental results

Preliminary experiments were done to examine the responses of the zoom and arm mechanisms. The settling time of the zoom mechanism is about 5 [s] for compensating a 30 [pixel] step change of the desired image features, while that of the arm is about 3 [s]. These times depend on the feedback gains of the controller, which are in turn limited by the bandwidth of the cameras and manipulators. Generally speaking, however, the zoom is rather slow because of the auto-focusing, which is indispensable when changing the focal length.

The initial tip position of the robot is $[{}^{w}p_x, {}^{w}p_y, {}^{w}p_z] = [9.04 \times 10^{-3}, 1.92 \times 10^{-1}, 7.01 \times 10^{-1}]$ [m], and the initial focal length is $13.75 \times 10^{-3}$ [m]. The initial image feature vector is

$$ {}^{I}x_0 = [x_1 \; y_1 \; x_2 \; y_2]^T = [122 \; 204 \; 350 \; 256]^T, $$

and at $t = 0$ the desired image feature vector is ${}^{I}x_d = [54 \; 140 \; 306 \; 200]^T$, which is given by teaching by showing. The extended image Jacobian $\tilde{J}$ at the desired posture is

$$ \tilde{J} = \begin{bmatrix} 2.43 \times 10^3 & 0.00 & 6.93 \times 10^1 & 3.45 \times 10^3 \\ 0.00 & 2.43 \times 10^3 & 1.73 \times 10^2 & 8.61 \times 10^3 \\ 2.40 \times 10^3 & 0.00 & 3.72 \times 10^2 & 1.87 \times 10^4 \\ 0.00 & 2.40 \times 10^3 & 2.40 \times 10^2 & 1.21 \times 10^4 \end{bmatrix}. $$

To examine the equivalence between zooming and positioning, the singular values of $[J \;\; {}^{I}x_0 - {}^{I}x_d]$ and $[\tilde{J} \;\; {}^{I}x_0 - {}^{I}x_d]$ are calculated instead of those of $[J \;\; {}^{I}\dot{x}_d]$ and $[\tilde{J} \;\; {}^{I}\dot{x}_d]$. The singular values of $[J \;\; {}^{I}x_0 - {}^{I}x_d]$ are $3.44 \times 10^3$, $3.42 \times 10^3$, $2.21 \times 10^2$ and $1.30$; those of $[\tilde{J} \;\; {}^{I}x_0 - {}^{I}x_d]$ are $2.43 \times 10^4$, $3.41 \times 10^3$, $1.57 \times 10^3$ and $2.00$. In both cases, the fourth singular value is small enough compared with the other three, so the proposed controller can be applied.
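As a quick arithmetic check (ours) of condition (7) on the reported values:

```python
# Reported singular values of [J | x0 - xd] and [J~ | x0 - xd] (Sec. 4.2).
sv_arm  = [3.44e3, 3.42e3, 2.21e2, 1.30]
sv_full = [2.43e4, 3.41e3, 1.57e3, 2.00]
for name, sv in [("arm only", sv_arm), ("arm + zoom", sv_full)]:
    print(name, "sigma4/sigma3 =", sv[3] / sv[2])
# -> roughly 5.9e-3 and 1.3e-3: the fourth singular value is negligible,
#    so condition (7) holds and the redundancy-based controller applies.
```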

The gain matrix $K_x$ is selected by trial and error as $K_x = \mathrm{diag}[4.0 \times 10^{-2} \; 4.0 \times 10^{-2} \; 4.0 \times 10^{-2} \; 4.0 \times 10^{-2}]$. Results of two cases are shown:

case 1: $K_f = 0$, i.e. the zoom mechanism is not utilized;
case 2: $K_f = 1.0 \times 10^{-2}$.

Figure 6 shows the error norm of a feature point in the image plane, $\|[x_1 \; y_1]^T - [x_{1d} \; y_{1d}]^T\|$. Figure 7 shows the change of the focal length $f$, and figure 8 shows the tip position of the robot, ${}^{w}p$.

[Figure 6: Result 1 (step response, error norm [pixel])]
[Figure 7: Result 2 (step response, focal length [m])]
[Figure 8: Result 3 (step response, tip position of robot arm: (a) ${}^{w}p_x$, (b) ${}^{w}p_y$, (c) ${}^{w}p_z$ [m])]

From figure 6, the ability of the proposed method to make the image features converge to the desired values is almost the same as that of the method without the zoom mechanism. As checked in the preliminary experiments, the zoom is roughly twice as slow as the arm; the results in figure 6 therefore demonstrate that the arm can compensate for the slow response of the zoom. From figures 7 and 8, with the proposed method the arm works to eliminate the error at the beginning of control, and after a while the zooming works to move the arm back to its initial posture. Around 15 [s], the posture of the arm is almost the same as the initial posture when the zoom is utilized, while the final posture differs without the zoom. In this sense, the proposed method utilizing zoom mechanisms allows the visual servoing system to move in a wider range.

Next, we demonstrate how the zoom mechanism compensates for the range of the arm. We give desired image features that the arm by itself cannot realize. The results are shown in figure 9. With the zoom mechanism, the whole system can eliminate the step error, while without it, the motion stops at around 3 [s] because the arm is near a singular posture. From these results, when the change of the desired images is large, the zoom mechanism can compensate for the physical limitation of the arm, while the arm by itself cannot eliminate the error.

[Figure 9: Result 4 (step response, error norm [pixel], with and without zoom)]

5 Discussion and future works

In this paper, a complementary visual servoing controller of zoom and arm mechanisms has been proposed. We have discussed the condition under which the camera position and the zoom setting can be considered redundant, and proposed a complementary visual servoing controller exploiting that redundancy. In this paper we assume that the image Jacobian is known; in a real environment, however, it is generally unknown. To make the system adaptive to its environment, an estimator of the image Jacobian is needed [12].

As mentioned in the introduction, utilizing a zoom mechanism can change the sensitivity of image features with respect to manipulator movement, change the resolvability, and compensate for the physical constraints of the manipulator. These abilities are partly realized in this paper at the control level, and can be extended to the phase of trajectory planning and to higher layers of the vision-based controller.
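For the future work on estimating the image Jacobian online, one common family of estimators (shown here only as an illustration; a rank-one Broyden-style correction, not necessarily the exact scheme of [12]) updates the estimate after each observed motion:

```python
import numpy as np

def broyden_update(J_est, dx_image, dq, alpha=0.1):
    """Rank-one correction of an image-Jacobian estimate from one observed
    motion: dx_image is the measured feature change, dq the commanded
    extended velocity step.  alpha is an assumed step size."""
    dq = dq.reshape(-1, 1)
    residual = dx_image.reshape(-1, 1) - J_est @ dq
    return J_est + alpha * residual @ dq.T / float(dq.T @ dq)
```

Each update nudges the estimate so that it would have better predicted the feature change actually observed for the commanded step, which is the basic idea behind visual servoing without knowledge of the true Jacobian.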

References

[1] P. I. Corke. Visual control of robot manipulators: a review. In Visual Servoing, pages 1-31. World Scientific, 1993.
[2] W. Jang and Z. Bien. Feature-based visual servoing of an eye-in-hand robot with improved tracking performance. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 2254-2260, 1991.
[3] B. Nelson, N. P. Papanikolopoulos, and P. K. Khosla. Visual servoing for robotic assembly. In Visual Servoing, pages 139-164. World Scientific, 1993.
[4] N. Maru, H. Kase, et al. Manipulator control by visual servoing with the stereo vision. In Proc. of the 1993 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 1865-1870, 1993.
[5] K. Hashimoto, T. Kimoto, T. Ebine, and H. Kimura. Manipulator control with image-based visual servo. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 2267-2272, 1991.
[6] L. E. Weiss, A. C. Sanderson, and C. P. Neuman. Dynamic sensor-based control of robots with visual feedback. IEEE J. of Robotics and Automation, RA-3(5):404-417, 1987.
[7] J. T. Feddema and C. S. G. Lee. Adaptive image feature prediction and control for visual tracking with a hand-eye coordinated camera. IEEE Trans. on Systems, Man, and Cybernetics, 20(5):1172-1183, 1990.
[8] N. P. Papanikolopoulos and P. K. Khosla. Adaptive robotic visual tracking: Theory and experiments. IEEE Trans. on Automatic Control, 38(3):429-445, 1993.
[9] N. P. Papanikolopoulos, B. Nelson, and P. K. Khosla. Six degree-of-freedom hand/eye visual tracking with uncertain parameters. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 174-179, 1994.
[10] B. Nelson and P. K. Khosla. Integrating sensor placement and visual tracking strategies. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 1351-1356, 1994.
[11] M. Inaba, T. Kamata, and H. Inoue. Rope handling by mobile hand-eye robots. In Proc. of Int. Conf. on Advanced Robotics, pages 121-126, 1993.
[12] K. Hosoda and M. Asada. Versatile visual servoing without knowledge of true Jacobian. In Proc. of the 1994 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 186-193, 1994.