Comparative Analysis of two Types of Leg-observation-based Visual Servoing Approaches for the Control of a Five-bar Mechanism


Proceedings of Australasian Conference on Robotics and Automation, 2-4 Dec 2014, The University of Melbourne, Melbourne, Australia

Comparative Analysis of two Types of Leg-observation-based Visual Servoing Approaches for the Control of a Five-bar Mechanism

Alessia Vignolo 1, Sébastien Briot 1, Philippe Martinet 1 and Chao Chen 2
1 IRCCyN, UMR CNRS 6597, École Centrale de Nantes, France
alessia.vignolo@gmail.com, {Sebastien.Briot, Philippe.Martinet}@irccyn.ec-nantes.fr
2 Monash University, Clayton, Australia
chao.chen@monash.edu

Abstract

Past research works have proven that the robot end-effector pose of parallel mechanisms can be effectively estimated by vision. For parallel robots, it was previously proposed to directly observe the end-effector. However, this observation may not be possible (e.g. if the robot is milling). Therefore, it has been proposed to use another type of controller based on the observation of the leg directions. Despite interesting results, this controller involves the presence of mapping singularities inside the robot workspace (near which the accuracy is poor). This paper presents a new approach for vision-based control of the end-effector: by observing the mechanism legs, it is possible to extract the Plücker coordinates of their lines and control the end-effector pose. This paper also shows a comparison between the previous approach, based on the leg directions, and this new approach, based on the leg line Plücker coordinates. The new approach can be applied to a family of parallel machines for which the previous approach is not suitable, and it also has some advantages regarding the reachable workspace of the end-effector. The simulation results of both controllers applied to a five-bar mechanism are presented.

1 Introduction

Compared to serial robots, parallel kinematic manipulators [Leinonen, 1991] are stiffer and can reach higher speeds and accelerations [Merlet, 2006]. However, their control is troublesome because of the complex mechanical structure, highly coupled joint motions and many other factors (e.g. clearances, assembly errors, etc.) which degrade stability and accuracy. Many research papers focus on the control of parallel mechanisms (see [Merlet, 2012] for a long list of references). It is possible to bypass the complex kinematic structure of the robot and to apply a form of control which uses an external sensor to estimate the pose of the end-effector, reducing the stability and accuracy degradation mentioned earlier.

A proven approach for estimating the end-effector pose is through the use of vision. The most common approach consists of the direct observation of the end-effector pose [Espiau et al., 1992; Horaud et al., 1998; Martinet et al., 1996]. In some cases, however, it may prove difficult to observe the end-effector of the robot, e.g. in the case of a machine-tool. A substitute target for the observation must then be chosen, and an effective candidate for this are the legs of the robot, which are usually designed as slim and rectilinear rods [Merlet, 2012]. An application of this technique was presented in [Andreff et al., 2005], where vision was used to derive a visual servoing scheme based on the observation of the legs of a Gough-Stewart (GS) parallel robot [Gough and Whitehall, 1962]. In that method, the leg directions (each direction represented by a 3D unit vector) were chosen as visual primitives and control was derived based on their reconstruction from the image. The approach was applied to several types of robots, such as the Adept Quattro and other robots of the same family [Özgür et al., 2011; Andreff and Martinet, 2006].
However, it was proven later that the mapping between the leg direction space and the end-effector pose space is not free of singularities, which considerably affects the performance in terms of accuracy, and that these singularities do not appear at the same place as the singularities of the controlled robot. Finding the singularities of this mapping is a complicated task, which can be considerably simplified by using a tool called the hidden robot concept [Rosenzveig et al., 2014]. The hidden robot is a virtual robot whose kinematics represents the mapping between the leg direction space and the end-effector pose space. Thus, the mapping singularities appear if and only if the virtual hidden robot encounters kinematic singularities.

[Figure 1: The PRRRP robot. Several other robot configurations are possible for the same leg directions $\underline{u}_1$, $\underline{u}_2$.]

A general methodology to find the hidden robot model of any parallel robot controlled by a leg-observation-based visual servoing approach has been defined in [Rosenzveig et al., 2014], and several families of robots have been studied in [Briot and Martinet, 2013; Rosenzveig et al., 2013; 2014].

Moreover, the approach proposed in [Andreff et al., 2005] cannot be applied to every type of robot family: it was shown in [Andreff and Martinet, 2006] that it is not possible to control a particular family of parallel robots for which the first joints of the legs are prismatic joints whose directions are all parallel. For example, in the case of the PRRRP¹ robot with parallel P joints (Fig. 1), the pose of the end-effector cannot be estimated using the leg directions $\underline{u}_i$ since, for the same values of the vectors $\underline{u}_1$ and $\underline{u}_2$, infinitely many configurations of the end-effector can be found.

Regarding this second point, a solution to bypass the mentioned problem would be to use the Plücker coordinates of the lines passing through the legs instead of the leg directions only. Using the Plücker coordinates of the lines passing through the legs for the visual servoing is equivalent to using the leg directions plus their distance and position with respect to the camera frame. Thus, the lines passing through the legs are fully defined. Estimating the end-effector pose in the case of the PRRRP robot of Fig. 1 then amounts to finding the intersection point of the lines $\mathcal{L}_1$ and $\mathcal{L}_2$ passing through the legs.

The aim of this paper is dual:

1. to introduce this new leg servoing scheme based on the use of the Plücker coordinates of the lines passing through the legs; to apply it to a five-bar mechanism; and to analyze the singularities of the mapping involved between the observed line space and the end-effector space, and

2. to compare this approach with the previous one based on the leg direction space in terms of robustness to measurement noise.

¹ In the following of the paper, R and P stand for passive revolute and prismatic joints, respectively, while an underlined R or P stands for an active (actuated) revolute or prismatic joint, respectively.

2 Leg observation

Both control schemes are based on the fact that it is possible to observe the robot legs. In this Section, the way to extract the leg direction and the Plücker coordinates of the line passing through a leg is discussed.

2.1 Line modeling

A line $\mathcal{L}$ in space, expressed in the camera frame, is defined by its binormalized Plücker coordinates [Andreff et al., 2002]:

$\mathcal{L} \doteq ({}^c\underline{u},\ {}^c\underline{n},\ \bar{n})$   (1)

where $\underline{u}$ is the unit vector giving the spatial orientation of the line², $\underline{n}$ is the unit vector perpendicular to the so-called interpretation plane of the line $\mathcal{L}$ (which is the plane passing through the camera frame origin and the line $\mathcal{L}$) and $\bar{n}$ is a nonnegative scalar. The latter two are defined by $\bar{n}\,\underline{n} = \underline{P} \times \underline{u}$, where $\underline{P}$ is the position of any point $P$ on the line, expressed in the camera frame. Notice that, using this notation, the well-known (normalized) Plücker coordinates [Plücker, 1865; Merlet, 2006] are the couple $(\underline{u},\ \bar{n}\,\underline{n})$.

The projection of such a line in the image plane, expressed in the camera frame, has for characteristic equation [Andreff et al., 2002]:

${}^c\underline{n}^T\, {}^c\underline{p} = 0$   (2)

where ${}^c\underline{p}$ are the coordinates in the camera frame of a point $P$ in the image plane, lying on the line.
With the matrix $K$ formed by the intrinsic parameters of the camera, one can obtain the line equation in pixel coordinates, ${}^p\underline{n}$, from:

${}^p\underline{n}^T\, {}^p\underline{p} = 0$   (3)

Indeed, replacing ${}^p\underline{p}$ with $K\, {}^c\underline{p}$ in this expression yields:

${}^p\underline{n}^T K\, {}^c\underline{p} = 0$   (4)

By identification of (2) and (4), one obtains

${}^p\underline{n} = \dfrac{K^{-T}\, {}^c\underline{n}}{\| K^{-T}\, {}^c\underline{n} \|}, \qquad {}^c\underline{n} = \dfrac{K^{T}\, {}^p\underline{n}}{\| K^{T}\, {}^p\underline{n} \|}$   (5)

² In the following of the paper, the superscript before a vector denotes the frame in which the vector is expressed ($b$ for the base frame, $c$ for the camera frame and $p$ for the pixel frame). If there is no superscript, the vector can be written in any frame.
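To make the notation of (1)-(5) concrete, here is a small Python/NumPy sketch (an illustration, not part of the paper) that builds the binormalized Plücker coordinates of a line from a point and a direction and converts the interpretation-plane normal between the camera and pixel frames. The intrinsic values reuse those listed later in Section 5.1; the test point and direction are arbitrary.

```python
import numpy as np

def binormalized_plucker(P, u):
    """Binormalized Plücker coordinates (u, n, n_bar) of the line through
    point P with direction u, both expressed in the camera frame (eq. 1)."""
    u = u / np.linalg.norm(u)
    m = np.cross(P, u)           # moment vector: n_bar * n = P x u
    n_bar = np.linalg.norm(m)
    n = m / n_bar                # unit normal of the interpretation plane
    return u, n, n_bar

def camera_to_pixel_line(n_c, K):
    """Pixel-frame line vector of eq. (5): p_n = K^{-T} n_c / ||K^{-T} n_c||."""
    v = np.linalg.solve(K.T, n_c)    # K^{-T} n_c without forming the inverse
    return v / np.linalg.norm(v)

def pixel_to_camera_line(n_p, K):
    """Inverse mapping of eq. (5): n_c = K^T n_p / ||K^T n_p||."""
    v = K.T @ n_p
    return v / np.linalg.norm(v)

if __name__ == "__main__":
    # Intrinsics with the focal lengths and principal point used in Section 5.1.
    K = np.array([[1000.0, 0.0, 1024.0],
                  [0.0, 1000.0, 768.0],
                  [0.0, 0.0, 1.0]])
    u, n, n_bar = binormalized_plucker(np.array([0.1, 0.2, 1.5]),
                                       np.array([0.0, 1.0, 0.0]))
    n_p = camera_to_pixel_line(n, K)
    # Round trip should recover the camera-frame normal.
    assert np.allclose(pixel_to_camera_line(n_p, K), n)
```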

[Figure 2: Projection of a cylinder in the image.]

[Figure 3: Visual edges of a cylinder.]

2.2 Cylindrical leg observation

The legs of parallel robots usually have cylindrical cross-sections [Merlet, 2006]. The edges of the i-th cylindrical leg are given, in the camera frame, by [Andreff et al., 2007] (Figs. 2 and 3):

${}^c\underline{n}_i^1 = \cos\theta_i\, {}^c\underline{h}_i + \sin\theta_i\, {}^c\underline{u}_i \times {}^c\underline{h}_i$   (6)

${}^c\underline{n}_i^2 = -\cos\theta_i\, {}^c\underline{h}_i + \sin\theta_i\, {}^c\underline{u}_i \times {}^c\underline{h}_i$   (7)

where $\cos\theta_i = \sqrt{\bar{h}_i^2 - R_i^2}/\bar{h}_i$, $\sin\theta_i = R_i/\bar{h}_i$, $({}^c\underline{u}_i, {}^c\underline{h}_i, \bar{h}_i)$ are the binormalized Plücker coordinates of the cylinder axis and $R_i$ is the cylinder radius. It was also shown in [Andreff et al., 2007] that the leg orientation, expressed in the camera frame, is given by

${}^c\underline{u}_i = \dfrac{{}^c\underline{n}_i^1 \times {}^c\underline{n}_i^2}{\| {}^c\underline{n}_i^1 \times {}^c\underline{n}_i^2 \|}$   (8)

Let us remark now that each cylinder edge is a line in space, with binormalized Plücker coordinates expressed in the camera frame $({}^c\underline{u}_i, {}^c\underline{n}_i^j, \bar{n}_i^j)$ (Fig. 2). Moreover, any point $A_i$ (of coordinates ${}^c\underline{A}_i$ in the camera frame) lying on the cylinder axis is at the distance $R_i$ from the edge. Consequently, a cylinder edge is entirely defined by the following constraints, expressed here in the camera frame, although valid in any frame:

${}^c\underline{n}_i^{jT}\, {}^c\underline{A}_i = R_i$   (9)

${}^c\underline{n}_i^{jT}\, {}^c\underline{n}_i^j = 1$   (10)

${}^c\underline{u}_i^T\, {}^c\underline{n}_i^j = 0$   (11)

The vector $h_i = \bar{h}_i\, {}^c\underline{h}_i$ can be computed using the edges of the i-th cylindrical leg too, and it is given by

$\bar{h}_i\, {}^c\underline{h}_i = {}^c\underline{D}_i \times {}^c\underline{u}_i$   (12)

where ${}^c\underline{D}_i$ is the position of the point $B_i$ in the camera frame, which is the closest point of the axis of the i-th leg to the camera. It is given by

${}^c\underline{D}_i = \dfrac{R_i}{\sin\theta_i}\, \dfrac{{}^c\underline{n}_i^1 + {}^c\underline{n}_i^2}{\| {}^c\underline{n}_i^1 + {}^c\underline{n}_i^2 \|}$   (13)

Notice that, for numerical reasons, one should use normalized pixel coordinates. Namely, let us define the pixel frame by its origin located at the image center (i.e. the intersection of the image diagonals) and such that the pixel coordinates vary approximately between -1 and +1, according to the choice of the normalizing factor, which can be the image horizontal dimension in pixels, its vertical dimension, or its diagonal.
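As a sanity check of (6)-(8) and (12)-(13), the following sketch (Python/NumPy assumed, arbitrary leg pose and radius, and the sign conventions used in the reconstruction above) synthesizes the two edge normals of a cylindrical leg and recovers the leg direction and closest axis point from them.

```python
import numpy as np

def cylinder_edges(u, h, h_bar, R):
    """Edge normals n1, n2 of a cylinder with axis (u, h, h_bar) and radius R,
    following the structure of eqs. (6)-(7)."""
    cos_t = np.sqrt(h_bar**2 - R**2) / h_bar
    sin_t = R / h_bar
    w = np.cross(u, h)
    return cos_t * h + sin_t * w, -cos_t * h + sin_t * w

def axis_from_edges(n1, n2, R):
    """Recover the leg direction (eq. 8) and the closest axis point (eq. 13)."""
    u = np.cross(n1, n2)
    u /= np.linalg.norm(u)
    s = n1 + n2
    sin_t = 0.5 * np.linalg.norm(s)     # since n1 + n2 = 2 sin(theta) (u x h)
    D = (R / sin_t) * s / np.linalg.norm(s)
    return u, D

if __name__ == "__main__":
    u = np.array([0.0, 1.0, 0.0])         # leg direction in the camera frame
    D_true = np.array([0.1, 0.0, 1.2])    # closest axis point to the camera
    h_bar = np.linalg.norm(D_true)
    h = np.cross(D_true, u) / h_bar       # moment definition, eq. (12)
    R = 0.02                              # cylinder radius [m]
    n1, n2 = cylinder_edges(u, h, h_bar, R)
    u_rec, D_rec = axis_from_edges(n1, n2, R)
    assert np.allclose(u_rec, u) and np.allclose(D_rec, D_true)
```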

[Figure 4: The planar five-bar mechanism (the gray pairs denote the actuated joints).]

3 Visual servoing schemes

In this Section, the control schemes for the visual servoing of a five-bar mechanism are defined and compared.

3.1 Kinematics of a five-bar mechanism

The planar five-bar mechanism (Fig. 4) is a 2-degrees-of-freedom (dof) parallel robot able to achieve two translations in the plane $(O, x_0, y_0)$. It is composed of two legs:

- a leg composed of 3 R joints with axes directed along $z_0$ and located at points $A_1$, $B_1$ and $C$, the joint located at point $A_1$ being actuated, and

- a leg composed of 2 R joints with axes directed along $z_0$ and located at points $A_2$ and $B_2$, the joint located at point $A_2$ being actuated,

all other joints being passive. Thus, the vector of actuated coordinates is $\underline{q}^T = [q_1\ \ q_2]$. The end-effector is located at point $C$ and its controlled coordinates along $x_0$ and $y_0$ are denoted as $x$ and $y$, respectively. The position of the point $C$ is given by, for $i = 1, 2$:

$\underline{C} = \underline{A}_i + l_{1i}\,\underline{v}_i + l_{2i}\,\underline{u}_i$   (14)

where $\underline{C}$ is the position of the point $C$, while $\underline{A}_i = [\delta_i\ \ 0]^T$ ($\delta_1 = -l_{A_i}$ and $\delta_2 = +l_{A_i}$) is the position of the point $A_i$ (the frame is not specified, but it is usually either the base frame or the camera frame), $l_{1i}$ and $l_{2i}$ denote the lengths of the links $A_iB_i$ and $B_iC$ respectively, and the vectors $\underline{v}_i$ and $\underline{u}_i$ are unit vectors defining the directions of the links $A_iB_i$ and $B_iC$ respectively. Rearranging (14), we obtain

$\underline{C} - \underline{A}_i - l_{1i}\,\underline{v}_i = l_{2i}\,\underline{u}_i$   (15)

Then, squaring both sides of (15) and summing the two lines, we get, for $i = 1, 2$:

$(x - \delta_i - l_{1i}\cos q_i)^2 + (y - l_{1i}\sin q_i)^2 = l_{2i}^2$   (16)

Skipping all mathematical derivations, it comes that:

$q_i = 2\tan^{-1}\!\left(\dfrac{b_i \pm \sqrt{b_i^2 - c_i^2 + a_i^2}}{c_i + a_i}\right)$   (17)

with $a_i = 2 l_{1i}(x - \delta_i)$, $b_i = 2 l_{1i}\, y$ and $c_i = (x - \delta_i)^2 + y^2 + l_{1i}^2 - l_{2i}^2$.

The first-order kinematics that relates the platform translational velocity $\tau_p$ to the actuator velocities can be obtained through the differentiation of (16) with respect to time and can be expressed as:

$A\,\tau_p + B\,\dot{\underline{q}} = 0$   (18)

where

$A = \begin{bmatrix} l_{21}\,\underline{u}_1^T \\ l_{22}\,\underline{u}_2^T \end{bmatrix}$   (19)

$B = -\begin{bmatrix} l_{11} l_{21}\,\underline{u}_1^T \underline{v}_1^\perp & 0 \\ 0 & l_{12} l_{22}\,\underline{u}_2^T \underline{v}_2^\perp \end{bmatrix}$   (20)

with

$\underline{v}_i^\perp = [-\sin q_i\ \ \cos q_i]^T$   (21)

Thus,

$\tau_p = -A^{-1} B\,\dot{\underline{q}} = J\,\dot{\underline{q}}$   (22)

or also

$\dot{\underline{q}} = -B^{-1} A\,\tau_p = J_{pinv}\,\tau_p$   (23)
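The inverse geometric model (17) and the matrices of (18)-(23) can be evaluated numerically as in the sketch below. It is only an illustration under the link parameters used later in Section 5 (Python/NumPy assumed, working-mode choice arbitrary), not the authors' implementation.

```python
import numpy as np

L1, L2, LA = 0.3, 0.35, 0.275          # link lengths and half base width from Section 5
DELTA = np.array([-LA, +LA])           # delta_1 = -l_A, delta_2 = +l_A

def inverse_kinematics(x, y, branch=(+1, +1)):
    """Actuated angles q1, q2 for an end-effector position (x, y), eq. (17).
    'branch' selects the +/- working mode of each leg."""
    q = np.zeros(2)
    for i in range(2):
        a = 2.0 * L1 * (x - DELTA[i])
        b = 2.0 * L1 * y
        c = (x - DELTA[i])**2 + y**2 + L1**2 - L2**2
        disc = b**2 - c**2 + a**2
        if disc < 0.0:
            raise ValueError("position out of the workspace")
        q[i] = 2.0 * np.arctan((b + branch[i] * np.sqrt(disc)) / (c + a))
    return q

def jacobians(x, y, q):
    """Matrices A and B of eqs. (18)-(20) for the current configuration."""
    A = np.zeros((2, 2))
    B = np.zeros((2, 2))
    for i in range(2):
        v = np.array([np.cos(q[i]), np.sin(q[i])])                       # direction of A_iB_i
        u = (np.array([x, y]) - np.array([DELTA[i], 0.0]) - L1 * v) / L2  # eq. (24)
        v_perp = np.array([-np.sin(q[i]), np.cos(q[i])])                  # eq. (21)
        A[i, :] = L2 * u
        B[i, i] = -L1 * L2 * (u @ v_perp)
    return A, B

if __name__ == "__main__":
    x, y = 0.0, 0.196                        # initial pose used in Section 5.1
    q = inverse_kinematics(x, y)
    A, B = jacobians(x, y, q)
    tau_p = np.array([0.01, 0.0])            # desired end-effector velocity [m/s]
    q_dot = -np.linalg.solve(B, A @ tau_p)   # eq. (23)
    print(q, q_dot)
```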

3.2 Leg-direction-based visual servoing of a five-bar mechanism

Kinematics of a five-bar mechanism using the leg-direction-based visual servoing technique

The control of a five-bar mechanism using the leg-direction-based visual servoing technique developed in [Andreff et al., 2005] proposes to observe the leg directions $\underline{u}_i$ to control the robot displacements. $\underline{u}_i$ can be obtained directly from (15):

$\underline{u}_i = (\underline{C} - \underline{A}_i - l_{1i}\,\underline{v}_i)/l_{2i}$   (24)

Differentiating (24) with respect to time leads to:

$\dot{\underline{u}}_i = (\tau_p - l_{1i}\,\underline{v}_i^\perp\,\dot{q}_i)/l_{2i}$   (25)

Finally, from (23), it comes that:

$\dot{\underline{u}}_i = \dfrac{1}{l_{2i}}\left( I_3 + \dfrac{l_{1i}}{b_{ii}}\,\underline{v}_i^\perp\,\underline{a}_i \right) \tau_p = M^T_{u i}\, \tau_p$   (26)

where $I_3$ is the $(3\times 3)$ identity matrix, $\underline{a}_i$ is the $i$-th row of $A$ (19), $b_{ii}$ is the $i$-th diagonal term of $B$ (20), and the matrix $M^T_{u i}$ is called the interaction matrix. It can be proven that the matrix $M^T_{u i}$ is of rank 1. As a result, a minimum of two independent legs is necessary to control the end-effector pose. An interaction matrix $M^T_u$ can then be obtained by stacking the matrices $M^T_{u i}$ of the two legs ($i = 1, 2$).

Control scheme and interaction matrix

Visual servoing is based on the so-called interaction matrix $M^T$ [Chaumette, 2002], which relates the instantaneous relative motion ${}^cT = {}^c\tau_c - {}^c\tau_s$ between the camera and the scene to the time derivative of the vector $\underline{s}$, which stacks all the visual primitives (which can be the leg directions or the leg line Plücker coordinates) that are used, through:

$\dot{\underline{s}} = M^T(\underline{s})\ {}^cT$   (27)

where ${}^c\tau_c$ and ${}^c\tau_s$ are respectively the twists of the camera and the scene, both expressed in $R_c$, i.e. the camera frame. Then, one achieves exponential decay of an error $e(\underline{s}, \underline{s}_d)$ between the current primitive vector $\underline{s}$ and the desired one $\underline{s}_d$ using a proportional linearizing and decoupling control scheme of the form:

${}^cT = -\lambda\, \widehat{M^T}^{+}(\underline{s})\, e(\underline{s}, \underline{s}_d)$   (28)

where ${}^cT$ is used as a pseudo-control variable and the superscript $+$ corresponds to the matrix pseudo-inverse. The visual primitives being unit vectors, it is more elegant to use the geodesic error rather than the standard vector difference. Consequently, the error grounding the proposed control law will be:

$e_i = \underline{u}_i \times \underline{u}_{di}$   (29)

where $\underline{u}_{di}$ is the desired value of $\underline{u}_i$. Finally, a control is chosen such that $E$, the vector stacking the errors $e_i$ associated to the $k$ legs ($k = 2 \ldots 4$), decreases exponentially, i.e. such that

$\dot{E} = -\lambda E$   (30)

Then, introducing $N^T_i = [\underline{u}_{di}]_\times\, M^T_{u i}$ (where $[\ldots]_\times$ is the antisymmetric matrix associated to a 3D vector [Martinet et al., 1996]), the combination of (26), (29) and (30) gives

$\tau_p = -\lambda\, {N^T}^{+} E$   (31)

where $N^T$ is obtained by stacking the matrices $N^T_i$ of the two legs ($i = 1, 2$). This expression can be transformed into the control joint velocities using (23):

$\dot{\underline{q}} = -\lambda\, J_{pinv}\, {N^T}^{+} E$   (32)
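A compact sketch of one iteration of the leg-direction-based law (26), (29)-(32) is given below; it is an illustration rather than the authors' implementation (Python/NumPy assumed, gain value arbitrary, planar vectors embedded in 3D with a zero z-component, link parameters from Section 5).

```python
import numpy as np

L1, L2, LA = 0.3, 0.35, 0.275
DELTA = (-LA, +LA)
LAMBDA = 2.0                                  # arbitrary gain of the decay (30)

def skew(v):
    """Antisymmetric matrix [v]_x of a 3D vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def leg_direction_control(q, x, y, u_des):
    """One iteration of the leg-direction-based control law (29)-(32).
    q: actuated angles, (x, y): current end-effector position,
    u_des: list of the two desired leg directions (3D unit vectors, z = 0)."""
    N_rows, E = [], []
    A, B = np.zeros((2, 2)), np.zeros((2, 2))
    for i in range(2):
        v = np.array([np.cos(q[i]), np.sin(q[i]), 0.0])
        v_perp = np.array([-np.sin(q[i]), np.cos(q[i]), 0.0])
        u = (np.array([x, y, 0.0]) - np.array([DELTA[i], 0.0, 0.0]) - L1 * v) / L2  # eq. (24)
        a_i = L2 * u                                   # i-th row of A, eq. (19)
        b_ii = -L1 * L2 * (u @ v_perp)                 # i-th diagonal term of B, eq. (20)
        A[i, :], B[i, i] = a_i[:2], b_ii
        M_ui = (np.eye(3) + (L1 / b_ii) * np.outer(v_perp, a_i)) / L2   # eq. (26)
        N_rows.append(skew(u_des[i]) @ M_ui)           # N_i^T = [u_di]_x M_ui^T
        E.append(np.cross(u, u_des[i]))                # geodesic error, eq. (29)
    N, E = np.vstack(N_rows), np.hstack(E)
    tau_p = -LAMBDA * np.linalg.pinv(N) @ E            # eq. (31)
    return -np.linalg.solve(B, A @ tau_p[:2])          # eq. (32) via eq. (23)
```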
3.3 Line-based visual servoing of a five-bar mechanism

In the present subsection, the controller based on the estimation of the Plücker coordinates of the lines passing through the legs is defined. This is the first time that such a controller is proposed.

Kinematics of a five-bar mechanism using the line-based visual servoing technique

The control of a five-bar mechanism using the new line-based visual servoing technique proposes to extract the Plücker coordinates $(\underline{u}_i, h_i)$³ of the two legs attached to the end-effector in order to control the robot displacements. The control can be done thanks to the fact that the point to control, $C$, is the intersection point of the lines of the two observed cylindrical legs. Applying the formula of the intersection point between two coplanar lines, both expressed in Plücker coordinates, the position of the point $C$ expressed in homogeneous coordinates is given by [Selig, 2005]:

$\underline{C}_w = \big( -(h_1 \cdot \underline{N})\,\underline{u}_2 + (h_2 \cdot \underline{N})\,\underline{u}_1 + (h_1 \cdot \underline{u}_2)\,\underline{N}\ :\ (\underline{u}_1 \times \underline{u}_2) \cdot \underline{N} \big)$   (33)

in which $(\underline{u}_1, h_1)$ and $(\underline{u}_2, h_2)$ are the Plücker coordinates of the 1st and the 2nd leg respectively, and $\underline{N}$ is a unit vector along a coordinate axis, chosen such that $(\underline{u}_1 \times \underline{u}_2) \cdot \underline{N}$ is non-zero. For converting the point from homogeneous to non-homogeneous coordinates, the first three coordinates of $\underline{C}_w$ have to be divided by the 4th one.

³ In the paper, $\underline{h}$ stands for a unit vector, while $h$ stands for a non-unit vector.

Moving the right-hand term of (33) to the left side, expanding it and naming the resulting equations $f_i$ leads to

$f_1 = x + h_{1z} u_{2x} - h_{2z} u_{1x} = 0$   (34)

$f_2 = y + h_{1z} u_{2y} - h_{2z} u_{1y} = 0$   (35)

$f_3 = z - h_{1x} u_{2x} - h_{1y} u_{2y} = 0$   (36)

$f_4 = w - u_{1x} u_{2y} + u_{2x} u_{1y} = 0$   (37)

where $(x, y, z, w)$ are the homogeneous coordinates of $\underline{C}_w$, $(u_{ix}, u_{iy}, u_{iz})$ are the Cartesian components of the vector $\underline{u}_i$ and $(h_{ix}, h_{iy}, h_{iz})$ are the Cartesian components of the vector $h_i$. Differentiating (34), (35), (36) and (37) with respect to time leads to

$\dot{x} - h_{2z}\,\dot{u}_{1x} + h_{1z}\,\dot{u}_{2x} + u_{2x}\,\dot{h}_{1z} - u_{1x}\,\dot{h}_{2z} = 0$   (38)

$\dot{y} - h_{2z}\,\dot{u}_{1y} + h_{1z}\,\dot{u}_{2y} + u_{2y}\,\dot{h}_{1z} - u_{1y}\,\dot{h}_{2z} = 0$   (39)

$\dot{z} - h_{1x}\,\dot{u}_{2x} - h_{1y}\,\dot{u}_{2y} - u_{2x}\,\dot{h}_{1x} - u_{2y}\,\dot{h}_{1y} = 0$   (40)

$\dot{w} - u_{2y}\,\dot{u}_{1x} + u_{2x}\,\dot{u}_{1y} + u_{1y}\,\dot{u}_{2x} - u_{1x}\,\dot{u}_{2y} = 0$   (41)

Finally, putting (38), (39), (40) and (41) in matrix form, it comes that

$\dot{\underline{l}} = -P^{+}\,\tau_p = M^T_l\,\tau_p$   (42)

where $\underline{l} = [\underline{u}_1^T\ \ h_1^T\ \ \underline{u}_2^T\ \ h_2^T]^T$ stacks the directions and moments of the two legs, and $p_{jk} = \partial f_j / \partial l_k$ is the term of the $j$-th row and the $k$-th column of $P$, with $j = 1..4$ and $k = 1..12$.

Control scheme and interaction matrix

Because the vectors $h_1$ and $h_2$ are not unit vectors, the geodesic error (29) cannot be used as it is. Consequently, it is necessary to use the following error:

$e_i = \underline{l}_i - \underline{l}_{di}$   (43)

where $\underline{l}_{di}$ is the desired value of $\underline{l}_i$. The control is chosen in the same way as (30). From (30) and (43), it comes

$-\lambda e = \dot{\underline{l}} - \dot{\underline{l}}_d$   (44)

From (42) and (44), it is easy to derive the following control joint velocities:

$\dot{\underline{q}} = J_{pinv}\, {M^T_l}^{+} \left( -\lambda e + \dot{\underline{l}}_d \right)$   (45)

where $J_{pinv}$ is the pseudo-inverse Jacobian matrix of the robot which relates the end-effector twist to the actuator velocities, i.e. $J_{pinv}\,\tau_p = \dot{\underline{q}}$.
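Similarly, the line-based controller can be sketched as below: the first function rebuilds the end-effector position from the two observed leg lines (33), the second performs one iteration of (43)-(45). The interaction matrix M_l and the inverse Jacobian J_pinv are left as inputs (they come from (42) and (23)); the gain and the test values are arbitrary, and the code is a Python/NumPy illustration, not the authors' software.

```python
import numpy as np

LAMBDA = 2.0                                   # arbitrary gain of the decay (30)

def intersection_from_plucker(u1, h1, u2, h2, N=np.array([0.0, 0.0, 1.0])):
    """End-effector position as the intersection of the two leg lines, eq. (33).
    (u_i, h_i) are the Plücker coordinates of leg i, with h_i = C x u_i."""
    num = -(h1 @ N) * u2 + (h2 @ N) * u1 + (h1 @ u2) * N
    den = np.cross(u1, u2) @ N                 # 4th homogeneous coordinate
    return num / den

def line_based_control_step(l_meas, l_des, l_des_dot, M_l, J_pinv):
    """One iteration of the line-based control law (43)-(45).
    l_* stack (u_1, h_1, u_2, h_2); M_l is the interaction matrix of eq. (42);
    J_pinv is the inverse Jacobian of eq. (23)."""
    e = l_meas - l_des                                           # eq. (43)
    tau_p = np.linalg.pinv(M_l) @ (-LAMBDA * e + l_des_dot)      # eqs. (42), (44)
    return J_pinv @ tau_p                                        # eq. (45)

if __name__ == "__main__":
    # Quick check of eq. (33) on two coplanar lines through C = (0.05, 0.15, 0).
    C = np.array([0.05, 0.15, 0.0])
    u1 = np.array([np.cos(0.3), np.sin(0.3), 0.0])
    u2 = np.array([np.cos(2.0), np.sin(2.0), 0.0])
    assert np.allclose(intersection_from_plucker(u1, np.cross(C, u1),
                                                 u2, np.cross(C, u2)), C)
```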

4 Analysis of the controller singularities

In this Section, the control schemes for the visual servoing of a five-bar mechanism are compared in terms of singularities.

[Figure 5: The hidden robot involved in the leg-direction-based visual servoing approach of a five-bar mechanism (the gray pairs denote the actuated joints).]

4.1 Singularities of the controller using the leg-direction-based visual servoing technique

As mentioned in the introduction, the singularities of the mapping involved in the present controller can be analyzed with the aid of the hidden robot concept [Rosenzveig et al., 2014]. The aim of this Section is not to present once again the hidden robot concept, which has been presented in several papers [Rosenzveig et al., 2014; Briot and Martinet, 2013; Rosenzveig et al., 2013; 2014], but to directly use this tool to analyze the singularities of the controller. Therefore, we directly assert that the hidden robot involved in the leg-direction-based visual servoing approach for a five-bar mechanism is the one shown in Fig. 5. The reader willing to have further explanations is referred to [Rosenzveig et al., 2014].

This virtual mechanism is made of two passive planar parallelogram joints $A_iB_iD_iE_i$ linked to the ground, on which an actuator is fixed at point $B_i$ controlling the direction of the link $B_iC$. This special arrangement of the leg makes it possible, for one given position of the actuator at $B_i$, to maintain the orientation with respect to the base of the link $B_iC$ independently of the configuration of the passive parallelogram joint. A simple kinematic analysis of this virtual robot shows that:

- The Type 1 (or serial) singularities [Gosselin and Angeles, 1990] appear when one leg is fully stretched or folded, as for a five-bar mechanism (Fig. 6),

- The Type 2 (or parallel) singularities [Gosselin and Angeles, 1990] appear when the links $A_1B_1$ and $A_2B_2$ are parallel, which is different from the Type 2 singularities of a five-bar mechanism, which appear when the points $B_1$, $B_2$ and $C$ are aligned (Fig. 7).

As demonstrated in [Rosenzveig et al., 2014], these singularities affect the performance of the controller in terms of accuracy and need to be well handled. An example of singularity loci in the workspace of a given five-bar mechanism is provided in Fig. 8.

[Figure 6: Examples of Type 1 singularity for a five-bar mechanism (a) and its corresponding hidden robot (b).]

4.2 Singularities of the controller using the line-based visual servoing technique

It is known that the singularity conditions appear when the inverse or forward geometric model degenerates. The geometric models involved in this new controller are based on the fact that we can rebuild the end-effector pose by knowing the intersection point between the two lines $\mathcal{L}_1$ and $\mathcal{L}_2$ depicted in Fig. 4. Therefore, the singularities appear when these two lines are parallel (intersection point at infinity) or coincide (infinity of possible intersection points). Such singularity conditions are equivalent to the Type 2 singularity conditions of a five-bar mechanism (Fig. 7(a)). A small numerical check of both Type 2 conditions (hidden robot and real mechanism) is sketched below.
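The two Type 2 conditions can be monitored numerically, for instance with the following sketch (an illustration under the link parameters of Fig. 8, Python/NumPy assumed): it returns how far a configuration is from the hidden-robot singularity (A_1B_1 parallel to A_2B_2) and from the real five-bar singularity (B_1, C and B_2 aligned).

```python
import numpy as np

def cross2(a, b):
    """z-component of the planar cross product."""
    return a[0] * b[1] - a[1] * b[0]

def type2_singularity_measures(q, x, y):
    """Proximity to the Type 2 singularities discussed in Sections 4.1-4.2,
    for a configuration (q1, q2) with the end-effector at (x, y).
    Returns |v1 x v2| (hidden robot of the leg-direction controller: links
    A_iB_i parallel) and |u1 x u2| (real five-bar / line-based controller:
    B_1, C, B_2 aligned).  A value close to zero means the corresponding
    mapping degenerates."""
    L1, L2, LA = 0.3, 0.35, 0.275      # parameters of the studied mechanism (Fig. 8)
    delta = (-LA, +LA)
    v, u = [], []
    for i in range(2):
        vi = np.array([np.cos(q[i]), np.sin(q[i])])
        ui = (np.array([x, y]) - np.array([delta[i], 0.0]) - L1 * vi) / L2
        v.append(vi)
        u.append(ui)
    return abs(cross2(v[0], v[1])), abs(cross2(u[0], u[1]))
```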

[Figure 7: Examples of Type 2 singularity for a five-bar mechanism (a) and its corresponding hidden robot (b); in both cases an uncontrollable motion of the end-effector appears.]

[Figure 8: Singularity loci of a five-bar mechanism and its corresponding hidden robot for the following set of parameters: l_1i = 0.3 m, l_2i = 0.35 m, l_Ai = 0.275 m. The plot shows the singularity loci of the real robot and of the controller of case 2, the singularity loci of the controller of case 1, the workspace, the initial end-effector position, the set of desired positions and the set of final positions obtained with the controller of case 1.]

4.3 Discussion on the control schemes

At this step, it appears that the new controller has several advantages with respect to the approach proposed in [Andreff et al., 2005] that should be clearly pointed out:

1. contrary to the past approach, as shown in Sections 3.2 and 3.3, the new one does not need the geometric parameters of the robot (except the radius of the observed cylinders) for estimating the platform pose. This is a great advantage because we only need to accurately calibrate the observed cylinders, not the entire robot, to obtain the best robot accuracy;

2. the singularities of the new controller coincide with those of the real mechanism, which is a great advantage with respect to the past approach, for which the singularities are different and thus lead to a decrease of the reachable workspace.

In the next Section, the two control schemes are compared in terms of robustness to measurement noise in order to clearly demonstrate which type of controller is better.

5 Comparative analysis of the controller performance

In order to carry out a comparative analysis of the two control approaches, an Adams model of a five-bar mechanism has been created with the following set of parameters: l_1i = 0.3 m as length of the legs attached to the ground, l_2i = 0.35 m as length of the legs attached to the end-effector, and l_Ai = 0.275 m as distance between O and A_i (Fig. 4). The workspace is plotted in Fig. 8. Both the leg-direction-based controller (case 1) and the line-based controller (case 2) have been applied to this model.

5.1 Robustness to measurement noise near the singularity of the hidden robot of the controller based on the leg direction

We added noise to the measurements in order to compare the performance of both types of controller. First of all, the model used for the visual servoing is a pinhole camera, because it is simple to implement and is a good approximation of real cameras. Fig. 9 shows a camera whose principal axis is parallel to the Z axis; the distance between the center of projection and the image plane is the focal length f. A 3D point P = (X, Y, Z) is projected on the image plane at pixel coordinates (u, v). The parameters of the camera used for the simulations are: focal length along u = 10^3 pixels; focal length along v = 10^3 pixels; principal point coordinate along u = 1024 pixels; principal point coordinate along v = 768 pixels.

The measurement noise is introduced as follows (a small simulation of this quantization step is sketched after this paragraph). The extraction of the Plücker coordinates of a leg line is based on the equations of the leg edges. In the simulation, they are projected to the image plane and converted from meters to pixels. Then, the intersections of each edge line with the image boundary are computed: the coordinates of the intersection points have to be rounded due to the pixel accuracy. A new equation of the edge line is then recomputed, taking into account the error introduced in the intersection points between the edge line and the image boundary.
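The quantization step described above can be reproduced as in the following sketch (an illustration, not the authors' code): an edge line is mapped to pixel coordinates with (5), intersected with the left and right image borders, the intersections are rounded to whole pixels, and the line is refit. The intrinsic parameters follow the values given in this section; the image size is an assumption consistent with the stated principal point.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 1024.0],    # intrinsics from Section 5.1
              [0.0, 1000.0, 768.0],
              [0.0, 0.0, 1.0]])
WIDTH, HEIGHT = 2048, 1536              # assumed image size (principal point at the center)

def quantized_edge_line(n_c):
    """Apply the pixel-accuracy noise model to an edge line given by its
    camera-frame normal n_c (eq. 2): project to pixels, round the two
    intersections with the left/right image borders, refit the line.
    Assumes the line is not vertical in the image."""
    n_p = np.linalg.solve(K.T, n_c)                    # pixel-frame line, eq. (5)
    a, b, c = n_p
    pts = []
    for u in (0.0, WIDTH - 1.0):                       # intersections with the u = const borders
        v = -(a * u + c) / b
        pts.append(np.array([u, np.rint(v), 1.0]))     # rounding = 1-pixel accuracy
    n_p_noisy = np.cross(pts[0], pts[1])               # line through the two rounded points
    n_c_noisy = K.T @ n_p_noisy                        # back to the camera frame, eq. (5)
    return n_c_noisy / np.linalg.norm(n_c_noisy)

if __name__ == "__main__":
    n_c = np.array([0.3, -0.8, 0.1])
    n_c /= np.linalg.norm(n_c)
    angle = np.arccos(min(1.0, abs(n_c @ quantized_edge_line(n_c))))
    print(np.degrees(angle))                           # induced angular error [deg]
```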

[Figure 9: A pinhole camera model.]

In this subsection, the results of the leg-direction-based and the line-based visual servoing approaches subjected to measurement noise are shown. The measurement error chosen corresponds to a pixel accuracy equal to 1. The initial end-effector pose has been chosen as (x_0, y_0) = (0, 0.196) m, together with a set of desired positions C_d of the end-effector near the singularity of the hidden robot of the leg-direction-based controller:

C_d1 = (-0.172, 0.030) m
C_d2 = (-0.050, 0.080) m
C_d3 = (0.036, 0.082) m
C_d4 = (0.092, 0.070) m
C_d5 = (0.193, 0.013) m

The initial position, the desired positions and the final positions obtained with the controller of case 2 are shown in the plot of the workspace in Fig. 8 (from left to right, in black: C_d1 ... C_d5). The results of all the simulations are shown in Table 1: for each desired position, the final position and the error obtained with both controllers are given. In Table 1, C_d is the set of desired positions, C_f1 and C_f2 are the sets of final positions obtained with the controllers of case 1 and case 2 respectively, and e_1(t_f) and e_2(t_f) are the errors of the controllers of case 1 and case 2 respectively. The error is computed as the norm of the difference between the final position (at the final time t_f = 3 s) and the desired position. The graph in Fig. 10 shows the convergence of the end-effector pose of both controllers in the case C_d3 = (0.036, 0.082) m.

Table 1: Results of the simulations.

C_d [m]             C_f1 [m]            C_f2 [m]            e_1(t_f) [m]   e_2(t_f) [m]
(-0.172, 0.030)     (-0.157, 0.040)     (-0.173, 0.031)     0.0184         0.0013
(-0.050, 0.080)     (-0.027, 0.083)     (-0.050, 0.080)     0.0232         0.0004
(0.036, 0.082)      (0.061, 0.078)      (0.036, 0.083)      0.0255         0.0007
(0.092, 0.070)      (0.107, 0.065)      (0.092, 0.070)      0.0158         0.0003
(0.193, 0.013)      (0.227, -0.020)     (0.193, 0.013)      0.0471         0.0001

Upon these results, it is readily found that, near the singularity of the hidden robot of the controller of case 1, the controller of case 2 is much more robust to measurement noise, which allows this new controller to access the same workspace zones as the real robot (contrarily to the controller of case 1).

5.2 Crossing the hidden robot singularity

In this subsection, the end-effector desired position is chosen in such a way that the end-effector should cross the hidden robot singularity (singularity of the controller of case 1) shown in Fig. 8. It is shown that, in the case of the leg-direction-based approach, the leg directions converge to the desired ones but the end-effector position does not, while in the case of the line-based approach the end-effector pose also converges to the desired one. This is due to the fact that, in the controller of case 1, the five-bar mechanism converges to another assembly mode of its hidden robot. The end-effector initial position is the same as in subsection 5.1, while the chosen desired position is C_d = (0.104, 0.036) m. The results of the simulations are shown in Fig. 11.

6 Conclusions

In this paper, we proposed a new approach for vision-based control of the end-effector of a parallel robot. This method overcomes the disadvantages of the old vision-based controller based on the leg directions, proposed in past papers, which are:

- it is not suitable for some PKM families (e.g. parallel robots whose leg directions are constant even if the end-effector pose changes),

- it involves the presence of some models of robots, different from the real one, hidden into the controller (named hidden robot models).

The new approach overcomes both problems, and it is based on the Plücker coordinates of the leg center lines. According to the simulations of both controllers on a five-bar mechanism, it was shown that the new approach is better because its hidden robot model has the same

[Figure 10: Error in the case of measurement noise with desired position near the singularity of the leg-direction-based controller, C_d3 = (0.036, 0.082) m: (a) end-effector pose error E_C [m] vs. time [s] for the leg-direction-based controller; (b) end-effector pose error for the line-based controller.]

[Figure 11: Crossing the hidden robot singularity, C_d = (0.104, 0.036) m: (a) errors E_u1, E_u2 and E_C [m] vs. time [s] for the leg-direction-based controller; (b) errors E_l1, E_l2 and E_C [m] for the line-based controller.]

singularities as the real robot. Therefore, it is more robust to measurement noise near the singularities of the hidden robot of the old approach and permits the mechanism to pass through these singularities. The new method is actually applicable to every parallel robot, both planar and spatial; the five-bar mechanism has been used here just as an example. Moreover, this method is not affected by geometric parameter errors because it is completely independent from them.

References

[Andreff and Martinet, 2006] N. Andreff and P. Martinet. Vision-based kinematic modelling of some parallel manipulators for control purposes. In Proceedings of EuCoMeS, the First European Conference on Mechanism Science, Obergurgl, Austria, 2006.

[Andreff et al., 2002] N. Andreff, B. Espiau, and R. Horaud. Visual servoing from lines. International Journal of Robotics Research, 21(8):679-700, 2002.

[Andreff et al., 2005] N. Andreff, A. Marchadier, and P. Martinet. Vision-based control of a Gough-Stewart parallel mechanism using legs observation. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA'05, pages 2546-2551, Barcelona, Spain, April 18-22 2005.

[Andreff et al., 2007] N. Andreff, T. Dallej, and P. Martinet. Image-based visual servoing of Gough-Stewart parallel manipulators using legs observation. International Journal of Robotics Research, 26(7):677-687, 2007.

[Briot and Martinet, 2013] S. Briot and P. Martinet. Minimal representation for the control of Gough-Stewart platforms via leg observation considering a hidden robot model. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, May 6-10 2013.

[Chaumette, 2002] F. Chaumette. La commande des robots manipulateurs. Hermès, 2002.

[Espiau et al., 1992] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3), 1992.

[Gosselin and Angeles, 1990] C.M. Gosselin and J. Angeles. Singularity analysis of closed-loop kinematic chains. IEEE Transactions on Robotics and Automation, 6(3):281-290, 1990.

[Gough and Whitehall, 1962] V.E. Gough and S.G. Whitehall. Universal tyre test machine. In Proceedings of the FISITA 9th International Technical Congress, pages 117-317, May 1962.

[Horaud et al., 1998] R. Horaud, F. Dornaika, and B. Espiau. Visually guided object grasping. IEEE Transactions on Robotics and Automation, 14(4):525-532, 1998.

[Leinonen, 1991] T. Leinonen. Terminology for the theory of machines and mechanisms. Mechanism and Machine Theory, 26, 1991.

[Martinet et al., 1996] P. Martinet, J. Gallice, and D. Khadraoui. Vision based control law using 3D visual features. In Proceedings of the World Automation Congress, WAC'96, Robotics and Manufacturing Systems, volume 3, pages 497-502, Montpellier, France, May 1996.

[Merlet, 2006] J.P. Merlet. Parallel Robots. Springer, 2nd edition, 2006.

[Merlet, 2012] J.P. Merlet. www-sop.inria.fr/members/jean-pierre.merlet/merlet.html, 2012.

[Özgür et al., 2011] E. Özgür, N. Andreff, and P. Martinet. Dynamic control of the Quattro robot by the leg edgels. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA'11, Shanghai, China, May 9-13 2011.

[Plücker, 1865] J. Plücker. On a new geometry of space. Philosophical Transactions of the Royal Society of London, 155:725-791, 1865.

[Rosenzveig et al., 2013] V. Rosenzveig, S. Briot, and P. Martinet.
Minimal representation for the control of the Adept Quattro with rigid platform via leg observation considering a hidden robot model. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Tokyo Big Sight, Japan, 2013.

[Rosenzveig et al., 2014] V. Rosenzveig, S. Briot, P. Martinet, E. Özgür, and N. Bouton. A method for simplifying the analysis of leg-based visual servoing of parallel robots. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, May 2014.

[Selig, 2005] J.M. Selig. Geometric Fundamentals of Robotics. Springer, 2nd edition, 2005.