Self-Location of a Mobile Robot with Uncertainty by Cooperation of a Heading Sensor and a CCD TV Camera

E. Stella, G. Cicirelli, A. Distante
Istituto Elaborazione Segnali ed Immagini - C.N.R., Via Amendola 166/5, 70126 Bari (Italy). E-mail: stella@iesi.ba.cnr.it

Abstract

Goal-oriented navigation of a mobile robot by landmark-based techniques is a straightforward and suitable approach. Most methods give an estimate of the position and orientation of the vehicle but are often unable to provide a good estimate of the uncertainty of that measurement. That information is useful in applications where multisensor fusion is required. Our method determines the vehicle location and the relative uncertainty with a visual landmark-based method, when the vehicle orientation is obtained from a heading sensor. The approach is straightforward and suitable for real-time performance on general-purpose hardware. Experimental results are provided by the implementation on our autonomous mobile vehicle SAURO.

Keywords: Camera localization, Navigation, Autonomous mobile vehicle, Self-location.

The research is partially supported by Progetto Speciale Robotica (Comitato Scienze Fisiche) of the C.N.R.

1. Introduction

Self-location is the capability of an autonomous mobile vehicle to determine its position in an environment. Visual approaches to self-location are suitable for navigating a mobile robot in indoor contexts. Two classes of methods can be considered:

- Absolute techniques: approaches providing an estimate of the location and orientation of the vehicle using artificial or natural landmarks whose positions in the environment are known a priori [1, 4, 7].
- Relative techniques: methods where only the relative motion of the vehicle (rotation and translation) is estimated, so for goal-oriented applications the absolute location of the starting point must be known. These techniques, however, do not require any a-priori knowledge about the environment [6, 2, 11].

Methods in both classes perform well in terms of execution time and provide an acceptable estimate of the vehicle location and orientation. Most of them, however, have a fundamental drawback: they do not provide an estimate of the uncertainty of the vehicle location and orientation. In recent years, a common tendency of robotics researchers has been to analyze mobile-vehicle problems within a multisensor fusion framework, so the estimation of the uncertainty of sensor measurements has played an important role.

In this paper we propose a straightforward approach to self-location of a mobile robot by localizing the center of projection (CP) of the camera when the orientation of its optical axis is given. In fact, using a heading sensor (we have used a compass), the orientation of the vehicle can be recovered, so three landmarks are enough to estimate the location of the camera and the relative uncertainty. The experimental setup is made up of three beacons (infrared LEDs) used as landmarks at known positions in the environment. Using the excellent angular resolution of CCD cameras, visual angles are evaluated and, as described in sect. 2, the location of the CP and the orientation of the optical axis are recovered. The visual angle of a landmark is the angle formed by the ray from the CP to the landmark and the optical axis (see fig. 1). The technique used to estimate the visual angle is similar to [3]. We have used artificial beacons in order to speed up the search for the landmarks in the image, but natural features could be considered. The technique is described in section 2 and the estimation of uncertainty is explained in sect. 3.
The experimental setup is described in sect. 4 and, finally, the results on self-location of a mobile robot are shown.

Figure 1. 3D camera geometry.

2. CP self-location technique

Let P1 = (X1; Y1; Z1) and P2 = (X2; Y2; Z2) be landmarks in the world reference system O (fig. 1), and let V = (Xv; Yv; Zv) be the unknown coordinates of the center of projection (CP) of the camera. In the context of autonomous robot navigation, the determination of the location of the CP in the X-Z plane is what matters, so the location problem is reduced to a two-dimensional one. Let L1 = (x1; z1), L2 = (x2; z2) and CP = (xp; zp) be the projections onto the X-Z plane of P1, P2 and V (fig. 2). If the heading angle θ is assumed to be known (for example obtained from a heading sensor: gyro, compass, etc.) and the visual angles α1 and α2 can be recovered, then the CP position is uniquely determined as the intersection of the two straight lines r1 and r2 passing through landmarks 1 and 2, respectively. Consider the equation of a straight line:

    z = m x + p    (1)

where m, the slope of the straight line, can be estimated as

    m_i = tan(θ + α_i),  i = 1, 2    (2)

while the intercept p can be estimated by imposing that the line passes through a landmark. Summarizing, r1 can be estimated by solving the system of two equations:

    z1 = m1 x1 + p1,  m1 = tan(θ + α1)    (3)

Analogously, the straight line r2 can be estimated, and the CP location is then given by solving the system formed by the equations of the two straight lines. The visual angles α1 and α2 can be estimated using the excellent angular resolution of CCD cameras. Considering fig. 2:

    α1 = atan(u1 / f),  α2 = atan(u2 / f)    (4)

Figure 2. 2D camera geometry.

where f is the focal length of the camera and u1, u2 are the distances on the x-axis (on the image plane) of the projections of the landmarks from the principal point (the intersection of the optical axis and the image plane). Since u1 and u2 in (4) are pixel coordinates referred to the CCD chip (not to the frame grabber), their estimation requires the scale factor. In fact, as shown in [10]:

    u_k = (p_x^k - C_x) d_x N_x / (s_x N_fx)    (5)

where p_x^k is the pixel x-coordinate referred to the frame grabber, C_x is the x-coordinate of the principal point, s_x is the scale factor due to the different sampling frequencies of the TV camera and of the frame grabber ADC, N_x is the number of CCD elements in the x-direction, N_fx is the length of an image line in the frame grabber, and d_x is the center-to-center distance of two adjacent CCD elements in the x-direction. While d_x, N_x and N_fx are parameters given by the TV camera and frame grabber manufacturers, the principal point, focal length, scale factor and lens distortion are camera intrinsic parameters that can be recovered only by camera calibration [10, 8, 9, 5]. In our experiments, we have used the technique described by Tsai in [10] to perform camera calibration.
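To make the two-landmark construction concrete, the following Python sketch (an illustrative reconstruction, not the authors' code; the names pixel_to_sensor, visual_angle and locate_cp are ours) converts a frame-grabber pixel coordinate to a CCD-chip coordinate as in eq. (5), computes a visual angle as in eq. (4), and intersects the two rays of eqs. (1)-(3) to obtain the CP position in the X-Z plane:

    import math

    def pixel_to_sensor(p_x, c_x, d_x, s_x, n_x, n_fx):
        # Eq. (5): offset (in mm) of an image point from the principal point on the
        # CCD chip, starting from its frame-grabber pixel x-coordinate p_x.
        return (p_x - c_x) * d_x * n_x / (s_x * n_fx)

    def visual_angle(u, f):
        # Eq. (4): visual angle of a landmark from its image offset u and focal length f.
        return math.atan(u / f)

    def locate_cp(l1, l2, theta, alpha1, alpha2):
        # Intersect the two rays r1, r2 through the landmarks l1 = (x1, z1), l2 = (x2, z2).
        # theta is the heading angle, alpha1 and alpha2 the visual angles.
        m1 = math.tan(theta + alpha1)       # slope of r1, eq. (2)
        m2 = math.tan(theta + alpha2)       # slope of r2
        p1 = l1[1] - m1 * l1[0]             # intercepts: each line passes through its landmark
        p2 = l2[1] - m2 * l2[0]
        x_p = (p2 - p1) / (m1 - m2)         # intersection of z = m1*x + p1 and z = m2*x + p2
        z_p = m1 * x_p + p1
        return x_p, z_p

With three landmarks, locate_cp can be applied to each of the three landmark pairs, giving the three estimates A, B and C used in sect. 3.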

3. Uncertainty estimation

Figure 3. Uncertainty evaluation geometry.

In order to estimate the uncertainty we use a geometric consideration. Three landmarks are used and, for each pair of them, an estimate of the location is obtained, so that globally three estimates are recovered. In fig. 3 these three estimates are labelled A = (xA; zA), B = (xB; zB) and C = (xC; zC). Considering the triangle ABC and x_min = min(xA, xB, xC), x_max = max(xA, xB, xC), z_min = min(zA, zB, zC), z_max = max(zA, zB, zC), the rectangle DEFG enclosing ABC is determined. The rectangle has its sides parallel to the axes of the world reference system and its vertices are D = (x_min; z_min), E = (x_max; z_min), F = (x_max; z_max) and G = (x_min; z_max). The uncertainty is represented by the ellipse circumscribed about the rectangle DEFG, as drawn in fig. 3. For multisensor fusion within a framework such as the Extended Kalman Filter, independent standard deviations of the TV camera location in the x-direction and z-direction may be preferred. So we propose to take the center of the ellipse as the location estimate and the axis lengths as the standard deviations in the x and z directions, respectively. The uncertainty in the TV camera orientation is not directly connected to the visual location method; it depends exclusively on the uncertainty of the heading sensor and can be estimated on a statistical basis using repeated readings of the heading sensor at known angles.
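A minimal Python sketch of the rectangle-and-ellipse construction above (illustrative only, with assumed names; the paper does not state whether the full or the semi axis length is used, so the sqrt(2) circumscription factor below is an assumption):

    import math

    def uncertainty_ellipse(estimates):
        # estimates: the three pairwise CP estimates A, B, C as (x, z) tuples.
        xs = [x for x, _ in estimates]
        zs = [z for _, z in estimates]
        x_min, x_max = min(xs), max(xs)
        z_min, z_max = min(zs), max(zs)
        # Center of the rectangle DEFG = center of the circumscribed ellipse,
        # taken as the final location estimate.
        center = ((x_min + x_max) / 2.0, (z_min + z_max) / 2.0)
        # Semi-axes of the axis-aligned ellipse through the rectangle corners:
        # sqrt(2) times the rectangle half-sides.
        sigma_x = (x_max - x_min) / 2.0 * math.sqrt(2.0)
        sigma_z = (z_max - z_min) / 2.0 * math.sqrt(2.0)
        return center, (sigma_x, sigma_z)

The returned spreads can then be used as independent standard deviations along x and z in an Extended Kalman Filter, as suggested above.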

Figure 4. View of the environment and position of the landmarks.

4. Experimental results

The method for self-location has been tested on our mobile robot SAURO, shown in fig. 5. SAURO is based on a TRC LABMATE mobile base controlled by an on-board VME-bus computer. The imaging system is the VME-board IPP ELTEC frame grabber. The processing is performed on a Motorola 68040 board.

Figure 5. Our mobile vehicle SAURO.

The TV camera used is an ADIMEC M5 mounted on an RS232-controlled pan-tilt motorized head. The acquired images are 744 x 574 pixels. The heading sensor used is a KVH C100 fluxgate compass connected through an RS232 link to the Motorola 68040 board. The C100 compass provides on-board firmware to estimate correction coefficients that compensate the sensor readings for the magnetic influence of the metallic components of the mobile vehicle. The heading angle θ of the TV camera (referring to fig. 2, the orientation of the optical axis) is thus recovered, using the compass, as the sum of two angles:

    θ = ψ + ω    (6)

where ψ is the angle between the x-axis of the world reference system and the magnetic NORTH direction detected by the compass, while ω is the current pan angle of the pan-tilt head.

Figure 6. (a) Experimental results of self-location on a path. The positions where self-location is applied are shown on the path, and the uncertainty is represented by ellipses.

In order to verify the reliability of self-location in a navigation context, SAURO was asked to perform a path to a goal position. The path consisted of the segments shown in fig. 6 (real locations). The navigation strategy is as follows (a schematic sketch of this loop is given below, after Table 1):

1. The vehicle starts from an unknown position with an unknown orientation.
2. Using the pan-tilt head the landmarks are searched for. Then, by self-location, the mobile vehicle estimates its position, orientation and the relative uncertainty in the world reference system.
3. The rotation and translation needed to reach the next stop position are evaluated and the vehicle moves on.

landmark distance | real location (x, z)  | estimated location (x, z) | uncertainty (x, z)
 963              | (-8.94,   -407.0)     | (-76.45,  -04.86)         | (575.53, 534.89)
 800              | (-980.6,  -337.75)    | (-480.38, -3474.57)       | (79.8,   38.33)
 655              | (80.69,   -673.60)    | (45.3,    -4.69)          | (60.09,  550.34)
 5800             | (488.3,   -000.)      | (45.80,   -645.4)         | (578.90, 537.85)
 5736             | (493.37,  -3356.80)   | (03.33,   -333.6)         | (477.56, 380.3)
 444              | (305.68,  -097.77)    | (809.9,   -747.35)        | (363.59, 443.90)
 349              | (4049.47, -3665.40)   | (385.59,  -3799.8)        | (34.08,  59.3)
 336              | (458.36,  -48.84)     | (4487.8,  -4978.70)       | (395.6,  347.06)

Table 1. Path performed by SAURO. The focal length of the camera was 6 mm. Data are in mm.

The described procedure is repeated until the vehicle reaches the goal. The positions estimated by self-location are compared with the real positions in fig. 6, while the same data are tabulated in Table 1.
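The navigation strategy can be summarized by a small control loop. The Python sketch below is only schematic: read_heading, search_landmarks, self_locate and move_to are hypothetical callables standing in for SAURO's actual sensing and motion interfaces, not part of the paper.

    def goto_goal(stop_positions, read_heading, search_landmarks, self_locate, move_to):
        # Drive through the a-priori known stop positions, re-localizing at each one.
        for target in stop_positions:
            landmarks = search_landmarks()                         # step 2: pan-tilt search for the LEDs
            pose, sigma = self_locate(landmarks, read_heading())   # position, orientation, uncertainty
            move_to(target, pose)                                  # step 3: rotate and translate to the next stop
            # The pan-tilt head can already start turning toward the next landmark
            # setup here, since the path positions are known in advance.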

The robot location was verified using a D-50 NIKON TOTAL STATION (theodolite), which can estimate the coordinates of a point in space with an error of 5 mm. The D-50 was also used to determine the 3D locations of the landmarks in the world reference system. Fig. 6 shows that the error in self-location depends on the distance from the landmarks: the nearer the landmarks, the smaller the error in the robot location. The axes of the ellipses represent the standard deviation of the error made in determining the camera location by the visual self-location method. Fig. 6 also shows that the error grows with both the distance from the landmarks and the view angle of the camera. In fact, the distance, like the view angle, modifies the appearance of the landmarks on the image plane of the camera, increasing the unreliability of the visual angle estimation.

The self-location estimation time is about 40 ms, but the global time depends on the time taken by the pan-tilt head to move so that the camera can see all the landmarks. The speed of the head is 300 deg/sec. In order to reduce the global time, the head starts to move during the motion of the vehicle from one path position to the next. This is possible because the path positions are known a priori. Furthermore, no specific configuration of the landmarks is required for self-location, so the LEDs are placed in the environment so as to have maximum visibility. However, in real contexts, either to reduce the self-location error or to resolve unforeseen landmark occlusion situations, we have chosen to use several setups of landmarks. Since the path is known, at each position the vehicle can choose the setup of landmarks that will produce the lowest uncertainty in the camera localization.

5. Conclusions

This paper describes a new method for camera location suitable for mobile robot navigation in indoor environments, running on a 68040 computer. Very promising results have been obtained from tests on our SAURO mobile robot. The next step in our research is to substitute natural landmarks for the LEDs and to develop a mobile robot capable of operating as a transportation system in any indoor environment.

References

[1] A. Kosaka, A. Kak. Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties. CVGIP: Image Understanding, 56(3), November 1992.
[2] D. J. Kriegman, E. Triendl, T. O. Binford. Stereo vision and navigation in buildings for mobile robots. IEEE Trans. on Robotics and Automation, 5(6):792-803, 1989.
[3] E. Krotkov. Mobile robot localization using a single image. Proc. of IEEE Conf. on Robotics and Automation, 1989.
[4] A. Distante, E. Stella. Self-location of a mobile robot by estimation of camera parameters. Robotics and Autonomous Systems - Elsevier, 15:179-187, 1995.
[5] P. A. Beardsley et al. The correction of radial distortion in images. Technical Report 1896/91, Department of Engineering Science, Univ. of Oxford (UK).
[6] H. P. Moravec. The Stanford Cart and the CMU Rover. Proceedings of the IEEE, pages 872-884, July 1983.
[7] H. S. Cho, J. H. Kim. Experimental investigation for a mobile robot's position by linear scanning of a landmark. Robotics and Autonomous Systems - Elsevier, 13, June 1994.
[8] J. Weng, P. Cohen, M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE PAMI, 14(10), 1992.
[9] R. K. Lenz, R. Y. Tsai. Techniques for calibration of the scale factor and image center for high accuracy 3D vision metrology. Proc. IEEE Int. Conf. on Robotics and Automation, March 1987.
[10] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robotics and Automation, RA-3(4), 1987.
[11] T. D'Orazio, A. Distante, G. Attolico, L. Caponetti, E. Stella. A vision algorithm for mobile vehicle navigation. SPIE Proceedings on Advances in Intelligent Robotic Systems, November 1991.