
596    IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 13, NO. 4, AUGUST 1997

Short Papers

Adaptive Navigation of Mobile Robots with Obstacle Avoidance

Atsushi Fujimori, Peter N. Nikiforuk, and Madan M. Gupta

Abstract: A local navigation technique with obstacle avoidance, called adaptive navigation, is proposed for mobile robots in which the dynamics of the robot are taken into consideration. The only information needed about the local environment is the distance between the robot and the obstacles in three specified directions. The navigation law is a first-order differential equation, and navigation to the goal and obstacle avoidance are achieved by switching the direction angle of the robot. The effectiveness of the technique is demonstrated by means of simulation examples.

Index Terms: Adaptive navigation, mobile robot, navigation, obstacle avoidance.

I. INTRODUCTION

Robot navigation problems can generally be classified as global or local [1], depending upon the environment surrounding the robot. In global navigation, the environment surrounding the robot is known and a path which avoids the obstacles is selected. In one example of the global navigation techniques, graphical maps which contain information about the obstacles are used to determine a desirable path [2]. In local navigation, the environment surrounding the robot is unknown, or only partially known; sensors have to be used to detect the obstacles, and a collision avoidance system must be incorporated into the robot. The artificial potential field approach is one of the well-known techniques developed for this purpose [1], [3]-[6]. Krogh, for example, used a generalized potential field approach to obstacle avoidance [3]. Kim and Khosla used instead harmonic potential functions for obstacle avoidance [6]. On the other hand, Krogh and Feng used the dynamic generation of subgoals based on local feedback information [5]. Robots do have dynamic characteristics, and such characteristics must be taken into account in path planning. The stability of the navigation law is also important.

This paper focuses on the local navigation problem, taking the dynamics of the robot into consideration. The goal which the robot should reach is known, but the geometry and the location of the obstacles are unknown. The robot discussed in this paper can move in three directions, forward, left, or right, and has three distance sensors to detect the obstacles in real time. The direction angle of the robot's movement is determined by the direction angle of the goal and the sensor signals. The navigation law is given by a first-order differential equation, by means of which the goal can be reached while avoiding the obstacles. The conditions for obstacle avoidance and stability are given. The advantages of the proposed navigation scheme are that less local information is required than in some other techniques, and that the navigation law is simpler and more flexible than those using the artificial potential field methods. It is for this reason that it is called adaptive navigation in this paper.

Manuscript received June 4, 1994; revised April 1995. This paper was recommended for publication by Associate Editor R. Chatila and Editor S. E. Salcudean upon evaluation of the reviewers' comments.
A. Fujimori is with the Department of Energy and Mechanical Engineering, Shizuoka University, Hamamatsu 432, Japan.
P. N. Nikiforuk and M. M. Gupta are with the Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada S7N 5A9 (e-mail: GUPTAM@sask.usask.ca).
II. 2-D MOBILE ROBOT AND ADAPTIVE NAVIGATION

Consider the two-dimensional (2-D) navigation problem shown in Fig. 1, where the position and velocity of the robot are represented by the Cartesian coordinates (x(t), y(t)) and v(t), where t is time. The starting and goal points of the robot are, respectively, (x_0, y_0) and (0, 0). Its direction angle is θ(t), (-π ≤ θ(t) < π), which is measured from the x-axis and has the initial value θ_0. There may be obstacles in the plane of motion, and the objective is to navigate the robot to the goal while avoiding the obstacles. The following assumptions are made with respect to the robot.

(A1) It moves only in the forward direction.
(A2) It turns to the right or left with the minimum rotation radius r_min.
(A3) Its velocity v(t) is constant except near the goal.

The equations of motion of the robot are then

    \dot{x}(t) = v(t)\cos\theta(t), \qquad \dot{y}(t) = v(t)\sin\theta(t).    (1)

Assumption (A2) can be represented as the following inequality:

    |\dot{\theta}(t)| \le \frac{v(t)}{r_{\min}}.    (2)

Consider the quadratic index

    E(t) = \frac{1}{2}[\theta(t) - \theta_d(t)]^2    (3)

where θ_d(t) is a desirable direction angle for navigating the robot to the goal, or for avoiding obstacles. If there were no obstacles and Assumption (A2) were not taken into account, the robot would instantly turn toward the goal at the start of its motion and then move directly toward it. This is the optimal path [8], but it is impossible due to the dynamics of the robot. For this reason, the following navigation law is proposed:

    \dot{\theta}(t) = -\lambda[\theta(t) - \theta_d(t)]    (4)

where λ is a positive constant. This is the gradient descent method used to steer the direction angle toward the minimum of E(t). Since this process can be used not only to navigate the robot to its goal, but also to avoid obstacles by switching θ_d(t) based on the information available from the three distance sensors, θ_d(t) is called the direction angle command and the technique adaptive navigation.

For a better understanding of what is involved, consider first the case of navigation without an obstacle. Let θ_t(t) be the desirable direction angle for navigating the robot to the goal. In this case, θ_d(t) = θ_t(t), where

    \theta_t(t) = \begin{cases} \phi(t) + \pi & (\phi(t) \le 0) \\ \phi(t) - \pi & (\phi(t) > 0) \end{cases}    (5)

    \phi(t) = \tan^{-1}\frac{y(t)}{x(t)}, \qquad (-\pi \le \phi(t) < \pi)    (6)

and φ(t) is the position angle.

(Footnote: "Adaptive" as used in this paper is not the same as conventionally used in automatic control. The direction angle command θ_d(t) is selected adaptively, taking into account the sensor information and the goal direction.)
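As an aside to make the law concrete, the following minimal Python sketch integrates (1) and (4) with the goal command (5)-(6). The speed, gain, and time step are assumed illustrative values, not parameters from the paper, and angle wrapping is omitted for brevity.

```python
import math

v = 1.0     # constant forward speed, Assumption (A3) (assumed value)
lam = 2.0   # positive gain of the navigation law (4) (assumed value)
dt = 0.01   # Euler integration step

def goal_angle(x, y):
    """Desirable direction angle theta_t toward the goal (0, 0), eqs. (5)-(6)."""
    phi = math.atan2(y, x)                       # position angle, -pi <= phi < pi
    return phi + math.pi if phi <= 0 else phi - math.pi

x, y, theta = -5.0, -3.0, 0.0                    # starting point and initial angle
for _ in range(20000):
    theta_d = goal_angle(x, y)                   # direction angle command (no obstacles)
    theta += -lam * (theta - theta_d) * dt       # navigation law (4)
    x += v * math.cos(theta) * dt                # equations of motion (1)
    y += v * math.sin(theta) * dt
    if math.hypot(x, y) < 0.1:                   # close enough to the goal
        break

print(f"final position: ({x:.3f}, {y:.3f})")
```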

Fig. 1. Navigation with obstacles.

When the robot is far from the goal, θ_t(t) is essentially constant. Since (4) is then a linear first-order differential equation with a positive constant λ, θ(t) monotonically approaches θ_t(t) for a step input θ_t(t), and the maximum of the turning rate is |\dot{\theta}(0)| = λ|θ(0) - θ_t(0)|. It is expected that the robot turns toward the goal and approaches it directly. However, since φ(t) changes with the coordinates (x(t), y(t)) of the robot, θ_t(t) also changes with time. When the robot is near the goal, consideration must therefore be given to the stability of the navigation law, and this is done in Section IV.

III. AVOIDANCE BEHAVIOR

The robot has three distance sensors which measure the distances between it and the obstacles. Let d_c, d_r, and d_l be the distances in the center, right, and left directions, as shown in Fig. 2(a), where the right and left sensors are inclined from the center by an angle γ. The following assumptions are made for the distance sensors and obstacles.

(A4) The maximum measurable range of the distance sensors is d_max. When a sensor does not detect an obstacle, or the distance is greater than d_max, the sensor produces a negative output.
(A5) All obstacles are modeled as convex polygons whose vertex angles are sufficiently large.
(A6) The distance between any two obstacles is greater than d_max.

Assumption (A4) represents a practical situation. The purpose of Assumption (A5) is to ensure the detection of the obstacles by the three distance sensors. Assumption (A6), as will be explained later, is needed for obstacle avoidance.

Three Positive Detection Case: Fig. 2 shows all situations under which the robot encounters obstacles. When obstacles are detected in all three directions and d_l ≥ d_r, as shown in Fig. 2(a), the distance in the left direction is greater than in the right direction and the robot should steer to the left. Letting ε be the angle shown in Fig. 2(a), the robot should turn to the left by ε to avoid the obstacle; when d_l < d_r, the robot should turn to the right by ε. Embedding this avoidance behavior into the navigation law (4), the desirable direction angle for avoiding the obstacle, θ_a(t), is given by

    \theta_a(t) = \theta(t) + \mathrm{sgn}(d_l - d_r)\,\varepsilon    (7)

where

    \varepsilon = \tan^{-1}\frac{\bar{d}\cos\gamma - d_c}{\bar{d}\sin\gamma}, \qquad |\varepsilon| < \frac{\pi}{2} - \gamma    (8)

    \bar{d} = \max(d_r, d_l)    (9)

    \mathrm{sgn}(x) = \begin{cases} +1 & (x \ge 0) \\ -1 & (x < 0). \end{cases}    (10)

Two Positive Detection Case: In Fig. 2(b) and (c), an obstacle is detected in front of the robot; that is, d_c > 0. In the former, d_c > d_r cos γ and d_l < 0. This indicates that a collision-free space extends to the left, and θ_a(t) is then given by

    \theta_a(t) = \theta(t) + \mathrm{sgn}(d_l - d_r)\,\varepsilon + \mathrm{sgn}(d_l - d_r)\,\gamma.    (11)

In the latter, since d_c < d_l cos γ and d_r < 0, the right margin may be larger than the left margin. However, it may happen that an obstacle exists on the right side but is not detected at this instant. To assure obstacle avoidance, the robot should turn to the left by ε, as shown in Fig. 2(c), and the desirable direction angle is given by (7). When d_c < 0, as shown in Fig. 2(d), steering is not needed because there is no obstacle in the center direction. Therefore, the desirable direction angle is θ_a(t) = θ(t).
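As an illustration, the detection cases above translate directly into code. The sketch below follows the equations as reconstructed here, including the one-positive-detection command (12) introduced next; the sensor offset γ = π/6 is an assumed value, and negative readings mean "no detection" per (A4). It is a sketch, not the authors' implementation.

```python
import math

GAMMA = math.pi / 6   # assumed sensor offset angle gamma

def sgn(x):
    """Eq. (10): +1 for x >= 0, -1 for x < 0."""
    return 1.0 if x >= 0 else -1.0

def epsilon(d_c, d_bar, gamma=GAMMA):
    """Eq. (8): steering angle determined by the detected obstacle face."""
    return math.atan((d_bar * math.cos(gamma) - d_c) / (d_bar * math.sin(gamma)))

def avoidance_angle(theta, d_c, d_r, d_l, gamma=GAMMA):
    """Desirable direction angle theta_a for the detection cases of Fig. 2."""
    if d_c > 0 and d_r > 0 and d_l > 0:       # Fig. 2(a): three detections, eq. (7)
        return theta + sgn(d_l - d_r) * epsilon(d_c, max(d_r, d_l), gamma)
    if d_c > 0 and d_r > 0 and d_l < 0:       # Fig. 2(b): free space to one side, eq. (11)
        return theta + sgn(d_l - d_r) * (epsilon(d_c, d_r, gamma) + gamma)
    if d_c > 0 and d_l > 0 and d_r < 0:       # Fig. 2(c): turn left by epsilon, eq. (7)
        return theta + sgn(d_l - d_r) * epsilon(d_c, d_l, gamma)
    if d_c > 0:                               # Fig. 2(f): obstacle dead ahead, eq. (12)
        return theta + math.pi / 2
    return theta                              # Fig. 2(d) and (e): no steering needed

print(avoidance_angle(0.0, 0.6, -1.0, -1.0))  # head-on case (f): turn left by pi/2
```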
One Positive Detection Case: Two cases are considered, as shown in Fig. 2(e) and (f). In the former, steering is not needed because d_c < 0; that is, θ_a(t) = θ(t). In the latter, a steering action is needed because d_c > 0. However, since both d_r and d_l are negative, the right and left margins are equal, and the robot is therefore set to turn to the left by π/2:

    \theta_a(t) = \theta(t) + \frac{\pi}{2}.    (12)

As shown in Fig. 2(a)-(f), the information from the three sensors does not completely define the geometry of the obstacles. However, since d_c, d_r, and d_l are updated in real time, and because of Assumption (A5), it is almost always possible to avoid the obstacles by using θ_a(t) given by (7)-(12).

IV. CONDITIONS FOR STABILITY AND OBSTACLE AVOIDANCE

A. Stability of the Navigation Law

Since navigation to the goal and obstacle avoidance are performed by switching the direction angle command θ_d(t) as described in the next section, adaptive navigation can be analyzed separately for the stability of the navigation law without obstacles and for the obstacle avoidance condition. First, the stability of the navigation law is discussed using the Lyapunov stability method.

Theorem 1: Without obstacles in the 2-D plane, and with the following inequality satisfied,

    \sqrt{x(t)^2 + y(t)^2} > \frac{v(t)}{\lambda}    (13)

θ(t) asymptotically approaches θ_t(t).

Proof: From (4)-(6),

    \dot{\theta}_t(t) = \dot{\phi}(t) = \frac{x\dot{y} - \dot{x}y}{x^2 + y^2} = \frac{v(x\sin\theta - y\cos\theta)}{x^2 + y^2} = \frac{v}{\sqrt{x^2 + y^2}}\sin(\theta - \phi) = -\frac{v}{\sqrt{x^2 + y^2}}\sin(\theta - \theta_t).    (14)

The equilibrium point of (4) with (14) is θ = θ_t. Since E(t) = 0 at θ = θ_t and E(t) > 0 for θ ≠ θ_t, E(t) is a candidate Lyapunov function. The time derivative of E(t) is

    \dot{E}(t) = (\theta - \theta_t)(\dot{\theta} - \dot{\theta}_t) = -(\theta - \theta_t)^2\left[\lambda - \frac{v}{\sqrt{x^2 + y^2}} \cdot \frac{\sin(\theta - \theta_t)}{\theta - \theta_t}\right].    (15)

Since |sin(θ - θ_t)| ≤ |θ - θ_t|, it follows from (15) that

    \dot{E}(t) < 0 \quad \text{for } \theta \ne \theta_t    (16)

whenever (13) is satisfied. The sufficient condition for \dot{E}(t) < 0 is thus given by (13).
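Theorem 1 invites a quick numerical sanity check: along a simulated trajectory, E(t) should be non-increasing as long as (13) holds. The sketch below uses assumed values v = 1 and λ = 2, so (13) reads sqrt(x^2 + y^2) > 0.5.

```python
import math

v, lam, dt = 1.0, 2.0, 0.001          # assumed values; (13) requires r > v/lam = 0.5

def theta_t(x, y):
    """Goal direction angle, eqs. (5)-(6)."""
    phi = math.atan2(y, x)
    return phi + math.pi if phi <= 0 else phi - math.pi

x, y, theta = -4.0, 2.0, 2.5
E_prev = 0.5 * (theta - theta_t(x, y)) ** 2
for _ in range(200000):
    if math.hypot(x, y) <= v / lam + 0.05:         # stop just before (13) can fail
        break
    theta += -lam * (theta - theta_t(x, y)) * dt   # navigation law (4)
    x += v * math.cos(theta) * dt                  # equations of motion (1)
    y += v * math.sin(theta) * dt
    E = 0.5 * (theta - theta_t(x, y)) ** 2         # quadratic index (3)
    assert E <= E_prev + 1e-6, "E(t) increased while (13) held"
    E_prev = E

print("E(t) was non-increasing while condition (13) held")
```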

Fig. 2. Avoiding behaviors of the mobile robot. (a) Three positive detections (d_l ≥ d_r). (b) Two positive detections, case (d_c > d_r cos γ; d_l < 0). (c) Two positive detections, case (d_c < d_l cos γ; d_r < 0). (d) Two positive detections, case (d_c < 0). (e) One positive detection, case (d_l > 0; d_c, d_r < 0). (f) One positive detection, case (d_c > 0; d_l, d_r < 0).

An insight into the navigation of mobile robots can be obtained from Theorem 1. Suppose that the starting position (x_0, y_0) is sufficiently far from the goal (0, 0), and that v(t) = v (= const) for t ≥ 0. From Theorem 1, θ(t) asymptotically approaches θ_t(t) while (13) is satisfied. However, if the robot is near the goal, (13) may be violated and \dot{E}(t) > 0 may occur. To ensure that the robot reaches the goal, a special operation is needed near the goal. This is discussed in the following section.

B. Condition for Obstacle Avoidance

In general, the additional steering angle θ_a(t) - θ(t) changes with time t. However, when the three distances d_c, d_r, and d_l are measured on one side of a convex obstacle, θ_a(t) - θ(t) does not change, because the additional steering angle is determined by the geometry of the obstacle, as shown in Fig. 2(a)-(c) and (f). A sufficient condition for obstacle avoidance is given in the following theorem.

TABLE I. PARAMETERS OF THE MOBILE ROBOT USED IN SIMULATION

Theorem 2: Suppose that the velocity of the robot is constant and Assumption (A5) is satisfied. When the robot detects one side of an obstacle as shown in Fig. 2(a)-(c) and (f), a sufficient condition for obstacle avoidance is given by

    d_{\max} \ge \frac{v}{\lambda}\,\mathrm{Si}\!\left(\frac{\pi}{2}\right)    (17)

where Si(x) is the sine integral function defined as

    \mathrm{Si}(x) = \int_0^x \frac{\sin t}{t}\,dt.    (18)

Proof: θ_a(t) is constant unless the location of the sensor detection changes. Suppose that the robot encounters an obstacle at t = t_0 and detects one side of the obstacle as shown in Fig. 2(a)-(c) and (f). θ_a(t) is then written as

    \theta_a(t) = \theta(t_0) + \Delta \qquad (\Delta = \text{const})    (19)

where Δ is the steering angle to avoid the obstacle: Δ = ε for the conditions in Fig. 2(a) and (c), and Δ = γ + ε and Δ = π/2 for the conditions shown in Fig. 2(b) and (f), respectively. According to the avoidance behavior described in the previous section, the maximum of |Δ| is π/2. Since it is sufficient to consider the avoidance behavior here, t_0 can be set to 0 without loss of generality. Integrating (4) with (19), θ(t) is given by

    \theta(t) = e^{-\lambda t}\theta(0) + \int_0^t \lambda e^{-\lambda(t-\tau)}\theta_a\,d\tau = \theta(0) + \Delta(1 - e^{-\lambda t}) \quad \text{for } t \ge 0.    (20)

In particular, for Δ = π/2,

    \theta(t) = \theta(0) + \frac{\pi}{2}(1 - e^{-\lambda t}) \quad \text{for } t \ge 0.    (21)

Let ℓ(t) be the distance traveled in the direction of θ(0). The time derivative of ℓ(t), that is, the velocity of the robot in the direction of θ(0), is given by \dot{\ell}(t) = v cos(θ(t) - θ(0)). The distance which the robot moves in the direction of θ(0) during the avoidance behavior is then given by

    \ell(\infty) = \int_0^\infty v\cos(\theta(t) - \theta(0))\,dt.    (22)

In (21), let t_1 be the time at which θ(t_1) = θ(0) + (π/2 - |Δ|) (|Δ| < π/2); that is,

    t_1 = -\frac{1}{\lambda}\log\frac{2|\Delta|}{\pi}.    (23)

Since (20) and (21) are linear first-order systems, the trajectory of (20) with |Δ| < π/2 on t ∈ [0, ∞) is the same as that of (21) on t ∈ [t_1, ∞), even if Δ is negative. Therefore, for Δ = π/2, (22) becomes

    \ell(\infty) = \int_0^\infty v\sin\!\left(\frac{\pi}{2}e^{-\lambda t}\right)dt = \frac{v}{\lambda}\int_0^{\pi/2}\frac{\sin\sigma}{\sigma}\,d\sigma = \frac{v}{\lambda}\,\mathrm{Si}\!\left(\frac{\pi}{2}\right).    (24)

The collision of the robot with the obstacle is avoided if ℓ(∞) ≤ d_max. Moreover,

    \max_{\Delta}\,\ell(\infty) = \frac{v}{\lambda}\,\mathrm{Si}\!\left(\frac{\pi}{2}\right).    (25)

Therefore, (17) is the sufficient condition.

Fig. 3. Trajectories of the mobile robot without obstacles.

Theorem 2 gives a lower bound on d_max for obstacle avoidance. If Theorem 2 and Assumption (A6) are satisfied, ℓ(∞) in all detection cases in which a steering action is needed is shorter than d_max, and collision avoidance for all the obstacles is almost always accomplished.
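The bound (17) is easy to evaluate numerically; for instance, with SciPy's sine integral (the speed and gain below are assumed values):

```python
import math
from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

v, lam = 1.0, 2.0                # assumed speed and navigation gain
si_half_pi, _ = sici(math.pi / 2)

print(f"Si(pi/2) = {si_half_pi:.4f}")                 # approx 1.3708, roughly 7/5
print(f"condition (17): d_max >= {v / lam * si_half_pi:.4f}")
```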

V. DESIGN OF ADAPTIVE NAVIGATION

A. Combination of Avoidance Behavior with Navigation

The basic concept is that when obstacles are detected, priority must be given to avoidance behavior over navigating to the goal. However, if the navigating action already covers the avoidance behavior, the robot is navigated to the goal according to θ_t(t) given in (5). This concept can be realized by switching the direction angle command θ_d(t) in (4).

Consider now the possibility of the navigation action even when the robot detects obstacles. In Fig. 2(a)-(c) and (f), a steering action is needed to avoid obstacles. If one of the following inequalities is maintained,

    \theta_t(t) \le \theta_a(t) \le \theta(t)    (26)

or

    \theta_t(t) \ge \theta_a(t) \ge \theta(t)    (27)

then navigating to the goal is sufficient for avoiding the obstacles. Therefore, the direction angle command in (4) is given by θ_d(t) = θ_t(t). In Fig. 2(e), a steering action is not needed; θ_d(t) is also given by θ_t(t) if one of the following conditions holds:

    \theta_t(t) \ge \theta(t); \quad d_l < 0,\; d_r > 0    (28)

or

    \theta_t(t) \le \theta(t); \quad d_l > 0,\; d_r < 0.    (29)

In Fig. 2(d), a steering action is likewise not needed, but unlike Fig. 2(e), obstacles are detected in both the right and left directions. Thus, θ_d(t) is given by θ_d(t) = θ_a(t) = θ(t). For convenience, θ_d(t) = θ_t(t) and θ_d(t) = θ_a(t) are, respectively, called the navigation and avoidance modes hereafter. Summarizing the above discussion, an algorithm to decide θ_d(t) is given as follows:

begin
  if d_c < 0 and d_l, d_r > 0 then
    θ_d(t) = θ_a(t)
  else if d_c > 0 then
    if θ_t(t) ≤ θ_a(t) ≤ θ(t), or θ_t(t) ≥ θ_a(t) ≥ θ(t), then
      θ_d(t) = θ_t(t)
    else
      θ_d(t) = θ_a(t)
    end if
  else
    if (d_l < 0 and θ_t(t) ≥ θ(t)), or (d_r < 0 and θ_t(t) ≤ θ(t)), then
      θ_d(t) = θ_t(t)
    else
      θ_d(t) = θ_a(t)
    end if
  end if
end

B. Design of Navigation Parameters

The navigation parameters are λ, v(t), and d_max. For the navigation and avoidance modes, |θ(t) - θ_d(t)| is bounded by π and π/2, respectively. Using (2) and (4), λ is given by

    \lambda = \frac{v}{r_{\min}}    (30)

in order for Assumption (A2) to apply. When (13) is violated, the robot may not reach the goal. To make the robot reach the goal, the navigation law has to be switched to another mode, described below. Let t_f be the time at which the following equation holds:

    \sqrt{x(t_f)^2 + y(t_f)^2} = r_{\min}.    (31)

Equations (13) and (30) were used to derive (31). Then θ(t) and v(t) are switched as

    \theta(t): \text{ governed by (4)} \quad (0 \le t < t_f); \qquad \theta(t) = \theta(t_f) \quad (t_f \le t \le t_f + t_s)    (32)

    v(t) = v \quad (0 \le t < t_f); \qquad v(t) = v\left(1 - \frac{t - t_f}{t_s}\right) \quad (t_f \le t \le t_f + t_s)    (33)

where

    t_s = \frac{2 r_{\min}}{v}.    (34)

To distinguish it from the navigation and avoidance modes, the mode using the lower equations in (32) and (33) is called the final mode. From (31) and (32), the final arrival location error e_f is calculated as

    e_f = 2 r_{\min} \sin\frac{|\theta(t_f) - \theta_t(t_f)|}{2}.    (35)

Since Si(π/2) ≈ 7/5 from [9], the maximum measurable range of the distance sensors d_max is given by the following, using (17) and (30):

    d_{\max} \ge \frac{7}{5}\,r_{\min}.    (36)
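As an illustration, the decision algorithm above can be translated line by line into Python. The function below mirrors the pseudocode; θ_t and θ_a would be supplied by the goal_angle and avoidance_angle sketches given earlier.

```python
def direction_command(theta, theta_t, theta_a, d_c, d_r, d_l):
    """Select the direction angle command theta_d following the algorithm above."""
    def between(lo, mid, hi):
        return lo <= mid <= hi

    if d_c < 0 and d_l > 0 and d_r > 0:        # Fig. 2(d): obstacles on both sides
        return theta_a                         # avoidance mode (theta_a = theta here)
    if d_c > 0:                                # obstacle detected ahead
        if between(theta_t, theta_a, theta) or between(theta, theta_a, theta_t):
            return theta_t                     # navigation already avoids, (26)/(27)
        return theta_a                         # avoidance mode
    # no center detection: at most one side sensor is positive
    if (d_l < 0 and theta_t >= theta) or (d_r < 0 and theta_t <= theta):
        return theta_t                         # navigation mode, conditions (28)/(29)
    return theta_a                             # avoidance mode
```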

VI. NUMERICAL SIMULATION

Three simulation examples were used to evaluate the adaptive navigation technique. In these simulations, the goal was placed at the origin (0, 0) and, for simplicity, the obstacles were made rectangular. Table I lists the initial conditions and the parameters of the robots used in the numerical simulation.

Example 1 (No Obstacle): Fig. 3 shows the trajectories of the robot. Arrows at the starting points indicate the initial directions of the robot. In Case 1, since θ_t(0) > θ_0, the robot steered to the left and reached the goal. Conversely, in Case 2, since θ_t(0) < θ_0, the robot steered to the right. In both cases, the navigation mode was switched to the final mode at the switching boundary shown by the dashed line. The radius of this circle, according to (31), was r_min.

Fig. 4. Direction angle θ(t) and the desired direction angle θ_d(t) of Case 1 in Example 1.

Fig. 5. Derivative of E(t) of Case 1 in Example 1.

Figs. 4 and 5, respectively, show the direction angle θ(t), the direction angle command θ_d(t), and the derivative of E(t) for Case 1. The robot was navigated to the goal by the navigation mode until the switching time t_f, denoted by the switching point in Fig. 5. Since \dot{E}(t) was negative in the navigation mode, θ(t) asymptotically approached θ_d(t). Although \dot{E}(t) became positive after the switching point, the robot was still brought to the goal by switching to the final mode; that is, the direction angle was fixed at θ(t_f) and the velocity of the robot decreased according to (33).

Fig. 6. Trajectories of the mobile robot with one obstacle.

Fig. 7. Direction and desired direction angles of Case 1 in Example 2.

Example 2 (One Obstacle): Fig. 6 shows the trajectories of the robot when an obstacle was present. The same starting point and initial direction angle were used in Cases 1, 2, and 3. The difference among them was the maximum measurable range of the distance sensors: d_max greater than (7/5)r_min in Case 1, d_max = (7/5)r_min in Case 2, and d_max = r_min (less than (7/5)r_min) in Case 3. It is seen from Fig. 6 that Theorem 2 is valid for obstacle avoidance. Fig. 7 shows the responses of θ(t) and θ_d(t) for Case 1. When the obstacle was first detected, the navigation mode was switched to the avoidance mode and θ_d(t) jumped to the avoidance command. Although θ_d(t) sometimes changed discontinuously, the trajectory of the robot was smooth, according to (1) and (4), as shown in Fig. 6. In Cases 1, 2, and 3, the center sensor first detected the obstacle. This situation corresponded to Fig. 2(f), and θ_a(t) was given by (12). On the other hand, since the situation of Case 4 corresponded to Fig. 2(b), the robot moved to the goal around the obstacle.

Fig. 8. Trajectories of the mobile robot with six obstacles.

Fig. 9. Direction and desired direction angles of Case 1 in Example 3.

Example 3 (Multiple Obstacles): In this case, there were six obstacles, as shown in Fig. 8. The robot started from three different starting points with three initial direction angles. Although the paths to the goal were different, collisions with the obstacles were avoided in all cases. Fig. 9 shows the responses of θ(t) and θ_d(t) of Case 1. By adaptively changing the modes, the robot avoided the obstacles using the information from the three distance sensors and reached the goal.
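For completeness, the sketches above can be combined into one illustrative run (reusing goal_angle, avoidance_angle, and direction_command from the earlier snippets). The rectangular obstacle, the ray-casting sensor model, and all numerical values are assumptions made for this example; they are not the settings of Table I.

```python
import math

v, r_min, gamma = 1.0, 0.5, math.pi / 6      # assumed parameters
lam = v / r_min                              # eq. (30)
d_max = 1.5 * r_min                          # satisfies (36): d_max >= (7/5) r_min
dt = 0.005
RECT = (-3.0, -1.5, -1.0, 1.5)               # one obstacle (xmin, ymin, xmax, ymax)

def ray_rect(px, py, ang, rect):
    """Distance along direction ang from (px, py) to rect, or -1 if no hit (slab method)."""
    dx, dy = math.cos(ang), math.sin(ang)
    t0, t1 = 0.0, float("inf")
    for p, d, lo, hi in ((px, dx, rect[0], rect[2]), (py, dy, rect[1], rect[3])):
        if abs(d) < 1e-12:
            if not lo <= p <= hi:
                return -1.0
        else:
            ta, tb = (lo - p) / d, (hi - p) / d
            t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
    return t0 if t0 <= t1 else -1.0

def sense(x, y, theta):
    """d_c, d_r, d_l with negative output beyond d_max, Assumption (A4)."""
    dists = [ray_rect(x, y, a, RECT) for a in (theta, theta - gamma, theta + gamma)]
    return [d if 0.0 < d <= d_max else -1.0 for d in dists]

x, y, theta = -6.0, 0.2, 0.0
reached = False
for _ in range(int(60 / dt)):                # cap the run at 60 simulated seconds
    if math.hypot(x, y) <= r_min:            # switching boundary, eq. (31)
        reached = True
        break
    d_c, d_r, d_l = sense(x, y, theta)
    th_t = goal_angle(x, y)
    th_a = avoidance_angle(theta, d_c, d_r, d_l, gamma)
    th_d = direction_command(theta, th_t, th_a, d_c, d_r, d_l)
    theta += -lam * (theta - th_d) * dt      # navigation law (4)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt

if reached:                                  # final mode: heading held fixed, eq. (32),
    t_s = 2 * r_min / v                      # and v(t) ramps to zero, eqs. (33)-(34)
    steps = int(t_s / dt)
    for k in range(steps):
        vk = v * (1 - k / steps)
        x += vk * math.cos(theta) * dt
        y += vk * math.sin(theta) * dt
print(f"final position: ({x:.2f}, {y:.2f})")
```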

VII. CONCLUDING REMARKS

A new navigation technique with obstacle avoidance, called adaptive navigation, has been proposed in this paper for mobile robots. The robot had three distance sensors, and the navigation law was given by a first-order differential equation. Navigation to the goal and obstacle avoidance were achieved by switching the desirable direction angle. Conditions for stability and obstacle avoidance were discussed, and an algorithm which navigates the robot to the goal while avoiding obstacles was presented. Three simulation examples were given to substantiate the validity of the theorems and the algorithm derived.

Since this paper mainly focused on an explanation of the concept of adaptive navigation, a number of conditions were imposed on the robot, and also on the obstacles. They included the direction of movement and the velocity of the robot, and the shape and location of the obstacles. Some of these may be removed through future research.

REFERENCES

[1] R. B. Tilove, "Local obstacle avoidance for mobile robots based on the method of artificial potentials," in Proc. IEEE Conf. Robotics Automat., Cincinnati, OH, 1990, pp. 566-571.
[2] R. A. Brooks, "Solving the find-path problem by good representation of free space," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 190-197, Mar./Apr. 1983.
[3] B. H. Krogh, "A generalized potential field approach to obstacle avoidance control," in Proc. Int. Robot. Res. Conf., Bethlehem, PA, Aug. 1984.
[4] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots," in Proc. IEEE Conf. Robot. Automat., 1985, pp. 500-505.
[5] B. H. Krogh and D. Feng, "Dynamic generation of subgoals for autonomous mobile robots using local feedback information," IEEE Trans. Automat. Contr., vol. 34, pp. 483-493, May 1989.
[6] J.-O. Kim and P. K. Khosla, "Real-time obstacle avoidance using harmonic potential functions," IEEE Trans. Robot. Automat., vol. 8, pp. 338-349, June 1992.
[7] D. Feng and B. H. Krogh, "Dynamic steering control of conventionally steered mobile robots," in Proc. IEEE Conf. Robot. Automat., Cincinnati, OH, 1990.
[8] A. E. Bryson and Y.-C. Ho, Applied Optimal Control. New York: Wiley, 1975.
[9] Akademii Nauk SSSR, Tables of the Exponential Integral Functions, 1954.

Model-Independent Recovery of Object Orientations

T. N. Tan, K. D. Baker, and G. D. Sullivan

Abstract: A novel algorithm is presented for determining the orientation of road vehicles in traffic scenes using video images. The algorithm requires no specific 3-D vehicle models and only uses local image gradient values. It may easily be implemented in real time. Experimental results with a variety of vehicles in routine traffic scenes are included to demonstrate the effectiveness of the algorithm.

Index Terms: Ground-plane constraint, model-based vision, recognition, traffic scene analysis, vehicle localization.

I. INTRODUCTION

The determination of object position and orientation from monocular perspective images is a fundamental problem in robot vision. Existing algorithms (e.g., [1]-[5]) typically entail the extraction of symbolic image features (e.g., line segments, vertices, ribbons, etc.) and the matching of such features with 3-D object models. Both feature extraction and matching are error-prone and time-consuming. Line segment extraction, for instance, is inherently an ill-defined problem, and is often critically dependent on empirically determined tuning parameters such as threshold and scale to achieve acceptable performance. Feature matching often involves the computation of inter-feature relationships and combinatorial search [6].

The problems of existing algorithms are, to a large extent, due to the large number of unknown pose parameters (three for orientation and three for position) that need to be computed. In many practical vision applications, however, the pose of an object often has a much smaller number of degrees of freedom (dof) because of known physical constraints. For example, under normal conditions, road vehicles are constrained to lie on the known ground plane (GP). Furthermore, vehicles have only one stable pose: the wheels must rest on the GP. This ground-plane constraint (GPC) reduces the number of dofs of a rigid object from six to three. The three dofs are most conveniently described by the location (X, Y) on the GP and the orientation (θ) about the normal of the GP. Although our primary interest in this paper is related to traffic scene analysis, other similar applications, such as the location and recognition of objects on a table or parts on a conveyor belt, are commonplace.

Manuscript received January 9, 1995; revised July 1995. This paper was recommended for publication by Associate Editor R. A. Jarvis and Editor A. J. Koivo upon evaluation of the reviewers' comments.
The authors are with the Department of Computer Science, The University of Reading, Whiteknights, Reading, Berkshire RG6 6AY, England (e-mail: T.Tan@reading.ac.uk).
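As a small illustration of the ground-plane constraint, a pose then needs only the triple (X, Y, θ): a rotation about the GP normal plus a translation on the GP. A minimal sketch (function names are mine, not the authors'):

```python
import math

def gp_pose(X, Y, theta):
    """4x4 homogeneous transform for a pose under the ground-plane constraint:
    rotation theta about the GP normal plus translation (X, Y) on the GP."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, X],
            [s,  c, 0.0, Y],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    """Apply a homogeneous transform to a 3-D point p = (x, y, z)."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3))

# A point on the vehicle (model coordinates) mapped into world coordinates:
print(apply(gp_pose(10.0, 5.0, math.radians(30)), (1.0, 0.0, 0.0)))
```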
Other common vision problems are also subject to an equivalent planar constraint, such as the location of landmarks by means of a robot-mounted camera which translates parallel to the ground [7]. In previous work [8], [9], we have shown that the GPC significantly simplifies model-based object localization. In particular, each match between a 2-D image line and a 3-D model line yields a simple independent constraint on the orientation θ. This allows the parameter θ to be computed independently of the location parameters X and Y. However, when the number of vehicle models to be considered is large, the model-based orientation recovery algorithm described in [8] and [9] could still be very slow.

The structure of common road vehicles consists predominantly of two sets of parallel lines: one along the length direction and one along the width direction [5]. This fact is exploited here to devise a novel algorithm which allows model-independent determination of vehicle orientations. The algorithm also eliminates the need for explicit symbolic feature extraction and image-to-model matching. The computational cost is thus substantially reduced. In fact, since the algorithm only requires local gradient data, the orientation can be determined directly from the input video images on the fly, and the overall algorithm can easily be implemented in real time.

We describe the coordinate systems in the next section, and introduce the orientation constraint in Section III. The algorithm for recovering the orientation parameter is detailed in Section IV, and experimental results are given in Section V.

II. COORDINATE SYSTEMS AND CAMERA GEOMETRY

Fig. 1. Coordinate systems and the camera model.

In this paper, lower-case bold letters are used to denote (row) vectors, and upper-case bold letters to symbolize matrices. The world coordinate system (WCS) is defined on the GP, with its X_w-Y_w plane coincident with the GP and its +Z_w-axis pointing upwards (see Fig. 1). For simplicity, the X_m-Y_m plane of the model coordinate system (MCS) is also chosen to be on the GP. The X_m- and Y_m-axes of the MCS are aligned, respectively, with the width and the length directions of the vehicle. The Z_m-axis also points upwards. Under the MCS so defined, the unit direction vectors of the two predominant sets of parallel lines of the vehicle are m_x = (1 0 0) for the widthwise set and m_y = (0 1 0) for the lengthwise set (both vectors expressed in the MCS). The transformation from the MCS to the WCS is described by a rotation angle θ (the object orientation) about the vertical axis and a translation (X, Y) on the GP. The camera is a pinhole camera