Short Papers. Adaptive Navigation of Mobile Robots with Obstacle Avoidance
596 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 13, NO. 4, AUGUST 1997

Short Papers

Adaptive Navigation of Mobile Robots with Obstacle Avoidance

Atsushi Fujimori, Peter N. Nikiforuk, and Madan M. Gupta

Abstract—A local navigation technique with obstacle avoidance, called adaptive navigation, is proposed for mobile robots in which the dynamics of the robot are taken into consideration. The only information needed about the local environment is the distance between the robot and the obstacles in three specified directions. The navigation law is a first-order differential equation, and navigation to the goal and obstacle avoidance are achieved by switching the direction angle of the robot. The effectiveness of the technique is demonstrated by means of simulation examples.

Index Terms—Adaptive navigation, mobile robot, navigation, obstacle avoidance.

I. INTRODUCTION

Robot navigation problems can generally be classified as global or local [1], depending upon the environment surrounding the robot. In global navigation, the environment surrounding the robot is known, and a path which avoids the obstacles is selected. In one example of the global navigation techniques, graphical maps which contain information about the obstacles are used to determine a desirable path [2]. In local navigation, the environment surrounding the robot is unknown, or only partially known; sensors have to be used to detect the obstacles, and a collision avoidance system must be incorporated into the robot. The artificial potential field approach is one of the well-known techniques which has been developed for this purpose [1], [3]–[6]. Krogh, for example, used a generalized potential field approach to obstacle avoidance [3]. Kim and Khosla used instead harmonic potential functions for obstacle avoidance [6]. Krogh and Feng, on the other hand, used the dynamic generation of subgoals using local feedback information [5].
Robots do have dynamic characteristics, and such characteristics must be taken into account in path planning. The stability of the navigation law is also important. This paper focuses on the local navigation problem, taking into consideration the dynamics of the robot. The goal which the robot should reach is known, but the geometry and the location of the obstacles are unknown. The robot discussed in this paper can move in three directions (forward, left, or right) and has three distance sensors to detect the obstacles in real time. The direction angle of the robot's movement is determined by the direction angle of the goal and the sensor signals. The navigation law is given by a first-order differential equation, by means of which the goal can be reached while the obstacles are avoided. The conditions for obstacle avoidance and stability are given. The advantages of the proposed navigation scheme are that less local information is required than in some other techniques, and that the navigation law is simpler and more flexible than those using the artificial potential field methods. It is for this reason that it is called adaptive navigation in this paper.¹

Manuscript received June 4, 1994; revised April 1995. This paper was recommended for publication by Associate Editor R. Chatila and Editor S. E. Salcudean upon evaluation of the reviewers' comments.

A. Fujimori is with the Department of Energy and Mechanical Engineering, Shizuoka University, Hamamatsu, Japan.

P. N. Nikiforuk and M. M. Gupta are with the Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada S7N 5A9 (e-mail: GUPTAM@sask.usask.ca).

Publisher Item Identifier S 1042-296X(97)86-.

II. 2-D MOBILE ROBOT AND ADAPTIVE NAVIGATION

Consider the two-dimensional (2-D) navigation problem shown in Fig. 1, where the position and velocity of a robot are represented by the Cartesian coordinates (x(t), y(t)) and by v(t), where t is time.
The starting and goal points of the robot are, respectively, (x₀, y₀) and (0, 0). Its direction angle is θ(t) (−π ≤ θ(t) < π), which is measured from the x-axis and has the initial value θ₀. There may be obstacles in the plane of motion, and the objective is to navigate the robot to the goal while avoiding the obstacles. The following assumptions are made with respect to the robot.

(A1) It moves only in the forward direction.

(A2) It turns to the right or left with the minimum rotation radius r_min.

(A3) Its velocity v(t) is constant except near the goal.

The equations of motion of the robot are then

  ẋ(t) = v(t) cos θ(t),  ẏ(t) = v(t) sin θ(t).  (1)

Assumption (A2) can be represented as the following inequality:

  |θ̇(t)| ≤ v(t)/r_min.  (2)

Consider the quadratic index

  E(t) = (1/2)[θ(t) − θ_c(t)]²  (3)

where θ_c(t) is a desirable direction angle for navigating the robot to the goal, or for avoiding obstacles. If there were no obstacles, and if Assumption (A2) were not taken into account, the robot would instantly turn toward the goal at the start of its motion and then move directly toward it. This is the optimal path [8], but it is impossible because of the dynamics of the robot. For this reason, the following navigation law is proposed:

  θ̇(t) = −η[θ(t) − θ_c(t)]  (4)

where η is a positive constant; that is, the direction angle is updated by gradient descent toward the minimum of E(t). Since this process can be used not only to navigate the robot to its goal, but also to avoid obstacles by switching θ_c(t) on the basis of the information available from the three distance sensors, θ_c(t) is called the direction angle command and the technique adaptive navigation.¹

For a better understanding of what is involved, consider the case of navigation without an obstacle. Let θ_t(t) be the desirable direction angle for navigating the robot to the goal. In this case, θ_c(t) = θ_t(t), where

  θ_t(t) = φ(t) + π  (φ(t) < 0),  φ(t) − π  (φ(t) ≥ 0)  (5)

  φ(t) = tan⁻¹(y(t)/x(t))  (−π ≤ φ(t) < π)  (6)

¹"Adaptive" as used in this paper is not the same as "adaptive" as conventionally used in automatic control.
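As a quick numerical illustration of the first-order navigation law (4), consider Euler integration with a fixed command angle. The step size, gain, and names below are our own illustrative choices, not values from the paper:

```python
import math

def navigate_step(theta, theta_c, eta, dt):
    """One Euler step of the first-order navigation law
    d(theta)/dt = -eta * (theta - theta_c), i.e., equation (4)."""
    return theta + dt * (-eta) * (theta - theta_c)

# With a constant command, theta converges exponentially toward theta_c.
theta, theta_c, eta, dt = 0.0, math.pi / 2, 2.0, 0.01
for _ in range(1000):
    theta = navigate_step(theta, theta_c, eta, dt)
```

After 10 s of simulated time the direction angle has essentially settled at the command, the monotonic first-order step response that the paper relies on.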
The direction angle command θ_c(t) is selected adaptively, taking into account the sensor information and the goal direction.

1042-296X/97$10.00 © 1997 IEEE
Fig. 1. Navigation with obstacles.

Here φ(t) is the position angle. When the robot is far from the goal, θ_t(t) is essentially constant. Since (4) is then a linear first-order differential equation with a positive constant η, θ(t) monotonically approaches θ_t(t) for a step input θ_t(t), and the maximum of |θ̇(t)| is |θ̇(0)| = η|θ(0) − θ_t(0)|. It is expected that the robot turns toward the goal and then approaches it directly. However, since φ(t) changes with the coordinates (x(t), y(t)) of the robot, θ_t(t) also changes with time. When the robot is near the goal, consideration must therefore be given to the stability of the navigation law, and this is done later.

III. AVOIDANCE BEHAVIOR

The robot has three distance sensors which measure the distances between it and the obstacles. Let d_c, d_r, and d_l be the distances in the center, right, and left directions, as shown in Fig. 2(a), where d_r and d_l are inclined from the center by an angle γ. The following assumptions are made for the distance sensors and obstacles.

(A4) The maximum measurable range of the distance sensors is d_max. When a sensor does not detect an obstacle, or the distance is greater than d_max, the sensor produces a negative output.

(A5) All obstacles are modeled as convex polygons whose vertex angles are sufficiently larger than γ.

(A6) The distance between any two obstacles is greater than d_max.

Assumption (A4) represents a practical situation. The purpose of Assumption (A5) is to ensure the detection of the obstacles by the three distance sensors. Assumption (A6), as will be explained later, is needed for obstacle avoidance.

Three Positive Detection Case: Fig. 2 shows all aspects under which the robot encounters obstacles. When obstacles are detected in all three directions and d_l ≥ d_r, as shown in Fig. 2(a), the distance in the left direction is greater than in the right direction, and the robot should steer to the left. Letting ε be the angle shown in Fig. 2(a), the robot should turn to the left by ε to avoid the obstacle; when d_l < d_r, the robot should turn to the right by ε. Embedding this avoidance behavior into the navigation law (4), the desirable direction angle for avoiding the obstacle, θ_a(t), is given by

  θ_a(t) = θ(t) + sgn(d_l − d_r) ε  (7)

where

  ε = tan⁻¹((d cos γ − d_c)/(d sin γ)),  |ε| < π/2  (8)

  d = max(d_r, d_l)  (9)

  sgn(x) = +1 (x ≥ 0),  −1 (x < 0).  (10)

Two Positive Detection Case: In Fig. 2(b) and (c), an obstacle is detected in front of the robot; that is, d_c > 0. In the former, d_c > d_r cos γ and d_l < 0. This indicates that a collision-free space extends to the left, and θ_a(t) is then given by

  θ_a(t) = θ(t) + γ + ε.  (11)

In the latter, since d_c < d_l cos γ and d_r < 0, the right margin may be larger than the left margin. However, it may happen that an obstacle exists on the right side but is not detected at this instant. To assure obstacle avoidance, the robot should turn to the left by ε, as shown in Fig. 2(c), and the desirable direction angle is given by (7). When d_c < 0, as shown in Fig. 2(d), steering is not needed because there is no obstacle in the center direction; therefore, the desirable direction angle is θ_a(t) = θ(t).

One Positive Detection Case: Two cases are considered, as shown in Fig. 2(e) and (f). In the former, steering is not needed because d_c < 0; that is, θ_a(t) = θ(t). In the latter, a steering action is needed because d_c > 0. However, since both d_r and d_l are negative, the right and left margins are equal, and the robot is therefore set to turn to the left by γ:

  θ_a(t) = θ(t) + γ.  (12)

As shown in Fig. 2(a)–(f), the information from the three sensors does not completely define the geometry of the obstacles. However, since d_c, d_r, and d_l are updated in real time, and because of Assumption (A5), it is almost always possible to avoid the obstacles by using θ_a(t) given by (7)–(12).

IV. CONDITIONS FOR STABILITY AND OBSTACLE AVOIDANCE

A.
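The case analysis above can be collected into a single routine. The sketch below is our reconstruction, assuming the steering angle has the form ε = tan⁻¹((d cos γ − d_c)/(d sin γ)) with d = max(d_r, d_l); the function and variable names are ours, not the authors':

```python
import math

def avoidance_command(theta, d_c, d_r, d_l, gamma):
    """Desired direction angle theta_a for obstacle avoidance.
    A sensor returns a negative value when it detects nothing (A4).
    The branches mirror the detection cases of Fig. 2 (reconstructed)."""
    def sgn(x):
        return 1.0 if x >= 0 else -1.0

    if d_c < 0:
        return theta                      # Fig. 2(d), (e): nothing ahead, no steering
    if d_r < 0 and d_l < 0:
        return theta + gamma              # Fig. 2(f): only the center sensor fires
    if d_l < 0:                           # Fig. 2(b): collision-free space on the left
        eps = math.atan2(d_r * math.cos(gamma) - d_c, d_r * math.sin(gamma))
        return theta + gamma + eps
    d = max(d_r, d_l)                     # Fig. 2(a), (c): steer toward the larger margin
    eps = math.atan2(d * math.cos(gamma) - d_c, d * math.sin(gamma))
    return theta + sgn(d_l - d_r) * eps
```

Because the sensor readings are refreshed in real time, this command only needs to be recomputed at each sampling instant.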
Stability of the Navigation Law

Since navigation to the goal and obstacle avoidance are performed by switching the direction angle command θ_c(t), as described in the next section, the adaptive navigation can be analyzed separately for the stability of the navigation law without obstacles and for the obstacle avoidance condition. First, the stability of the navigation law is discussed using the Lyapunov stability method.

Theorem 1: Without obstacles in the 2-D plane, and with the following inequality

  x(t)² + y(t)² > (v(t)/η)²  (13)

θ(t) asymptotically approaches θ_t(t).

Proof: From (4)–(6),

  θ̇_t(t) = φ̇(t) = (ẏx − ẋy)/(x² + y²) = v(x sin θ − y cos θ)/(x² + y²) = v sin(θ − φ)/√(x² + y²) = −v sin(θ − θ_t)/√(x² + y²).  (14)

The equilibrium point of (4) is θ = θ_t. Since E(t) = 0 at θ = θ_t and E(t) > 0 for θ ≠ θ_t, E(t) is a candidate Lyapunov function. The time derivative of E(t) is

  Ė(t) = (θ − θ_t)(θ̇ − θ̇_t) = (θ − θ_t)[−η(θ − θ_t) + v sin(θ − θ_t)/√(x² + y²)] = −(θ − θ_t)²[η − (v/√(x² + y²)) sin(θ − θ_t)/(θ − θ_t)].  (15)
Fig. 2. Avoiding behaviors of the mobile robot. (a) Three positive detections (d_l ≥ d_r). (b) Two positive detections, Case 1 (d_c > d_r cos γ, d_l < 0). (c) Two positive detections, Case 2 (d_c < d_l cos γ, d_r < 0). (d) Two positive detections, Case 3 (d_c < 0). (e) One positive detection, Case 1 (d_l > 0; d_c, d_r < 0). (f) One positive detection, Case 2 (d_c > 0; d_l, d_r < 0).

Since |θ − θ_t| ≤ π, it follows from (15) that

  0 ≤ sin(θ − θ_t)/(θ − θ_t) ≤ 1.  (16)

A sufficient condition for Ė(t) < 0 is therefore given by (13).

An insight into the navigation of mobile robots can be obtained from Theorem 1. Suppose that the starting position (x₀, y₀) is sufficiently far from the goal (0, 0), and that v(t) = v (= const) for t ≥ 0. From Theorem 1, θ(t) asymptotically approaches θ_t(t) while (13) is satisfied. However, if the robot is near the goal, (13) may be violated and Ė(t) > 0 may occur. To ensure that the robot reaches the goal, a special operation is needed near the goal. This is discussed in the following section.

B. Condition for Obstacle Avoidance

In general, the additional steering angle θ_a(t) − θ(t) changes with time t. However, when the three distances d_c, d_r, and d_l are measured on one side of a convex obstacle, θ_a(t) − θ(t) does not change, because the additional steering angle is determined by the geometry of the obstacle, as shown in Fig. 2(a)–(c) and (f). A sufficient condition for obstacle avoidance is given in the following theorem.
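The sign structure of the stability argument can be probed numerically. The sketch below assumes our reconstruction θ̇ = −η(θ − θ_t) and θ̇_t = −v sin(θ − θ_t)/√(x² + y²), and evaluates Ė on a grid of heading errors at a distance just outside the critical circle √(x² + y²) = v/η; all names and parameter values are illustrative:

```python
import math

def E_dot(delta, rho, v, eta):
    """Time derivative of E = 0.5 * delta**2 for heading error
    delta = theta - theta_t at distance rho from the goal."""
    return delta * (-eta * delta + v * math.sin(delta) / rho)

v, eta = 1.0, 2.0
rho = 1.01 * (v / eta)   # just outside the circle x^2 + y^2 = (v/eta)^2
worst = max(E_dot(k * math.pi / 100.0, rho, v, eta)
            for k in range(-99, 100) if k != 0)
```

The worst case stays below zero, matching the claim that Ė < 0 whenever the robot is farther from the goal than v/η.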
TABLE I. PARAMETERS OF THE MOBILE ROBOT USED IN SIMULATION

Fig. 3. Trajectories of the mobile robot without obstacles.

Theorem 2: Suppose that the velocity of the robot is constant and Assumption (A5) is satisfied. When the robot detects one side of an obstacle, as shown in Fig. 2(a)–(c) and (f), a sufficient condition for obstacle avoidance is given by

  d_max ≥ (v/η) Si(π)  (17)

where Si(x) is the sine integral function, defined as

  Si(x) = ∫₀ˣ (sin t / t) dt.  (18)

Proof: θ_a(t) is constant unless the location of the sensor detection changes. Suppose that the robot encounters an obstacle at t = t₀ and detects one side of it as shown in Fig. 2(a)–(c) and (f). θ_a(t) can then be written as

  θ_a(t) = θ(t₀) + Δ  (Δ = const)  (19)

where Δ is the steering angle used to avoid the obstacle: Δ = ±ε for the conditions of Fig. 2(a) and (c), and Δ = γ + ε and Δ = γ for the conditions of Fig. 2(b) and (f), respectively. According to the avoidance behavior described in the previous section, the maximum of |Δ| is π. Since only the avoidance behavior need be considered here, t₀ can be set to 0 without loss of generality. Integrating (4) with (19), θ(t) is given by

  θ(t) = e^(−ηt) θ(0) + η ∫₀ᵗ e^(−η(t−τ)) θ_a dτ = θ(0) + Δ(1 − e^(−ηt))  for t ≥ 0.  (20)

In particular, for Δ = π,

  θ(t) = θ(0) + π(1 − e^(−ηt))  for t ≥ 0.  (21)

Let δ(t) be the distance which the robot moves in the direction θ_a − π/2, that is, normal to its final heading and toward the obstacle. The velocity of the robot in this direction is δ̇(t) = v cos(θ(t) − θ_a + π/2) = v sin(Δ e^(−ηt)), so that

  δ(∞) = ∫₀^∞ v sin(Δ e^(−ηt)) dt.  (22)

For Δ < π, let t₁ be the time at which the remaining turn of trajectory (21) equals Δ, i.e., π e^(−ηt₁) = Δ; that is,

  t₁ = (1/η) ln(π/Δ).  (23)

Since (20) and (21) are linear first-order responses, the trajectory of (20) with Δ < π on t ∈ [0, ∞) is the same as that of (21) on t ∈ [t₁, ∞), even if Δ is negative. Therefore, with the substitution σ = π e^(−ηt), for Δ = π

  δ(∞) = ∫₀^∞ v sin(π e^(−ηt)) dt = (v/η) ∫₀^π (sin σ / σ) dσ = (v/η) Si(π).  (24)

The collision of the robot with the obstacle is avoided if δ(∞) ≤ d_max.
Moreover,

  max over |Δ| ≤ π of δ(∞) = (v/η) Si(π).  (25)

Therefore, (17) is the sufficient condition.

Theorem 2 gives a lower bound on d_max for obstacle avoidance. If Theorem 2 and Assumption (A6) are satisfied, δ(∞) is shorter than d_max in every detection case in which a steering action is needed, and collision with the obstacles is almost always avoided.

V. DESIGN OF ADAPTIVE NAVIGATION

A. Combination of Avoidance Behavior with Navigation

The basic concept is that, when obstacles are detected, priority must be given to avoidance behavior over navigation to the goal. However, if the navigating action covers the avoidance behavior, the robot is navigated to the goal according to θ_t(t) given in (5). This concept can be realized by switching the direction angle command θ_c(t) in (4). Consider now the possibility of the navigation action even when the robot detects obstacles. In Fig. 2(a)–(c) and (f), a steering action is needed to avoid obstacles. If either of the following inequalities holds

  θ_t(t) ≥ θ_a(t) ≥ θ(t)  (26)

or

  θ_t(t) ≤ θ_a(t) ≤ θ(t)  (27)

then navigating to the goal is sufficient for avoiding the obstacles, and the direction angle command in (4) is given by θ_c(t) = θ_t(t). In Fig. 2(e), the steering action is not needed; θ_c(t) is also given by θ_t(t) if the following conditions hold

  θ_t(t) ≥ θ(t),  d_l < 0,  d_r > 0  (28)
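The sine integral appearing in the avoidance condition is easy to evaluate numerically. A short sketch (trapezoidal rule; the names and the sample values v = 1, η = 2 are ours):

```python
import math

def Si(x, n=100000):
    """Sine integral Si(x) = integral from 0 to x of sin(t)/t dt,
    evaluated with the trapezoidal rule; sin(t)/t -> 1 as t -> 0."""
    h = x / n
    total = 0.5 * (1.0 + math.sin(x) / x)          # endpoint terms
    total += sum(math.sin(k * h) / (k * h) for k in range(1, n))
    return total * h

# Lower bound on the sensor range required by the avoidance condition.
v, eta = 1.0, 2.0
d_max_bound = (v / eta) * Si(math.pi)
```

Si(π) ≈ 1.852, so for these sample values the sensors would need a range of at least about 0.93 length units.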
Fig. 4. Direction angle θ(t) and direction angle command θ_c(t) for Case 1 in Example 1.

Fig. 5. Derivative of E(t) for Case 1 in Example 1.

or

  θ_t(t) ≤ θ(t),  d_l > 0,  d_r < 0.  (29)

In Fig. 2(d), the steering action is also not needed. Unlike Fig. 2(e), however, obstacles are detected in both the right and left directions; thus, θ_c(t) is given by θ_c(t) = θ_a(t) = θ(t). For convenience, θ_c(t) = θ_t(t) and θ_c(t) = θ_a(t) are, respectively, called the navigation and avoidance modes hereafter. Summarizing the above discussion, an algorithm to decide θ_c(t) is given as follows:

begin
  if d_c < 0 and d_l, d_r > 0 then
    θ_c(t) = θ_a(t)
  else if d_c > 0 then
    if θ_t(t) ≥ θ_a(t) ≥ θ(t), or θ_t(t) ≤ θ_a(t) ≤ θ(t) then
      θ_c(t) = θ_t(t)
    else
      θ_c(t) = θ_a(t)
    end if
  else if d_l < 0 and θ_t(t) ≥ θ(t), or d_r < 0 and θ_t(t) ≤ θ(t) then
    θ_c(t) = θ_t(t)
  else
    θ_c(t) = θ_a(t)
  end if
end

B. Design of the Navigation Parameters

The navigation parameters are η, v(t), and d_max. For the navigation and avoidance modes, |θ_c(t) − θ(t)| < π and |θ_c(t) − θ(t)| ≤ π, respectively. Using (2) and (4), η is therefore given by

  η = v/(π r_min)  (30)

in order for Assumption (A2) to apply. When (13) is violated, the robot may not reach the goal; to make the robot reach the goal, the navigation law has to be switched to another mode near the goal. Let t_f be the first time at which

  x(t_f)² + y(t_f)² = r_min².  (31)

Then v(t) and θ_c(t) are switched as

  v(t) = v  (0 ≤ t < t_f),  v(1 − (t − t_f)/t_s)  (t_f ≤ t ≤ t_f + t_s)  (32)

  θ_c(t) = θ_t(t)  (0 ≤ t < t_f),  θ(t_f)  (t_f ≤ t ≤ t_f + t_s)  (33)

where

  t_s = 2 r_min/v.  (34)

To distinguish it from the navigation and avoidance modes, the mode using the lower expressions in (32) and (33) is called the final mode. From (31)–(33), the final arrival location error e_f is

  e_f = 2 r_min sin(|θ(t_f) − θ_t(t_f)|/2).  (35)

Since Si(π) ≅ 1.852 [9], the maximum measurable range of the distance sensors d_max is given, using (17) and (30), by

  d_max ≥ π Si(π) r_min ≈ 5.82 r_min.  (36)

VI.
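The mode-selection algorithm above translates almost line for line into code. The sketch below follows our reading of the switching conditions (the inequality chains test whether navigating to the goal already provides the needed steering); the function name is ours:

```python
def direction_command(theta, theta_t, theta_a, d_c, d_r, d_l):
    """Direction angle command theta_c: switch between the navigation
    mode (theta_t) and the avoidance mode (theta_a). Sensor readings
    are negative when nothing is detected (A4)."""
    if d_c < 0 and d_l > 0 and d_r > 0:
        return theta_a                    # obstacles on both sides: avoidance mode
    if d_c > 0:
        # Obstacle ahead: navigation suffices only if it steers at least
        # as far as the avoidance command, in the same rotational sense.
        if theta_t >= theta_a >= theta or theta_t <= theta_a <= theta:
            return theta_t
        return theta_a
    if (d_l < 0 and theta_t >= theta) or (d_r < 0 and theta_t <= theta):
        return theta_t                    # turning away from the detected side
    return theta_a
```

With no detections at all, one of the last two guards always holds and the robot stays in the navigation mode.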
NUMERICAL SIMULATION

Three simulation examples were used to evaluate the adaptive navigation technique. In these simulations, the goal was placed at the origin (0, 0) and, for simplicity, the obstacles were made rectangular. Table I lists the initial conditions and the parameters of the robots used in the numerical simulation.

Example 1 (No Obstacle): Fig. 3 shows the trajectories of the robot. Arrows at the starting points indicate the initial directions of the robot. In Case 1, since θ_t(0) > θ₀, the robot steered to the left and reached the goal. Conversely, in Case 2, since θ_t(0) < θ₀, the robot steered to the right. In both cases, the navigation mode was switched to the final mode at the switching boundary shown by the dashed line; the radius of this circle, according to (31), was r_min. Figs. 4 and 5, respectively, show the direction angle θ(t), the direction angle command θ_c(t), and the derivative of E(t) for Case 1. The robot was navigated to the goal by the navigation mode until the switching point denoted in Fig. 5. Since Ė(t) was negative in the navigation mode, θ(t) asymptotically approached θ_c(t). Although Ė(t) became positive after the switching point, the robot still reached the goal by switching to the final mode; that is, the direction angle was fixed at θ(t_f), and the velocity of the robot was decreased according to (32).

Example 2 (With One Obstacle): Fig. 6 shows the trajectories of the robot when an obstacle was present. The same starting point and initial direction angle were used in Cases 1, 2, and 3. The difference among them was the maximum measurable range of
the distance sensors: d_max > π Si(π) r_min in Case 1, d_max = π Si(π) r_min in Case 2, and d_max < π Si(π) r_min in Case 3. It is seen from Fig. 6 that Theorem 2 is valid for obstacle avoidance. Fig. 7 shows the responses of θ(t) and θ_c(t) for Case 1. When the obstacle was first detected, the navigation mode was switched to the avoidance mode, and θ_c(t) changed stepwise. Although θ_c(t) sometimes changed discontinuously, the trajectory of the robot remained smooth, in accordance with (1) and (4), as shown in Fig. 6. In Cases 1, 2, and 3, the center sensor detected the obstacle first; this situation corresponds to Fig. 2(f), and θ_a(t) was given by (12). On the other hand, since the situation of Case 4 corresponded to Fig. 2(b), the robot moved to the goal over the obstacle.

Fig. 6. Trajectories of the mobile robot with one obstacle.

Fig. 8. Trajectories of the mobile robot with six obstacles.

Fig. 7. Direction and desired direction angles for Case 1 in Example 2.

Fig. 9. Direction and desired direction angles for Case 1 in Example 3.

Example 3 (With Multiple Obstacles): In this example there were six obstacles, as shown in Fig. 8. The robot started from three different starting points with three different initial direction angles. Although the resulting paths to the goal differed, collisions with the obstacles were avoided in every case. Fig. 9 shows the responses of θ(t) and θ_c(t) for Case 1. By adaptively changing the modes, the robot avoided the obstacles using the information from the three distance sensors and reached the goal.

VII. CONCLUDING REMARKS

A new navigation technique for mobile robots with obstacle avoidance, called adaptive navigation, has been proposed in this paper. The robot had three distance sensors, and the navigation law was given by a first-order differential equation. Navigation to the goal and obstacle avoidance were achieved by switching the desirable direction angle.
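Example 1 can be reproduced in outline with a few lines of Euler integration. This is an illustrative sketch rather than the authors' setup: the parameter values are invented, and the heading error is wrapped to (−π, π] for numerical robustness near the goal:

```python
import math

def simulate(x, y, theta, v=1.0, eta=2.0, dt=0.01, steps=2000):
    """Euler integration of the kinematics under the first-order
    navigation law, with the goal at the origin and no obstacles."""
    for _ in range(steps):
        theta_t = math.atan2(-y, -x)                  # direction angle of the goal
        err = math.atan2(math.sin(theta - theta_t),   # wrapped heading error
                         math.cos(theta - theta_t))
        theta += dt * (-eta) * err
        x += dt * v * math.cos(theta)
        y += dt * v * math.sin(theta)
    return x, y

xf, yf = simulate(5.0, 3.0, math.pi / 2)
```

Starting well away from the goal, the robot turns toward it and closes most of the distance; as the stability analysis predicts, the pure navigation mode does not settle exactly on the goal, which is why the final mode is introduced.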
Conditions for the stability and for obstacle avoidance were discussed, and an algorithm which navigates the robot to the goal while avoiding obstacles was presented. Three simulation examples were given to substantiate the validity of the theorems and of the algorithm derived. Since this paper focused mainly on an explanation of the concept of adaptive navigation, a number of conditions were imposed on the robot and on the obstacles; they included the direction of movement and the velocity of the robot, and the shape and location of the obstacles. Some of these conditions may be removed through future research.

REFERENCES

[1] R. B. Tilove, "Local obstacle avoidance for mobile robots based on the method of artificial potentials," in Proc. IEEE Conf. Robot. Automat., Cincinnati, OH, 1990.

[2] R. A. Brooks, "Solving the find-path problem by good representation of free space," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 190–197, Mar./Apr. 1983.

[3] B. H. Krogh, "A generalized potential field approach to obstacle avoidance control," in Proc. Int. Robot. Res. Conf., Bethlehem, PA, Aug. 1984.

[4] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots," in Proc. IEEE Conf. Robot. Automat., 1985, pp. 500–505.
[5] B. H. Krogh and D. Feng, "Dynamic generation of subgoals for autonomous mobile robots using local feedback information," IEEE Trans. Automat. Contr., vol. 34, pp. 483–493, May 1989.

[6] J.-O. Kim and P. K. Khosla, "Real-time obstacle avoidance using harmonic potential functions," IEEE Trans. Robot. Automat., vol. 8, pp. 338–349, June 1992.

[7] D. Feng and B. H. Krogh, "Dynamic steering control of conventionally steered mobile robots," in Proc. IEEE Conf. Robot. Automat., Cincinnati, OH, 1990.

[8] A. E. Bryson and Y.-C. Ho, Applied Optimal Control. New York: Wiley, 1975.

[9] Akademii Nauk SSSR, Tables of the Exponential Integral Functions, 1954.

Fig. 1. Coordinate systems and the camera model.

Model-Independent Recovery of Object Orientations

T. N. Tan, K. D. Baker, and G. D. Sullivan

Abstract—A novel algorithm is presented for determining the orientation of road vehicles in traffic scenes from video images. The algorithm requires no specific 3-D vehicle models and uses only local image gradient values. It may easily be implemented in real time. Experimental results with a variety of vehicles in routine traffic scenes are included to demonstrate the effectiveness of the algorithm.

Index Terms—Ground-plane constraint, model-based vision, recognition, traffic scene analysis, vehicle localization.

I. INTRODUCTION

The determination of object position and orientation from monocular perspective images is a fundamental problem in robot vision. Existing algorithms (e.g., [1]–[5]) typically entail the extraction of symbolic image features (e.g., line segments, vertices, ribbons, etc.) and the matching of such features with 3-D object models. Both feature extraction and matching are error-prone and time-consuming.
Line segment extraction, for instance, is inherently an ill-defined problem, and it often depends critically on empirically determined tuning parameters, such as threshold and scale, to achieve acceptable performance. Feature matching often involves the computation of interfeature relationships and combinatorial search [6]. The problems of existing algorithms are, to a large extent, due to the large number of unknown pose parameters (three for orientation and three for position) that need to be computed. In many practical vision applications, however, the pose of an object often has a much smaller number of degrees of freedom (dof) because of known physical constraints. For example, under normal conditions, road vehicles are constrained to lie on the known ground plane (GP). Furthermore, vehicles have only one stable pose: the wheels must rest on the GP. This ground-plane constraint (GPC) reduces the number of dofs of rigid objects from six to three. The three dofs are most conveniently described by the location (X, Y) on the GP and the orientation (θ) about the normal of the GP. Although our primary interest in this paper is related to traffic scene analysis, other similar applications, such as the location and recognition of objects on a table or of parts on a conveyor belt, are commonplace. Other common vision problems are also subject to an equivalent planar constraint, such as the location of landmarks by means of a robot-mounted camera which translates parallel to the ground [7].

Manuscript received January 9, 1995; revised July 1995. This paper was recommended for publication by Associate Editor R. A. Jarvis and Editor A. J. Koivo upon evaluation of the reviewers' comments.

The authors are with the Department of Computer Science, The University of Reading, Whiteknights, Reading, Berkshire RG6 6AY, England (e-mail: T.Tan@reading.ac.uk).

Publisher Item Identifier S 1042-296X(97)87-.
In previous work [8], [9], we have shown that the GPC significantly simplifies model-based object localization. In particular, each match between a 2-D image line and a 3-D model line yields a simple independent constraint on the orientation θ. This allows the orientation parameter to be computed independently of the location parameters X and Y. However, when the number of vehicle models to be considered is large, the model-based orientation recovery algorithm described in [8] and [9] can still be very slow. The structure of common road vehicles consists of predominantly two sets of parallel lines, one along the length direction and one along the width direction [5]. This fact is exploited here to devise a novel algorithm which allows model-independent determination of vehicle orientations. The algorithm also eliminates the need for explicit symbolic feature extraction and image-to-model matching. The computational cost is thus substantially reduced. In fact, since the algorithm only requires local gradient data, the orientation can be determined directly from the input video images on the fly, and the overall algorithm can easily be implemented in real time. We describe the coordinate systems in the next section, and introduce the orientation constraint in Section III. The algorithm for recovering the orientation parameter is detailed in Section IV, and experimental results are given in Section V.

II. COORDINATE SYSTEMS AND CAMERA GEOMETRY

In this paper, lower-case bold letters are used to denote (row) vectors, and upper-case bold letters to symbolize matrices. The world coordinate system (WCS) is defined on the GP, with its X_w–Y_w plane coincident with the GP and its +Z_w-axis pointing upwards (see Fig. 1). For simplicity, the X_m–Y_m plane of the model coordinate system (MCS) is also chosen to be on the GP. The X_m- and Y_m-axes of the MCS are aligned, respectively, with the width and length directions of the vehicle. The Z_m-axis also points upwards.
Under the MCS so defined, the unit direction vectors of the two predominant sets of parallel lines of the vehicle are m_x = (1 0 0) for the widthwise set and m_y = (0 1 0) for the lengthwise set (both vectors being expressed in the MCS). The transformation from the MCS to the WCS is described by a rotation through an angle θ (the object orientation) about the vertical axis and a translation (X, Y) on the GP. The camera is a pinhole camera
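The MCS-to-WCS transformation just described (a rotation about the vertical axis followed by a translation on the GP) can be sketched directly. The function name below is ours; for direction vectors the translation terms are simply set to zero:

```python
import math

def model_to_world(p, phi, tx=0.0, ty=0.0):
    """Rotate (x, y, z) by phi about the vertical axis, then translate
    by (tx, ty) on the ground plane. Use tx = ty = 0 for directions."""
    x, y, z = p
    return (x * math.cos(phi) - y * math.sin(phi) + tx,
            x * math.sin(phi) + y * math.cos(phi) + ty,
            z)

# The widthwise direction m_x = (1, 0, 0) rotated by a quarter turn
# aligns with the world +Y axis, as expected.
mx_w = model_to_world((1.0, 0.0, 0.0), math.pi / 2)
```

Because the rotation is about the vertical, the z-component is untouched, which is exactly why the GPC collapses the pose to the three parameters (X, Y, θ).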
More informationA Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles
Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 54 A Reactive Bearing Angle Only Obstacle Avoidance Technique for
More informationCentre for Autonomous Systems
Robot Henrik I Centre for Autonomous Systems Kungl Tekniska Högskolan hic@kth.se 27th April 2005 Outline 1 duction 2 Kinematic and Constraints 3 Mobile Robot 4 Mobile Robot 5 Beyond Basic 6 Kinematic 7
More informationDesign of Obstacle Avoidance System for Mobile Robot using Fuzzy Logic Systems
ol. 7, No. 3, May, 2013 Design of Obstacle Avoidance System for Mobile Robot using Fuzzy ogic Systems Xi i and Byung-Jae Choi School of Electronic Engineering, Daegu University Jillyang Gyeongsan-city
More information1724. Mobile manipulators collision-free trajectory planning with regard to end-effector vibrations elimination
1724. Mobile manipulators collision-free trajectory planning with regard to end-effector vibrations elimination Iwona Pajak 1, Grzegorz Pajak 2 University of Zielona Gora, Faculty of Mechanical Engineering,
More informationS-SHAPED ONE TRAIL PARALLEL PARKING OF A CAR-LIKE MOBILE ROBOT
S-SHAPED ONE TRAIL PARALLEL PARKING OF A CAR-LIKE MOBILE ROBOT 1 SOE YU MAUNG MAUNG, 2 NU NU WIN, 3 MYINT HTAY 1,2,3 Mechatronic Engineering Department, Mandalay Technological University, The Republic
More informationKinematics of Wheeled Robots
CSE 390/MEAM 40 Kinematics of Wheeled Robots Professor Vijay Kumar Department of Mechanical Engineering and Applied Mechanics University of Pennsylvania September 16, 006 1 Introduction In this chapter,
More informationTwo-View Geometry (Course 23, Lecture D)
Two-View Geometry (Course 23, Lecture D) Jana Kosecka Department of Computer Science George Mason University http://www.cs.gmu.edu/~kosecka General Formulation Given two views of the scene recover the
More informationMEM380 Applied Autonomous Robots Winter Robot Kinematics
MEM38 Applied Autonomous obots Winter obot Kinematics Coordinate Transformations Motivation Ultimatel, we are interested in the motion of the robot with respect to a global or inertial navigation frame
More informationVisual Servoing Utilizing Zoom Mechanism
IEEE Int. Conf. on Robotics and Automation 1995, pp.178 183, Nagoya, May. 12 16, 1995 1 Visual Servoing Utilizing Zoom Mechanism Koh HOSODA, Hitoshi MORIYAMA and Minoru ASADA Dept. of Mechanical Engineering
More informationOptical Flow-Based Person Tracking by Multiple Cameras
Proc. IEEE Int. Conf. on Multisensor Fusion and Integration in Intelligent Systems, Baden-Baden, Germany, Aug. 2001. Optical Flow-Based Person Tracking by Multiple Cameras Hideki Tsutsui, Jun Miura, and
More information10. Cartesian Trajectory Planning for Robot Manipulators
V. Kumar 0. Cartesian rajectory Planning for obot Manipulators 0.. Introduction Given a starting end effector position and orientation and a goal position and orientation we want to generate a smooth trajectory
More informationMobile Robot Kinematics
Mobile Robot Kinematics Dr. Kurtuluş Erinç Akdoğan kurtuluserinc@cankaya.edu.tr INTRODUCTION Kinematics is the most basic study of how mechanical systems behave required to design to control Manipulator
More informationStable Trajectory Design for Highly Constrained Environments using Receding Horizon Control
Stable Trajectory Design for Highly Constrained Environments using Receding Horizon Control Yoshiaki Kuwata and Jonathan P. How Space Systems Laboratory Massachusetts Institute of Technology {kuwata,jhow}@mit.edu
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationRobotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007
Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem
More informationSingularity Loci of Planar Parallel Manipulators with Revolute Joints
Singularity Loci of Planar Parallel Manipulators with Revolute Joints ILIAN A. BONEV AND CLÉMENT M. GOSSELIN Département de Génie Mécanique Université Laval Québec, Québec, Canada, G1K 7P4 Tel: (418) 656-3474,
More informationMobile Robot Path Planning in Static Environments using Particle Swarm Optimization
Mobile Robot Path Planning in Static Environments using Particle Swarm Optimization M. Shahab Alam, M. Usman Rafique, and M. Umer Khan Abstract Motion planning is a key element of robotics since it empowers
More informationMOTION TRAJECTORY PLANNING AND SIMULATION OF 6- DOF MANIPULATOR ARM ROBOT
MOTION TRAJECTORY PLANNING AND SIMULATION OF 6- DOF MANIPULATOR ARM ROBOT Hongjun ZHU ABSTRACT:In order to better study the trajectory of robot motion, a motion trajectory planning and simulation based
More informationRobotics 2 Iterative Learning for Gravity Compensation
Robotics 2 Iterative Learning for Gravity Compensation Prof. Alessandro De Luca Control goal! regulation of arbitrary equilibium configurations in the presence of gravity! without explicit knowledge of
More informationarxiv: v1 [cs.cv] 2 May 2016
16-811 Math Fundamentals for Robotics Comparison of Optimization Methods in Optical Flow Estimation Final Report, Fall 2015 arxiv:1605.00572v1 [cs.cv] 2 May 2016 Contents Noranart Vesdapunt Master of Computer
More informationPhotometric Stereo with Auto-Radiometric Calibration
Photometric Stereo with Auto-Radiometric Calibration Wiennat Mongkulmann Takahiro Okabe Yoichi Sato Institute of Industrial Science, The University of Tokyo {wiennat,takahiro,ysato} @iis.u-tokyo.ac.jp
More informationSHIP heading control, also known as course keeping, has
IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL. 20, NO. 1, JANUARY 2012 257 Disturbance Compensating Model Predictive Control With Application to Ship Heading Control Zhen Li, Member, IEEE, and Jing
More informationAutonomous Mobile Robots, Chapter 6 Planning and Navigation Where am I going? How do I get there? Localization. Cognition. Real World Environment
Planning and Navigation Where am I going? How do I get there?? Localization "Position" Global Map Cognition Environment Model Local Map Perception Real World Environment Path Motion Control Competencies
More informationBearing only visual servo control of a non-holonomic mobile robot. Robert Mahony
Bearing only visual servo control of a non-holonomic mobile robot. Robert Mahony Department of Engineering, Australian National University, Australia. email: Robert.Mahony@anu.edu.au url: http://engnet.anu.edu.au/depeople/robert.mahony/
More informationExpanding gait identification methods from straight to curved trajectories
Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods
More informationProc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016
Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016 Outdoor Robot Navigation Based on View-based Global Localization and Local Navigation Yohei Inoue, Jun Miura, and Shuji Oishi Department
More informationAdaptive tracking control scheme for an autonomous underwater vehicle subject to a union of boundaries
Indian Journal of Geo-Marine Sciences Vol. 42 (8), December 2013, pp. 999-1005 Adaptive tracking control scheme for an autonomous underwater vehicle subject to a union of boundaries Zool Hilmi Ismail 1
More informationMonocular Vision Based Autonomous Navigation for Arbitrarily Shaped Urban Roads
Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August 14-15, 2014 Paper No. 127 Monocular Vision Based Autonomous Navigation for Arbitrarily
More informationA Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision
A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision Stephen Karungaru, Atsushi Ishitani, Takuya Shiraishi, and Minoru Fukumi Abstract Recently, robot technology has
More informationElastic Bands: Connecting Path Planning and Control
Elastic Bands: Connecting Path Planning and Control Sean Quinlan and Oussama Khatib Robotics Laboratory Computer Science Department Stanford University Abstract Elastic bands are proposed as the basis
More informationCANAL FOLLOWING USING AR DRONE IN SIMULATION
CANAL FOLLOWING USING AR DRONE IN SIMULATION ENVIRONMENT Ali Ahmad, Ahmad Aneeque Khalid Department of Electrical Engineering SBA School of Science & Engineering, LUMS, Pakistan {14060006, 14060019}@lums.edu.pk
More informationROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino
ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Control Part 4 Other control strategies These slides are devoted to two advanced control approaches, namely Operational space control Interaction
More informationDominant plane detection using optical flow and Independent Component Analysis
Dominant plane detection using optical flow and Independent Component Analysis Naoya OHNISHI 1 and Atsushi IMIYA 2 1 School of Science and Technology, Chiba University, Japan Yayoicho 1-33, Inage-ku, 263-8522,
More informationAutonomous and Mobile Robotics. Whole-body motion planning for humanoid robots (Slides prepared by Marco Cognetti) Prof.
Autonomous and Mobile Robotics Whole-body motion planning for humanoid robots (Slides prepared by Marco Cognetti) Prof. Giuseppe Oriolo Motivations task-constrained motion planning: find collision-free
More informationTracking Minimum Distances between Curved Objects with Parametric Surfaces in Real Time
Tracking Minimum Distances between Curved Objects with Parametric Surfaces in Real Time Zhihua Zou, Jing Xiao Department of Computer Science University of North Carolina Charlotte zzou28@yahoo.com, xiao@uncc.edu
More informationExpress Letters. A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation. Jianhua Lu and Ming L. Liou
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 7, NO. 2, APRIL 1997 429 Express Letters A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation Jianhua Lu and
More informationModule 1 Session 1 HS. Critical Areas for Traditional Geometry Page 1 of 6
Critical Areas for Traditional Geometry Page 1 of 6 There are six critical areas (units) for Traditional Geometry: Critical Area 1: Congruence, Proof, and Constructions In previous grades, students were
More informationMobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS
Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2
More informationProceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns. Direct Obstacle Detection and Motion. from Spatio-Temporal Derivatives
Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns CAIP'95, pp. 874-879, Prague, Czech Republic, Sep 1995 Direct Obstacle Detection and Motion from Spatio-Temporal Derivatives
More informationSpace Robot Path Planning for Collision Avoidance
Space Robot Path Planning for ollision voidance Yuya Yanoshita and Shinichi Tsuda bstract This paper deals with a path planning of space robot which includes a collision avoidance algorithm. For the future
More informationVision-Motion Planning with Uncertainty
Vision-Motion Planning with Uncertainty Jun MIURA Yoshiaki SHIRAI Dept. of Mech. Eng. for Computer-Controlled Machinery, Osaka University, Suita, Osaka 565, Japan jun@ccm.osaka-u.ac.jp Abstract This paper
More informationBackground for Surface Integration
Background for urface Integration 1 urface Integrals We have seen in previous work how to define and compute line integrals in R 2. You should remember the basic surface integrals that we will need to
More informationFingerprint Classification Using Orientation Field Flow Curves
Fingerprint Classification Using Orientation Field Flow Curves Sarat C. Dass Michigan State University sdass@msu.edu Anil K. Jain Michigan State University ain@msu.edu Abstract Manual fingerprint classification
More informationCollision-free Path Planning Based on Clustering
Collision-free Path Planning Based on Clustering Lantao Liu, Xinghua Zhang, Hong Huang and Xuemei Ren Department of Automatic Control!Beijing Institute of Technology! Beijing, 100081, China Abstract The
More informationRevision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems
4 The Open Cybernetics and Systemics Journal, 008,, 4-9 Revision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems K. Kato *, M. Sakawa and H. Katagiri Department of Artificial
More informationCombining Deep Reinforcement Learning and Safety Based Control for Autonomous Driving
Combining Deep Reinforcement Learning and Safety Based Control for Autonomous Driving Xi Xiong Jianqiang Wang Fang Zhang Keqiang Li State Key Laboratory of Automotive Safety and Energy, Tsinghua University
More informationNeuro-adaptive Formation Maintenance and Control of Nonholonomic Mobile Robots
Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 50 Neuro-adaptive Formation Maintenance and Control of Nonholonomic
More informationMobile Robots Locomotion
Mobile Robots Locomotion Institute for Software Technology 1 Course Outline 1. Introduction to Mobile Robots 2. Locomotion 3. Sensors 4. Localization 5. Environment Modelling 6. Reactive Navigation 2 Today
More informationModel Based Perspective Inversion
Model Based Perspective Inversion A. D. Worrall, K. D. Baker & G. D. Sullivan Intelligent Systems Group, Department of Computer Science, University of Reading, RG6 2AX, UK. Anthony.Worrall@reading.ac.uk
More informationarxiv: v1 [cs.ro] 2 Sep 2017
arxiv:1709.00525v1 [cs.ro] 2 Sep 2017 Sensor Network Based Collision-Free Navigation and Map Building for Mobile Robots Hang Li Abstract Safe robot navigation is a fundamental research field for autonomous
More informationConvex combination of adaptive filters for a variable tap-length LMS algorithm
Loughborough University Institutional Repository Convex combination of adaptive filters for a variable tap-length LMS algorithm This item was submitted to Loughborough University's Institutional Repository
More informationCHAPTER 3 MATHEMATICAL MODEL
38 CHAPTER 3 MATHEMATICAL MODEL 3.1 KINEMATIC MODEL 3.1.1 Introduction The kinematic model of a mobile robot, represented by a set of equations, allows estimation of the robot s evolution on its trajectory,
More informationMinimum Bounding Boxes for Regular Cross-Polytopes
Minimum Bounding Boxes for Regular Cross-Polytopes Salman Shahid Michigan State University shahids1@cse.msu.edu Dr. Sakti Pramanik Michigan State University pramanik@cse.msu.edu Dr. Charles B. Owen Michigan
More informationCamera Calibration Using Line Correspondences
Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of
More informationA Factorization Method for Structure from Planar Motion
A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College
More informationChapter 3 Path Optimization
Chapter 3 Path Optimization Background information on optimization is discussed in this chapter, along with the inequality constraints that are used for the problem. Additionally, the MATLAB program for
More informationPath Planning. Marcello Restelli. Dipartimento di Elettronica e Informazione Politecnico di Milano tel:
Marcello Restelli Dipartimento di Elettronica e Informazione Politecnico di Milano email: restelli@elet.polimi.it tel: 02 2399 3470 Path Planning Robotica for Computer Engineering students A.A. 2006/2007
More informationCooperative Conveyance of an Object with Tethers by Two Mobile Robots
Proceeding of the 11th World Congress in Mechanism and Machine Science April 1-4, 2004, Tianjin, China China Machine Press, edited by Tian Huang Cooperative Conveyance of an Object with Tethers by Two
More information3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera
3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric
More informationGlobal Trajectory Generation for Nonholonomic Robots in Dynamic Environments
7 IEEE International Conference on Robotics and Automation Roma, Italy, -4 April 7 WeD.4 Global Trajectory Generation for Nonholonomic Robots in Dynamic Environments Yi Guo, Yi Long and Weihua Sheng Abstract
More informationRedundancy Resolution by Minimization of Joint Disturbance Torque for Independent Joint Controlled Kinematically Redundant Manipulators
56 ICASE :The Institute ofcontrol,automation and Systems Engineering,KOREA Vol.,No.1,March,000 Redundancy Resolution by Minimization of Joint Disturbance Torque for Independent Joint Controlled Kinematically
More informationVisual Recognition: Image Formation
Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know
More informationLight source estimation using feature points from specular highlights and cast shadows
Vol. 11(13), pp. 168-177, 16 July, 2016 DOI: 10.5897/IJPS2015.4274 Article Number: F492B6D59616 ISSN 1992-1950 Copyright 2016 Author(s) retain the copyright of this article http://www.academicjournals.org/ijps
More informationSimplified Walking: A New Way to Generate Flexible Biped Patterns
1 Simplified Walking: A New Way to Generate Flexible Biped Patterns Jinsu Liu 1, Xiaoping Chen 1 and Manuela Veloso 2 1 Computer Science Department, University of Science and Technology of China, Hefei,
More informationA Short SVM (Support Vector Machine) Tutorial
A Short SVM (Support Vector Machine) Tutorial j.p.lewis CGIT Lab / IMSC U. Southern California version 0.zz dec 004 This tutorial assumes you are familiar with linear algebra and equality-constrained optimization/lagrange
More informationA Novel Stereo Camera System by a Biprism
528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel
More informationFinal Exam Practice Fall Semester, 2012
COS 495 - Autonomous Robot Navigation Final Exam Practice Fall Semester, 2012 Duration: Total Marks: 70 Closed Book 2 hours Start Time: End Time: By signing this exam, I agree to the honor code Name: Signature:
More informationCollision Avoidance Method for Mobile Robot Considering Motion and Personal Spaces of Evacuees
Collision Avoidance Method for Mobile Robot Considering Motion and Personal Spaces of Evacuees Takeshi OHKI, Keiji NAGATANI and Kazuya YOSHIDA Abstract In the case of disasters such as earthquakes or Nuclear/Biological/Chemical(NBC)
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationOptimal Paint Gun Orientation in Spray Paint Applications Experimental Results
438 IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, VOL. 8, NO. 2, APRIL 2011 Optimal Paint Gun Orientation in Spray Paint Applications Experimental Results Pål Johan From, Member, IEEE, Johan
More informationVehicle Localization. Hannah Rae Kerner 21 April 2015
Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular
More informationStable Grasp and Manipulation in 3D Space with 2-Soft-Fingered Robot Hand
Stable Grasp and Manipulation in 3D Space with 2-Soft-Fingered Robot Hand Tsuneo Yoshikawa 1, Masanao Koeda 1, Haruki Fukuchi 1, and Atsushi Hirakawa 2 1 Ritsumeikan University, College of Information
More informationTextureless Layers CMU-RI-TR Qifa Ke, Simon Baker, and Takeo Kanade
Textureless Layers CMU-RI-TR-04-17 Qifa Ke, Simon Baker, and Takeo Kanade The Robotics Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Abstract Layers are one of the most well
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline
1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation
More informationA threshold decision of the object image by using the smart tag
A threshold decision of the object image by using the smart tag Chang-Jun Im, Jin-Young Kim, Kwan Young Joung, Ho-Gil Lee Sensing & Perception Research Group Korea Institute of Industrial Technology (
More informationIntroduction to Robotics
Introduction to Robotics Ph.D. Antonio Marin-Hernandez Artificial Intelligence Department Universidad Veracruzana Sebastian Camacho # 5 Xalapa, Veracruz Robotics Action and Perception LAAS-CNRS 7, av du
More informationHOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder
HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder Masashi Awai, Takahito Shimizu and Toru Kaneko Department of Mechanical
More informationLong-term motion estimation from images
Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras
More information