THE PENNSYLVANIA STATE UNIVERSITY SCHREYER HONORS COLLEGE DEPARTMENT OF ELECTRICAL ENGINEERING


THE PENNSYLVANIA STATE UNIVERSITY
SCHREYER HONORS COLLEGE
DEPARTMENT OF ELECTRICAL ENGINEERING

TRAJECTORY FOLLOWING CONTROL ALGORITHM FOR GROUND ROBOTS NAVIGATING PLANNING ALGORITHM PATHS

MATTHEW MALENCIA
SPRING 2016

A thesis submitted in partial fulfillment of the requirements for a baccalaureate degree in Mechanical Engineering with honors in Electrical Engineering

Reviewed and approved* by the following:

Minghui Zhu
Dorothy Quiggle Assistant Professor of Engineering
Thesis Supervisor

Jeffrey Mayer
Associate Professor of Electrical Engineering
Honors Adviser

* Signatures are on file in the Schreyer Honors College.

ABSTRACT

Autonomous robotic systems have many military and industrial applications. Research on the control of ground robots can lead to advancements in unmanned military vehicles and driverless cars. Many studies have been conducted in this field, especially with regard to path-planning algorithms. While this path-planning research is extensive, its implementation is lacking in many studies. The studies in which experimentation exists contain controllers with poor tracking performance. This study seeks to address this tracking performance problem. It is believed that a smart design of a PID controller can improve the tracking performance. This study aims to design a PID controller for ground robots following planning algorithm paths. This study uses MATLAB to simulate the controller, and it uses a differential drive robot to implement the controller to follow a trajectory generated by an RRT* path-planning algorithm. It is found that a PID controller using cross-track error provides strong tracking performance in both simulation and experimentation. It is also found that the addition of a PID controller using heading error further reduces tracking error. Based on these results, the study recommends further development of the control algorithm to include velocity control and continued experimentation in dynamic environments.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS
Chapter 1 Introduction and Background
    Path Planning
    PID Control
    Controller Tuning
    Robot Modeling
Chapter 2 Methods
    Calculating Error Term
    Goal Radius
    Heading Error
Chapter 3 Evaluation
    Simulation Results
    Cross-Track Error Controller
    Cross-Track Error Controller with Heading Compensation
    Highlighted Comparisons of Simulated Results
    Experimental Results
Chapter 4 Discussion
    Cross-Track Error Controller Performance
    Cross-Track Error Controller with Heading Compensation Performance
    Performance in Experiments
Chapter 5 Conclusion
Appendix A Differential Drive Robot MATLAB Model
Appendix B MATLAB Code For Cross-Track Error Controller
Appendix C MATLAB Code with Added Heading Error Controller
Appendix D Khepera Control Code for Experimentation
References

LIST OF FIGURES

Figure 1. Example navigation experiment with poor tracking performance [1]
Figure 2. Rapidly-exploring Random Trees (RRT) algorithm [1]
Figure 3. Rapidly-exploring Random Graphs (RRG) algorithm [6]
Figure 4. Optimal Rapidly-exploring Random Trees (RRT*) algorithm [6]
Figure 5. Sample system responses with varying degrees of stability and response times [8]
Figure 6. Step response showing variables used to calculate tuning constants [9]
Figure 7. Unicycle Model of mobile ground robots [11]
Figure 8. Differential Drive Model for mobile ground robots [11]
Figure 9. Cross-track error represented visually as the distance d [12]
Figure 10. Equations for calculating constants of path line segment
Figure 11. Calculation of distance d, representing the cross-track error [12]
Figure 12. Heading error magnitude calculation using dot product
Figure 13. Step input performance for cross-track error controller, K_p = 0.5 and K_d = 0.5
Figure 14. Step input performance for cross-track error controller with K_p = 3 and K_d = 1
Figure 15. Left 90 degree turn with cross-track error controller with K_p = 5 and K_d = 3
Figure 16. Right 90 degree turn with cross-track error controller with K_p = 5 and K_d = 3
Figure 17. Left 90 degree turn with cross-track error controller with K_p = 3 and K_d = 1
Figure 18. Left 90 degree turn with cross-track error controller with K_p = 0.5 and K_d = 0.5
Figure 19. Left 90 degree turn transition, cross-track error controller, K_p = 5 and K_d = 3
Figure 20. Path 1 performance for cross-track error controller with K_p = 0.5 and K_d = 0.5
Figure 21. Path 1 performance for cross-track error controller with K_p = 3 and K_d = 1
Figure 22. Path 2 performance for cross-track error controller with K_p = 0.5 and K_d = 0.5
Figure 23. Path 2 performance for cross-track error controller with K_p = 3 and K_d = 1
Figure 24. Path 3 performance for cross-track error controller with K_p = 0.5 and K_d = 0.5

Figure 25. Path 3 performance for cross-track error controller with K_p = 3 and K_d = 1
Figure 26. Left 90 degree turn for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.3 and K_d =
Figure 27. Left 90 degree turn with cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =
Figure 28. Left 90 degree turn transition with cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =
Figure 29. Path 1 for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.5 and K_d =
Figure 30. Path 1 for cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =
Figure 31. Path 2 for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.5 and K_d =
Figure 32. Path 2 for cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =
Figure 33. Path 3 for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.5 and K_d =
Figure 34. Path 3 for cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =
Figure 35. Left 90 degree turn, focused on transition, comparison of performance between cross-track error controller without (top) and with heading error controller (bottom)
Figure 36. Path 1 segment comparison of performance between cross-track error controller (top) and cross-track error controller with heading error controller (bottom)
Figure 37. Path 2 comparison of performance between cross-track error controller (top) and cross-track error controller with heading error controller (bottom)
Figure 38. Path 3 segment comparison of performance between cross-track error controller (top) and cross-track error controller with heading error controller (bottom)
Figure 39. Experimental results of cross-track error controller navigating a path generated by the RRT* algorithm to avoid a small obstacle
Figure 40. Experimental results of cross-track error controller navigating a path generated by the RRT* algorithm to avoid a large obstacle
Figure 41. Experimental results of cross-track error controller with heading error controller navigating a path generated by the RRT* algorithm to avoid a large obstacle

LIST OF TABLES

Table 1. Formulas used to find constants through Ziegler-Nichols tuning method

ACKNOWLEDGEMENTS

There are several people that I would like to thank for their contributions to this thesis as well as their supervision through its completion. First, I would like to thank Dr. Minghui Zhu for his guidance and technical mentorship throughout the research and thesis writing process, and Dr. Jeffrey Mayer for his professional guidance as an honors adviser for this thesis. Next, I would like to thank the graduate and undergraduate students in the Multi-agent Networks Laboratory that extended their help in the research process. Specifically, I would like to thank Nurali Virani for his guidance and Pinyao Guo for his research help. Lastly, I would like to extend my gratitude to my family, friends, and professors for their continued support of my academic and professional endeavors.

Chapter 1
Introduction and Background

In recent years, there has been extensive research conducted on the general problem of path planning. This topic has become popular because the applications of autonomous control have expanded. This expansion of autonomous control is due to the technological advancements that have caused a decrease in cost, an increase of capabilities, and an improvement in reliability of robotics. Robotic systems that use autonomous control are seen in military and industrial applications such as assembling, patrolling, guarding, and attacking, as well as civilian applications such as transportation.

Much of the research on path planning is focused on developing an optimal path to a desired location while avoiding obstacles; this research is primarily conducted via simulation. While the path planning algorithms are strong and show great success in reaching the goal while avoiding obstacles, many studies lack implementation of these algorithms in the real world. Many studies involve strong simulated results with limited real world testing. The research papers that have conducted real world tests show that the main problem is with tracking performance. While the path planning algorithms provide a safe path to the goal location, the robot is unable to follow the path with tight tracking [1]. This tracking problem has the potential to cause the robot to collide with obstacles or to not reach the goal location with precision.

An example of this problem can be seen in Figure 1. The experimenters in this example use a skid-steer four wheel drive robot to test the performance of a spline-based RRT path planning algorithm in a cluttered indoor environment. Parts a, b, and c of Figure 1 show the path generated by the path planning algorithm, denoted by the orange-colored lines, and the obstacles that need to be avoided by the robot, seen as dark blue lines with light blue safety areas; the safety area compensates for the size of the robot.

The cyan-colored line in Figure 1d shows the real position data of the mobile robot. As seen in this figure, the tracking performance of the controller is poor; the robot wavers from the path generated by the planner and the error in its final location is visible.

Figure 1. Example navigation experiment with poor tracking performance [1]

The focus of this paper is to address the type of problem seen in the above example that is encountered when implementing path planning algorithms. Specifically, the goal of this research is to test PID control in the application of ground robots following a path generated by a planning algorithm. The rest of this chapter will focus on describing the background knowledge needed throughout the paper. This includes a general introduction to different types of path planning and a description of the path planning algorithm that is used in this study. This chapter will also provide background information on PID control, tuning methods for controllers, and modeling of ground robots. The following chapters will detail the methods used in this study, the results, and a discussion of those results.

Path Planning

Creating path planning algorithms is a challenging problem because the applications of these algorithms include cluttered and often dynamic environments as well as differential constraints on the movement of the robot that will be using the algorithm [1]. Various path-planning algorithms have been created to try to solve this problem. A common type of algorithm that has been used for this problem is probabilistic searching. Two of the most widely applied algorithms to date are Probabilistic Roadmaps (PRMs) and Rapidly-exploring Random Trees (RRTs). This section will give a brief introduction to PRMs to serve as a reference. More depth will be given about RRTs because the research of this paper focuses on RRT path planning.

Probabilistic Roadmaps use local planning to try to link a random point to a nearby sample point in order to construct a roadmap of the space. Next, the algorithm uses graph search to finalize the path planning based on the locations of the initial point and the goal point. Research on this type of path planning is being continued because of the algorithm's lack of robustness in dynamic and rapidly changing environments; this algorithm attempts to conduct the entirety of the path planning before any action - any motion by the robot - occurs, and is therefore highly susceptible to environmental changes [2].

Rapidly-exploring Random Trees attempt to improve robustness to environmental changes by reducing the time needed for path planning. This is accomplished by minimizing exploration. The RRT planner expands towards regions that have been unvisited rather than reverting to areas that have already been explored by the planner. As the name suggests, the RRT expands through the formation of random trees. These trees connect to create a tree spanning from the initial location to the goal location [1]. This path planning algorithm is contained within a metric space, X, where a path is formed from an initial point, x_init, within the space. A goal point, x_goal, or a goal region, X_goal, is defined to be the end of the path. Obstacles are defined as regions X_obs. Such regions must be avoided by the path planning algorithm. Also, it is assumed that these obstacle regions are not explicitly stated.

Rather, the algorithm must check whether its current state intersects the state of the obstacle. States in X_obs can correspond to physical areas in the environment as well as velocity bounds and other differential bounds, depending on the application. The RRT algorithm creates a tree with vertices entirely contained within X_free, the complement of the X_obs space.

The figure below shows the steps taken to generate Rapidly-exploring Random Trees. The first vertex of the tree, T, designated x_init, must be an element of X_free. During each iteration of the algorithm, a random state, x_rand, is chosen from the space, X. Step 4 finds the vertex of the tree closest to this randomly selected state; this vertex is designated x_near. Next, an input, u, is chosen such that the distance from x_near to x_rand is minimized. This step also ensures that the new state is not part of X_obs. The new state, x_new, is added to T as a new vertex, and an edge is created to connect x_near to x_new. These steps then repeat to continue the progression of the tree. This algorithm can be seen in Figure 2.

Figure 2. Rapidly-exploring Random Trees (RRT) algorithm [1]
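Since the pseudocode of Figure 2 is not reproduced in this text-only copy, the following is a minimal MATLAB sketch of one way the iteration just described can be implemented for a 2D workspace. It is illustrative only: the function names, the fixed step size used as the "input" that steers toward x_rand, and the box-shaped obstacles are assumptions of this sketch, not the thesis's planner.

```matlab
function tree = rrtSketch(x_init, x_goal, obstacles, bounds, maxIter, stepSize)
% Minimal RRT sketch in a 2D workspace. Each obstacle row is an axis-aligned
% box [xmin xmax ymin ymax]; bounds = [xmin xmax ymin ymax] of the space X.
% Illustrative assumption only; not the planner used in the thesis.
    tree.vertices = x_init(:)';              % one row per vertex: [x y]
    tree.parent   = 0;                       % parent index of each vertex (root = 0)
    for k = 1:maxIter
        x_rand = [bounds(1) + rand*(bounds(2)-bounds(1)), ...
                  bounds(3) + rand*(bounds(4)-bounds(3))];   % random state in X
        d2 = sum((tree.vertices - repmat(x_rand, size(tree.vertices,1), 1)).^2, 2);
        [~, iNear] = min(d2);                % nearest existing vertex, x_near
        x_near = tree.vertices(iNear, :);
        step  = x_rand - x_near;             % move toward x_rand by at most stepSize
        x_new = x_near + stepSize * step / max(norm(step), eps);
        if segmentIsFree(x_near, x_new, obstacles)            % keep x_new out of X_obs
            tree.vertices(end+1, :) = x_new;                  % add the new vertex
            tree.parent(end+1)      = iNear;                  % edge x_near -> x_new
            if norm(x_new - x_goal(:)') < stepSize            % stop once near the goal
                return
            end
        end
    end
end

function free = segmentIsFree(a, b, obstacles)
% Crude collision check: sample points along the segment and test each box.
    free = true;
    if isempty(obstacles), return; end
    for s = linspace(0, 1, 20)
        p = a + s*(b - a);
        inBox = p(1) >= obstacles(:,1) & p(1) <= obstacles(:,2) & ...
                p(2) >= obstacles(:,3) & p(2) <= obstacles(:,4);
        if any(inBox)
            free = false;
            return
        end
    end
end
```

A path from x_init to the goal region can then be recovered by walking the parent indices backwards from the last vertex added.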

Rapidly-exploring Random Trees have been chosen for many applications, including this study, because of the algorithm's advantages. First, the expansion of an RRT is biased towards unexplored regions of space. The RRT quickly expands in a few directions, and then begins to branch off from these few initial branches. It was found that trees formed by random vertices and random inputs actually had a strong bias towards exploring places that were already occupied by the tree; a similar bias towards "familiar" places was suffered by random walk algorithms. Another advantage of RRT is that path planning algorithms using this method can be constructed without using the traditional approach of two prescribed states between which the algorithm navigates. This makes RRT more versatile and broadens the types of applications that these path planning algorithms can be used for. Lastly, the RRT algorithm is relatively simple. The number of edges is kept to a minimum while ensuring that the tree remains fully connected at all times. This makes it easier to use than probabilistic roadmaps [3].

The popularity of the RRT algorithm has led to further research to improve its accuracy and efficiency. An example of an improvement is dynamic-domain RRTs that allow navigation through narrow passageways [4]. Also, the algorithm has been improved to direct tree expansion towards areas of maximum utility in order to improve efficiency in finding the shortest path to the goal [5]. Additionally, there have been developments in a spline-based RRT algorithm created to attempt to solve the problems associated with the discretization of the action and time interval that is seen with traditional RRT algorithms [1]. One such development of this algorithm is the Optimal RRT, known as RRT*. This specific adaptation of the RRT algorithm was chosen to be used in this paper for the testing of tracking performance of a path following controller.

The RRT* algorithm is a combination of Rapidly-exploring Random Trees (RRT) and Rapidly-exploring Random Graphs (RRG). The RRG algorithm is similar to the RRT algorithm that is explained above. The main difference is that each time a new state, x_new, is added to the tree, new connections are attempted from all other vertices within a defined radius. Successful connections create a new edge. From this, it can be seen that the tree of the RRT algorithm is a subgraph of the RRG graph, where the RRG graph is undirected and can possibly contain cycles connecting a set of vertices that is shared with the RRT tree [6]. The differences between the two algorithms can be seen when comparing the figure above to the steps of the RRG algorithm, seen in Figure 3.

Figure 3. Rapidly-exploring Random Graphs (RRG) algorithm [6]

The extra connections that the RRG algorithm makes have a distinct advantage: they can find shorter paths that the RRT algorithm would not have taken. The main disadvantage of the RRG algorithm, though, is that these extra connections can create unwanted cycles. The RRT* algorithm seeks to improve upon the RRG algorithm by avoiding the formation of cycles. The avoidance of cycles is accomplished by the RRT* algorithm by removing redundant edges. If an edge is not contained within the shortest path from the initial location to the vertex of interest, that edge is removed. Similar to the RRT algorithm, the RRT* algorithm is a subgraph of the graph created using RRG. The added part of the RRT* algorithm allows it to find any shorter paths that were missed by the RRT algorithm while removing any extraneous paths and cycles from the RRG algorithm. This means that the RRT* algorithm is effectively an improvement of the RRT algorithm that ensures that all vertices are reached through a minimum-cost path [6]. Figure 4 shows the steps of this algorithm and the similarities it has with RRT and RRG.

Figure 4. Optimal Rapidly-exploring Random Trees (RRT*) algorithm [6]

In this study, the RRT* path-planning algorithm is used. Related work is being conducted to test and improve this path-planning algorithm in dynamic environments and in the application of multiple coordinating robots. This study is focused on developing a controller to be used in the experimental implementation of robots following a path that is generated by the RRT* path-planning algorithm.

PID Control

Automated systems use a variety of control systems. One of the most widely used controllers is the proportional-integral-derivative (PID) controller. This controller relies on a feedback loop in order to continually calculate an error value. The error value is the difference between the measured or observed value of an output variable and the desired value of that variable. This error value is then used to adjust a control variable, which causes a change in the output variable; the output value is continually adjusted to attempt to match the desired value.

Examples of output variables that are measured for these controllers are the temperature of an oven or the speed of a car. In these examples, the control variables - that which is being adjusted to affect the output variable - are the car's throttle and the power supplied to the oven's heating element. The input variables for these situations are the desired speed of the car and the desired temperature of the oven.

As the name suggests, a PID controller consists of three parts: the proportional term, the integral term, and the derivative term. Depending on the application, controllers can consist of some or all of these terms. For certain applications, proportional control (a P controller) is sufficient; for others, two terms, such as a PD or PI controller, are sufficient. The following will explain the uses and advantages of each of the PID terms.

The output of the proportional term is equal to the current error value multiplied by some proportionality constant. This constant is called the proportional gain constant, and is designated K_p. This term can be easily understood using the example of cruise control. In this example, the error is the difference between the actual speed of the car and the desired speed (the speed set by the driver). If the car is going much slower than the set speed, the error value will be high. Therefore, the output of the proportional term will be high. This will cause the throttle to be opened significantly, as opposed to the small change in throttle position that would be caused by a small error. The amount of throttle change can be adjusted through the proportional gain constant. A value of K_p that is too small may cause system responses to be too small; e.g. a small increase in the throttle may not increase the speed of the car. Increasing K_p can bring the output value to the desired value faster. However, a K_p value that is too high may cause a system to become unstable.

While theory shows that using proportional gain will cause the output to converge to the desired value, real systems often have offset error when only proportional gain is used. This steady state error can be eliminated by adding an integral term to the controller. The output of the integral term is proportional to the magnitude and duration of the error. This term's output is equal to an integral gain, K_i, multiplied by the time integral of the instantaneous error; the integral value is the sum of the accumulated offset for which the proportional term did not compensate.

By integrating the error, this term eliminates the steady state error left by the proportional term. However, the integral term also accelerates the output variable towards the desired value, which can cause overshoot.

The last part of the PID controller is the derivative term. The output of this term is proportional to the rate of change of the error value in time. Multiplying the time derivative of the error by the derivative gain, K_d, gives the derivative term's output. Adding a derivative term to a controller improves the stability of the system and decreases settling time. However, derivative terms are clairvoyant in theory; they rely on future values of the output variable in order to calculate the derivative of the error value. In real systems, a slight time delay is used in order to implement the derivative term in a controller [7].

As will be seen in Chapter 2, this study develops a PID controller based on the cross-track error of a ground robot relative to a path generated by the RRT* path-planning algorithm. It will also be seen that a second PID controller based on the orientation error of the ground robot was developed in this study to improve upon the performance achieved with the first PID controller.
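The three terms combine into a simple discrete-time update. The sketch below is a generic MATLAB implementation of such a loop and is only illustrative; the error signal e and the sample time dt stand in for whatever error and time step a particular controller uses, and the function is not taken from the thesis appendices.

```matlab
% Generic discrete-time PID update (illustrative sketch, not the thesis code).
% e  : current error value
% dt : time step between controller updates
% s  : struct carrying the running integral and the previous error
function [u, s] = pidStep(e, dt, Kp, Ki, Kd, s)
    s.integral  = s.integral + e*dt;             % I term: accumulated error
    derivative  = (e - s.prevError)/dt;          % D term: backward difference
    u = Kp*e + Ki*s.integral + Kd*derivative;    % controller output
    s.prevError = e;
end
```

A caller would initialize s.integral = 0 and s.prevError = 0 and then call pidStep once per control cycle; with K_i set to zero this reduces to the PD form used in the simulations later in this thesis.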

Controller Tuning

As described in the previous section, PID controllers contain three different gains: the proportional gain, K_p; the integral gain, K_i; and the derivative gain, K_d. Because these three constants have the freedom to be adjusted to any number, there are infinite combinations of gains. However, a successful PID controller must be tuned such that the gains cause a desired output. This output can be defined in many ways; the output of a system must be stable, must settle within a reasonable or specified time, and must not overshoot the desired value by too much. Figure 5 shows multiple examples of system outputs - each with different gains - due to a step input; the trade-offs between overshoot, speed of the response, and stability can be seen [8].

Figure 5. Sample system responses with varying degrees of stability and response times [8]

In order to create the desired output, the gains of the PID controller must be tuned to find the optimal combination of gain constants. There are several experimental methods for tuning PID controller gains. These methods are experimental in that they require tests on the process in order to tune the gains, as opposed to mathematical tuning methods that find PID gain values directly from a transfer function without experimentation. Two common methods that are used for tuning are the Ziegler-Nichols method and the Good Gain method.

The Ziegler-Nichols method involves a few basic steps to determine the gain constants. First, K_i and K_d are set to zero, creating the equivalent of a P controller. K_p is then increased until the output exhibits stable, consistent oscillations. The value of K_p that creates these oscillations is called the ultimate gain, and is designated K_u. This ultimate gain value is then used to determine the gain constants. The formulas used to find controller parameters are seen to be different depending on the source, application, and desired output.

Table 1 shows some of the formulas that are commonly used to determine the controller parameters [9].

Table 1. Formulas used to find constants through Ziegler-Nichols tuning method [9]

                   K_p        T_i       T_d
P Controller       U/LR       Inf       0
PI Controller      0.9U/LR    3.3L      0
PID Controller     1.2U/LR    2L        0.5L (= T_i/4)

These formulas are based on the response to a step input of amplitude U. The variables that are used to calculate the constants in this example are the slope of the response, R, and the lag time, L. These variables are seen graphically in Figure 6.

Figure 6. Step response showing variables used to calculate tuning constants [9]

The Ziegler-Nichols method is a powerful tool because it employs a straightforward, practical method of determining the parameters of a PID controller. While there exists some variability in the formulas used to determine gain values, the Ziegler-Nichols method provides strong disturbance rejection. The main disadvantage of the Ziegler-Nichols method is that it yields an aggressive proportional gain, which causes high overshoot. While this overshoot is acceptable for some applications, others require overshoot to be minimal.
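The step-response formulas in Table 1 translate directly into code. The snippet below is a small illustrative MATLAB helper written from the table above rather than taken from the thesis; U, L, and R are the step amplitude, lag time, and response slope measured from a plot like Figure 6, and the conversion K_i = K_p/T_i and K_d = K_p*T_d is the standard parallel-form relationship, which is an assumption about the controller form rather than something stated in the thesis.

```matlab
% Ziegler-Nichols open-loop (step response) PID gains from Table 1.
% U : amplitude of the applied step input
% L : lag time read from the step response
% R : slope of the step response
% Illustrative helper; not part of the thesis appendices.
function [Kp, Ki, Kd] = zieglerNicholsStep(U, L, R)
    Kp = 1.2*U/(L*R);        % PID row of Table 1
    Ti = 2*L;                % integral time
    Td = 0.5*L;              % derivative time (= Ti/4)
    Ki = Kp/Ti;              % parallel-form integral gain
    Kd = Kp*Td;              % parallel-form derivative gain
end
```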

The Good Gain method attempts to improve upon the Ziegler-Nichols method by using a less aggressive gain to give the control loop better stability. The first step is to create the equivalent of a P controller by setting K_i and K_d equal to zero. Next, K_p is increased until overshoot is seen, as opposed to the stable, consistent oscillations seen in the Ziegler-Nichols method. Ideally, a K_p value should be found such that there is some overshoot with barely any visible undershoot. Next, the integral time T_i is set equal to 1.5 times T_ou, where T_ou is the time between the overshoot and the undershoot of the P controller response. The addition of the I-term will change the output. Therefore, a common adjustment is to reduce the K_p value to 80% of its original value. Lastly, the K_d value is found using the T_i-T_d relationship used in the Ziegler-Nichols method [10]. While the Good Gain method improves stability, it is predominantly used for PI controllers and does not include strong methodology for finding the derivative gain constant; the Good Gain method just borrows relationships from the Ziegler-Nichols method to find K_d.

In this study, the PID controllers are tuned using a manual method similar to the Good Gain method. This method primarily focuses on increasing K_p until some overshoot is seen with very little undershoot following the overshoot. Using this K_p value, a reasonable K_d is easily found manually to create a stable, desirable response of the system.

Robot Modeling

To simulate a PID controller, the system that is being controlled must be modeled. The system models that this study focuses on are for ground robots. The chosen ground robots for this study are two-wheeled, differential drive robots. Two assumptions are made in modeling the ground robot: each wheel has a single point of contact with the ground, and the wheels do not slip when rolling on the ground. With these assumptions, a variety of models can be chosen, depending on the constraints of the problem and the real world situation being represented.

The most basic model is the unicycle model. With inputs of velocity and angular velocity, the unicycle model considers the robot as a whole and is only concerned with general movement. This model does not take into consideration the actual dynamics of the robot and its means of maneuvering, yet it can still be used for modeling most ground robots [11].

The equations of motion for this unicycle model are seen in Figure 7.

Figure 7. Unicycle Model of mobile ground robots [11]

To create a more accurate model than the unicycle, two-wheeled representations are used. There are two types of two-wheeled models: the bicycle model, with two wheels in line, and the differential drive model, with two wheels side by side on a single axle. The bicycle model is well suited for simulating four wheeled vehicles with front wheel steering. The differential drive model, unlike a car, has a zero turning radius, although it can be constrained to behave like the bicycle model or a four wheeled vehicle. In this study, the differential drive model is used, although the bicycle model could easily be substituted into the simulations for other applications [11].

Figure 8. Differential Drive Model for mobile ground robots [11]
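Figures 7 and 8 are not reproduced in this text-only copy, so the block below restates the standard textbook form of the two models for reference. The notation (v and ω for the body velocities, φ for heading, r for wheel radius, ℓ for the distance between the wheels, ω_r and ω_l for the wheel speeds) is a common convention and may differ from the symbols used in the original figures.

```latex
% Unicycle model: inputs are the forward velocity v and angular velocity omega.
\dot{x} = v\cos\varphi, \qquad
\dot{y} = v\sin\varphi, \qquad
\dot{\varphi} = \omega

% Differential drive model: v and omega are computed from the wheel speeds.
v = \frac{r\,(\omega_r + \omega_l)}{2}, \qquad
\omega = \frac{r\,(\omega_r - \omega_l)}{\ell}
```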

As seen in Figure 8, the differential drive model has many similarities to the unicycle model. However, where the unicycle model uses velocity as an input, this model calculates the velocity to be the average of the right and left wheel velocities. Additionally, the unicycle model receives angular velocity as an input; the differential drive model calculates this value using the speeds of the wheels. It can also be seen in Figure 8 that this model depends on the specifications of the robot, specifically the distance between the two wheels and the radius of the wheels. This differential drive model is used in the simulations in order to obtain results consistent with the implementation, which uses a two-wheeled differential drive robot. The MATLAB code used to model the differential drive robot can be found in Appendix A.
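As a complement to Appendix A (which is not reproduced here), the sketch below shows one way such a differential drive model can be written as a state derivative function and integrated with ode45. The wheel radius, wheelbase, and constant wheel speeds are placeholder values chosen only for illustration, and the code is not the thesis's actual model.

```matlab
% diffDriveDemo.m - integrate a differential drive model with ode45 and plot
% the resulting path. Illustrative sketch only; Appendix A may differ.
function diffDriveDemo
    r   = 0.02;                   % placeholder wheel radius (m)
    ell = 0.09;                   % placeholder distance between the wheels (m)
    q0  = [0; 0; 0];              % initial state [x; y; phi]
    % Constant wheel speeds (rad/s) chosen only to produce a gentle arc.
    [~, q] = ode45(@(t,q) diffDriveModel(q, 6, 4, r, ell), [0 5], q0);
    plot(q(:,1), q(:,2)); axis equal; xlabel('x (m)'); ylabel('y (m)');
end

function dq = diffDriveModel(q, omega_r, omega_l, r, ell)
    v     = r*(omega_r + omega_l)/2;    % forward velocity: average of the wheel speeds
    omega = r*(omega_r - omega_l)/ell;  % angular velocity from the wheel speed difference
    dq = [v*cos(q(3));                  % x-dot
          v*sin(q(3));                  % y-dot
          omega];                       % phi-dot
end
```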

Chapter 2
Methods

The modeling and simulation in this study rely predominantly on the use of MATLAB. The differential equation solvers contained within MATLAB, as well as the general functionality, make it well suited for this application. Specifically, the differential equation solver ode45 was chosen for a few reasons: it is flexible and applicable to many situations; it provides improved accuracy as compared to the ode23 solver; and it is computationally fast compared to some of the other choices that MATLAB offers.

Calculating Error Term

This study focuses on the use of PID control for the application of autonomous control of ground robots. As seen in the background section, PID control relies on an error term to calculate an output variable. For the control of ground robots, there are a few choices of error terms as well as countless ways to calculate the error. Two of the most common types of errors for ground robot applications are distance to a goal location and cross-track error. In the first case, the distance between the robot and a goal location is to be driven to zero. For the application of a path following robot, a set of goal locations can be defined, and the distance-to-goal error term can be used to sequentially reach goal locations until arriving at the final goal location. This type of error is computationally simple; only the robot's location and the goal location are needed. However, it does not constrain the direction of approach towards the goal; therefore, obstacles near the desired path could obstruct a robot using this type of error.

Because of this potential for collision with obstacles when using distance-to-goal error, this study uses cross-track error. Cross-track error is the normal distance between the line of desired travel and the location of the robot. In the case of a car driving down a one way road, the cross-track error is the car's distance from being centered on the road. This cross-track error can be determined by various means. In many applications, LiDAR and/or RADAR sensors are used to detect this distance.

This study uses the absolute location of the robot to calculate the cross-track error. This method was chosen because of the implementation equipment in the lab; a motion capture system is used to provide the coordinate position of the ground robots being tested. This choice allows for easier implementation and lower cost because additional sensors are not needed.

The error term that is calculated in this study is the normal distance between the current location of the robot and the line formed by the path planning algorithm. The path generated by the algorithm is a sequence of points in the xy coordinate plane. In order to calculate the error, the controller uses only the segment of this path that is close to the location of the robot. Figure 9 shows graphically the line segment from the generated path, the location of the robot, and the error distance that is being calculated.

Figure 9. Cross-track error represented visually as the distance d [12]

In this figure, the location of the robot is defined as (x_o, y_o). The cross-track error, which is defined as the normal distance between the location of the robot and the desired path of travel, is labeled as the distance d. The line segment of the generated path is written in the general line form ax + by + c = 0, where the constants are calculated from two consecutive points of the path. The definitions of these constants are seen in Figure 10, where (x_{i-1}, y_{i-1}) is the starting point of the line segment and (x_i, y_i) is the ending point of the line segment [12].

Figure 10. Equations for calculating constants of path line segment
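Figures 10 and 11 are not reproduced here, so the sketch below shows the standard point-to-line calculation that the text describes: the constants a, b, and c are formed from two consecutive path points, and the distance d follows from them. It is an illustrative MATLAB helper written from the description above, not the code in Appendix B, and the sign convention on d is an assumption of this sketch.

```matlab
% Cross-track error of the robot at (xo, yo) relative to the path segment
% from (x1, y1) to (x2, y2). Illustrative helper based on the description of
% Figures 10 and 11; Appendix B may compute the same quantity differently.
function d = crossTrackError(xo, yo, x1, y1, x2, y2)
    % Constants of the line a*x + b*y + c = 0 through the two path points.
    a = y2 - y1;
    b = x1 - x2;
    c = x2*y1 - x1*y2;
    % Signed normal distance from the robot to the line. The sign indicates
    % which side of the segment the robot is on (a convention assumed here);
    % abs(d) is the distance d shown in Figure 9.
    d = (a*xo + b*yo + c) / sqrt(a^2 + b^2);
end
```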

The calculation of the distance d provides the cross-track error for the given line segment and any location of the robot (x_o, y_o). This distance d is calculated using the location of the robot (x_o, y_o) and either the two points that define the line segment or the constants a, b, and c that define that line. Ultimately, both of these calculations of d come from projecting vector r, defined from point (x_o, y_o) to (x_{i-1}, y_{i-1}), onto vector v, defined to be perpendicular to the line segment defined by (x_{i-1}, y_{i-1}) and (x_i, y_i). The calculation of the value of d, using both the two points and the constants a, b, and c that define the line segment, can be seen in Figure 11.

Figure 11. Calculation of distance d, representing the cross-track error [12]

Goal Radius

The previous section showed that cross-track error is calculated using a line segment defined by the points (x_{i-1}, y_{i-1}) and (x_i, y_i), which are consecutive points in the sequence of points defined by the path planning algorithm. The goal of the controller is to reduce this cross-track error to zero quickly, long before the robot arrives at point (x_i, y_i), the end of the line segment. When arrival at this ending point is achieved, the controller must calculate a new line segment for the robot to follow. This is achieved by incrementing i, thus changing the end point of the previous line segment to be the starting point of the new line segment. The next sequential point defined by the path planning algorithm then becomes the end point of the new line segment.

Ideally, the transition between using one line segment and the next line segment in the error calculation will occur when the robot arrives at the end point. In practice, this is impossible for both simulation and implementation.

This criterion would require that the location of the robot and the location of the end point of the line segment be exactly equal; the more precise the measurement of these locations is, the more likely it is that they will never be exactly equal. If the robot is not able to achieve this exact location, a new line segment will never be defined and the robot will continue forever in the direction of its last known line segment. In order to prevent this, a goal radius is created. The new line segment of the path is created when the location of the robot is within a threshold distance of the end point of the old line segment. To achieve this, the distance between the location of the robot and the end point of the line segment is continuously calculated. When this distance falls below the threshold that is set as the goal radius, the new line segment is created.

Choosing a value for the goal radius can be difficult. If the chosen value is too small, the robot may never achieve the goal; the location of the robot will never be within the goal radius of the end point and the robot will again continue forever in the direction of its last known line segment. A goal radius that is too large can also cause problems. Because arrival within the goal radius constitutes successful arrival at the end point, the location of the robot can have a maximum error value equal to the goal radius while still being considered a success. Therefore, a large goal radius value can lead to a larger error being considered acceptable behavior; the controller can be seen as successful because the robot is achieving each goal location when in reality the error is quite large.

A second potential problem with a large goal radius is the collision of the robot with obstacles. The robot will move onto a new line segment at a distance equal to the goal radius away from the end point of the previous line segment. If this distance is large, the robot may potentially cut corners. Sharp angles between one line segment and the next in the planned path are most likely due to an obstacle in the environment that the robot must avoid. A large goal radius will cause the robot's error to be greater around the connecting points of line segments, especially if the line segments form a sharp angle. This increased error increases the likelihood that the robot collides with an obstacle in the environment.
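The waypoint-switching logic described above reduces to a simple distance check. The helper below is an illustrative MATLAB fragment, not taken from the appendices; path is assumed to be the N-by-2 list of points produced by the planner, and i the index of the current segment's end point.

```matlab
% Advance to the next path point once the robot is inside the goal radius.
% path is the N-by-2 list of planner points, i the index of the current
% segment's end point. Illustrative helper; the variable names are assumptions.
function i = advanceSegment(i, xo, yo, path, goalRadius)
    if i < size(path, 1) && norm([xo, yo] - path(i, :)) < goalRadius
        i = i + 1;    % the old end point becomes the start of the new segment
    end
end
```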

Heading Error

The PID controller designed using the cross-track error and goal radius described above provided strong tracking performance for most situations. These results are shown in the next section. One problem that was encountered, though, was tracking performance when the heading of the robot and the direction of the line segment were vastly different. This large difference in headings occurred in two situations: when the starting orientation of the robot faced generally away from the desired starting direction of travel, and when the path contained sharp turns. The next chapters will show and discuss the performance in this situation. However, it is in this section that the method of the solution will be discussed.

To combat this problem, a second PID controller was developed. This controller, however, does not use position as its variable of interest as the previous controller did. Rather, this controller uses direction, or angular orientation, as its output. Therefore, the error value for this controller is calculated as the difference between the actual orientation of the robot and its desired orientation. In order to calculate this error, the heading of the line segment and the heading of the robot must be known. The heading of the robot is known in both the simulation and the implementation; phi is an output of the differential equation solver of the model, and it is an output of the motion capture system. The heading of the desired line of travel is not given as easily. Rather, it must be calculated using the points given by the path planning algorithm.

When calculating the heading error value, the instinct is to find the difference between the heading angle (either in radians or degrees) of the robot and the heading angle of the line segment. While this approach works for many cases, it causes problems when the direction vectors are close but the heading angles are vastly different. For example, two vectors at heading angles of 10 degrees and 350 degrees are only twenty degrees apart, but the difference of these angles is 340 degrees. A similar problem is seen in an example where one heading angle is 170 degrees and the other is negative 170 degrees. To calculate a reliable error value, the angle measured from the desired heading to the actual heading must be found. This can be found by using vector representations of the headings.

It is evident from the above sections that the heading vector of the robot can be easily found using the theta value that is output from the differential equation solver or motion capture system. It is also evident that the heading vector of the desired line of travel is formed by the constants a and b that define the line segment. The angle between these vectors can then be found using the dot product, as seen in Figure 12, where the x vector represents the direction of the robot and the y vector is the desired direction.

Figure 12. Heading error magnitude calculation using dot product

The theta value seen in the above figure represents the magnitude of the heading error and ranges from zero to 180 degrees. This value is always positive, though, failing to indicate the direction of theta. The error value of the controller must be signed in order to indicate the direction of compensation, i.e. the direction the robot must turn to reduce the error. This theta value is assigned a positive or negative value using the cross product. Using the notation of Figure 12, the cross product x cross y will be positive if theta, measured from the direction of the robot to the desired direction, is counterclockwise; this cross product will be negative if theta from x to y is in the clockwise direction. The sign of the cross product in conjunction with the magnitude of theta from the dot product provides an error value that can then be used by a PID controller to compensate for error in the robot's orientation.
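A direct MATLAB translation of this dot-product and cross-product construction is sketched below. It is written from the description above (Figure 12 itself is not reproduced), and forming the desired-direction vector from the segment endpoints rather than from the constants a and b is an assumption of this sketch; Appendix C may differ.

```matlab
% Signed heading error between the robot heading phi and the direction of the
% path segment from (x1, y1) to (x2, y2). Illustrative sketch of the dot/cross
% product construction described in the text; not the thesis's code.
function err = headingError(phi, x1, y1, x2, y2)
    x = [cos(phi), sin(phi)];                    % robot heading vector
    y = [x2 - x1, y2 - y1];                      % desired direction along the segment
    y = y / max(norm(y), eps);
    theta  = acos(max(min(dot(x, y), 1), -1));   % magnitude from the dot product, 0..pi
    crossz = x(1)*y(2) - x(2)*y(1);              % z-component of the 2D cross product
    if crossz >= 0
        err = theta;                             % turn counterclockwise to reduce the error
    else
        err = -theta;                            % turn clockwise to reduce the error
    end
end
```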

Chapter 3
Evaluation

Simulation Results

This section presents the simulation results of the developed controllers. Both the cross-track error PID controller and the controller with added heading compensation were developed in MATLAB and tested using the differential drive model. The cross-track error PID controller can be found in Appendix B, and the controller with the added heading compensation can be found in Appendix C.

The following figures present the tracking performance of the simulated differential drive robot for various conditions. The first condition that is tested is similar to a step input, which is used to investigate the basic performance. The second condition involves shallow angled turns with long travel between checkpoints. This is represented by path 1. The third situation tests the performance through the sharp turns of path 2. The last situation tested involves shallow angles with a short distance between checkpoints. This situation is modeled using path 3, and is most representative of the sequence of points that is created by the RRT* path-planning algorithm. In addition to the four situations listed above, this study tests the performance of both controllers with different controller gains.

It will be seen that throughout these simulations, the value of K_i is set to zero. The integral term is used to reduce steady-state error, which, in the simulations, can be seen to be minimal when using a PD controller. Therefore, the integral term is not needed to achieve strong tracking performance. The role that this integral term plays in real world applications will be addressed in the next chapter.

In all of the following figures, the desired path of travel, representing the generated path from the RRT* algorithm, is represented by the dotted line in the xy plane. The solid line shows the actual path taken by the simulated robot.

Cross-Track Error Controller

The first two figures in this section, Figures 13 and 14, show the performance of the simulated robot for the equivalent of a step input. In these two figures, the simulated robot begins with the same orientation as the desired path of travel and a finite cross-track error. The behavior of the robot is evaluated using two different sets of reasonable gain values that were found using the tuning method described earlier.

Figure 13. Step input performance for cross-track error controller, K_p = 0.5 and K_d = 0.5

Figure 14. Step input performance for cross-track error controller with K_p = 3 and K_d = 1

In the previous two figures, the equivalent of a step input was used, such that the robot was given a starting orientation aligned with the desired path of travel while the cross-track error was finite. In the following figures, an inverted situation is tested. In this situation, the robot must make a right angle turn. When the robot arrives at the turn, the cross-track error is equal to the goal radius while the orientation has an error of 90 degrees from the desired direction of travel. Figures 15 and 16 show that the performance for the right hand and left hand turns is identical, although mirrored. Therefore, the following examples show only left hand turns.

Figure 15. Left 90 degree turn with cross-track error controller with K_p = 5 and K_d = 3

Figure 16. Right 90 degree turn with cross-track error controller with K_p = 5 and K_d = 3

Figure 17. Left 90 degree turn with cross-track error controller with K_p = 3 and K_d = 1

Figure 18. Left 90 degree turn with cross-track error controller with K_p = 0.5 and K_d = 0.5

Figure 19 highlights the performance of the robot at the transition between the first and second line segments of a left 90 degree turn. This figure shows the main problem of using a controller based purely on cross-track error. This problem, and its solution, will be discussed in the following chapter.

Figure 19. Left 90 degree turn transition, cross-track error controller, K_p = 5 and K_d = 3

Figures 20 and 21 test the tracking performance of the robot with the first simulated path. This path, named path 1, represents a possible path generated by the RRT* path-planning algorithm that involves shallow angled turns with long travel between checkpoints. The situation is tested with multiple combinations of controller gain values in order to provide insight into the effects of the gains on the tracking performance.

Figure 20. Path 1 performance for cross-track error controller with K_p = 0.5 and K_d = 0.5

Figure 21. Path 1 performance for cross-track error controller with K_p = 3 and K_d = 1

Figures 22 and 23 show the tracking performance of the robot with path 2, a path with sharp turns. The situation is tested with multiple combinations of controller gain values in order to provide insight into the effects of the gains on the tracking performance.

Figure 22. Path 2 performance for cross-track error controller with K_p = 0.5 and K_d = 0.5

Figure 23. Path 2 performance for cross-track error controller with K_p = 3 and K_d = 1

The tracking performance of the robot for path 3 is shown in Figures 24 and 25. This path is the closest representation of the path that is expected to be generated by the RRT* path-planning algorithm. The turns of path 3 are generally shallow and the distance between checkpoints is small.

Figure 24. Path 3 performance for cross-track error controller with K_p = 0.5 and K_d = 0.5

Figure 25. Path 3 performance for cross-track error controller with K_p = 3 and K_d = 1

Cross-Track Error Controller with Heading Compensation

This section shows the simulated results of the controller after the heading compensation is added. The following figures show the tracking performance for the same situations seen in Figures 17 through 25 of the previous section. Specifically, this section shows the performance for a 90 degree turn, and the performance for path 1, path 2, and path 3. All of these paths are tested with different controller gain values to show the impact that the gains have on the performance. These results will be discussed in the next chapter.

Figure 26. Left 90 degree turn for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.3 and K_d =

Figure 27. Left 90 degree turn with cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =

Figure 28. Left 90 degree turn transition with cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =

Figure 29. Path 1 for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.5 and K_d =

Figure 30. Path 1 for cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =

Figure 31. Path 2 for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.5 and K_d =

Figure 32. Path 2 for cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =

Figure 33. Path 3 for cross-track error controller with K_p = 1 and K_d = and heading error controller with K_p = 0.5 and K_d =

Figure 34. Path 3 for cross-track error controller with K_p = 5 and K_d = and heading error controller with K_p = 0.3 and K_d =


More information

Spring 2016, Robot Autonomy, Final Report (Team 7) Motion Planning for Autonomous All-Terrain Vehicle

Spring 2016, Robot Autonomy, Final Report (Team 7) Motion Planning for Autonomous All-Terrain Vehicle Spring 2016, 16662 Robot Autonomy, Final Report (Team 7) Motion Planning for Autonomous All-Terrain Vehicle Guan-Horng Liu 1, Samuel Wang 1, Shu-Kai Lin 1, Chris Wang 2, and Tiffany May 1 Advisors: Mr.

More information

Elastic Bands: Connecting Path Planning and Control

Elastic Bands: Connecting Path Planning and Control Elastic Bands: Connecting Path Planning and Control Sean Quinlan and Oussama Khatib Robotics Laboratory Computer Science Department Stanford University Abstract Elastic bands are proposed as the basis

More information

Sampling-based Planning 2

Sampling-based Planning 2 RBE MOTION PLANNING Sampling-based Planning 2 Jane Li Assistant Professor Mechanical Engineering & Robotics Engineering http://users.wpi.edu/~zli11 Problem with KD-tree RBE MOTION PLANNING Curse of dimension

More information

Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3

Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3 Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3 Computer Science 4766/6778 Department of Computer Science Memorial University of Newfoundland January 28, 2014 COMP 4766/6778 (MUN) Kinematics of

More information

6.141: Robotics systems and science Lecture 10: Implementing Motion Planning

6.141: Robotics systems and science Lecture 10: Implementing Motion Planning 6.141: Robotics systems and science Lecture 10: Implementing Motion Planning Lecture Notes Prepared by N. Roy and D. Rus EECS/MIT Spring 2011 Reading: Chapter 3, and Craig: Robotics http://courses.csail.mit.edu/6.141/!

More information

AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR

AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR Bijun Lee a, Yang Wei a, I. Yuan Guo a a State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University,

More information

Robot Motion Planning

Robot Motion Planning Robot Motion Planning James Bruce Computer Science Department Carnegie Mellon University April 7, 2004 Agent Planning An agent is a situated entity which can choose and execute actions within in an environment.

More information

Attack Resilient State Estimation for Vehicular Systems

Attack Resilient State Estimation for Vehicular Systems December 15 th 2013. T-SET Final Report Attack Resilient State Estimation for Vehicular Systems Nicola Bezzo (nicbezzo@seas.upenn.edu) Prof. Insup Lee (lee@cis.upenn.edu) PRECISE Center University of Pennsylvania

More information

Adaptive Cruise Control

Adaptive Cruise Control Teacher Notes & Answers 7 8 9 10 11 12 TI-Nspire Investigation Student 50 min Introduction Basic cruise control is where the car s computer automatically adjusts the throttle so that the car maintains

More information

Build and Test Plan: IGV Team

Build and Test Plan: IGV Team Build and Test Plan: IGV Team 2/6/2008 William Burke Donaldson Diego Gonzales David Mustain Ray Laser Range Finder Week 3 Jan 29 The laser range finder will be set-up in the lab and connected to the computer

More information

Algorithms for Sensor-Based Robotics: Sampling-Based Motion Planning

Algorithms for Sensor-Based Robotics: Sampling-Based Motion Planning Algorithms for Sensor-Based Robotics: Sampling-Based Motion Planning Computer Science 336 http://www.cs.jhu.edu/~hager/teaching/cs336 Professor Hager http://www.cs.jhu.edu/~hager Recall Earlier Methods

More information

ECE276B: Planning & Learning in Robotics Lecture 5: Configuration Space

ECE276B: Planning & Learning in Robotics Lecture 5: Configuration Space ECE276B: Planning & Learning in Robotics Lecture 5: Configuration Space Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Tianyu Wang: tiw161@eng.ucsd.edu Yongxi Lu: yol070@eng.ucsd.edu

More information

Lecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial

Lecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial Lecture VI: Constraints and Controllers Parts Based on Erin Catto s Box2D Tutorial Motion Constraints In practice, no rigid body is free to move around on its own. Movement is constrained: wheels on a

More information

A Longitudinal Control Algorithm for Smart Cruise Control with Virtual Parameters

A Longitudinal Control Algorithm for Smart Cruise Control with Virtual Parameters ISSN (e): 2250 3005 Volume, 06 Issue, 12 December 2016 International Journal of Computational Engineering Research (IJCER) A Longitudinal Control Algorithm for Smart Cruise Control with Virtual Parameters

More information

Lecture VI: Constraints and Controllers

Lecture VI: Constraints and Controllers Lecture VI: Constraints and Controllers Motion Constraints In practice, no rigid body is free to move around on its own. Movement is constrained: wheels on a chair human body parts trigger of a gun opening

More information

Fast Automated Estimation of Variance in Discrete Quantitative Stochastic Simulation

Fast Automated Estimation of Variance in Discrete Quantitative Stochastic Simulation Fast Automated Estimation of Variance in Discrete Quantitative Stochastic Simulation November 2010 Nelson Shaw njd50@uclive.ac.nz Department of Computer Science and Software Engineering University of Canterbury,

More information

Lesson 1: Introduction to Pro/MECHANICA Motion

Lesson 1: Introduction to Pro/MECHANICA Motion Lesson 1: Introduction to Pro/MECHANICA Motion 1.1 Overview of the Lesson The purpose of this lesson is to provide you with a brief overview of Pro/MECHANICA Motion, also called Motion in this book. Motion

More information

Navigation methods and systems

Navigation methods and systems Navigation methods and systems Navigare necesse est Content: Navigation of mobile robots a short overview Maps Motion Planning SLAM (Simultaneous Localization and Mapping) Navigation of mobile robots a

More information

Geometric Path Planning McGill COMP 765 Oct 12 th, 2017

Geometric Path Planning McGill COMP 765 Oct 12 th, 2017 Geometric Path Planning McGill COMP 765 Oct 12 th, 2017 The Motion Planning Problem Intuition: Find a safe path/trajectory from start to goal More precisely: A path is a series of robot configurations

More information

Lecture Schedule Week Date Lecture (W: 3:05p-4:50, 7-222)

Lecture Schedule Week Date Lecture (W: 3:05p-4:50, 7-222) 2017 School of Information Technology and Electrical Engineering at the University of Queensland Lecture Schedule Week Date Lecture (W: 3:05p-4:50, 7-222) 1 26-Jul Introduction + 2 2-Aug Representing Position

More information

The Chase Problem (Part 1) David C. Arney

The Chase Problem (Part 1) David C. Arney The Chase Problem (Part 1) David C. Arney We build systems like the Wright brothers built airplanes build the whole thing, push it off a cliff, let it crash, and start all over again. --- R. M. Graham

More information

Cognitive Robotics Robot Motion Planning Matteo Matteucci

Cognitive Robotics Robot Motion Planning Matteo Matteucci Cognitive Robotics Robot Motion Planning Robot Motion Planning eminently necessary since, by definition, a robot accomplishes tasks by moving in the real world. J.-C. Latombe (1991) Robot Motion Planning

More information

Adaptive Robotics - Final Report Extending Q-Learning to Infinite Spaces

Adaptive Robotics - Final Report Extending Q-Learning to Infinite Spaces Adaptive Robotics - Final Report Extending Q-Learning to Infinite Spaces Eric Christiansen Michael Gorbach May 13, 2008 Abstract One of the drawbacks of standard reinforcement learning techniques is that

More information

Jane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute

Jane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute Jane Li Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute (3 pts) Compare the testing methods for testing path segment and finding first

More information

1498. End-effector vibrations reduction in trajectory tracking for mobile manipulator

1498. End-effector vibrations reduction in trajectory tracking for mobile manipulator 1498. End-effector vibrations reduction in trajectory tracking for mobile manipulator G. Pajak University of Zielona Gora, Faculty of Mechanical Engineering, Zielona Góra, Poland E-mail: g.pajak@iizp.uz.zgora.pl

More information

An Educational Rigid-Body Dynamics Physics Engine TJHSST Senior Research Project Proposal Computer Systems Lab

An Educational Rigid-Body Dynamics Physics Engine TJHSST Senior Research Project Proposal Computer Systems Lab An Educational Rigid-Body Dynamics Physics Engine TJHSST Senior Research Project Proposal Computer Systems Lab 2009-2010 Neal Milstein April 9, 2010 Abstract The goal of this project is to create a rigid-body

More information

CMPUT 412 Motion Control Wheeled robots. Csaba Szepesvári University of Alberta

CMPUT 412 Motion Control Wheeled robots. Csaba Szepesvári University of Alberta CMPUT 412 Motion Control Wheeled robots Csaba Szepesvári University of Alberta 1 Motion Control (wheeled robots) Requirements Kinematic/dynamic model of the robot Model of the interaction between the wheel

More information

CS 4649/7649 Robot Intelligence: Planning

CS 4649/7649 Robot Intelligence: Planning CS 4649/7649 Robot Intelligence: Planning Probabilistic Roadmaps Sungmoon Joo School of Interactive Computing College of Computing Georgia Institute of Technology S. Joo (sungmoon.joo@cc.gatech.edu) 1

More information

Motion Planning. Howie CHoset

Motion Planning. Howie CHoset Motion Planning Howie CHoset Questions Where are we? Where do we go? Which is more important? Encoders Encoders Incremental Photodetector Encoder disk LED Photoemitter Encoders - Incremental Encoders -

More information

A Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles

A Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 54 A Reactive Bearing Angle Only Obstacle Avoidance Technique for

More information

A Simplified Vehicle and Driver Model for Vehicle Systems Development

A Simplified Vehicle and Driver Model for Vehicle Systems Development A Simplified Vehicle and Driver Model for Vehicle Systems Development Martin Bayliss Cranfield University School of Engineering Bedfordshire MK43 0AL UK Abstract For the purposes of vehicle systems controller

More information

SPARTAN ROBOTICS FRC 971

SPARTAN ROBOTICS FRC 971 SPARTAN ROBOTICS FRC 971 Controls Documentation 2015 Design Goals Create a reliable and effective system for controlling and debugging robot code that provides greater flexibility and higher performance

More information

Motion Planning for Humanoid Robots

Motion Planning for Humanoid Robots Motion Planning for Humanoid Robots Presented by: Li Yunzhen What is Humanoid Robots & its balance constraints? Human-like Robots A configuration q is statically-stable if the projection of mass center

More information

Particle Filter Localization

Particle Filter Localization E190Q Autonomous Mobile Robots Lab 4 Particle Filter Localization INTRODUCTION Determining a robots position in a global coordinate frame is one of the most important and difficult problems to overcome

More information

NAVIGATION AND RETRO-TRAVERSE ON A REMOTELY OPERATED VEHICLE

NAVIGATION AND RETRO-TRAVERSE ON A REMOTELY OPERATED VEHICLE NAVIGATION AND RETRO-TRAVERSE ON A REMOTELY OPERATED VEHICLE Karl Murphy Systems Integration Group Robot Systems Division National Institute of Standards and Technology Gaithersburg, MD 0899 Abstract During

More information

Ground Plane Motion Parameter Estimation For Non Circular Paths

Ground Plane Motion Parameter Estimation For Non Circular Paths Ground Plane Motion Parameter Estimation For Non Circular Paths G.J.Ellwood Y.Zheng S.A.Billings Department of Automatic Control and Systems Engineering University of Sheffield, Sheffield, UK J.E.W.Mayhew

More information

Path Planning. Marcello Restelli. Dipartimento di Elettronica e Informazione Politecnico di Milano tel:

Path Planning. Marcello Restelli. Dipartimento di Elettronica e Informazione Politecnico di Milano   tel: Marcello Restelli Dipartimento di Elettronica e Informazione Politecnico di Milano email: restelli@elet.polimi.it tel: 02 2399 3470 Path Planning Robotica for Computer Engineering students A.A. 2006/2007

More information

S-SHAPED ONE TRAIL PARALLEL PARKING OF A CAR-LIKE MOBILE ROBOT

S-SHAPED ONE TRAIL PARALLEL PARKING OF A CAR-LIKE MOBILE ROBOT S-SHAPED ONE TRAIL PARALLEL PARKING OF A CAR-LIKE MOBILE ROBOT 1 SOE YU MAUNG MAUNG, 2 NU NU WIN, 3 MYINT HTAY 1,2,3 Mechatronic Engineering Department, Mandalay Technological University, The Republic

More information

CANAL FOLLOWING USING AR DRONE IN SIMULATION

CANAL FOLLOWING USING AR DRONE IN SIMULATION CANAL FOLLOWING USING AR DRONE IN SIMULATION ENVIRONMENT Ali Ahmad, Ahmad Aneeque Khalid Department of Electrical Engineering SBA School of Science & Engineering, LUMS, Pakistan {14060006, 14060019}@lums.edu.pk

More information

Control of an 8-Legged, 24 DOF, Mechatronic Robot

Control of an 8-Legged, 24 DOF, Mechatronic Robot Control of an 8-Legged, 24 DOF, Mechatronic Robot Submitted by Brian Lim Youliang Kuvesvaran s/o Paramasivan National Junior College Assoc. Prof. Dr. Francis Malcolm John Nickols Abstract The objective

More information

Sphero Lightning Lab Cheat Sheet

Sphero Lightning Lab Cheat Sheet Actions Tool Description Variables Ranges Roll Combines heading, speed and time variables to make the robot roll. Duration Speed Heading (0 to 999999 seconds) (degrees 0-359) Set Speed Sets the speed of

More information

Driven Cavity Example

Driven Cavity Example BMAppendixI.qxd 11/14/12 6:55 PM Page I-1 I CFD Driven Cavity Example I.1 Problem One of the classic benchmarks in CFD is the driven cavity problem. Consider steady, incompressible, viscous flow in a square

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Manipulator trajectory planning

Manipulator trajectory planning Manipulator trajectory planning Václav Hlaváč Czech Technical University in Prague Faculty of Electrical Engineering Department of Cybernetics Czech Republic http://cmp.felk.cvut.cz/~hlavac Courtesy to

More information

Lecture 3: Motion Planning 2

Lecture 3: Motion Planning 2 CS 294-115 Algorithmic Human-Robot Interaction Fall 2017 Lecture 3: Motion Planning 2 Scribes: David Chan and Liting Sun - Adapted from F2016 Notes by Molly Nicholas and Chelsea Zhang 3.1 Previously Recall

More information

Optimization of a two-link Robotic Manipulator

Optimization of a two-link Robotic Manipulator Optimization of a two-link Robotic Manipulator Zachary Renwick, Yalım Yıldırım April 22, 2016 Abstract Although robots are used in many processes in research and industry, they are generally not customized

More information

CHAPTER 1: REAL TIME COMPUTER CONTROL

CHAPTER 1: REAL TIME COMPUTER CONTROL CHAPTER 1 Page 1 ENGG4420 LECTURE 2 September 08 10 12:49 PM CHAPTER 1: REAL TIME COMPUTER CONTROL REFERENCES: G. F. Franklin et al., ``Feedback Control of Dynamic Systems,`` 5th Edition, Pearson, 2006.

More information

Motion Planning for a Reversing Full-Scale Truck and Trailer System

Motion Planning for a Reversing Full-Scale Truck and Trailer System Master of Science Thesis in Department of Electrical Engineering, Linköping University, 216 Motion Planning for a Reversing Full-Scale Truck and Trailer System Olov Holmer Master of Science Thesis in Motion

More information

Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map

Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi October 16, 2007 Robotics Institute Carnegie

More information

Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018

Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018 Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018 Asaf Moses Systematics Ltd., Technical Product Manager aviasafm@systematics.co.il 1 Autonomous

More information

One category of visual tracking. Computer Science SURJ. Michael Fischer

One category of visual tracking. Computer Science SURJ. Michael Fischer Computer Science Visual tracking is used in a wide range of applications such as robotics, industrial auto-control systems, traffic monitoring, and manufacturing. This paper describes a new algorithm for

More information

Non-Homogeneous Swarms vs. MDP s A Comparison of Path Finding Under Uncertainty

Non-Homogeneous Swarms vs. MDP s A Comparison of Path Finding Under Uncertainty Non-Homogeneous Swarms vs. MDP s A Comparison of Path Finding Under Uncertainty Michael Comstock December 6, 2012 1 Introduction This paper presents a comparison of two different machine learning systems

More information

Mobile Robots: An Introduction.

Mobile Robots: An Introduction. Mobile Robots: An Introduction Amirkabir University of Technology Computer Engineering & Information Technology Department http://ce.aut.ac.ir/~shiry/lecture/robotics-2004/robotics04.html Introduction

More information

Variable-resolution Velocity Roadmap Generation Considering Safety Constraints for Mobile Robots

Variable-resolution Velocity Roadmap Generation Considering Safety Constraints for Mobile Robots Variable-resolution Velocity Roadmap Generation Considering Safety Constraints for Mobile Robots Jingyu Xiang, Yuichi Tazaki, Tatsuya Suzuki and B. Levedahl Abstract This research develops a new roadmap

More information

Robot Motion Planning

Robot Motion Planning Robot Motion Planning slides by Jan Faigl Department of Computer Science and Engineering Faculty of Electrical Engineering, Czech Technical University in Prague lecture A4M36PAH - Planning and Games Dpt.

More information

Part I Part 1 Sampling-based Motion Planning

Part I Part 1 Sampling-based Motion Planning Overview of the Lecture Randomized Sampling-based Motion Planning Methods Jan Faigl Department of Computer Science Faculty of Electrical Engineering Czech Technical University in Prague Lecture 05 B4M36UIR

More information

15-780: Problem Set #4

15-780: Problem Set #4 15-780: Problem Set #4 April 21, 2014 1. Image convolution [10 pts] In this question you will examine a basic property of discrete image convolution. Recall that convolving an m n image J R m n with a

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION 1 CHAPTER 1 INTRODUCTION 1.1 Motivation The presence of uncertainties and disturbances has always been a vital issue in the control of dynamic systems. The classical linear controllers, PI and PID controllers

More information

Humanoid Robotics. Path Planning and Walking. Maren Bennewitz

Humanoid Robotics. Path Planning and Walking. Maren Bennewitz Humanoid Robotics Path Planning and Walking Maren Bennewitz 1 Introduction Given the robot s pose in a model of the environment Compute a path to a target location First: 2D path in a 2D grid map representation

More information

6.141: Robotics systems and science Lecture 10: Motion Planning III

6.141: Robotics systems and science Lecture 10: Motion Planning III 6.141: Robotics systems and science Lecture 10: Motion Planning III Lecture Notes Prepared by N. Roy and D. Rus EECS/MIT Spring 2012 Reading: Chapter 3, and Craig: Robotics http://courses.csail.mit.edu/6.141/!

More information

Part I Part 1 Sampling-based Motion Planning

Part I Part 1 Sampling-based Motion Planning Overview of the Lecture Randomized Sampling-based Motion Planning Methods Jan Faigl Department of Computer Science Faculty of Electrical Engineering Czech Technical University in Prague Lecture 06 B4M36UIR

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Keywords: UAV, Formation flight, Virtual Pursuit Point

Keywords: UAV, Formation flight, Virtual Pursuit Point DESIGN THE GUIDANCE LAW FOR FORMATION FLIGHT OF MULTIPLE UAVS Heemin Shin*, Hyungi Kim*, Jaehyun Lee*, David Hyunchul Shim* *Korea Advanced Institute of Science and Technology (KAIST) Keywords: UAV, Formation

More information

1724. Mobile manipulators collision-free trajectory planning with regard to end-effector vibrations elimination

1724. Mobile manipulators collision-free trajectory planning with regard to end-effector vibrations elimination 1724. Mobile manipulators collision-free trajectory planning with regard to end-effector vibrations elimination Iwona Pajak 1, Grzegorz Pajak 2 University of Zielona Gora, Faculty of Mechanical Engineering,

More information

Instantaneously trained neural networks with complex inputs

Instantaneously trained neural networks with complex inputs Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2003 Instantaneously trained neural networks with complex inputs Pritam Rajagopal Louisiana State University and Agricultural

More information

1 Introduction. Control 1:

1 Introduction. Control 1: 1 1 Introduction Hierarchical Control for Mobile Robots Motive Autonomy Control Level We build controllers because: Real hardware ultimately responds to forces, energy, power, etc. whereas we are concerned

More information

Experimental Verification of Stability Region of Balancing a Single-wheel Robot: an Inverted Stick Model Approach

Experimental Verification of Stability Region of Balancing a Single-wheel Robot: an Inverted Stick Model Approach IECON-Yokohama November 9-, Experimental Verification of Stability Region of Balancing a Single-wheel Robot: an Inverted Stick Model Approach S. D. Lee Department of Mechatronics Engineering Chungnam National

More information

PATH PLANNING AND EVOLUTIONARY OPTIMIZATION OF WHEELED ROBOTS DALJEET SINGH

PATH PLANNING AND EVOLUTIONARY OPTIMIZATION OF WHEELED ROBOTS DALJEET SINGH PATH PLANNING AND EVOLUTIONARY OPTIMIZATION OF WHEELED ROBOTS DALJEET SINGH Bachelor of Science in Computer Engineering Cleveland State University May 2011 A thesis submitted in partial fulfillment of

More information

6.141: Robotics systems and science Lecture 9: Configuration Space and Motion Planning

6.141: Robotics systems and science Lecture 9: Configuration Space and Motion Planning 6.141: Robotics systems and science Lecture 9: Configuration Space and Motion Planning Lecture Notes Prepared by Daniela Rus EECS/MIT Spring 2012 Figures by Nancy Amato, Rodney Brooks, Vijay Kumar Reading:

More information

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Oliver Cardwell, Ramakrishnan Mukundan Department of Computer Science and Software Engineering University of Canterbury

More information