ROBOT LOCALIZATION USING INERTIAL AND RF SENSORS
by Aleksandr Elesev


ABSTRACT

ROBOT LOCALIZATION USING INERTIAL AND RF SENSORS

by Aleksandr Elesev

A mobile robot must know its position in order to operate autonomously. The process of determining the robot's position from sensor data is termed robot localization. IMUs and RF devices are two of the many types of sensors that can be used for navigation and localization. Used independently, each of these sensors can achieve good accuracy under certain conditions; by merging the results from multiple sensors, good accuracy can be obtained over a wider range of conditions. This work proposes a technique for merging heterogeneous information from inertial and RF sensors. Since sensors have errors associated with their readings, the robot's state is represented using a probability density function (PDF). At each time step, this PDF is updated based on the RF readings and then updated again based on the IMU readings. Better localization accuracy is obtained by using the RF and inertial sensors together.

ROBOT LOCALIZATION USING INERTIAL AND RF SENSORS

A Thesis

Submitted to the Faculty of Miami University in partial fulfillment of the requirements for the degree of Master of Computer Science, Department of Computer Science

by Aleksandr Elesev
Miami University
Oxford, Ohio
2008

Advisor: Dr. Michael Zmuda
Reader: Dr. Jade Morton
Reader: Dr. Valerie Cross

Table of Contents

1. Introduction
2. Previous Research
3. Hardware
   - Robot
   - RF Motes
   - IMU
   - Obtaining Ground Truth
   - Robot-Control Software
   - Hardware Issues
4. Methodology
   - Introduction
   - Learning the sensor model P(distance | signal readings)
   - Grid-based approach
   - Particle filter approach
   - IMU update
   - RF update
   - Sequence of updates
5. Simulation Environment
6. Experimental Results
   - Straight path
     - Full configuration
     - Various configurations: RF only, IMU only, RF+IMU
     - Number of RF transmitters
     - Acceleration and position perturbations
   - Other paths
     - Square path
     - Long path
   - Number of particles
7. Conclusions and Future Work

8. References
Appendices
   - Center-of-the-robot identification
   - Data collection procedure
   - IMU Update derivations

List of Tables

Table 1. Long path: average accuracy and dispersion (mm).
Table 2. Number of particles; RF+IMU; Straight and Long paths; average accuracy and dispersion (mm), and finish flag.

List of Figures

Figure 1. PIONEER P3-DX robot shown with additional accessories: camera, laser, and speakers mounted on the top. [12]
Figure 2. Crossbow MICA2 mote [10]. Size 2.25 x 1.25 x 0.25 (in).
Figure 3. MIB520 mote interface. [10]
Figure 4. Average sensor reading. [1]
Figure 5. Standard deviation of data. [1]
Figure 6. Distribution of sensor readings for 2.6 m. [1]
Figure 7. P(distance | Sensor Reading = 16920). [1]
Figure 8. 3DM-GX.
Figure 9. Weighted rotational base with a tape.
Figure 10. Measuring distances from the bases to the marked points on the robot.
Figure 11. Reference points A and B marked on the top of the robot's platform.
Figure 12. Processes and dataflow.
Figure 13. Obtaining P(distance = D | sensor reading = S). [1]
Figure 14. Triangle distribution.
Figures 15a-c. Three RF rings and d) their intersection.
Figures 15e-g. Various intersections of RF rings.
Figure 16. Particles uniformly distributed over the space.
Figure 17. Particles concentrated in the intersection of 3 rings.
Figure 17 (fragment). Particles concentrated in the intersection of 3 rings.
Figure 18. Intersection of 4 RF rings; few particles left after the first RF update from initial particles uniformly distributed over the space.
Figure 19. Square space with four transmitters and the robot at its initial position.
Figure 20. Iteration 0, initialization (RF update), weighted particles.
Figure 21. Iteration 1, IMU update: weighted particles from the input generated uniformly weighted child particles, which were down-selected to the resulting particles.
Figure 22. Iteration 1, RF update: uniformly weighted particles from the input resulted in 7092 nonzero-weighted particles.

Figure 23. Iteration 2, IMU update: weighted particles from the input generated uniformly weighted child particles, which were down-selected to the resulting particles.
Figures 24a-d. From left to right: RF update (iteration 2), IMU update, RF update (both from iteration 3), IMU update (iteration 4).
Figures 25a-d. From left to right: RF update (iteration 4), IMU update (iteration 5), IMU update (iteration 15), IMU update (iteration 30).
Figures 26a-d. From left to right: the set of particles during iterations 50, 90, 150, 300. The scale is the same, but the offset of c-d is bigger than a-b, therefore the black border is invisible.
Figures 27a-c. True path of the robot (green) and path of the center of the cloud (red). From left to right: full path, fragment at the beginning, and fragment at the end.
Figure 28. Real position of the robot (green blob at origin), estimated position (red spot southwest of origin) as the center of the population (dispersed red dots), and distances from particles to the real (green lines) and estimated (red lines) positions.
Figure 29. Error measures during the straight path, all iterations.
Figure 30. Error measures during the straight path (fragment), iterations 1 to 30.
Figure 31. Error measures during the straight path (fragment), iterations from 31 onward.
Figure 32. Straight path, Error Distance: RF only, IMU only, RF+IMU.
Figure 33. Straight path, Dispersion: RF only, IMU only, RF+IMU.
Figure 34. Straight path, Error Distance: 1st, 2nd, 3rd, and 4th transmitter used.
Figure 35. Straight path, Dispersion: 1st, 2nd, 3rd, and 4th transmitter used.
Figure 36. Various RF rings around transmitter #1 corresponding to the same location of the robot.
Figure 37. Set of particles at the beginning of iteration 22.
Figure 38. PDF of the RF reading at iteration 22.
Figure 39. a) Set of particles after iteration 22; b-d) set of particles after the next three iterations.
Figure 40. Straight path, Error Distance: 2 transmitters used.
Figure 41. Straight path, Dispersion: 2 transmitters used.
Figures 42a-b. Intersection of 2 RF rings from the 1st and 3rd transmitters. Initialization on the left; 21st iteration, right before both spots were eliminated, on the right.

Figure 43. Intersection of 2 RF rings from the 2nd and 4th transmitters. Initialization on the left; 220th iteration on the right.
Figure 44. Straight path, Error Distance: 3 and 4 transmitters used.
Figure 45. Straight path, Dispersion: 3 and 4 transmitters used.
Figure 46. Straight path, Error Distance: RF only, 4 transmitters, position perturbation from 10 to ...
Figure 47. Straight path, average of Error Distance and Dispersion: RF only, 4 transmitters, position perturbation from 30 to ...
Figure 48. Straight path, average of Error Distance and Dispersion: RF+IMU, 4 transmitters, acceleration perturbation from 25 to ...
Figure 49. Square path. True path of the robot (green) and path of the center of the cloud (red). From left to right: RF only (4 transmitters, 50 mm position perturbation), IMU only, RF+IMU (50 mm/s2 acceleration perturbation for both).
Figure 50. Square path, Error Distance: RF only, IMU only, RF+IMU.
Figure 51. Square path, Dispersion: RF only, IMU only, RF+IMU.
Figures 52a-c. Long path. True path of the robot (green) and path of the center of the cloud (red). From left to right: RF only (4 transmitters, 50 mm position perturbation), IMU only, RF+IMU (50 mm/s2 acceleration perturbation for both).
Figure 53. Long path, Error Distance: RF only, IMU only, RF+IMU.
Figure 54. Long path, Dispersion: RF only, IMU only, RF+IMU.
Figure 55. Consecutive estimation of the robot's center.
Figure 56. Original location of the robot.
Figure 57. Location of the robot after the first rotation.
Figure 58. Disposition of bases and motes.

1. Introduction

A mobile robot must know its position in order to operate autonomously. The process of determining a robot's position from sensor data is called robot localization. There are two main types of localization: absolute and relative. Absolute, or global, localization refers to localizing a robot on the surface of the Earth, expressed in global coordinates. It is usually done by means of sensors that provide coordinates in the Earth's geographic coordinate system. Another way to identify the robot's absolute position is to take relative measurements to objects or landmarks in the robot's environment that are positioned at known locations in some absolute coordinate system. In this case, a relative location can be converted to an absolute location by applying an offset, and perhaps a rotation, from the global reference point. This process is usually referred to as relative localization. Once the absolute coordinates of the robot are known, they can be further translated into global coordinates in the coordinate system of the Earth. This translation requires knowing the position and orientation of the original absolute system expressed in the coordinates of the final global system. This research focuses on relative localization, where the absolute system of the robot is fixed somewhere in the robot's environment.

Another approach to robot localization uses sensor information regarding the robot's absolute acceleration, which can be used to establish the robot's velocity, which in turn can be used to estimate the robot's position over time. Similarly, other types of sensors can be used for navigation and localization, such as sonars, lidars, wheel encoders, GPS devices, stereo vision, etc. Most sensors exhibit good results in some environments but are inapplicable in others.
For example: 1) localization based on sonar typically requires a map of the environment, including the positions of known obstacles; 2) localization using wheel encoders requires a flat, smooth, non-skid surface; 3) GPS requires an environment capable of receiving clean satellite signals; and 4) RF sensors generally require an outdoor environment free of obstacles, since obstacles cause multi-path and influence signal strength. Thus, effectively integrating multiple sensors is an important objective, since a second, or even third, sensor type may help compensate for the inadequacies of the others. For example, an inertial sensor alone provides information about acceleration that, in combination with a known starting location, can be used to estimate the robot's current position. The main problem with inertial sensors is drift: when the robot's position is estimated from the IMU alone, the sensor's noise accumulates, causing significant degradation of the estimates over time. Thus, the location cannot be calculated precisely. Adding a second sensor, such as an RF transmitter, may help mitigate the problem of drift. In this work, RF transmitters are used to augment the estimates provided by an inertial sensor. One or more RF transmitters serve as landmarks at known positions. This research continues work by Zmuda and Morton [1], which used RF sensors exclusively to estimate the robot's location. By mounting an RF receiver on the robot and measuring signal strength from the transmitters, estimates of the robot's position were made. This work extends their technique by introducing an inertial sensor to the system and merging the RF and inertial information in order to achieve better localization accuracy.

In contrast to techniques that use a map of the environment and try to find the robot position and orientation that best fits the sensor readings, in this work the robot navigates through a flat, outdoor, open space free of obstacles. An uncluttered space is an easier environment to work with, since RF sensor readings are noisy in the presence of walls and other obstacles. While the ultimate goal of this line of work is indoor navigation, this initial work uses a restricted environment to demonstrate the technique of merging heterogeneous signals without the added complications of extreme noise.
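The drift effect described above can be illustrated with a short simulation (all parameters here, such as the time step and noise level, are assumptions for illustration, not the characteristics of the actual IMU): a stationary robot whose accelerometer reports only zero-mean noise ends up with an unbounded position error, because the noise is integrated twice.

```python
import random

# Illustration of IMU drift (assumed parameters): a stationary robot whose
# accelerometer reports only zero-mean Gaussian noise. Double-integrating the
# noisy acceleration makes the position estimate wander away from the truth.
def drift_after(steps, dt=0.1, accel_noise_std=0.05, seed=0):
    rng = random.Random(seed)
    velocity, position = 0.0, 0.0
    for _ in range(steps):
        accel = rng.gauss(0.0, accel_noise_std)  # true acceleration is zero
        velocity += accel * dt                   # first integration: velocity
        position += velocity * dt                # second integration: position
    return abs(position)

def rms_drift(steps, trials=50):
    # Root-mean-square position error over independent runs.
    return (sum(drift_after(steps, seed=s) ** 2 for s in range(trials)) / trials) ** 0.5

# The error keeps growing with time, even though the robot never moves.
print(rms_drift(100), rms_drift(1000))
```

This is why the thesis uses the IMU only on a per-time-step basis and corrects it with RF measurements, rather than integrating the IMU over long intervals.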
The two sensors used in this work are an inertial measurement unit (IMU) and RF transmitters with a receiver. The IMU provides indirect information about the robot's velocity, and the RF components provide information about distances to landmarks. Since all sensors have errors associated with their readings, we represent the robot's state probabilistically: position, velocity, and acceleration are represented using a probability density function (PDF). This function represents the belief of having a particular location, velocity, and acceleration. Since we consider only motion on flat terrain (i.e., two dimensions), the robot configuration consists of six dimensions. The first two components represent the position x on the XY plane, the next two represent the coordinates of the velocity vector ẋ, and the last two represent the coordinates of the acceleration vector ẍ. The short notation (x, ẋ, ẍ) of the robot's state will be used from here on, where each of the three bold components represents a two-coordinate item, or vector.

Our goals were to 1) implement the proposed localization algorithm, 2) design and conduct experiments, and 3) compare the achieved localization accuracy with the accuracy obtained using fewer sensors. The experimental results indicate that the base-level accuracy of the RF-only localization technique was improved by using inertial data to help track accelerations. Likewise, the IMU-only localization was improved by using RF information to track distances to known targets. Since the inertial readings were used on a time-step basis (instead of being accumulated over a long period of time to obtain velocity information), the effects of drift were lessened. Overall, the proposed algorithm achieved accuracy at a sub-centimeter level.

2. Previous Research

The field of robot navigation and localization has been studied extensively. In general, there are two main lines of research. First, various types of sensors, alone or in combination with each other, are used to establish location using mathematical models; the primary goal has been to find effective approaches for using multiple sensors. Second, various mathematical models and implementations are used to accomplish the task with the same sensors; here the primary goal is to improve accuracy and performance through advanced models and algorithms. Several papers are relevant to this work; the rest of this section describes their relationship to the work described here.

[2] addresses the problem of global localization, where a robot is given a map of its environment and an unknown initial position. This is a difficult problem because the robot must identify its position without knowing where it started.
Their work operates within the RoboCup Simulation League, a soccer simulation system. Software players issue commands like kick, accelerate, etc., and the simulation updates the field based on specific physics equations. A RoboCup field has sensors positioned at known locations around the field, and the error model of these sensors is known. Our work also uses a rectangular region with landmarks (i.e., RF transmitters) at known locations. One significant difference is that RoboCup has multiple moving objects with imprecise positions, such as the ball and the players. Another difference is that sensor readings are delivered as a list of imprecise distances and directions to the objects in the field of vision. In our case, we will not know the directions to landmarks and will need to infer the distances to them. Further, our approach uses IMU readings to obtain information about acceleration, while the RoboCup agents are explicitly given velocity information. One significant characteristic is that the RoboCup server has a published sensor error model. Our work must create a technique that learns the probabilistic relationship between signal readings and distances. Such adaptability should allow our technique to easily integrate other sensors, such as wheel encoders.

Finally, [2] uses a Markov localization approach to maintain a PDF over the full robot state space. The main idea is that all configurations are modeled by a random variable representing the true robot state (i.e., position, orientation, velocity, etc.) at a given time. Two assumptions were made in order to efficiently update the state of this random variable:

1. Independence of actions (Markov assumption): knowledge about the position and the motion command executed at time t-1 is sufficient to predict the localization of the robot at time t.
2. Independence of perceptions: perceptions received at time t, revealing what the robot sees, depend only on the state of the world at that instant [2].

This work makes similar assumptions.

[3] continues the Markov localization approach described in [2], but takes a different approach to representing uncertainty, called particle filters. Instead of describing the PDF explicitly (say, as a grid), they represent it by maintaining a set of samples randomly drawn from it.
The approach described in the paper is called Monte Carlo Localization (MCL) because it uses the well-known Monte Carlo method to update the density representation over time. The advantages obtained over a grid-based representation are:

1. Reduced memory requirements, which in particular speeds up the processing of measurements.
2. Potentially higher accuracy compared to localization with a fixed cell size, because the state represented in the samples is not discretized.
3. Ease of implementation.
4. Potentially faster run-time performance due to a reduced number of computations.

There are a few disadvantages to using particle filters as well:

1. Properly setting the number of particles may be difficult.
2. It is possible to lose representation in certain portions of the state space.

The conceptual idea behind sampling-based methods is the duality between the samples and the density from which they are generated. There are methods to reconstruct the density from the samples, for example a histogram or a kernel-based density estimation technique. Formally, define x as the robot's state, Z_k as the set of readings up to time-step k. The density p(x_k | Z_k) is represented as a set of N random samples, or particles, S_k = { s_k^i ; i = 1..N }, recursively computed at each time-step k. The localization algorithm proceeds in two phases:

1. Prediction phase. Starting with the particle set S_{k-1} computed in the previous iteration, apply the motion model u_{k-1} to each particle s_{k-1}^i by sampling from the density p(x_k | s_{k-1}^i, u_{k-1}):

   for each particle s_{k-1}^i: draw one sample s'_{k-1}^i from p(x_k | s_{k-1}^i, u_{k-1})

   After this phase a new temporary set S'_{k-1} is obtained. It approximates a random sample from the predicted density, but it does not yet incorporate the sensor measurement.

2. Update phase. Associate with each sample in S'_{k-1} a weight m_{k-1}^i = p(z_k | s'_{k-1}^i), i.e., the likelihood of the sample s'_{k-1}^i given the sensor measurement z_k. Then obtain S_k by resampling from the set of weighted samples { (s'_{k-1}^i, m_{k-1}^i) }:

   for j = 1..N: draw one sample s_k^j from the weighted set

   Particles with higher weights have a higher probability of being selected, and the new set S_k approximates a random sample from p(x_k | Z_k).

After the update phase, both phases are repeated in the next time step. The algorithm starts at time k = 0 with a random sample S_0 = { s_0^i } drawn from the prior p(x_0), which might be a uniform distribution over the environment.
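The two MCL phases above can be sketched in a few lines of code. This is a one-dimensional toy example, not the thesis's actual models: the motion model (shift by the control plus Gaussian noise), the sensor model (a Gaussian around the measurement), and all noise parameters are assumptions for illustration.

```python
import math
import random

rng = random.Random(0)

def mcl_step(particles, control, measurement, motion_noise=0.1, sensor_noise=0.5):
    # 1. Prediction phase: sample each particle's successor from the motion
    #    model p(x_k | x_{k-1}, u_{k-1}).
    predicted = [x + control + rng.gauss(0.0, motion_noise) for x in particles]
    # 2. Update phase: weight each particle by the sensor likelihood
    #    p(z_k | x_k), then resample; heavier particles survive more often.
    weights = [math.exp(-(measurement - x) ** 2 / (2 * sensor_noise ** 2))
               for x in predicted]
    return rng.choices(predicted, weights=weights, k=len(particles))

# Start from a uniform prior p(x_0), then track a robot moving +1 per step.
particles = [rng.uniform(0.0, 10.0) for _ in range(1000)]
true_pos = 2.0
for _ in range(20):
    true_pos += 1.0
    particles = mcl_step(particles, control=1.0, measurement=true_pos)

estimate = sum(particles) / len(particles)
print(round(estimate, 2), true_pos)  # the cloud concentrates near the true position
```

The initial uniform cloud collapses after the first update and then tracks the moving robot, mirroring the initialization-then-iterate behavior shown later in Figures 16-27.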

Our research started with a grid-based representation of the PDF for the following reasons. One of the tasks we need to accomplish is to combine (fuse) two PDFs into one. If the state represented in the samples is not discretized, then it is more complicated to merge two sets of samples than two discretized grids. Further, particle filters have the added problem of setting the number of particles appropriately. However, the grid-based approach was abandoned as the dimensionality of our state space grew, and we switched to a particle-based approach similar to the one described above. The major differences are the following. First, since we do not use information about the motion commands issued to the robot, we do not have the prediction phase as defined in the algorithm above; the update phase, which takes sensor readings into account, is our only phase. Our update phase handles two types of sensor readings: IMU and RF. The nature of the IMU readings, which provide data about the robot's instantaneous acceleration and orientation, suggests that they can play the role of the prediction phase. Compared to a real motion model, IMU data is very noisy; but not relying on the actual motion commands can significantly help in tracking the robot's state in extreme cases of uneven or slippery surfaces, where the motion commands executed by the robot can have unpredictable effects. Thus, we still have two phases: during the prediction phase we use IMU data as our motion model, and during the update phase we incorporate the RF sensor readings. Second, in the prediction phase we generate more than one particle. The number of child particles drawn depends on the weight of the parent particle coming from the previous update phase. By doing so we have a better chance of not losing a good particle, compared to the case where each parent generates only one child, which may be a bad one.
Our phases are more interconnected: the output of one phase is the input of the other. Moreover, since we generate more than one particle during the prediction phase, our set of particles grows. During the update phase, the bad particles receive small weights and are probabilistically eliminated, causing the set to shrink. In order to keep the average size of the particle set constant, the number of particles removed during the update phase is adjusted.

[4] presents an active localization approach. In passive localization, estimates of position are based on the incoming stream of sensor data; neither the robot's motion nor the pointing direction of the sensors is controlled. In contrast, active localization assumes that the localization process has partial or full control over the robot during exploration. This approach can greatly reduce the complexity of the task by actively interacting with the environment to disambiguate location. Their framework is based on passive Markov localization and addresses two localization problems: active navigation (where to move next) and active sensing (which sensors to use and where to point them). This thesis research does not use active localization. One rationale for using passive localization is that we assume the robot is performing a specific task and cannot be interrupted to make extra movements simply to localize itself. However, active localization could find application in our future research. First, as will be shown later, the IMU performs better when the robot moves at low velocity. This fact could be exploited to improve estimates by decreasing the speed, or by stopping the robot periodically or whenever we suspect that the localization accuracy has worsened. Second, experiments indicate that RF signal readings (using one particular configuration) cannot be distinguished at distances closer than 1.6 m to an RF transmitter. Thus, the robot may want to avoid or leave an area with such poor properties.

Another technique used for localization is the Kalman filter [11, 14]. The Kalman filter is "an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements" [14]. A common application of this technique is continuous tracking of the position and velocity of a moving object given a series of observations of the object's position, each of which has some error. Kalman filters can capture the dynamics of the system and lessen the effect of noise while providing reasonably good estimates at the present time (filtering), at a future time (prediction), or at a time in the past (interpolation or smoothing). The system mentioned in the definition is a linear dynamic system that changes from one state to another over time in a discrete fashion.
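As a concrete illustration of the Kalman filter just defined (its predict and update phases are described in detail below), here is a minimal scalar version tracking a fixed position from noisy observations. All noise parameters and the fixed target value are assumptions for the example; a real tracker would use state and covariance matrices rather than scalars.

```python
import random

def kalman_step(x_est, p_est, z, process_var=0.01, sensor_var=0.25):
    # Predict: propagate the estimate through the (identity) dynamics;
    # the error covariance grows by the process noise.
    x_pred, p_pred = x_est, p_est + process_var
    # Update: blend the prediction with the observation z,
    # weighted by the Kalman gain k.
    k = p_pred / (p_pred + sensor_var)
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

# Track a fixed position (5.0) through noisy observations (noise level assumed).
rng = random.Random(0)
x, p = 0.0, 1.0  # initial state estimate and its error covariance
for _ in range(200):
    z = 5.0 + rng.gauss(0.0, 0.5)  # noisy position observation
    x, p = kalman_step(x, p, z)
print(x, p)  # estimate settles near 5.0 with a small covariance
```

The two returned quantities correspond exactly to the two variables mentioned in the text: the state estimate and its error covariance.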
The Markov localization assumptions cited earlier form the basis for transitions between the states of the system. The Kalman filter is a recursive estimator, which, in accordance with the first (independence of actions) Markov assumption, means that the estimated previous state and the current observation are sufficient to estimate the current state. The state of the system is represented by two variables: 1) the estimate of the state at time k; and 2) the error covariance matrix (a measure of the estimated accuracy of the state estimate). At each step, two linear operators perturbed by Gaussian noise are applied to the current state. The two phases corresponding to these operators are called predict and update. The predict phase uses the previous state to predict the current state; the update phase uses the current observation to refine the predicted state.

Note that the overall approach is the same as in particle filters, as well as in our modification of them described later. In our approach, the state of the system is represented by a set of particles instead of two variables; however, the cloud of particles can be loosely viewed as the probabilistic region formed by the two variables of the Kalman filter. The predict and update phases are present in our approach as well. The predict phase corresponds to our IMU update phase, which uses readings from the IMU sensor to predict the current state of the robot. This violates the original idea of not using any observations during the prediction phase; however, it produces more accurate results, since it is based on real readings, and it saves the time of running a separate prediction phase. The update phase in the Kalman filter corresponds to our RF update phase, which refines the state information produced after the motion information has been taken into account by the IMU update phase.

[5, 6] address the problem of cooperative position estimation, in particular estimating the distance between two robots. In [5], the measurement of distance is based on detecting the linear velocity of a vertical slit laser beam from a rotating beacon, projected onto a photodiode array mounted in a circle around a robot. Similarly, [6] uses a robot with a smaller assistant robot, which carries a cylindrical target beacon with a helicoidal light pattern. The master robot tracks the beacon with a camera, compares the pattern to a geometrical model, and estimates the distance. The idea of having an assistant could be implemented with RF sensors too: instead of a beacon with a pattern, the assistant would carry an RF transmitter.
One problem here is that, in contrast with cameras, the RF sensors we plan to use do not provide information about direction; thus more than one assistant would be needed to disambiguate the position.

[7, 8] cover multiple-object tracking. [7] presents a probabilistic algorithm for simultaneously estimating the pose of a robot and the locations of nearby people. It uses conditional particle filters that work with additional sets of particles representing the locations of the people and the robot. This technique is similar to the Monte Carlo Localization work done in [3]. [8] presents the Multiple Hypothesis Tracking (MHT) probabilistic algorithm for estimating the positions of opponent players in autonomous robot soccer based on image data. MHT considers an environment populated with a set of stationary and moving objects. The number of objects may vary and is restricted by the capability of a sensor to detect objects within its range. The objective of the MHT algorithm is to keep a set of object hypotheses and maintain it over time. An object hypothesis consists of an estimated position, orientation, and velocity. Such an object hypothesis is similar to the PDF that we will use to represent the robot configuration, except that MHT associates two probabilistic measures with a hypothesis: one for the uncertainty of the estimate, and another for the degree of belief that the hypothesis accurately reflects an existing object. In our work the environment is free of other stationary or moving objects, but the RF transmitters can be viewed as stationary objects that we need to track. Moreover, moving RF transmitters might occur in hostile military environments. The MHT algorithm is unnecessary for this thesis work because the sensors used are not directional and have limited range; thus we can sense all RF transmitters from any robot configuration.

3. Hardware

3.1. Robot

Data collection experiments are run on the PIONEER P3-DX robot (see Figure 1). It is "an agile, versatile intelligent mobile robotic platform updated to carry loads more robustly and to traverse sills more surely with high-performance current management to provide power when it is needed" [12]. The P3-DX has the following technical characteristics:

- 44 cm x 38 cm x 22 cm aluminum body with 16.5 cm diameter drive wheels.
- Two motors with 38.3:1 gear ratios; this differential-drive platform can rotate in place by moving both wheels.
- Climbs a 25% grade and sills of 2.5 cm.
- Moves at speeds of 1.6 m/s on a flat floor.
- Carries payloads up to 23 kg at slower speeds.
- Has a hinged battery door for hot-swapping batteries.
- Runs for hours on three fully charged batteries; re-charging time is 2.4 hours if a high-capacity charger is used.

Figure 1. PIONEER P3-DX robot with additional accessories: camera, laser, and speakers mounted on the top. [12]

The robot is configured with a range of navigation sensors. The two motors contain 500-tick wheel encoders. The P3-DX base contains eight ultrasonic transducer (range-finding sonar) sensors arranged to provide 180-degree forward coverage; they read ranges from 15 cm to approximately 7 m. Additional devices include a compass, a laser ranging device, inertial sensors, a wireless communication link, and an on-board computer for autonomous navigation and control. These sensors provide a variety of ranging and heading data, each with unique environment-dependent error and noise characteristics. The embedded wheel encoders and compass provide the robot with an internal localization capability: the robot can track its position and orientation relative to its initial state. This feature is of particular interest for our research because we could use the robot's own readings as ground truth (real position and orientation). However, it will be shown later that the robot's estimates are inaccurate, so other sources of ground truth were used, such as physical measurements of the robot's location and simulated path data.

3.2. RF Motes

This work uses Crossbow MICA2 motes (see Figure 2), "a third generation mote module used for enabling low-power, wireless, sensor networks" [10]. Some of the features of the MICA2 include:

- 868/916 MHz multi-channel transceiver with extended range.
- More than one year of battery life on AA batteries (using sleep modes).
- Support for wireless remote reprogramming.
- A base station, or receiver, allows aggregation of sensor-network data onto a PC or other computer platform. Any MICA2 mote can function as a base station when it is connected to a standard PC interface or gateway board. The MIB510/MIB520 (see Figure 3) provides a serial/USB interface for both programming and data communications.

Figure 2. Crossbow MICA2 mote [10]. Size 2.25 x 1.25 x 0.25 (in).
Figure 3. MIB520 mote interface. [10]

In this work, several transmitters are positioned at known locations in the environment. The base, or receiver, sensor is mounted on the robot and provides information about the signal strength between the base and each transmitter. The principle that signal strength decreases with increasing distance allows us to estimate the distance from the robot to a transmitter given the signal reading. If the distances to several transmitters are known, then the location of the robot can be found at the intersection of rings drawn around the transmitters, with radii equal to the estimated distances. In order to estimate distance from signal strength in noisy environments, we need to build the probabilistic model P(distance | signal strength). It is worth noting that the readings provided by the MICA2 sensors increase with distance; thus, the readings are not actually signal strength, which decreases with distance. However, the true interpretation of the readings is unimportant to this work because they are processed in a purely statistical fashion.

The data collection procedure, as well as the modeling of the relationship between signal strength and distance, is taken from the previous research by Zmuda and Morton [1]. Data was collected in both outdoor and indoor settings. Zmuda and Morton describe the data collection procedure as follows. The transmitter is positioned at a fixed location, while the receiver is moved in small increments; at each stop, hundreds of sensor readings are recorded. The outdoor data was taken on flat terrain with relatively few obstacles in the vicinity, while the indoor data was taken in a room approximately 15' x 25'. This room is configured as an engineering lab and has numerous obstructions, including computers, tables, chairs, and walls. Figure 4 demonstrates the behavior of the MICA2 motes operating in indoor and outdoor environments; the plot shows the average sensor readings obtained in the two environments.
The outdoor data shows a smooth increasing trend with one spike around 2.8 m that could have been caused by a passing vehicle or another obstacle. The indoor data, in contrast, is very inconsistent even though it was collected on the same hardware. Such behavior was expected because the outdoor environment did not have numerous obstacles to cause multi-path and influence signal strength.

Figure 4. Average sensor reading vs distance (indoor and outdoor). [1]

Note that for small distances (e.g., <1.6 m) the signal readings are all similar. Thus, inaccurate localization results could be expected when the distance between transmitter and receiver is small. The motes can be calibrated to operate with more or less sensitivity; some effort was made to optimize these settings so that the entire dynamic range was utilized while providing reasonable sensitivity in our operating range.

Figure 5. Standard deviation of the data (indoor and outdoor). [1]

Figure 5 shows the standard deviation of the signal readings shown before. The outdoor data shows a general increase with distance, with a spike near 2.8 m that corresponds to the anomaly in the original readings. The indoor data does not show any trend, which is expected since the original data is very erratic. Figures 4 and 5 clearly indicate that, first, indoor and outdoor settings should have different models, and, second, indoor readings are very inconsistent and less useful for indoor localization. Although not verified experimentally, it is likely that indoor data would be significantly different for a different room. The spikes in the plots correspond to the specific configuration of obstacles in the room, and a model built on such data would work only for the original room. Data modeled after outdoor environments is used for our experiments.

Figure 6. Distribution of sensor readings P(sensor reading = S | distance = 2.6m), indoor and outdoor. [1]
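The per-distance curves in Figures 4 and 5 are simple aggregates of the repeated readings taken at each receiver stop. A minimal sketch of that aggregation (the sample values are invented):

```python
import math
from collections import defaultdict

def reading_stats(samples):
    """Group (distance, reading) pairs by distance and return
    {distance: (mean, std)} -- the quantities plotted in Figures 4 and 5."""
    by_distance = defaultdict(list)
    for d, r in samples:
        by_distance[d].append(r)
    stats = {}
    for d, readings in sorted(by_distance.items()):
        mean = sum(readings) / len(readings)
        # population variance over the repeated readings at this stop
        var = sum((r - mean) ** 2 for r in readings) / len(readings)
        stats[d] = (mean, math.sqrt(var))
    return stats
```

Hundreds of readings per stop, as described above, keep these estimates stable despite the sensor noise.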

Figure 6 shows the distribution of readings corresponding to the model P(sensor reading | distance = 2.6m). The general shape of both the indoor and outdoor data roughly resembles a Gaussian distribution, except that the slopes are not symmetrical and the tails are quite short. The two distributions have different centers, but this is consistent with the interpretation of Figure 4. Figure 7 shows the distribution of distances, which corresponds to the model P(distance | sensor reading = 16920).

Figure 7. P(distance | sensor reading = 16920), indoor and outdoor. [1]

The shape of the outdoor plot indicates the distance is likely to be between 0.2 m and 1.2 m, while the indoor data fluctuates erratically from 0.2 m to 3.2 m. For the indoor case, the fluctuations make it impossible to obtain a distance estimate with accuracy better than 1.5 m. Similar observations were made with other values of S.

3.3. IMU

This work uses the Microstrain 3DM-GX1 Gyro Enhanced Orientation Sensor (see Figure 8). The 3DM-GX1 sensor combines three angular rate gyros with three orthogonal DC accelerometers, three orthogonal magnetometers, a multiplexer, a 16-bit A/D converter, and an embedded microcontroller to output its orientation in dynamic and static environments. Operating over the full 360 degrees of angular motion on all three axes, the 3DM-GX1 provides orientation in matrix, quaternion, and Euler formats. The digital serial output can also provide temperature-compensated, calibrated data from all nine orthogonal sensors. [9]

Figure 8. 3DM-GX1.

The update rate announced by the manufacturer is 350 Hz; however, our software uses a 100 Hz update rate on average. For our research we use orientation data expressed as the angle rotated around the vertical Z (yaw, or heading) axis, as well as instantaneous accelerations along the X (roll) and Y (pitch) axes, which are relative to the orientation of the sensor.

The following series of experiments was conducted to test the accuracy of the Euler angle readings from the 3DM-GX1 sensor. The results of these experiments are important for proper use of this sensor in the final data collection experiments, when it is mounted on the Pioneer robot.

1. North identification (IMU, compass). The purpose of this experiment was to test whether the IMU orientation readings align with a simple physical compass and to check the amount of magnetic noise indoors (since the IMU uses magnetic north to align itself). During the experiment, the IMU was positioned horizontally with the top labels facing up and, together with a simple compass, moved around a room. The angle between the true north direction and the current IMU heading (the direction of its X axis) is represented by the Z (heading) component of the Euler readings. The IMU software printed the Euler angles on the screen, and they were visually compared with the compass readings. The experiment was conducted in the Sensor Integration Lab (EGB 253), which is a small room with tables along the walls and empty space in the middle. It turned out that even within one small room the direction of north on both the IMU and the compass varied significantly. For example, there is a location near the door where the north direction changes by 180 degrees. In all likelihood, there is an electrical device behind the wall that generates a strong magnetic field. Generally, it is hard to find a place inside the Engineering building where the compass, or the IMU, shows true north accurately. Later, another set of experiments with the compass only was conducted in Withrow Court, which is a large empty gym without any obstacles inside. The readings indicated compass error in many areas, most likely caused by metal beams under the floor.

2. IMU north identification in proximity to the robot. An additional experiment was performed to identify the influence of the robot on the IMU (as the robot is a heavy metal object).
The IMU was initially positioned horizontally with the top labels facing up. The IMU was raised 1.5 m above the robot and slowly lowered onto the top of the robot while the north direction printed by the IMU software was visually recorded. It was found during this experiment that the robot (like other metal objects) significantly influences the IMU's orientation readings. When the sensor is close to such objects, it gives a false north direction. The closer the IMU is to the robot, the bigger the angle between the real north direction and the direction shown by the IMU.

However, the experiments also indicate that once the readings stabilize, the error is constant, even when the IMU is located directly on top of the metal surface of the robot's cap. It takes about 10 seconds after the position of the sensor is changed for the readings to stop changing and stabilize at a new value. This only happens with the Z (heading) axis and not with the other two axes. Thus, the robot's influence on the IMU could conceivably be eliminated by adjusting for the constant error.

3. Sensitivity of the X (roll) and Y (pitch) axes of the IMU. The purpose of this experiment was to check the sensitivity of the X (roll) and Y (pitch) Euler angle readings of the IMU. Additionally, we needed to test the hypothesis that the Z axis might be broken, which would explain why it gave bad results in the presence of metal objects while the readings from the two other axes were not influenced by those objects. If the hypothesis were true, then the heading of the robot might be obtained from either the X or Y axis instead of Z, provided the IMU is mounted on the robot correspondingly. Initially the IMU was placed on top of the robot, rotated 90 degrees around the X or Y axis. During the experiment the IMU was rotated, starting from the initial (already rotated) position, around its relative vertical axis to simulate the robot changing its heading. The following observations were made:

1. With the X (roll) reading at about 90 degrees, the vertical axis is Y (originally pitch). Rotation around this axis is supposed to result in changes in the pitch reading regardless of the current position of the IMU in space. However, the experiments showed changes in the Z (heading) angle. This is explained by the nature of the Euler angle system, called the ZYX or aircraft coordinate system: an aircraft may have roll (which is 90 degrees in this case), but its heading is still given by the same component of the Euler angles. The distance to the robot influences the readings in the same fashion.

2.
With the Y (pitch) reading at about 90 degrees, the vertical axis is X (originally roll). Rotation around this axis should result in changes in the roll reading regardless of the current position of the IMU in space. However, the experiments show that it results in changes in both the Z (heading) and the X (roll) angles. Moreover, the readings are unstable, and a small change in the angle of the vertical axis causes significant erratic changes in both the X and Z angles. It is

again explained by the nature of the Euler system: a pitch of 90 degrees is the singularity of this system. The IMU's manual does not recommend using the Euler system when pitch exceeds ±70 degrees. In conclusion, the IMU works properly, and the deviations in the heading readings are explained by the magnetic nature of the sensor. The first experiment with a compass showed the same problem. Also, the other two Euler components cannot be used as a source of heading data.

4. IMU heading readings influenced by the magnetic field of the wheel motors. The purpose of this experiment was to check whether the magnetic field of an operating wheel motor influences the IMU heading readings. The robot was fixed so that its wheels did not touch the floor and the robot did not move; this was done by putting the robot on a box shorter than the distance between the wheels and taller than the clearance of the robot. The IMU was mounted on top of the robot and the heading readings were allowed to stabilize. During the experiment the wheel motors were turned on and off while the readings were observed. The observations indicate that running motors do not increase the fluctuations beyond the natural noise of the readings.

3.4. Obtaining Ground Truth

In order to assess the accuracy of any localization technique, it is essential to know the robot's actual position, referred to as ground truth. The estimated position can then be compared to the ground truth so that the algorithm's error can be measured and assessed. Two possibilities exist for gathering the ground truth: 1) take physical measurements to establish the robot's position; 2) use the robot's internal estimates, which are based on the measurements from the wheel encoders. Option 2 is not viable since the robot's internal estimates are inaccurate (described later). Option 1 is more cumbersome, but provides the opportunity to obtain accurate measurements of the robot's position and orientation.

For the purposes of taking physical measurements during the experiments with the robot, the following system was designed. Two heavy rotational bases were built, and each base was positioned on the plane where the robot operates. The two bases, referred to as Base1 and Base2, represent two marked points on that plane. Base1 is positioned at (0,0) and Base2 is positioned at (0, Y), where Y is the actual distance between the bases. These bases form a Cartesian coordinate system, with the center of Base1 designated as the origin and with the Y axis going through the center of the other base. The construction of a base allows using a tape measure to easily measure the distance between the base and any other point within the tape's length. Additionally, in order to simplify measuring the distance to the robot and its orientation, two reference points were marked on the top of the robot's platform.

Figure 9. Weighted rotational base with a tape.
Figure 10. Measuring distances from the bases to the marked points on the robot.
Figure 11. Reference points A and B marked on the top of the robot's platform.

The coordinates of any point X in this system can be found according to the following algorithm, referenced later as ALG:
1. Use the tape to measure the distances from X to Base1 and Base2.
2. Find the intersection of the two circles with centers at (0, 0) and (0, Y) and radii equal to the distances measured in step 1. The algorithm, with a C-code example, is given in [13].
3. If two intersection points are found, which will typically occur, pick the one that lies in quadrant 1, since our experiments operate exclusively in quadrant 1. Otherwise, if one intersection point is found, pick it. The case where no intersection is found should never occur, because it violates the triangle inequality and indicates an error in the measurements.
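Step 2 of ALG has a simple closed form when the circle centers are at (0, 0) and (0, Y): subtracting the two circle equations eliminates x and yields y directly. A sketch of the computation (ours, not the C code from [13]):

```python
import math

def alg_locate(Y, r1, r2):
    """Intersect the circles centered at Base1 (0, 0) and Base2 (0, Y) with
    radii r1 and r2, and pick the quadrant-1 solution as ALG does.

    From x^2 + y^2 = r1^2 and x^2 + (y - Y)^2 = r2^2:
        y = (r1^2 - r2^2 + Y^2) / (2 * Y)
    """
    y = (r1 ** 2 - r2 ** 2 + Y ** 2) / (2.0 * Y)
    x_sq = r1 ** 2 - y ** 2
    if x_sq < -1e-9:
        # no intersection: the radii violate the triangle inequality,
        # which indicates a measurement error
        raise ValueError("inconsistent tape measurements")
    return (math.sqrt(max(x_sq, 0.0)), y)  # non-negative x lies in quadrant 1
```

Taking the non-negative square root implements step 3's quadrant-1 choice; the negative root is the mirror intersection in quadrant 2.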

ALG can be used twice to get the coordinates of A and B. These two coordinates can then be used to obtain theta, the robot's orientation. The net result is the ground truth information.

The next couple of experiments were dedicated to identifying the center of the Pioneer robot. It is important to find and mark the center so that the IMU can be mounted on top of the robot precisely at its center. If the centers of the robot and the IMU align, calculating the robot's rotational angle from the angle provided by the IMU is significantly simplified: the two angles are equal. Additionally, the IMU should be mounted at the center of the robot such that their X and Y axes are aligned. If both these criteria are satisfied, then the acceleration and heading vectors provided by the IMU can be used directly as the robot's acceleration and heading. The two experiments designed to identify the robot's center are described in the appendices (9.1).

A final set of experiments was conducted to test 1) the accuracy of the robot's internal estimates and 2) the accuracy of following the control commands issued to the robot. The robot's internal estimates are X, Y, and Theta, which give the robot's current position and orientation. All three components are relative to the robot's initial position and orientation.

1. Line Turn Line (X, Y, Theta). In the first experiment the robot traveled along a straight line that should have resulted in a 3000 mm long segment, stopped, made a 180-degree turn, and traveled back at the same speed for the same time. The robot started with initial readings X=0, Y=0, Theta=0. It was expected that the robot would come back to the same point, but the observations show that it did not. It stopped within 400 mm of the initial point but had shifts along both the X and Y axes. The robot's internal readings did not indicate that it was in the initial position either.
Theta read 177, and X and Y were close to the actual position of the robot at the end of the experiment. The shift can be explained by the fact that the robot made a turn of less than 180 degrees and therefore did not come back to the initial point. However, the lengths of the forward and backward segments were also different, which should not have happened because the robot executed the same set of control commands issued by a program. The same experiment was repeated several times, including a version with a more complicated acceleration pattern while traveling along the straight

segments. In all experiments the robot did not come back to the origin. At first glance, X, Y, and Theta were reasonable. This fact suggests that the control commands and the wheel motors are inaccurate, but the robot may be capable of tracking itself. However, more accurate and longer experiments are needed to verify the accuracy of its internal readings.

2. Measured Line Turn Line (X, Y, Theta). This experiment was designed to carefully check the accuracy of the internal estimates, as suggested by the previous experiment. All measurements were taken using the system described in 3.4. The two coordinate bases were positioned at (0,0) and (0,1000). The robot started from an arbitrary position defined by the following coordinates of the marked points on the robot: A=(484,258), B=(497,640), O=(632,495), where O is the center of the robot. The angle alpha between the robot's X axis and the system's X axis was α=88 degrees. The initial robot readings were X=0, Y=0, Theta=0. The robot traveled along a straight line at a speed that should have resulted in a 3000 mm long segment, then stopped, made a 180-degree turn, and traveled back at the same speed for the same time. The robot's final measured position was A=(563,733), B=(518,354), O=(396,510), which implies α=263 degrees. The robot's final estimate was X=24, Y=26, Theta=178. The difference between the calculated alpha angles is 263 - 88 = 175 degrees, which is close to the robot's reading of 178, but needs additional evaluation. The coordinates of both O centers above were calculated. If we assume that the robot's internal readings correspond to its rotational center, then the final O point must be close to the point given by the final robot readings. The robot's final X and Y coordinates, which are relative to the robot's initial position and orientation, can be converted into the coordinate system formed by the bases in the following way:

1. Turn the robot's coordinate system by 88 degrees clockwise, or equivalently turn the point (24,26) in the robot's system by 88 degrees counterclockwise.
This gives the point (-25,25).
2. Since the initial center of the robot is at O=(632,495), the point (-25,25) is equivalent to (632-25, 495+25) = (607,520).

The point (607,520) should therefore be the final position of the robot's center according to the robot's readings. However, the measurements show that O=(396,510) is its actual final location, indicating that the robot's internal estimates are inaccurate.

3. Rotation (Theta). The goal of this experiment was to check the accuracy of only the rotational angle Theta given by the robot. The robot was initially positioned at the intersection of two perpendicular lines of a tiled floor. The robot was rotated by one of the following angles, 90, 180, 270, 360, or 720 degrees, in either direction and then stopped. The robot was then manually rotated back to its initial position and the experiment was repeated. It was visually clear that the robot's real orientation is inconsistent with the readings given by the robot. For example, when the robot's estimate was 90 degrees, its real angle was approximately 110 degrees. When it made a full turn and showed Theta=0, its real angle was approximately 30 degrees. It is worth noting that when using the Aria demo program in teleop mode, Theta changes even when the robot is turned around in the air without touching the floor (i.e., the wheel encoders do not change). When the robot is operated by programmed commands, no such change in Theta is noted. Together, these experiments indicate that the robot's internal estimate of Theta is inaccurate. At a minimum, it is unclear what Theta represents or how the robot actually calculates its value.

4. Robot Parking (X, Y, Theta). The last experiment was designed to be simple and accurate enough to finally confirm our hypothesis about the inaccuracy of the internal estimates. In this experiment a tight parking spot for the robot was marked on the floor. The robot started parked in the spot with X=0, Y=0, Theta=0. Then, under manual remote control, it was driven along a random path at random speeds and parked back precisely in the initial position. The duration of the experiment was about 30 seconds.
If the internal estimates are accurate, then the robot should read X=0, Y=0, Theta=0, or close to it, once it comes back to the initial position. The experiment was repeated several times, and the readings were always far from the original ones. X and Y had an average error of about mm; Theta had an average error of about

In summary, we conducted multiple experiments testing the accuracy of the robot's internal estimates and of its following of control commands. None of the measured experiments was able to obtain accurate estimates from the robot. We conclude that the robot's estimated position (X, Y) and rotational angle Theta are inaccurate and should not be used as ground truth or in the calculation of its real location. Physical measurements are the best way to obtain ground truth. Additionally, we saw that the robot cannot follow control commands accurately.

3.5. Robot-Control Software

This section describes the robot-control software as a part of the final data collection procedure required for testing the localization algorithm. Because of the significant influence of metal objects on the IMU, the decision was made to run the experiments outside (e.g., Miami's field hockey area). This would, first, avoid the metal construction that exists even in big rooms like basketball courts, and, second, remove most of the obstacles that can disrupt the RF signal. The details of the data collection procedure can be found in the appendices, section 9.2. The robot has a Debian GNU/Linux machine on board. We implemented the following three processes to collect data and control the robot:

1. IMU logging process (root/imu). Started by executing mono IMU.exe. It reads the signals from the IMU connected to port COM2 (/dev/ttyS1) and logs them in the imu.txt text file. Each line represents a one-time reading, which consists of the following items separated by spaces:
   - Timestamp - number of seconds passed after Jan 1st,
   - Heading - angle between the original and current IMU orientations (in degrees).
   - AccX - acceleration along the IMU's local X axis.
   - AccY - acceleration along the IMU's local Y axis.

2. RF logging process (root/rf). Started by executing a batch file, run. It reads the signals from the RF sensor on port COM3 (/dev/ttyS2) and logs them in five text files: rf1.txt, rf2.txt, rf3.txt, rf4.txt, rf5.txt.
Each file contains readings that correspond to one of the five RF motes. Each line in any file represents a one-time reading, which consists of the following items separated by spaces:
   - Timestamp - number of seconds passed after Jan 1st,
   - Signal strength.
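Because the logging processes run asynchronously, each line carries its own timestamp and the streams must be aligned during post-processing. A sketch of parsing the whitespace-separated log lines and matching a reading to the nearest-in-time entry of another stream (the field layout follows the descriptions above; the sample values are invented):

```python
import bisect

def parse_log(text):
    """Parse lines of 'timestamp v1 v2 ...' into a list of float tuples."""
    return [tuple(float(p) for p in line.split())
            for line in text.strip().splitlines() if line.strip()]

def nearest_index(timestamps, t):
    """Index of the timestamp closest to t (timestamps sorted ascending)."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # choose whichever neighbor is closer in time
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1
```

For example, each IMU line from imu.txt can be paired with the closest line of an rfN.txt file before running the update sequence described in the Methodology section.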

3. Robot control process (root/path). Started as robot <args>. The args specify the path the robot will drive. The list of possible paths and corresponding args values can be found in the appendices, section 9.2.

Figure 12. Processes and dataflow: the IMU, RF, and robot-control processes write imu.txt and rf1.txt through rf5.txt, reading from the IMU, RF motes 1 through 5, and the wheel motors.

3.6. Hardware Issues

As seen in the previous sections, this research involves a significant amount of work dealing with hardware, including the Pioneer P3-DX robot, the Crossbow MICA2 motes, the Microstrain 3DM-GX1 Gyro Enhanced Orientation Sensor, and the handmade measuring system. Working with hardware always carries the risk that equipment may break. Since we do not have specialists available to fix broken hardware, and ordering new pieces is both expensive and time consuming, one broken device can put the whole project on hold. This situation occurred several times in this research. At one point, the IMU was burnt out; once it was fixed, it was unavailable because it was being used on another project. We completed the majority of the calibration experiments, checked the accuracy of the sensors, and worked out the ground truth procedure. Later, when all the experiments were designed, the robot-control software was completed, the data collection procedure was prepared, and a court (an open indoor hall the size of three basketball courts) for running the experiments was reserved, we faced another problem: the RF base stopped functioning. This problem occurred twice. The first time it was fixed, but not the second time. Since we did not know the nature of the problem or when it could be fixed, the decision was made to switch to a simulation of both RF and IMU sensor readings. We had already collected RF and IMU data during draft

experiments in the lab, and some RF data was available from the previous research [1]. This data was enough to build noise models of the sensors and write software that can generate realistic sensor readings. The simulation software is described in the Simulation Environment section.

4. Methodology

4.1. Introduction

This section covers the approach for solving the localization problem. As mentioned previously, the robot's state information must be modeled probabilistically, since its position, velocity, etc., are not known precisely. Several approaches can be used to model such a state space. One approach is to divide the space into small N-dimensional cubes, each representing a small region of the state space. This method, referred to as the grid-based approach, was initially pursued in this work. The information stored in this approach can be viewed as a discrete set of weighted states, where each N-dimensional cube, or cell, represents a state and the probability value stored in a cube is its weight. Thus, a large weight in a cube indicates a large probability of the robot being in that state. Having a full grid gives access to the probability of every discrete state within our space and simplifies the implementation. However, it requires a large amount of memory to store the grid and significant time to iterate through the grid during the update phase, even though 99% of the grid may be zero. For these reasons the grid-based approach is infeasible for real-time processing. Additionally, the accuracy is limited by the size of a discrete N-dimensional cube in the grid. Thus, the grid-based approach was abandoned as the dimensions of our state space grew.

A second approach to representing probabilistic information in a large state space is referred to as particle filters [3]. When using particle filters, a relatively small number (e.g., tens of thousands) of discrete elements are distributed throughout the state space.
These elements, referred to as particles, have an associated weight that corresponds to the likelihood that the robot's state matches the parameters of the individual element. As sensor information is obtained, the weights and parameters of the particles are adjusted to reflect the incoming sensor readings. The idea is that particles with high weight will populate the regions of the state space that correspond to the robot's actual state. This approach, in contrast with the grid-based one,

requires a reasonable amount of memory to store and relatively little time to update the representation. Second, the granularity is not limited by a cell size as in the grid-based approach. One problem with particle filters, however, is that the set of particles may lose track of the true state information. Losing this information makes it hard to recover if the representation is lost in that particular area of the state space. Our research started with a grid-based approach; however, it was abandoned as the dimensions of our state space grew, and we switched to a particle-based approach.

4.2. Learning the sensor model P(distance | signal reading)

As stated earlier, it is important to model P(distance = D | sensor reading = S). This model is needed because the RF sensor reading needs to be converted into an estimate of distance. The difficulty is that the data collection does not provide this information directly. Instead, the collection gives a series of PDFs, P(sensor reading = S | distance = D), as shown in Figure 6. The question is how to obtain P(distance = D | sensor reading = S) from a set of P(sensor reading = S | distance = D). The technique, taken from the previous research [1], is briefly reviewed here.

Figure 13. Obtaining P(distance = D | sensor reading = S): the stack of PDFs P(s | d = 0.2) through P(s | d = 4.2), with the slice P(d | s = 16350) cutting across them. [1]

The data collected is a set of P(sensor reading = S | distance = D) at regular increments (e.g., 0.2 m). Figure 13 shows the set of PDFs as a stack of solid lines. The important point to note is that the desired model P(distance = D | sensor reading = S) is a slice across the series of PDFs. One difficulty is that this slice is incomplete (due to discrete sampling), and an analytic solution for this slice is not directly possible. However, if we can obtain a complete set

of original PDFs through modeling, then a slice across them will be a complete PDF as well. Thus, our problem is to fill in the missing data between the samples using interpolation. More samples will result in more accurate results, but at the expense of more costly data collection. To interpolate between the samples we model P(sensor reading = S | distance = D) with a discrete number of parameters. For example, if this distribution were Gaussian, it could be modeled using two parameters, µ and σ. There are many modeling techniques that could be used to obtain these parameters as functions of distance, µ̂(d) and σ̂(d). We denote P(sensor reading = S | distance = D) as P_{µ̂(d),σ̂(d)}(s | d) to indicate that the distribution is defined by the interpolated parameters µ̂(d) and σ̂(d). Thus, the cross section is defined as:

    P(distance = d | sensor reading = s) = α P_{µ̂(d),σ̂(d)}(s | d),  where  α = 1 / ∫₀^∞ P_{µ̂(x),σ̂(x)}(s | x) dx

The Crossbow RF receiver's readings are integer values in a discrete range (e.g., [0, 512]). Therefore, the set of sensor readings is a small finite set. In order to simplify modeling, we use a customized discrete distribution with two parameters that are similar to µ and σ. Figure 14 shows the triangle shape of this distribution, with its center at µ and its width defined by σ. The distribution's tails go toward the minimum and maximum sensor values before the PDF goes to zero. The area in the region [µ-σ, µ+σ] covers 0.85 of the distribution, which in turn defines the maximum value at µ. The remaining 0.15 is distributed in the tails of the distribution.
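The interpolation-and-slice computation can be sketched as follows. For simplicity, this sketch uses a plain symmetric triangle density (rather than the exact 0.85/0.15 tail split described above) and linear interpolation of (µ, σ) between made-up calibration samples; the structure of the computation, not the numbers, is the point.

```python
def tri_density(s, mu, sigma):
    """Simplified symmetric triangle density: peak at mu, support mu +/- 2*sigma."""
    half_width = 2.0 * sigma
    return max(0.0, 1.0 - abs(s - mu) / half_width) / half_width

def interp_params(samples, d):
    """Linearly interpolate (mu, sigma) between sampled distances.
    samples: sorted list of (distance, mu, sigma) calibration triples."""
    for (d0, m0, s0), (d1, m1, s1) in zip(samples, samples[1:]):
        if d0 <= d <= d1:
            t = (d - d0) / (d1 - d0)
            return m0 + t * (m1 - m0), s0 + t * (s1 - s0)
    raise ValueError("distance outside the calibrated range")

def p_distance_given_reading(s, samples, step=0.01):
    """Discretized slice P(distance | reading = s), normalized over the range."""
    d_lo, d_hi = samples[0][0], samples[-1][0]
    n = int(round((d_hi - d_lo) / step))
    ds = [d_lo + i * (d_hi - d_lo) / n for i in range(n + 1)]
    ps = [tri_density(s, *interp_params(samples, d)) for d in ds]
    z = sum(ps)  # discrete stand-in for the normalizing integral (the alpha)
    return ds, [p / z for p in ps]
```

The normalization by z plays the role of the integral in the cross-section equation, evaluated with a small user-defined step size.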

Figure 14. Triangle distribution: support from MIN to MAX, peak at µ, shoulders at µ±σ, tails beyond µ±2σ.

Using this model we can determine µ_d and σ_d values for all the sampled distances. Next, we assume that the triangle distribution's parameters change linearly from one sampled distance to the next. Under this assumption we can calculate µ and σ for any PDF in between the actual samples. The equation for P(distance = D | sensor reading = S) can now be approximated using basic numerical integration with a small, user-defined step size.

4.3. Grid-based approach

As mentioned before, our first attempt was the grid-based approach. This section covers the details of this approach as well as the implementation of the localization algorithm. The RF sensor gives information regarding the position of the robot relative to known transmitter locations. The inertial sensor gives information about the accelerations of the robot, which in turn gives information about velocity. Since the sensors give information about acceleration, and we track velocity, the state information includes the position x, velocity x', and acceleration x''. Since these experiments are conducted in the x-y plane, the PDF has a total of six dimensions (i.e., 2*3). The PDF p(x, x', x'') at a given time represents the probability of the robot being at that moment at location x, moving with velocity x', and having an instantaneous acceleration of x''. The robot can be localized by looking at the x that maximizes ∫∫ p(x, x', x'') dx' dx''. The grid is updated using the algorithm below.

p_0(x, x', x'') is the belief that the robot's position = x, velocity = x', and acceleration = x''
LOOP
    t = t + 1

    Δt = elapsed time since the last interval

    // UPDATE based on IMU and previous belief of position and velocity
    let r_{t+1}'' be the acceleration reading received from the IMU
    let f(y'' | r'') be the PDF for the actual acceleration being y'', given the
        IMU reading r''. f(y'' | r'') models the error in the IMU.
    a(x, x', x''; r_{t+1}'', Δt) = f(x'' | r_{t+1}'') ∫ p_t(y, y', u'') du''
        where (y, y', u'') represents a robot state that could result in (x, x', x'')
        during the time interval, given the acceleration of x''. y and y' are defined
        as (derivations not shown for conciseness):
            y' = x' - (1/2) Δt (x'' + u'')
            y  = x - x' Δt + (1/6) Δt² (2x'' + u'')

    // UPDATE based on all RF sensors
    b(x, x', x'') = PDF uniformly distributed over x, x', x''
    for each sensor s_i
        let r_i be the sensor reading from s_i
        let g(d | r_i) be a distribution learned off-line, where d is distance
        let l be the location of s_i
        h(x, x', x'') = b(x, x', x'') * g(dist(x, l) | r_i)
        b = h

    // Normalize PDF
    p_{t+1} = α * a * b
END LOOP

After each iteration, p_{t+1} is the PDF for (x, x', x''). The position-only PDF w(x) is

    w(x) = ∫∫ p(x, x', x'') dx' dx''

We can measure the accuracy of localization by comparing the true position of the robot with the current state of the PDF grid. There are two error functions that assess accuracy, as described in [1]. The first is a weighted average over all possible locations:

    error_WEIGHT(w, tx) = ∫ w(x) · dist(tx, x) dx

Here w is the current position-only PDF (i.e., removing velocity and acceleration from the state information) and tx corresponds to the true position of the robot. The function dist(p1, p2) returns the Euclidean distance between points p1 and p2. A small error is obtained by having small w values at locations that are very distant from tx. Ideally, all the volume in the PDF w is located near tx, which produces small distance values. Numerically, error_WEIGHT(w, tx) yields the average distance from the true location. A second error function assumes the robot's predicted location corresponds to the location with the maximum value for w. This is referred to as the maximum a posteriori (MAP) prediction [11]. The distance between this position and the true position is the MAP error.

    error_MAP(w, tx) = dist(tx, L), where w(L) = max_x w(x)

MAP error is not used in this work, but is mentioned here as a possible error measure for future research.

Particle filter approach

The grid-based approach was abandoned because the size of the grid imposed excessive computational requirements. The grid was extremely large because the state is represented by three two-dimensional vectors of position, velocity, and acceleration, which implies a 6-dimensional array. For a 2x2m area divided into 2x2mm regions, 1000x1000 grid cells would be needed for position alone. Allowing a 10x10 grid for both acceleration and velocity would require 10^6 * 10^2 * 10^2 = 10^10 cells, which is equivalent to 40GB of memory. Further, this configuration might still give poor resolution on velocity and acceleration. The particle-based approach requires a relatively small amount of memory and time to store and update probabilistic information about the robot's state, represented by a set of weighted discrete elements, or particles, distributed throughout the state space. The corresponding density can always be

constructed using a histogram or a kernel-based density estimation technique. However, in our case we are only interested in the mean and dispersion of the set of particles rather than the density itself. Our state representation consists of three two-dimensional vectors: the robot's position x, velocity x', and acceleration x''. To simplify the notation, a compact form (x, x', x'') is used to represent the robot's state, where each bold-faced item is a two-dimensional vector. Therefore, each particle stores 7 components: six for the robot's state and one for the particle's weight. The set of particles is maintained over time and updated on every time step when new RF or IMU sensor readings arrive. Since the update procedure depends on the type of sensor, the update procedures for each sensor type are described in the next sections.

IMU update

The IMU update procedure starts with a set of weighted particles obtained from the previous RF update when a new IMU reading is received. Let (x, x', x'') be an arbitrarily selected particle from the input set of weighted particles. A new IMU reading is received Δt seconds later. The reading provides the instantaneous acceleration vector r'' at the current time. We are interested in the current state of the robot, Δt seconds after the previous state (x, x', x''). It is assumed that the robot's acceleration changed linearly from x'' to r'' over Δt seconds. These two points define a linear function a(t), where t is a time instant in [0, Δt], that interpolates the robot's acceleration:

    a(t) = t (r'' - x'') / Δt + x''

This allows functions of the robot's velocity and position to be obtained by integrating a(t) once and then twice. The derivations are shown in appendix section 9.3. If we plug Δt into these derived functions we find the robot's position and velocity corresponding to the current state.
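Integrating the interpolated acceleration a(t) once and twice and evaluating at Δt gives a closed-form one-step propagation of a particle's state. A minimal sketch in Python (the function and variable names are ours, not the thesis implementation):

```python
import numpy as np

def propagate(x, v, a_prev, a_new, dt):
    """Advance one particle state (position x, velocity v, acceleration a_prev)
    by dt seconds, assuming the acceleration varies linearly from a_prev to
    the new IMU reading a_new. All state components are 2-D vectors."""
    x_new = x + v * dt + dt**2 * (a_new + 2.0 * a_prev) / 6.0
    v_new = v + dt * (a_new + a_prev) / 2.0
    return x_new, v_new, a_new
```

For a constant acceleration (a_new equal to a_prev) this reduces to the familiar x + v·Δt + ½aΔt².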
The IMU update with the acceleration reading r'' applied to the initial state (x, x', x'') after Δt seconds yields the state:

    (1/6 Δt² (r'' + 2x'') + Δt x' + x,  ½ Δt (r'' + x'') + x',  r'')    (1)

Of course, the IMU does not provide precise information. This work assumes the IMU's readings are uniformly distributed and centered on the true reading. The IMU's noise is modeled by generating several child particles. The number of child particles is proportional to the weight of the particle. Each child particle gets a different, random value of r'' that is probabilistically consistent with the IMU's actual reading and the parent particle's weight. For example, if the IMU's

error is a small uniformly distributed PDF centered on the reading, the child particles would be those resulting from accelerations within a constant radius, called the IMU acceleration perturbation, of the actual reading. In our implementation child particles have accelerations within [r'' - Δr, r'' + Δr], where Δr is the IMU acceleration perturbation. The IMU acceleration perturbation is not an algorithm parameter; it is the amount of noise in the sensor. The increased number of particles is then down-selected to maintain a constant-sized set of particles. This is done by randomly selecting the initial number of particles from the increased set of child particles. The IMU update algorithm can be split into the following steps:

1. Take a set of weighted particles S = {p} and r'' as input
2. Find the particle with the minimum weight W_min
3. Calculate the expected number of child particles:
       expectedNumberOfChildren = Σ_{p ∈ S} p.weight / W_min
4. Calculate a normalization coefficient, which keeps the size of the set of child particles within boundaries:
       normCoeff = (maximum size of increased population) / expectedNumberOfChildren
5. Generate a set R of child particles:
       R = ∅
       for each particle p in S
           numberOfNewParts = p.weight / W_min
           with probability normCoeff do numberOfNewParts times
               generate a random acceleration vector x'' constructed with r''
               calculate position x and velocity x' corresponding to x'' according to (1)
               R = {(x, x', x'')} ∪ R
6. S = randomly select sizeof(S) particles from R

The resulting set of uniformly weighted particles serves as the basis for the next RF reading.

RF update

The RF update starts with a set of uniformly weighted particles obtained from the IMU update when a new RF reading is received. It suffices to describe the process of handling a single RF

39 reading, since multiple transmitters are processed sequentially using the same algorithm. The RF reading is an integer value, which is probabilistically related to the distance from the transmitter. Noise inherent in the RF transmitters is modeled in an off-line training process. This can be done by systematically changing the distance between the transmitter and receiver and taking numerous sensor measurements. The PDF, P(distance RF reading), can be created and stored on the robot. Thus, a sensor reading can be used to provide information about the robot s position. With a single sensor, and a single measurement, location can be viewed as a ring around the transmitter. With multiple transmitters, the location is, loosely, the intersection of all the rings. The rings are not uniformly weighted; instead they have weights that start with small values at the borders and increase to a maximum at the center. This is due to the triangular model of P(distance = D signal reading = S). Figures 15a-d show rings of possible locations around three RF transmitters (a-c) and their intersection (d). The robot is located in the left top corner of the black square. The dark red area in the middle of the rings consists of particles with big weights, while light red edges represent areas of particles with small weights. Figures 15a-c. Three RF rings and d) their intersection. Figures 15e-g. Various intersections of RF rings. 31

Figures 15e-g show various intersections of 3 rings (e, f) and 2 rings (g). If the middle parts of all rings intersect approximately in one point, then their intersection contains one high-probability spot in the middle (e, f); otherwise there are several high-probability spots (g). The true position of the robot is marked by the small green square. Consequently, the particles resulting from the IMU update step are reweighted according to the stored PDF based upon their distance from the transmitter location. Those particles that are inconsistent with the RF readings will receive very small weights and be eliminated. The RF update algorithm can be split into the following steps:

1. Take a set S of uniformly weighted particles and n RF signal readings (rf_1, rf_2, ..., rf_n) as input
2. Update the weight of the particles:
       for each particle p ∈ S
           for each signal reading rf_i
               p.weight *= PDF(dist from p to transmitter i | rf_i)
               if p.weight < Minimum Allowed Weight
                   S = S - {p}
                   break

The resulting set of weighted particles serves as the basis for the next IMU reading. Notice that the update does not utilize velocity or acceleration. The idea is that this can compensate for poor IMU readings that result in positions and velocities that do not agree with the RF readings.

Sequence of updates

Since the IMU update takes a set of weighted particles and produces a uniformly weighted set, while the RF update does the opposite, it is natural to run these updates one after another and use the result from one as the input for the other. In practice, readings of one type may come at a different frequency than readings of the other type. Moreover, for comparison and evaluation purposes we need to be able to run RF-only or IMU-only localization. This problem is solved in the following way. First, if we need to apply two consecutive IMU updates, we use the uniformly weighted set produced by the first update as input for the second update, and it results in an

equal number of child particles. Second, if we need to apply two consecutive RF updates, then we need to expand the set after the first update, because each RF update essentially zeros out some of the particles. Since the positions of particles do not change (they are not moving) during an RF update, we would lose all of them after several updates. This is addressed by a procedure referred to as RF extension. It is similar to the IMU update except that it randomly generates child particles around the parent particle within a constant radius called the RF position perturbation. The number of child particles is determined by the ratio of the parent particle's weight to the minimum weight among all parent particles. The RF position perturbation compensates for the lack of IMU data: the new child particles are effectively the old particles with a random velocity applied. A normalization coefficient, which controls the upper boundary of the expanded set, is used as well.

Another question is how to distribute the initial particles. If only IMU readings are available, a known initial position/velocity/acceleration of the robot is required. In this case the initial set of particles consists of one particle with the given initial information. The IMU update process then generates child particles, causing the set to grow up to the constant upper bound. Once the size of the set exceeds the bound, down-selection takes place. When no initial information about the robot's state is available, RF readings are required. There are two initialization options based on the first RF readings. One option is to start with a set of particles uniformly distributed over the whole space (Figure 16). This approach is inefficient because the first RF update will zero out most particles; those that are non-zero will reside in a small region. Otherwise, especially if child particles appear close to their parents and the positional diversity of the particles is insufficient, they can all be easily eliminated by the next RF updates. For example, assume that we want to keep the size of the set around 10,000 particles. The size of the robot's space of positions is 4000x4000 mm. The intersection of three RF rings is approximately an area of 300x300 mm. If we uniformly cover the whole space with particles and want about 10,000 particles in our set after the first RF update, then we need about 10000*(4000*4000)/(300*300) ≈ 1.7*10^6 initial particles, which is about 170 times more, and almost all of them will be zeroed out by the first RF update. The second option is to have our initial full-size set of particles uniformly cover not the whole state space (Figure 16), but the small region consistent with the first RF reading (Figure 17). It is done by generating initial particles

with positions consistent with the first set of RF readings. This approach was taken in our research. It guarantees that we have enough particles in our initial set around the real initial position, and avoids both the risk of starting with only a few particles left after the first RF update (Figure 18) and the excessive time required to generate enough particles covering the whole state space so that a full-size set remains after the first RF update (Figure 17).

Figure 16. Particles uniformly distributed over the space.
Figure 17. Particles concentrated in the intersection of 3 rings.
Figure 17 (fragment). Particles concentrated in the intersection of 3 rings.
Figure 18. Intersection of 4 RF rings; few particles left after the first RF update from initial particles uniformly distributed over the space.

Finally, several error measures are used to assess the accuracy of our localization algorithm. The measures are based on the estimated position of the robot, calculated as the center of the population of particles, the true position of the robot, and the distances between those two

positions and the particles. All measures should be applied to the uniformly weighted set of particles obtained after the IMU update. The measures are discussed in detail in Section 6.

5. Simulation Environment

Because of the unexpected hardware problems with the IMU and RF sensors, a decision was made to use simulated sensor readings instead of real ones from the robot. Before the hardware problems appeared, multiple sensor and robot calibration experiments had been completed. Data collected during those experiments, along with the RF readings in the previous work [1], helped to guide the writing of the simulator. The simulator allows easy generation of arbitrary paths that help study the impact on localization accuracy of different parameters, such as the noise models of the RF and IMU sensors, the shape of the robot's path, the acceleration of the robot during navigation, and the number of RF transmitters in the environment. The simulator is a Python program that takes the following information as input:

- Size of the state space
- RF model
- Robot's path expressed as a series of points (delta time, instantaneous acceleration)
- Positions of the transmitters

It produces the following output:

1. PDF for the model P(distance | signal reading)
2. A path file, which consists of a series of data points, each containing
   a) time stamp
   b) real position/velocity/acceleration
   c) IMU readings
   d) RF readings (noised signal strength)

The generated PDF file provides data for the P(distance | signal strength) model. It is a text file that has 512 rows, where each row i represents a PDF for P(distance | signal reading = i). In this work, the RF sensors do not provide more than 512 distinct values. The granularity of our

state space is 1 mm; thus each row has one value for each millimeter of the maximum allowed distance from any transmitter to the robot. For example, in most of our experiments the robot did not move further than 6 m away from any transmitter, therefore 6000 columns were used (6 m = 6000 mm). Finally, a value in row i and column j indicates the probability of the robot being j mm away from the source of the signal (the transmitter) given that the corresponding signal reading is i (i.e., P(distance = j | signal reading = i)). The simulator generates a path corresponding to the series of instantaneous accelerations given as input. On each time step of the path, the simulator calculates the true distances to the transmitters and the true acceleration of the robot. Then the RF models are applied to give a stochastic output value. The noise models of the RF sensor can simulate environments with a variety of obstacles. In section 3.2, we saw very noisy indoor and clean outdoor settings. This noise can be simulated using the same distribution that we used in our original P(signal reading | distance) model. It was a customized triangle-shaped discrete distribution with two parameters that are similar to the µ and σ of a Gaussian distribution. Having such a model, we first obtain the distribution of signal readings for a given true distance and then draw a sample from this distribution. For the outdoor settings in our environment, we used 85% of the whole distribution spread out in the middle triangle and 15% in the tails. By changing µ, σ, and the weight ratio between the triangle and the tails, we can obtain various RF noise models. Modeling of µ and σ was done with the following functions: µ(d) = 12d and σ(d) = 0.5d + 10, where d is distance. These approximation functions were obtained through experiments. The noise models of the IMU sensor can simulate environments with various magnetic fields.
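Before turning to the IMU noise, the triangle-with-tails sampling for the RF readings described above can be sketched as follows (a sketch of our own; the exact tail shape, the µ±2σ support of the central triangle, and all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reading(mu, sigma, lo, hi, p_triangle=0.85):
    """Draw one noisy signal reading: with probability p_triangle sample the
    central triangle on [mu - 2*sigma, mu + 2*sigma], otherwise draw a uniform
    tail sample over the rest of [lo, hi]."""
    if rng.random() < p_triangle:
        return rng.triangular(mu - 2 * sigma, mu, mu + 2 * sigma)
    # tails: uniform over [lo, mu - 2*sigma] and [mu + 2*sigma, hi]
    left = max(mu - 2 * sigma - lo, 0.0)
    right = max(hi - (mu + 2 * sigma), 0.0)
    u = rng.random() * (left + right)
    return lo + u if u < left else mu + 2 * sigma + (u - left)
```

Raising p_triangle toward 1 approximates a clean outdoor setting, while heavier tails mimic noisy indoor conditions.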
Experiments show that the robot itself causes a constant change in the orientation readings of the IMU; therefore that change can be added to or subtracted from all readings. However, magnetic fields around the robot that come from external objects cause varying changes that depend on the strength of the field at the current position. Such noise can be simulated using a map of the magnetic fields in the environment. Then, on each step, the real orientation would be perturbed by the current magnetic field. This can be an interesting topic for future research.

In the outdoor environment we assume that there are no significant magnetic fields that behave differently in different parts of the space. Since our localization algorithm applies uniform noise to the acceleration readings within [r'' - Δr, r'' + Δr] while generating child particles, there is no need to do it a second time in the simulator; we can simply double the acceleration perturbation in the localization algorithm. Thus, our implementation of the simulator gives true acceleration values as IMU readings, which are made noisy in the localization algorithm.

6. Experimental Results

The data was originally planned to be gathered on an actual robot, but as mentioned before, it was instead necessary to simulate at a virtual time rate similar to the real sensor update rate. During the experiments, data was generated using the simulator for various paths with various acceleration patterns. Some of the paths are long periods of continuous movement; others include abrupt turns and accelerations. Once the data was generated, our algorithm was applied in an offline manner using various configurations of the input parameters listed below. The accuracy was then compared between different configurations as well as with the true position of the robot. The following are the characteristics of possible configurations of our localization algorithm:

Update type
  o RF only
  o IMU only
  o RF plus IMU
If RF is used (RF only, RF plus IMU)
  o number of transmitters
If IMU is used (IMU only, RF plus IMU)
  o acceleration perturbation: the maximum magnitude of the random numbers that are added to or subtracted from the X and Y acceleration readings. Used for generating child particles during the IMU update.
If RF only is used
  o position perturbation: the maximum magnitude of the random numbers that are added to or subtracted from the X and Y positions of the particles in the population. Used for generating child particles during the RF extension phase.
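The configuration choices above can be captured in a small structure; a sketch (the field names are ours, not the thesis code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocalizationConfig:
    """One experimental configuration of the localization algorithm."""
    update_type: str                              # "RF", "IMU", or "RF+IMU"
    num_transmitters: Optional[int] = None        # when RF readings are enabled
    accel_perturbation: Optional[float] = None    # mm/s^2, IMU child generation
    position_perturbation: Optional[float] = None # mm, RF-only extension phase

# example: the full configuration used in the first straight-path run
cfg = LocalizationConfig("RF+IMU", num_transmitters=4, accel_perturbation=50.0)
```

Fields that do not apply to a given update type are simply left as None.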

The proposed algorithm, in which the RF and IMU sensors are used together, is compared against the above configurations on the basis of several error measures, and better localization accuracy is achieved.

Straight path

Figure 19. Square space with four transmitters (numbered 1-4, at the corners) and the robot at its initial position.

The first series of experiments involves moving the robot along a straight line with a simple acceleration pattern during a short time period. These experiments are easy, but at the same time are useful for understanding the procedure, the inputs of the localization algorithm, and the presentation of the results. In the next section, more complex paths and acceleration patterns are shown. In the straight-path experiments the robot operates in a square space of 4x4 m (see Figure 19). There are four transmitters located at the points (-1, -1), (-1, 4), (4, 4), (4, -1) (meters). The robot starts in the top left corner (0, 0) and travels diagonally towards the bottom and the right sides of the square space. The acceleration pattern used is the following:

1. The robot starts with zero velocity and acceleration.
2. During the first second it accelerates along its path up to 1 m/s² along both axes.
3. During the second second it maintains an acceleration of 1 m/s² along both axes.
4. During the third second it decelerates, such that at the end it has an acceleration of -1 m/s² along both axes.
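As a quick sanity check (our own reconstruction, not the thesis simulator), integrating this per-axis profile at the 100-reading-per-second rate moves the robot roughly 2.83 m along each axis, i.e., about 4 m along the diagonal:

```python
def accel_profile(t):
    """Per-axis acceleration (m/s^2) for the straight-path pattern:
    ramp 0 -> 1 during second 1, hold 1 during second 2,
    ramp 1 -> -1 during second 3. (Reconstructed from the description.)"""
    if t < 1.0:
        return t
    if t < 2.0:
        return 1.0
    return 1.0 - 2.0 * (t - 2.0)

# integrate with a simple Euler step at the 100 Hz reading rate
dt, v, x = 0.01, 0.0, 0.0
for i in range(300):
    v += accel_profile(i * dt) * dt
    x += v * dt
print(round(x, 3), round(v, 3))  # per-axis displacement (m) and final velocity (m/s)
```

The velocity gained in the first two seconds (about 1.5 m/s per axis) stays positive through the decelerating third second, which is why the robot keeps moving forward along the whole path.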

Generally, the increase in speed during the first two seconds is big enough to keep the robot moving forward even during the decelerating period. The total duration of the path is 3 seconds at a rate of 100 readings per second, which gives a total of 300 readings per path.

Full configuration

The first run of the localization algorithm uses readings from all four RF transmitters and readings from the IMU sensor with a 50 mm/s² perturbation. Each of the 300 iterations consists of two steps: the RF update and the IMU update. During the initialization the algorithm generates particles that are consistent with the first set of four RF readings. As mentioned before, those are particles with positions from the intersection of four RF rings. The distribution of weighted particles after the initialization is shown in Figure 20. A darker red color of the particles indicates a larger weight. The initialization is considered iteration 0 and it consists only of the initial RF update.

Figure 20. Iteration 0, initialization (RF update), weighted particles.
Figure 21. Iteration 1, IMU update: the weighted particles from the input generated uniformly weighted child particles, which were down-selected to the resulting particles.

Figure 20 shows three areas with high-weighted particles close to the middle of the edges of the intersection. Note that the robot is shown as the small green square located at the origin. Figure 21 shows the distribution of the particles after the first IMU update. The previous RF update resulted in weighted particles that served as input for the IMU update, which generated uniformly weighted child particles that were down-selected back to 10,000. All resulting

particles are uniformly weighted and have the same color. However, the figures show dark and light regions that are formed by the density of the particles. According to the IMU update, the bigger the weight of an input particle, the more child particles it generates. As a result, heavy-weighted dark regions in Figure 20 turn into dense dark regions in Figure 21. The first iteration starts with this IMU update.

Figure 22. Iteration 1, RF update: uniformly weighted particles from the input resulted in 7092 nonzero-weighted particles.
Figure 23. Iteration 2, IMU update: weighted particles from the input generated uniformly weighted child particles, which were down-selected to the resulting particles.

Figure 22 shows the next RF update, which is the second half of the first iteration. The RF update started with 10,000 uniformly weighted particles and resulted in 7092 weighted nonzero particles. The remaining 2908 particles received a zero weight and were eliminated. Figure 22 shows that the RF update created a heavy-weighted dark spot on the right, while most of the particles on the left are light. The previous three-spot pattern is almost invisible because the majority of those particles are in the light edges of the current RF rings, and thus are given low weight. Figure 23 starts the second iteration and shows the next IMU update, which takes the 7092 weighted particles from the previous RF update and generates child particles, which are down-selected to 10,000 uniformly weighted particles. It is apparent that the left part of the cloud is less dense. This is explained by the fact that input particles in that region had small weights and generated only a few child particles. Also, the three dark spots shown in Figures 20 and 21 are somewhat visible again. The upper right spots are very dense. The left spot is faint but still represented by a slightly larger number of particles, compared to the adjacent regions above and

below. In addition to the two dense spots, the area between them (the right part of the cloud) maintained its density, and the distribution of particles in Figure 23 now looks similar to Figure 21, except for the leftmost region, which was partially eliminated by the RF update shown in Figure 22.

Figures 24a-d. From left to right: RF update (iteration 2), IMU update, RF update (both from iteration 3), IMU update (iteration 4).

Figure 24 shows the next four consecutive updates. These are alternating RF and IMU updates starting from the last (RF) update of iteration 2 through the first (IMU) update of iteration 4. We can follow the same principle to track changes between the updates. Comparing the last picture with Figure 20, it is apparent that the cloud became slightly smaller and more concentrated around the true position of the robot.

Figures 25a-d. From left to right: RF update (iteration 4), IMU update (iteration 5), IMU update (iteration 15), IMU update (iteration 30).

Figures 25a-b show the next two RF and IMU updates, which do not show significant changes. Figures 25c-d show the set of uniformly weighted particles after the IMU updates on iterations 15 and 30. The plots show that the size of the cloud halved by the 15th iteration compared to the first one, and halved again by the 30th iteration.

Figures 26a-d. From left to right: the set of particles during iterations 50, 90, 150, 300. The scale is the same, but the offset of c-d is bigger than that of a-b, therefore the black border is invisible.

Figures 26a-d show the cloud of particles after either an RF or IMU update on iterations 50, 90, 150, and on the last, 300th, iteration. There is a slight visible decrease in size up to around iteration 90, but almost no change afterwards. At the end, there is one small cloud of particles concentrated around the true position of the robot. The cloud successfully moved along with the robot for more than 4000 mm during the 3 seconds. The whole path of both the true position of the robot and the center of the cloud is shown in Figures 27a-c using green and red colors, respectively. The center of the cloud is our main estimate of the robot's position.

Figures 27a-c. True path of the robot (green) and path of the center of the cloud (red). From left to right: full path, fragment at the beginning, and fragment at the end.

Significant fluctuations in the position of the center of the cloud occur at the beginning of the path, when the cloud was big. However, after a few iterations, the position of the center settles down close to the true position of the robot. Figure 27c indicates that even though the cloud's center was moving close to the true position of the robot, when it stopped it was slightly behind, which means that we can expect an error bigger than just the distance between the red and green lines.

In order to evaluate the behavior of the localization algorithm, the following error measures are used:

1. Error Distance: the distance between the estimated position of the robot, calculated as the center of the population of particles, and the true position of the robot. This quantity represents localization accuracy. (Figure 28, distance between the red and the green centers.)
2. Mean ME, or Dispersion: the average of the distances from the center of the population to all particles. This quantity represents the size of the cloud of particles. (Figure 28, red lines show distances.)
3. Mean SD: the standard deviation of the distances in 2.
4. Real ME: the average of the distances from the real position to all particles. (Figure 28, green lines show distances.)
5. Real SD: the standard deviation of the distances in 4.

Figure 28. Real position of the robot (green blob at the origin), estimated position (red spot southwest of the origin) as the center of the population (dispersed red dots), and distances from particles to the real (green lines) and estimated positions (red lines).

The next three figures show the dynamics of these error measures. For better visualization, Figure 29 shows the error values over the whole 300 iterations, while Figures 30 and 31 show the first 30 iterations and the remaining iterations.
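All five measures can be computed directly from a uniformly weighted particle set; a minimal sketch (function and key names are ours):

```python
import numpy as np

def error_measures(particles, true_pos):
    """Compute the five measures for a uniformly weighted particle set:
    Error Distance, Mean ME (dispersion), Mean SD, Real ME, Real SD.
    particles: (N, 2) array of positions in mm; true_pos: (2,) array."""
    center = particles.mean(axis=0)
    d_center = np.linalg.norm(particles - center, axis=1)
    d_true = np.linalg.norm(particles - true_pos, axis=1)
    return {
        "error_distance": np.linalg.norm(center - true_pos),
        "mean_me": d_center.mean(),   # dispersion: size of the cloud
        "mean_sd": d_center.std(),
        "real_me": d_true.mean(),
        "real_sd": d_true.std(),
    }
```

Since the set is uniformly weighted after an IMU update, plain (unweighted) means and standard deviations suffice here.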

Figure 29. Error measures during the straight path, all iterations from 1 to 300.

Figure 30. Error measures during the straight path (fragment), iterations from 1 to 30.

Figure 31. Error measures during the straight path (fragment), iterations from 31 to 300.

As noted before, the cloud of particles is relatively large in the beginning, but decreases in size severalfold during the first 30 iterations and after that closely follows the true position of the robot. Figure 29 shows that all errors settle down within the first 15 iterations, which is a 0.15 s interval. During that time, our main error measure, Error Distance (localization accuracy), drops from 130 mm to about 10 mm. At the same time, Mean ME, or Dispersion (the size of the cloud), drops from about 110 mm to 15 mm. Further, we can see that all errors stay low, with small fluctuations within 10 mm, until the 70th iteration. After that, Error Distance and Real ME grow linearly until the end of the experiment. This growth may be explained by either the accumulation of some error over time or the increase in the robot's speed. We can also note that the size of the cloud remains the same. This result is consistent with the rest of the errors staying low and indicates that the slight decrease in accuracy over time is not caused by a spread of the particles, but by the whole cloud drifting slightly off the true path of the robot. The plots above indicate that several error measures mirror each other, so we will continue to use only two of them: Error Distance and Mean ME. Error Distance stands for localization accuracy and can be viewed as the error of the particle set's mean, while Mean ME represents the size of the cloud and can be viewed as the deviation of the set of particles. Mean ME will also be called dispersion or

deviation. The remaining error measures do not provide significant additional information: Real ME is similar to Error Distance, while Mean SD and Real SD are second-level deviations with respect to Mean ME and Real ME.

Various configurations: RF only, IMU only, RF+IMU

The next set of experiments with a straight path explores and compares various configurations of the localization algorithm. There are three possible update options: RF plus IMU, RF only, and IMU only. In the previous experiment both RF and IMU were used. The current experiments use the same straight path with 4 RF transmitters, 50 mm position perturbation, and 50 mm/s² acceleration perturbation.

Figure 32. Straight path, Error Distance, RF only, IMU only, RF+IMU.

Figure 33. Straight path, Dispersion, RF only, IMU only, RF+IMU.

Figures 32 and 33 show Error Distance and Dispersion for all three configurations. The RF-only update type results in a significantly bigger cloud, which jumps around the true position of the robot. The IMU-only update type shows good results, with an Error Distance better than RF+IMU. It starts with zero Error Distance, which increases over time because of the drift associated with IMU sensors. However, after iteration 250, when the robot starts slowing down, the Error Distance slightly improves, from 25 to 20 mm. Figure 33 indicates that the size of the cloud for the IMU-only update type grows linearly, and after the first 170 iterations (i.e., 1.7 s) it exceeds the size of the cloud for the RF+IMU update type, which keeps its cloud size constant on average. Also, the accuracy of all three update types becomes worse over time, but overall, RF only achieves worse accuracy than RF+IMU and IMU only, which are both very similar. So far, IMU only seems accurate; however, it cannot be fairly compared with the other two update types, because 1) it only works with a known starting location of the robot, and 2) we expect that, because of the drift of IMU sensors, the dispersion of IMU only will grow and its accuracy will worsen over time in longer experiments. The question of how the accuracy and dispersion degrade is addressed later using longer experiments.

Number of RF transmitters

This section investigates how the number of RF transmitters affects localization accuracy. Again, the same straight path experiment is run using the RF only update and all possible combinations of 1, 2, 3, and 4 transmitters out of the 4 available. The position perturbation remains the same (i.e., 50 mm). First, only one RF transmitter is used. This is the worst case, and good results are not expected, because we have only one RF ring that covers a large region instead of a small intersection of several rings. Instead, it is expected that the mean will lie at the center of the RF ring, where the transmitter is located, and the deviation will vary from the minimum to the maximum radius. However, our algorithm operates under the assumption that the robot does not travel more than half of the original size of the space away from a transmitter. In particular, the model of P(distance | signal reading) does not have data for such long distances. Therefore, instead of a full RF ring, the typical case is a half-circle arc that intersects the space and extends beyond it in both directions. The mean of such an arc is no longer at the center of the ring. This explains why the mean is less than the distance from the robot to the transmitter, and why it can change over time as the arc moves. Such behavior can be seen when transmitters #2 and #4 were used. Figures 34 and 35 show the mean and the deviation of four experiments with one transmitter.

Figure 34. Straight path, Error Distance, 1st, 2nd, 3rd, and 4th transmitter used.

Figure 35. Straight path, Dispersion, 1st, 2nd, 3rd, and 4th transmitter used.

Figures 34 and 35 show that the experiment for transmitter #1 stopped after 90 iterations, because the last RF update zeroed out all the particles. Sometimes, an RF reading is significantly inconsistent with the current locations of the particles. As shown in section 3.2, our configuration of RF motes is incapable of distinguishing between signals at distances less than a certain threshold (e.g., 1.6 meters, as shown in Figure 4). In this case transmitter #1 is only 1.4 meters away from the initial position of the robot. Figure 36 shows how different the signal readings and the corresponding RF rings can be for the same location of the robot when it is close to a transmitter.

Figure 36. Various RF rings around transmitter #1 corresponding to the same location of the robot.

On each step of the algorithm a similar ring is intersected with the current set of particles. Sometimes it may cut off all or almost all particles. For example, Figure 37 shows the set of

particles after the first 21 iterations, and Figure 38 shows the set of weighted particles that corresponds to the RF reading at iteration 22. Notice that these two clouds are almost disjoint. The inner radius of the ring in Figure 37 is slightly bigger than 1 m (the distance from transmitter #1, located in the center of the ring, to the robot, shown as a small green square at the corner), while the outer radius of the ring in Figure 38 is about 1 m. These two rings are essentially superimposed and then pointwise multiplied, zeroing out almost all particles.

Figure 37. Set of particles at the beginning of iteration 22. Figure 38. PDF of the RF reading at iteration 22.

The remaining 25 particles out of the previous 6400 are shown in Figure 39a. In this case not all particles were eliminated. However, when the algorithm was repeated several times, there were runs in which all particles were zeroed out at this iteration. Figures 39b-d show how the set of particles recovered during the next several iterations. Note that Figure 39b shows the set of particles after the next iteration, 23. Each particle from the previous iteration resulted in a square volume of particles. This demonstrates the effect of the 50 mm position perturbation.

Figure 39. a) Set of particles after iteration 22; b-d) set of particles after the next three iterations.
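The pointwise multiplication described above can be sketched as a particle reweighting step. The code below is an illustrative stand-in, not the thesis implementation: it substitutes a simple Gaussian ring likelihood for the learned P(distance | signal reading) model, and all names and parameter values are hypothetical.

```python
import math
import random

def rf_update(particles, weights, tx_pos, measured_dist, sigma=100.0):
    """Reweight particles by one RF range reading (Gaussian ring stand-in
    for the learned P(distance | signal reading) model).

    Returns normalized weights, or None when every particle is zeroed
    out, which is the failure mode described above."""
    new_w = []
    for (x, y), w in zip(particles, weights):
        d = math.hypot(x - tx_pos[0], y - tx_pos[1])
        # Likelihood of this particle given the measured range (mm).
        like = math.exp(-0.5 * ((d - measured_dist) / sigma) ** 2)
        new_w.append(w * like)
    if max(new_w) < 1e-12:           # reading inconsistent with the cloud
        return None                  # representation of the robot is lost
    s = sum(new_w)
    return [w / s for w in new_w]

# Cloud near (1000, 1000); a consistent reading keeps it alive, while an
# inconsistent one (ring far from every particle) kills the whole cloud.
random.seed(0)
cloud = [(1000 + random.uniform(-50, 50), 1000 + random.uniform(-50, 50))
         for _ in range(100)]
w0 = [1.0] * 100
ok = rf_update(cloud, w0, (0, 0), math.hypot(1000, 1000))
bad = rf_update(cloud, w0, (0, 0), 5000.0)
print(ok is not None, bad is None)  # True True
```

The `None` branch corresponds to the runs reported above where the algorithm had to stop because no particle survived the update.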

On iteration 90, a similar situation occurred, but none of the particles remained after the RF update. Thus, the representation of the robot was lost and the algorithm stopped. The experiment was repeated several times, but it never finished the whole path, stopping every time within the first 100 iterations (1 second). It is conjectured that a larger position perturbation would solve the problem, because the robot is moving too fast at this point. Another exceptional case occurs with transmitter #3, which is located in the bottom right corner of the square space, so that the robot travels directly towards the transmitter. The RF arc mentioned above happened to be symmetrical with respect to the path from the robot to the transmitter; therefore, the center of this arc lies close to the path of the robot, resulting in better accuracy (about 0.5 m) than the other transmitters. However, the length of the arc-shaped cloud was still large; thus, the deviation for transmitter #3 is large too.

The following considers all configurations where two RF transmitters are used. Figures 40 and 41 show Error Distance and Dispersion for the six possible combinations of two transmitters out of four. There is one case, when transmitters #1 and #3 (i.e., 1x3x) were used, in which the algorithm did not finish. Cases 12xx, x2x4, and 1xx4 also demonstrate bad results. Cases x23x and xx34 show significantly better results. All these cases are discussed below.

Figure 40. Straight path, Error Distance, 2 transmitters used.

Figure 41. Straight path, Dispersion, 2 transmitters used.

First, there is one case (1x3x) in which all particles received a zero weight during the RF update at iteration 22. The reason is the same one that interrupted the previous experiment with transmitter #1: the closeness of the robot to a transmitter and, as a result, inconsistent RF readings. In this case two RF rings intersect in two spots, and both spots belong to the area where the robot can possibly travel (see Figure 42). Both spots are maintained over time. Experiments indicate that there is a chance the spots will eventually merge, or that one of them will be eliminated while the other gets closer to the true position of the robot. However, since the robot is close to the first transmitter at the beginning, there is also a good chance that all particles will be zeroed out during one of the RF updates. This actually happened at the same iteration, 22, as in the previous experiment. Transmitter #3 did not help to move the cloud during the previous iterations enough for it to avoid being completely eliminated. In particular, this happened because of the location of transmitter #3: the intersection of rings #1 and #3 is a narrow stripe-shaped region (Figure 42a), which is no better than the ring around #1 alone. However, there are two other cases (12xx and 1xx4) in which transmitter #1 was used but the experiments completed. In these cases, the rings of transmitters #2 and #4, when intersected with the ring around transmitter #1, create a region that goes through the whole width of the ring around transmitter #1. This shape of the intersection helps to resist fluctuations of ring #1. Generally, more transmitters help to create

a more accurate cloud of particles that surrounds the true position of the robot and also has a margin around it, which prevents eliminating all particles.

A similar situation occurred when transmitters #2 and #4 were used (see Figure 43). Two regions were identified as the potential location of the robot during initialization. In the previous example, the robot moved along the line going from one transmitter to the other; thus the typical intersection of two RF rings in that case is a stripe. In this case, the robot moves along a line that is almost perpendicular to the line between the transmitters. This results in two spots that are maintained over time. One of the spots estimates the true position of the robot with good accuracy; however, since the mean lies midway between the two spots, the data shown in the charts indicate a large Error Distance. Thus, Error Distance can be misleading for multi-modal distributions.

Figures 42a-b. Intersection of 2 RF rings from the 1st and 3rd transmitters. Initialization is on the left; the 21st iteration, right before both spots were eliminated, is on the right.

Figure 43. Intersection of 2 RF rings from the 2nd and 4th transmitters. Initialization is on the left and the 220th iteration is on the right.

Once the robot crosses the line between transmitters #2 and #4, the two areas merge into one. At that moment the accuracy and the size of the cloud are at their minimum. The charts indicate the best mean and deviation values around iteration 230. After that, once the robot moves farther from that line, the cloud splits into two spots again and they start moving apart, which results in a worsening mean and deviation. Two other cases, 12xx and 1xx4, also resulted in two spots, but because of the location of the transmitters and the path of the robot those spots never merged, and their mean and deviation indicate a complete failure to localize the robot. The two remaining cases, x23x and xx34, show significantly better results than all other configurations of two transmitters.
This is explained by the fact that their rings intersect in only one spot within our operating area. The second spot was eliminated because it was far from the square space, and the distance from that spot to one of the transmitters exceeded the maximum allowed distance between the robot and a transmitter.
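The two-spot ambiguity described above, and why the mean of a bimodal cloud is misleading, can be illustrated numerically. This sketch assumes a simplified Gaussian ring likelihood and an invented transmitter geometry; it is not the experimental configuration from the thesis.

```python
import math

def ring_like(p, tx, r, sigma=100.0):
    """Likelihood of point p given a range reading r from transmitter tx."""
    d = math.hypot(p[0] - tx[0], p[1] - tx[1])
    return math.exp(-0.5 * ((d - r) / sigma) ** 2)

# Two transmitters on a baseline; the true robot position is off that line.
tx_a, tx_b = (0.0, 0.0), (2000.0, 0.0)
robot = (1000.0, 800.0)
mirror = (1000.0, -800.0)  # reflection of the robot across the baseline
ra = math.hypot(robot[0] - tx_a[0], robot[1] - tx_a[1])
rb = math.hypot(robot[0] - tx_b[0], robot[1] - tx_b[1])

def joint(p):
    # Pointwise product of the two ring likelihoods, as in the RF update.
    return ring_like(p, tx_a, ra) * ring_like(p, tx_b, rb)

# Both intersection spots carry full weight; the midpoint, where the mean
# of the bimodal cloud sits, carries almost none.
print(joint(robot), joint(mirror), joint((1000.0, 0.0)))
```

Both spots are equally likely given only the two ranges, so the mean of the cloud lands 800 mm from the true position even though one mode is a perfect estimate, matching the behavior reported for case x2x4.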

The following examines all configurations where three or four RF transmitters are used. Figures 44 and 45 show the mean and deviation for these configurations. As can be seen, all of them exhibit very similar behavior. The mean and the deviation settle down within the first 15 iterations; the deviation remains constant for the rest of the path, while the mean increases along with the increasing acceleration until iteration 250 and then slightly decreases.

Figure 44. Straight path, Error Distance, 3 and 4 transmitters used.

Figure 45. Straight path, Dispersion, 3 and 4 transmitters used.

These experiments indicate that no improvement is achieved by using 4 transmitters over 3. However, there is a significant improvement in using 3 over 2. This is expected from basic geometry, since 3 rings (with non-collinear centers) are enough to unambiguously identify a point on the plane if the distances from that point to the centers of the rings are known. From the same point of view, two transmitters are not sufficient, because 2 RF rings generally intersect in two spots; reasonable accuracy can be obtained only if additional information helps eliminate one of the spots. In the case of one transmitter, it is impossible to achieve good estimates. Since the configuration with four transmitters is among the most accurate with respect to the number of transmitters used, we expect it to be the most accurate when we integrate IMU readings into the algorithm as well. A few additional experiments confirmed this hypothesis; therefore, the cases with various numbers of transmitters in the RF+IMU scenario are not described in detail.

Acceleration and position perturbations

There are two quantities left that require additional investigation: acceleration perturbation and position perturbation. Acceleration perturbation is the magnitude of the random values that are added to, or subtracted from, the X and Y components of IMU acceleration readings. It is used for generating child particles in the IMU only and RF+IMU configurations during the IMU update. Position perturbation is the maximum of the random values that are added to, or subtracted from, the X and Y components of particles' positions. It is used for generating child particles in the RF only configuration during the RF extension phase. First, we investigate the effect of position perturbation on the accuracy and size of the cloud. RF only is the only update type where position perturbation takes place.
It replaces the IMU update phase: some motion model is applied to move particles around so that the particles with the best positions can be selected later. Experiments were run on the same straight path using 4 transmitters and various position perturbation values from 10 to 100 mm, in increments of 10 mm, for a total of ten runs. It was expected that the size of the cloud would increase along with the spread of the particles caused by the position perturbation.
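A minimal sketch of this extension phase, assuming uniform perturbations within a square around each parent particle (function and parameter names are hypothetical):

```python
import random

def extend_particles(particles, pos_pert=50.0, children=4):
    """RF-only extension phase: each parent spawns children spread
    uniformly in a square of side 2 * pos_pert around it (mm).  The
    subsequent RF update then keeps the best-placed children."""
    out = []
    for (x, y) in particles:
        for _ in range(children):
            out.append((x + random.uniform(-pos_pert, pos_pert),
                        y + random.uniform(-pos_pert, pos_pert)))
    return out

random.seed(1)
kids = extend_particles([(0.0, 0.0)], pos_pert=50.0, children=1000)
# All children stay inside the 100 mm x 100 mm square around the parent.
print(all(abs(x) <= 50 and abs(y) <= 50 for x, y in kids))  # True
```

This square spread is consistent with the square volumes of particles visible in Figure 39b, and it makes the trade-off below concrete: a small `pos_pert` cannot keep up with a fast robot, while a large one inflates the cloud.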

Figure 46. Straight path, Error Distance, RF only, 4 transmitters, position perturbation from 10 to 50.

Figure 46 shows the dynamics of the Error Distance for the first 5 runs, where the perturbation changes from 10 mm to 50 mm. The experiments for 10 mm and 20 mm were stopped because all particles received zero weight. This happened because the spread of the particles was not large enough to follow the robot, which was moving faster and faster. The remaining three configurations show similar mean and deviation. Similar results were also obtained using perturbations up to 100 mm. In order to visualize them better, the average mean and deviation were calculated for each configuration over the whole path. Figure 47 shows the average values of the mean and deviation for the configurations with various perturbations. As expected, the deviation grows linearly along with the spread of the particles. The mean drops to the point where the cloud can follow the velocity of the robot properly and then starts increasing slightly along with the increasing spread of the particles. It is expected that the shape of the Error Distance curve depends on the velocity pattern of the robot. If the robot moves at high speeds, then the perturbation has to be large enough to follow it. If the robot moves slowly, then large perturbation values will result in larger fluctuations of the center of the cloud. Thus, the proper setting depends on the physical characteristics of the robot; however, RF only is not a focus of this work, so configuring this parameter is not a concern.

Figure 47. Straight path, average of Error Distance and Dispersion, RF only, 4 transmitters, position perturbation from 30 to 100.

The next set of experiments was conducted to investigate the effect of acceleration perturbation on the accuracy and size of the cloud. IMU only and RF+IMU are the two update types where acceleration perturbation takes place. That is, how well do these algorithms operate with an increasingly noisy IMU? Acceleration perturbation adds noise to the readings provided by the IMU: for each child particle generated during an IMU update, we add to its acceleration a value whose magnitude is at most the acceleration perturbation (i.e., the noise model of the IMU). Experiments were conducted using the same straight path and the RF+IMU update type with 4 transmitters and various acceleration perturbation values from 25 to 250 mm/s², in increments of 25 mm/s², for a total of ten runs. Experimental results indicate that all configurations show similar mean and deviation. In order to visualize them better, we plotted the average mean and deviation for each configuration over the whole path. Figure 48 shows the average values of the mean and deviation for the configurations with various perturbations.
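A sketch of such an IMU update, assuming a particle state of (position, velocity), a 100 Hz reading rate, uniform perturbation noise, and trapezoidal integration (the thesis's exact integration scheme and names are not restated here, so treat these as assumptions):

```python
import random

DT = 0.01  # 100 IMU readings per second, as in the experiments

def imu_update(particles, accel, acc_pert=50.0):
    """IMU update sketch: each particle carries (x, y, vx, vy); the IMU
    acceleration reading (mm/s^2) is perturbed by up to +/- acc_pert for
    every child, then integrated to update velocity and position."""
    out = []
    for (x, y, vx, vy) in particles:
        ax = accel[0] + random.uniform(-acc_pert, acc_pert)
        ay = accel[1] + random.uniform(-acc_pert, acc_pert)
        nvx, nvy = vx + ax * DT, vy + ay * DT
        # Trapezoidal position update using average velocity over the step.
        out.append((x + 0.5 * (vx + nvx) * DT,
                    y + 0.5 * (vy + nvy) * DT, nvx, nvy))
    return out

random.seed(0)
cloud = [(0.0, 0.0, 0.0, 0.0)]
for _ in range(100):              # one second of constant 1000 mm/s^2
    cloud = imu_update(cloud, (1000.0, 0.0), acc_pert=0.0)
x, y, vx, vy = cloud[0]
print(round(vx), round(x))  # 1000 500  (v = a*t, x = a*t^2 / 2)
```

With a non-zero `acc_pert`, each step fans the children out in velocity space, which is why IMU only dispersion grows without bound while RF updates in RF+IMU repeatedly prune the spread.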

Figure 48. Straight path, average of Error Distance and Dispersion, RF+IMU, 4 transmitters, acceleration perturbation from 25 to 250.

We can see that the deviation stays almost the same for all configurations. One might expect the size of the cloud to grow proportionally with the growing acceleration perturbation (IMU only experiments do indicate such behavior); however, in the RF+IMU case, the RF updates help the deviation stay constant around 20 mm. At the same time, we observe a slight linear increase in the mean, which can be explained by increasing fluctuations of the particles' positions caused by increasing fluctuations in their accelerations. Initial experiments with the actual IMU showed that even though the orientation reading provided by the sensor can be significantly influenced by magnetic fields, the inertial readings providing instantaneous acceleration are reasonably accurate. Thus the RF component helps mitigate the noise present in the IMU sensor.

Other paths

In this section, more complicated and longer experiments were conducted in order to better study the performance of the algorithm. As shown before, the IMU only update type cannot truly compete with RF+IMU because it requires information about the robot's initial position. In contrast to IMU only, RF+IMU is able to precisely localize the robot within 15 iterations without knowledge of the robot's initial position. As a side note, both the IMU only and RF+IMU configurations require information about the initial velocity of the robot. In all experiments described previously, the robot is assumed to have an initial velocity of zero, and this

information is part of the input. If the algorithm is started from the middle of the straight path, where the robot already has a non-zero velocity but the particles are assigned a zero velocity, then both the IMU only and RF+IMU configurations quickly lose track of the true position of the robot, and the next RF update zeroes out all the particles. This zeroing out happens because the IMU provides information about acceleration, so a particle's velocity cannot be estimated without its previous velocity. For example, suppose the IMU provided a reading of 4 m/s² and the last reading of 0 m/s² was 1 second ago. We interpolate the acceleration and calculate that the average acceleration for the last second was 2 m/s². If the previous speed was 0 m/s, then after one second the robot's speed became 2 m/s; therefore, the interpolated speed was 1 m/s and the robot traveled 1 m on average. However, if the previous speed was 4 m/s, then the average speed would be 5 m/s and the robot would have traveled 5 m since the last update. Without information about the previous velocity, it is impossible to calculate the new velocity based on the IMU readings. For our robot, this problem may be easily solved if the robot has control over its motion: if velocity information is not known, or the position is lost, we can either use velocity readings provided by the robot's wheel encoders or issue a command to stop the robot and restart the localization algorithm.

Square path

Another series of experiments was conducted using a rounded square path, which includes the patterns previously seen when moving along a straight line with various accelerations. Additionally, the rounded square path includes gradual turns with a small radius in the corners of the square. In these experiments the robot operates in the same 4x4 m square space. Four transmitters are located at the same points (-1, -1), (-1, 4), (4, 4), (4, -1) (meters).
The robot starts in the top left corner (0, 0) and travels 1 m toward the bottom, then turns left and travels 1 m more, then turns left again and travels 1 m up, and finally travels back, stopping about 0.25 m before its initial position. The total duration of the path is 4 seconds at a rate of 100 readings per second, giving a total of 400 readings per path.

Figure 49. Square path. True path of the robot (green) and path of the center of the cloud (red). From left to right: RF only (4 transmitters, 50 mm position perturbation), IMU only, RF+IMU (50 mm/s² acceleration perturbation for both).

Figure 49 shows the trajectories of the true position of the robot and the estimated position provided by the algorithm for RF only, IMU only, and RF+IMU. The patterns are similar to those observed on the straight path: 1) RF only follows the path, but jumps around; 2) IMU only follows the path accurately; and 3) RF+IMU settles down very quickly at the beginning and accurately follows the path as well. Figures 50 and 51 show the mean and deviation for the three update types.

Figure 50. Square path, Error Distance, RF only, IMU only, RF+IMU.

Figure 51. Square path, Dispersion, RF only, IMU only, RF+IMU.

Figure 50 confirms the hypothesis that the accuracy decreases when the speed increases. Four waves can be seen: when the robot reaches its maximum speed in the middle of each of the four sides, the Error Distance reaches its maximum. Similar patterns appear for both the IMU only and RF+IMU configurations. The deviation shown in Figure 51 is as expected: RF only fluctuates, but is stable; IMU only increases because of the drift; and RF+IMU does better than IMU only starting from iteration 170, but a longer experiment is needed to draw a conclusion about its trend.

Long path

Next, a series of experiments was conducted using a long path. It includes moving along straight lines followed by a series of figure eights. The robot operates in the same environment with the same positions of transmitters. It starts in the top left corner (0, 0), then travels diagonally towards the middle of the region, and starts a series of straight lines and curves. The total duration of the path is 38.18 seconds at a rate of 100 readings per second, which gives a total of 3818 readings per path.

Figures 52a-c. Long path. True path of the robot (green) and path of the center of the cloud (red). From left to right: RF only (4 transmitters, 50 mm position perturbation), IMU only, RF+IMU (50 mm/s² acceleration perturbation for both).

Figure 52 shows the trajectories of the true position of the robot and the estimated position provided by the algorithm for RF only, IMU only, and RF+IMU. The figures show patterns for these update types similar to those seen on the straight and square paths. However, since this path is long, the problems with the IMU become apparent. Figure 52b clearly shows that the green and red lines started as one line in the top left corner at the beginning of the experiment, but deviated significantly by the end of the experiment, when the robot traveled the last figure-eight curve at the top. This result indicates that the estimates provided by the IMU worsen over time.

Figure 53. Long path, Error Distance, RF only, IMU only, RF+IMU.


More information

MAPPING ALGORITHM FOR AUTONOMOUS NAVIGATION OF LAWN MOWER USING SICK LASER

MAPPING ALGORITHM FOR AUTONOMOUS NAVIGATION OF LAWN MOWER USING SICK LASER MAPPING ALGORITHM FOR AUTONOMOUS NAVIGATION OF LAWN MOWER USING SICK LASER A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Engineering By SHASHIDHAR

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics Probabilistic Motion and Sensor Models Some slides adopted from: Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras and Probabilistic Robotics Book SA-1 Sensors for Mobile

More information

Exterior Orientation Parameters

Exterior Orientation Parameters Exterior Orientation Parameters PERS 12/2001 pp 1321-1332 Karsten Jacobsen, Institute for Photogrammetry and GeoInformation, University of Hannover, Germany The georeference of any photogrammetric product

More information

Localization of Multiple Robots with Simple Sensors

Localization of Multiple Robots with Simple Sensors Proceedings of the IEEE International Conference on Mechatronics & Automation Niagara Falls, Canada July 2005 Localization of Multiple Robots with Simple Sensors Mike Peasgood and Christopher Clark Lab

More information

Digital Compass Accuracy

Digital Compass Accuracy Digital Compass Accuracy Lindsey Hines, University of St. Thomas Mentor: Dr. James Bellingham Summer 2007 Keywords: digital compass, Microstrain, tilt measurement error ABSTRACT The overall goal of the

More information

Encoder applications. I Most common use case: Combination with motors

Encoder applications. I Most common use case: Combination with motors 3.5 Rotation / Motion - Encoder applications 64-424 Intelligent Robotics Encoder applications I Most common use case: Combination with motors I Used to measure relative rotation angle, rotational direction

More information

CS 4758 Robot Navigation Through Exit Sign Detection

CS 4758 Robot Navigation Through Exit Sign Detection CS 4758 Robot Navigation Through Exit Sign Detection Aaron Sarna Michael Oleske Andrew Hoelscher Abstract We designed a set of algorithms that utilize the existing corridor navigation code initially created

More information

Robot Localization based on Geo-referenced Images and G raphic Methods

Robot Localization based on Geo-referenced Images and G raphic Methods Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,

More information

Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines

Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines Thomas Röfer 1, Tim Laue 1, and Dirk Thomas 2 1 Center for Computing Technology (TZI), FB 3, Universität Bremen roefer@tzi.de,

More information

Upgraded Swimmer for Computationally Efficient Particle Tracking for Jefferson Lab s CLAS12 Spectrometer

Upgraded Swimmer for Computationally Efficient Particle Tracking for Jefferson Lab s CLAS12 Spectrometer Upgraded Swimmer for Computationally Efficient Particle Tracking for Jefferson Lab s CLAS12 Spectrometer Lydia Lorenti Advisor: David Heddle April 29, 2018 Abstract The CLAS12 spectrometer at Jefferson

More information

Satellite and Inertial Navigation and Positioning System

Satellite and Inertial Navigation and Positioning System Satellite and Inertial Navigation and Positioning System Project Proposal By: Luke Pfister Dan Monroe Project Advisors: Dr. In Soo Ahn Dr. Yufeng Lu EE 451 Senior Capstone Project December 10, 2009 PROJECT

More information

Aided-inertial for GPS-denied Navigation and Mapping

Aided-inertial for GPS-denied Navigation and Mapping Aided-inertial for GPS-denied Navigation and Mapping Erik Lithopoulos Applanix Corporation 85 Leek Crescent, Richmond Ontario, Canada L4B 3B3 elithopoulos@applanix.com ABSTRACT This paper describes the

More information

Inertial Navigation Systems

Inertial Navigation Systems Inertial Navigation Systems Kiril Alexiev University of Pavia March 2017 1 /89 Navigation Estimate the position and orientation. Inertial navigation one of possible instruments. Newton law is used: F =

More information

Estimation of Altitude and Vertical Velocity for Multirotor Aerial Vehicle using Kalman Filter

Estimation of Altitude and Vertical Velocity for Multirotor Aerial Vehicle using Kalman Filter Estimation of Altitude and Vertical Velocity for Multirotor Aerial Vehicle using Kalman Filter Przemys law G asior, Stanis law Gardecki, Jaros law Gośliński and Wojciech Giernacki Poznan University of

More information

Dealing with Scale. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Dealing with Scale. Stephan Weiss Computer Vision Group NASA-JPL / CalTech Dealing with Scale Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline Why care about size? The IMU as scale provider: The

More information

EE565:Mobile Robotics Lecture 2

EE565:Mobile Robotics Lecture 2 EE565:Mobile Robotics Lecture 2 Welcome Dr. Ing. Ahmad Kamal Nasir Organization Lab Course Lab grading policy (40%) Attendance = 10 % In-Lab tasks = 30 % Lab assignment + viva = 60 % Make a group Either

More information

An Intro to Gyros. FTC Team #6832. Science and Engineering Magnet - Dallas ISD

An Intro to Gyros. FTC Team #6832. Science and Engineering Magnet - Dallas ISD An Intro to Gyros FTC Team #6832 Science and Engineering Magnet - Dallas ISD Gyro Types - Mechanical Hubble Gyro Unit Gyro Types - Sensors Low cost MEMS Gyros High End Gyros Ring laser, fiber optic, hemispherical

More information

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza ETH Master Course: 151-0854-00L Autonomous Mobile Robots Summary 2 Lecture Overview Mobile Robot Control Scheme knowledge, data base mission

More information

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,

More information

MULTI-MODAL MAPPING. Robotics Day, 31 Mar Frank Mascarich, Shehryar Khattak, Tung Dang

MULTI-MODAL MAPPING. Robotics Day, 31 Mar Frank Mascarich, Shehryar Khattak, Tung Dang MULTI-MODAL MAPPING Robotics Day, 31 Mar 2017 Frank Mascarich, Shehryar Khattak, Tung Dang Application-Specific Sensors Cameras TOF Cameras PERCEPTION LiDAR IMU Localization Mapping Autonomy Robotic Perception

More information

Autonomous Mobile Robot Design

Autonomous Mobile Robot Design Autonomous Mobile Robot Design Topic: EKF-based SLAM Dr. Kostas Alexis (CSE) These slides have partially relied on the course of C. Stachniss, Robot Mapping - WS 2013/14 Autonomous Robot Challenges Where

More information

Introduction to Multi-Agent Programming

Introduction to Multi-Agent Programming Introduction to Multi-Agent Programming 6. Cooperative Sensing Modeling Sensors, Kalman Filter, Markov Localization, Potential Fields Alexander Kleiner, Bernhard Nebel Contents Introduction Modeling and

More information

APPLICATION OF AERIAL VIDEO FOR TRAFFIC FLOW MONITORING AND MANAGEMENT

APPLICATION OF AERIAL VIDEO FOR TRAFFIC FLOW MONITORING AND MANAGEMENT Pitu Mirchandani, Professor, Department of Systems and Industrial Engineering Mark Hickman, Assistant Professor, Department of Civil Engineering Alejandro Angel, Graduate Researcher Dinesh Chandnani, Graduate

More information

Geometric Rectification of Remote Sensing Images

Geometric Rectification of Remote Sensing Images Geometric Rectification of Remote Sensing Images Airborne TerrestriaL Applications Sensor (ATLAS) Nine flight paths were recorded over the city of Providence. 1 True color ATLAS image (bands 4, 2, 1 in

More information

Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS

Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska 1 Krzysztof Krawiec IDSS 2 The importance of visual motion Adds entirely new (temporal) dimension to visual

More information

USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS. Oliver Wulf, Bernardo Wagner

USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS. Oliver Wulf, Bernardo Wagner USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS Oliver Wulf, Bernardo Wagner Institute for Systems Engineering (RTS/ISE), University of Hannover, Germany Mohamed Khalaf-Allah

More information

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms L17. OCCUPANCY MAPS NA568 Mobile Robotics: Methods & Algorithms Today s Topic Why Occupancy Maps? Bayes Binary Filters Log-odds Occupancy Maps Inverse sensor model Learning inverse sensor model ML map

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

Tightly-Integrated Visual and Inertial Navigation for Pinpoint Landing on Rugged Terrains

Tightly-Integrated Visual and Inertial Navigation for Pinpoint Landing on Rugged Terrains Tightly-Integrated Visual and Inertial Navigation for Pinpoint Landing on Rugged Terrains PhD student: Jeff DELAUNE ONERA Director: Guy LE BESNERAIS ONERA Advisors: Jean-Loup FARGES Clément BOURDARIAS

More information

Mobile Robots: An Introduction.

Mobile Robots: An Introduction. Mobile Robots: An Introduction Amirkabir University of Technology Computer Engineering & Information Technology Department http://ce.aut.ac.ir/~shiry/lecture/robotics-2004/robotics04.html Introduction

More information

Calibration of a rotating multi-beam Lidar

Calibration of a rotating multi-beam Lidar The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract

More information

Overview for Families

Overview for Families unit: Graphing Equations Mathematical strand: Algebra The following pages will help you to understand the mathematics that your child is currently studying as well as the type of problems (s)he will solve

More information

MEMS technology quality requirements as applied to multibeam echosounder. Jerzy DEMKOWICZ, Krzysztof BIKONIS

MEMS technology quality requirements as applied to multibeam echosounder. Jerzy DEMKOWICZ, Krzysztof BIKONIS MEMS technology quality requirements as applied to multibeam echosounder Jerzy DEMKOWICZ, Krzysztof BIKONIS Gdansk University of Technology Gdansk, Narutowicza str. 11/12, Poland demjot@eti.pg.gda.pl Small,

More information

Xkitz.com. 8 Channel Capacitive Touch Switch Model XCTS-8M. Operators Manual. Invisible Touch Switch:

Xkitz.com. 8 Channel Capacitive Touch Switch Model XCTS-8M. Operators Manual. Invisible Touch Switch: 8 Channel Capacitive Touch Switch Model XCTS-8M Operators Manual Xkitz.com Invisible Touch Switch: The XCTS-8M allows you to install up to 8 capacitive touch switches nearly anywhere. It detects any sudden

More information

OFERTA O120410PA CURRENT DATE 10/04//2012 VALID UNTIL 10/05/2012 SUMMIT XL

OFERTA O120410PA CURRENT DATE 10/04//2012 VALID UNTIL 10/05/2012 SUMMIT XL OFERTA O120410PA CURRENT DATE 10/04//2012 VALID UNTIL 10/05/2012 SUMMIT XL CLIENT CLIENT: Gaitech REPRESENTANT: Andrew Pether MAIL: andyroojp@hotmail.com PRODUCT Introduction The SUMMIT XL has skid-steering

More information

Robotics. Haslum COMP3620/6320

Robotics. Haslum COMP3620/6320 Robotics P@trik Haslum COMP3620/6320 Introduction Robotics Industrial Automation * Repetitive manipulation tasks (assembly, etc). * Well-known, controlled environment. * High-power, high-precision, very

More information

Motion estimation of unmanned marine vehicles Massimo Caccia

Motion estimation of unmanned marine vehicles Massimo Caccia Motion estimation of unmanned marine vehicles Massimo Caccia Consiglio Nazionale delle Ricerche Istituto di Studi sui Sistemi Intelligenti per l Automazione Via Amendola 122 D/O, 70126, Bari, Italy massimo.caccia@ge.issia.cnr.it

More information

Introduction to Mobile Robotics Probabilistic Motion Models

Introduction to Mobile Robotics Probabilistic Motion Models Introduction to Mobile Robotics Probabilistic Motion Models Wolfram Burgard, Michael Ruhnke, Bastian Steder 1 Robot Motion Robot motion is inherently uncertain. How can we model this uncertainty? Dynamic

More information

Build and Test Plan: IGV Team

Build and Test Plan: IGV Team Build and Test Plan: IGV Team 2/6/2008 William Burke Donaldson Diego Gonzales David Mustain Ray Laser Range Finder Week 3 Jan 29 The laser range finder will be set-up in the lab and connected to the computer

More information

VN-100 Hard and Soft Iron Calibration

VN-100 Hard and Soft Iron Calibration VN-100 Hard and Soft Iron Calibration Application Note Abstract This application note is designed to briefly explain typical magnetic disturbances and mitigation strategies. It also addresses in detail

More information

10/11/07 1. Motion Control (wheeled robots) Representing Robot Position ( ) ( ) [ ] T

10/11/07 1. Motion Control (wheeled robots) Representing Robot Position ( ) ( ) [ ] T 3 3 Motion Control (wheeled robots) Introduction: Mobile Robot Kinematics Requirements for Motion Control Kinematic / dynamic model of the robot Model of the interaction between the wheel and the ground

More information

Satellite Attitude Determination

Satellite Attitude Determination Satellite Attitude Determination AERO4701 Space Engineering 3 Week 5 Last Week Looked at GPS signals and pseudorange error terms Looked at GPS positioning from pseudorange data Looked at GPS error sources,

More information

Light and the Properties of Reflection & Refraction

Light and the Properties of Reflection & Refraction Light and the Properties of Reflection & Refraction OBJECTIVE To study the imaging properties of a plane mirror. To prove the law of reflection from the previous imaging study. To study the refraction

More information

Modern Robotics Inc. Sensor Documentation

Modern Robotics Inc. Sensor Documentation Sensor Documentation Version 1.0.1 September 9, 2016 Contents 1. Document Control... 3 2. Introduction... 4 3. Three-Wire Analog & Digital Sensors... 5 3.1. Program Control Button (45-2002)... 6 3.2. Optical

More information

W4. Perception & Situation Awareness & Decision making

W4. Perception & Situation Awareness & Decision making W4. Perception & Situation Awareness & Decision making Robot Perception for Dynamic environments: Outline & DP-Grids concept Dynamic Probabilistic Grids Bayesian Occupancy Filter concept Dynamic Probabilistic

More information

ROBOTICS AND AUTONOMOUS SYSTEMS

ROBOTICS AND AUTONOMOUS SYSTEMS ROBOTICS AND AUTONOMOUS SYSTEMS Simon Parsons Department of Computer Science University of Liverpool LECTURE 6 PERCEPTION/ODOMETRY comp329-2013-parsons-lect06 2/43 Today We ll talk about perception and

More information

Building Reliable 2D Maps from 3D Features

Building Reliable 2D Maps from 3D Features Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;

More information

How to Measure Wedge. Purpose. Introduction. Tools Needed

How to Measure Wedge. Purpose. Introduction. Tools Needed Purpose Optical Wedge Application (OWA) is an add-on analysis tool for measurement of optical wedges in either transmission or reflection. OWA can measure a single part or many parts simultaneously (e.g.

More information

Practical Course WS12/13 Introduction to Monte Carlo Localization

Practical Course WS12/13 Introduction to Monte Carlo Localization Practical Course WS12/13 Introduction to Monte Carlo Localization Cyrill Stachniss and Luciano Spinello 1 State Estimation Estimate the state of a system given observations and controls Goal: 2 Bayes Filter

More information

2002 Intelligent Ground Vehicle Competition Design Report. Grizzly Oakland University

2002 Intelligent Ground Vehicle Competition Design Report. Grizzly Oakland University 2002 Intelligent Ground Vehicle Competition Design Report Grizzly Oakland University June 21, 2002 Submitted By: Matt Rizzo Brian Clark Brian Yurconis Jelena Nikolic I. ABSTRACT Grizzly is the product

More information

Perspective Sensing for Inertial Stabilization

Perspective Sensing for Inertial Stabilization Perspective Sensing for Inertial Stabilization Dr. Bernard A. Schnaufer Jeremy Nadke Advanced Technology Center Rockwell Collins, Inc. Cedar Rapids, IA Agenda Rockwell Collins & the Advanced Technology

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Terry Payne Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Simon Parsons Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

AUTONOMOUS SYSTEMS. PROBABILISTIC LOCALIZATION Monte Carlo Localization

AUTONOMOUS SYSTEMS. PROBABILISTIC LOCALIZATION Monte Carlo Localization AUTONOMOUS SYSTEMS PROBABILISTIC LOCALIZATION Monte Carlo Localization Maria Isabel Ribeiro Pedro Lima With revisions introduced by Rodrigo Ventura Instituto Superior Técnico/Instituto de Sistemas e Robótica

More information

Marker Based Localization of a Quadrotor. Akshat Agarwal & Siddharth Tanwar

Marker Based Localization of a Quadrotor. Akshat Agarwal & Siddharth Tanwar Marker Based Localization of a Quadrotor Akshat Agarwal & Siddharth Tanwar Objective Introduction Objective: To implement a high level control pipeline on a quadrotor which could autonomously take-off,

More information

Middle School Math Course 2

Middle School Math Course 2 Middle School Math Course 2 Correlation of the ALEKS course Middle School Math Course 2 to the Indiana Academic Standards for Mathematics Grade 7 (2014) 1: NUMBER SENSE = ALEKS course topic that addresses

More information

MIP Inertial Sensors with Magnetometers Hard and Soft Iron Calibration Software

MIP Inertial Sensors with Magnetometers Hard and Soft Iron Calibration Software LORD QUICK START GUIDE MIP Inertial Sensors with Magnetometers Hard and Soft Iron Calibration Software Overview Certain MIP inertial sensors contain a magnetometer. The magnetometer values are available

More information

Robotics (Kinematics) Winter 1393 Bonab University

Robotics (Kinematics) Winter 1393 Bonab University Robotics () Winter 1393 Bonab University : most basic study of how mechanical systems behave Introduction Need to understand the mechanical behavior for: Design Control Both: Manipulators, Mobile Robots

More information

Low Cost solution for Pose Estimation of Quadrotor

Low Cost solution for Pose Estimation of Quadrotor Low Cost solution for Pose Estimation of Quadrotor mangal@iitk.ac.in https://www.iitk.ac.in/aero/mangal/ Intelligent Guidance and Control Laboratory Indian Institute of Technology, Kanpur Mangal Kothari

More information

Chapter 6 : Results and Discussion

Chapter 6 : Results and Discussion Refinement and Verification of the Virginia Tech Doppler Global Velocimeter (DGV) 86 Chapter 6 : Results and Discussion 6.1 Background The tests performed as part of this research were the second attempt

More information

ECSE-626 Project: An Adaptive Color-Based Particle Filter

ECSE-626 Project: An Adaptive Color-Based Particle Filter ECSE-626 Project: An Adaptive Color-Based Particle Filter Fabian Kaelin McGill University Montreal, Canada fabian.kaelin@mail.mcgill.ca Abstract The goal of this project was to discuss and implement a

More information

Autonomous Vehicle Navigation Using Stereoscopic Imaging

Autonomous Vehicle Navigation Using Stereoscopic Imaging Autonomous Vehicle Navigation Using Stereoscopic Imaging Project Proposal By: Beach Wlaznik Advisors: Dr. Huggins Dr. Stewart December 7, 2006 I. Introduction The objective of the Autonomous Vehicle Navigation

More information

Horus: Object Orientation and Id without Additional Markers

Horus: Object Orientation and Id without Additional Markers Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky

More information

Vehicle Localization. Hannah Rae Kerner 21 April 2015

Vehicle Localization. Hannah Rae Kerner 21 April 2015 Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular

More information

Introduction to Autonomous Mobile Robots

Introduction to Autonomous Mobile Robots Introduction to Autonomous Mobile Robots second edition Roland Siegwart, Illah R. Nourbakhsh, and Davide Scaramuzza The MIT Press Cambridge, Massachusetts London, England Contents Acknowledgments xiii

More information

Inertial Navigation Static Calibration

Inertial Navigation Static Calibration INTL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 2018, VOL. 64, NO. 2, PP. 243 248 Manuscript received December 2, 2017; revised April, 2018. DOI: 10.24425/119518 Inertial Navigation Static Calibration

More information