Lego Robot Remote Mapping


Registration number: 99

Lego Robot Remote Mapping

Supervised by Dr Graeme Richards

University of East Anglia
Faculty of Science
School of Computing Sciences

Abstract

This paper describes the design, implementation and evaluation of remote mapping software targeted at the low-cost Lego Mindstorms NXT robotics platform. A probabilistic robot localisation and mapping algorithm is used to reduce the negative effects of error-prone sensors and motors on the maps constructed. Methods of modelling such errors in robot perception and movement are designed or adapted from existing systems. The system is evaluated by comparing the maps constructed by the system to the real-world environment explored.

Contents

1 Introduction
    1.1 Requirements
    1.2 Key Areas of Understanding
    1.3 Roadmap
2 Related Work
    2.1 Server to Client Communication
    2.2 Robot Control
        Movement
    2.3 Map Representation
        Coordinate-Based Representation
        Relational Representation
    2.4 Sensor Modelling
        Motion Modelling
        Perception Modelling
    2.5 Simultaneous Localisation and Mapping
    2.6 Path Planning
3 Design
    3.1 Map and Robot Pose Representation
    3.2 Localisation and Mapping Algorithm
        Motion Model
        Map Weighting
        Perception Model
        Map Resampling
    3.3 Exploration Algorithm
        Search Node Initialisation
        Route-Planning Through Nodes
        Route Traversal

4 Implementation
    Communication, Robot Movement and Sensor Reading
        Communication
        Movement
        Sonar Reading
    Graphical User Interface
    Mapping Application Overview
        Client Application
        Server Application
5 Evaluation
6 Conclusion
References

List of Figures

3.1 Data structures for a map and a robot pose
3.2 Notional representation of placing the occupancy grid over an area to be mapped
3.3 Decomposition of a complex robot movement into a rotation-translation-rotation triplet
3.4 Resultant positions from the motion model for two different movements
    Graphic example of the correlation of new and old sensor readings
    Labelled sonar cone and perception model probability distribution
    Grid of search nodes over an area to be mapped
    The GUI during mapping
    Execution flow of the server application mapping and exploration thread
    The first area configuration and two maps produced by the system
    The second area configuration and two maps produced by the system

1 Introduction

Many applications exist where mobile robots perform tasks otherwise carried out by humans. Autonomous cleaning robots save time in domestic environments, whilst unmanned combat vehicles reduce human risk in military applications. In order to perform complex roles autonomously, it is usually necessary to define some representation of the surrounding environment and to provide a method of determining the robot location within that environment. Mapping is the problem of environment representation, whilst localisation is that of determining the robot pose (location and heading) relative to this representation (Milstein, 2008). Commonly a robot has no initial knowledge of the surrounding environment or of its pose. This is problematic, as creating a map requires accurate knowledge of pose, whilst determining this pose requires a reliable map. Localisation and mapping must therefore be performed concurrently, a problem referred to as Simultaneous Localisation and Mapping (abbreviated SLAM).

Mapping and localisation problems pre-date modern robots as a geographic surveying problem. In robotics, some of the key advances in mapping and localisation include Kalman filter techniques (Smith and Cheeseman, 1986), occupancy grid maps (Elfes, 1987; Moravec, 1989), particle filters (Dellaert et al., 1999) and probabilistic algorithms (Thrun et al., 2005). A great diversity of modern probabilistic techniques has been developed to solve the SLAM problem (Stachniss, 2013). Such solutions largely target high-end mobile robots, with little exploration of the benefits of such techniques using cruder sensors and actuators. However, robots are often limited to lower-precision hardware due to the inherent high cost of more accurate alternatives.
The implementation of probabilistic algorithms on inexpensive mobile robots would allow more accurate mapping and localisation with such robots, and would promote wider exposure to SLAM and probabilistic techniques, such as within education.

1.1 Requirements

The overall requirement of the project was to produce remote mapping software targeted toward the LEGO Mindstorms NXT, an inexpensive, highly modular robotics system. Remote mapping software aims to produce a map of an area which describes the area configuration (the location, size and shape of objects and free space). To achieve this, a server component for sensor-data processing, path planning and map display runs on a desktop computer. A client application on the robot manages movement and reports obstacles detected with a sonar range finder. A client-server architecture facilitates the communication of pose and sensor readings between the robot and the desktop machine. Both client and server applications are programmed in Java; the leJOS Java virtualisation platform allows the execution of the non-native Java code on the NXT.

Like all robots, the NXT is subject to uncertainty and error in both sensor and odometer readings. Probabilistic algorithms are used to reduce or recover from the negative effect of such error on the maps constructed by the system: the error inherent to the NXT ultrasonic sensor and odometer readings is modelled and incorporated within a SLAM algorithm. The performance of the SLAM algorithm, the probabilistic techniques used, and the accuracy of the constructed maps are then evaluated.

1.2 Key Areas of Understanding

A firm understanding of several areas was necessary to meet the project requirements. Primarily, a working knowledge of the Java programming language and the leJOS virtualisation platform was required. In particular, knowledge of stream-based data transfer over a Bluetooth connection was required to facilitate robot-computer communication; knowledge of building graphical user interfaces in Java was required to display maps to the user; and knowledge of the leJOS interfaces to robot sensors and actuators was required to move the robot and retrieve sonar sensor readings.
The NXT hardware had to be explored: to model the errors presented by sensor and actuator hardware, the nature of such errors must be known. Maps must be internally represented within the application; the advantages and disadvantages of various representations were considered with regard to the environment being mapped, the NXT hardware, and the probabilistic algorithms being used. Knowledge of SLAM algorithms, perception (range sensor) models and movement (actuator) models was required in order to design and adapt such algorithms to the NXT hardware.

1.3 Roadmap

Relevant findings from related works are discussed in section 2. The chosen internal map and robot pose representations, the design and adaption of the error models and SLAM algorithm, and the design of the exploration algorithm are detailed in section 3. The implementation of robot-to-computer communication and robot control, the graphical user interface, and the overall structure and flow of the client and server mapping applications are described in section 4. The accuracy of the constructed maps with regard to the explored environment is evaluated in section 5. Finally, the performance of the application and the suitability of the probabilistic techniques with the NXT are discussed in section 6.

2 Related Work

This section highlights the most relevant findings from related works. Several areas were researched: the NXT hardware; the leJOS virtualisation platform and supported hardware interfaces; a taxonomy of map representations; error-modelling techniques; methods of robot localisation and mapping; and path planning. First, the findings related to the leJOS virtualisation platform with regard to robot communication, movement and sensor polling are presented. Various map representations are then explored. The sensors and actuators supplied with the NXT system are then listed, and methods of modelling the error inherent to each are discussed. A suitable method of simultaneous localisation and mapping is then identified. Finally, a simple method of path planning is discussed.

2.1 Server to Client Communication

Communication is required between the desktop computer (server) and robot (client) applications: the server must send movement orders to the client, while the client must send range readings and odometry (pose) information to the server. The leJOS API provides robust communications packages (specifically pc.comm and nxt.comm) which support communication through both USB and Bluetooth channels (Scholz and Bagnall, 2009). The use of Bluetooth is preferred for a remote mapping application, where USB cables may encumber robot motion or be falsely detected as obstacles. Several predefined methods exist for the communication of primitive data types (such as integer and float types), implemented over stream-based input and output. Methods to establish a Bluetooth connection and the required data streams are also provided by these packages. High-level communication methods for specific data objects (such as pose or sensor data) may simply be programmed on top of those provided.

2.2 Robot Control

Control of robot movement through the manipulation of the robot actuators (motors) is required for the robot to explore the surrounding environment, whilst polling of the sonar range sensor is required for the detection of obstacles.

Movement

Humans often describe movement in terms of the desired outcome: moving to a space without necessarily considering the rotations and translations that compose such movements. However, the NXT is driven by low-level operations such as individual motor rotations. A mechanism that decomposes high-level movements will provide abstraction from low-level operations. The leJOS API (specifically the navigation and localization packages) provides a highly object-oriented structure for the control of robot movement, described in Bagnall (2011). Pilot classes directly control motor motion, implementing low-level rotation and forward movement methods.
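As an aside on the stream-based exchange of primitive types described in section 2.1, a high-level helper for sending a pose might be sketched as follows. This is illustrative only: the class name PoseChannel is hypothetical, and the streams are assumed to have been obtained from the leJOS connection classes on either end of the Bluetooth link.

```java
import java.io.*;

// Hypothetical helper: sends and receives an (x, y, heading) pose over a
// pair of data streams. The field order must match exactly on both ends.
public class PoseChannel {
    private final DataInputStream in;
    private final DataOutputStream out;

    public PoseChannel(DataInputStream in, DataOutputStream out) {
        this.in = in;
        this.out = out;
    }

    public void sendPose(float x, float y, float heading) throws IOException {
        out.writeFloat(x);
        out.writeFloat(y);
        out.writeFloat(heading);
        out.flush();  // push the buffered bytes over the link
    }

    public float[] receivePose() throws IOException {
        return new float[] { in.readFloat(), in.readFloat(), in.readFloat() };
    }
}
```

Because both ends read and write the same primitive sequence, the same helper could in principle run unchanged on the desktop and, under leJOS, on the NXT.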
Pose providers use a pilot to maintain estimated pose information (odometry). Navigator classes, composed of both a pilot and a pose provider, handle high-level, waypoint-defined motion within a local coordinate system. The navigator class provides abstraction but hides the underlying motor movements from the application: only the initial and final robot pose can be found from the pose provider, not the motor movements made. This is unsuitable, as the underlying movement components (rotations and translations) will be required by the motion model. However, the pilot and pose provider classes contain methods to action the desired movements while providing the required movement and pose information.

2.3 Map Representation

A map representation cannot be chosen for its ability to maintain an adequate likeness of an environment alone. The required operations, such as the suitability for navigation and localisation methods, the complexity of the integration of sensor data, and the readability to a human user, must also be considered. Map representations vary greatly in mobile robotics but are largely categorised as coordinate-based or relational representations.

Coordinate-Based Representation

Coordinate-based representations capture the geometric properties of the environment using coordinates within an absolute coordinate system. Common entities in such representations are landmarks and geometric objects extracted from sensor information, and occupancy cells.

Landmark-Based Representation

Landmark-based representations describe the environment by a collection of recognisable features corresponding to real-world objects, such as a door frame or the corner of a room, within a coordinate system. Landmark signatures must be extracted from sensor data through some means of pre-processing. Characteristics such as average colour and shape may be used in landmark feature extraction.
An example landmark-based map representation by Guivant and Nebot (2001) uses the profile of tree trunks extracted from laser range data in a large outdoor environment. The representation is shown to be compact despite the large scale of the environment.

However, the reliance on dense, distinguishable landmarks reduces the application domain of such a representation.

Geometric Representation

Geometric representations use primitive geometric objects such as points, lines and curves to discern environment features within a coordinate system. Geometry must be extracted from sensor data. Point objects representing corners and line objects representing walls are extracted from sonar range data to compute a geometric map in Tardós et al. (2002). The representation may be compact if corresponding entities are effectively merged, at additional computational cost. Furthermore, unexplored areas of the environment are difficult to discriminate from explored areas unless explicitly represented.

Occupancy-Based Representation

Occupancy-based representations decompose the environment over fine-grained grids which model whether a grid cell is occupied or empty. Sensor readings are simply used to alter the occupancy probability over observed grid cells. Early examples of occupancy grid mapping (Elfes, 1987) were developed to focus on specific problems within the mobile robot domain, namely the integration of range-based sensor data and the fusion of data from differing sensor models. Operations such as path planning, often achieved by value iteration (Fox et al., 1998), are complicated due to the low level of the represented information. Search spaces can become large, but not overly so in comparison to other representations. A further disadvantage of occupancy-based representations lies in their application to mapping large environments (Zelinsky, 1992). The complexity of the environment has no direct impact on the representation mechanism: empty areas must still be represented by grid cells, so the number of map entities required to describe any given environment increases quickly with that environment's size. Nevertheless, occupancy-based representations are widely used and proven.
Highly successful implementations have been realised using sonar sensors (Yamauchi, 1997) and laser range finders (Thrun et al., 1998). Such implementations show the low computational complexity achieved when probabilistic sensor and movement models are used with the occupancy grid's sensor-like, low-level representation.
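As a concrete illustration of the representation (not taken from any of the works above), a minimal occupancy grid might look like the following sketch, assuming a fixed cell resolution in centimetres and the grid origin at one corner of the area:

```java
// Minimal occupancy-grid sketch (illustrative names). Cells hold an
// occupancy value; world coordinates in centimetres map onto cell indices
// by dividing by the cell resolution.
public class OccupancyGrid {
    private final double[][] cells;
    private final double resolutionCm;

    public OccupancyGrid(int width, int height, double resolutionCm) {
        this.cells = new double[width][height];  // initialised to 0.0
        this.resolutionCm = resolutionCm;
    }

    // World coordinate (cm, measured from the grid origin) -> cell index.
    public int toCellIndex(double worldCm) {
        return (int) Math.floor(worldCm / resolutionCm);
    }

    public double get(double xCm, double yCm) {
        return cells[toCellIndex(xCm)][toCellIndex(yCm)];
    }

    public void set(double xCm, double yCm, double value) {
        cells[toCellIndex(xCm)][toCellIndex(yCm)] = value;
    }
}
```

A one-metre-square area at a two-centimetre resolution would be constructed as a fifty-by-fifty grid, matching the sizing discussed later in section 3.1.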

Relational Representation

Relational representations are largely defined by topological maps. A topological map representation describes an environment as a list of connected significant places, annotated with information on how to navigate from one to another (Thrun, 2002). Similar to landmark-based maps, recognisable landmarks must be extractable from sensor data. Topological representations only retain the broad environment structure; consequently, little environment information is conveyed to a human observing such a map. Thrun and Bücken (1996) describe the representation's difficulty in discriminating between significant places and in recognising geometrically nearby places.

2.4 Sensor Modelling

Probabilistic models aim to describe specific types of uncertainty in robot motion and perception. Motion models attempt to predict the actual movement made by a robot when attempting to perform a given movement order; for instance, a robot may move forward further than the odometry information suggests due to actuator error. Perception models attempt to predict the actual location of an obstacle reported by a range reading; for instance, a robot may report an obstacle at a certain distance away within a cone, but not the exact direction of the obstacle from the sensor, as is common with sonar sensors.

Error in sensing and odometry is a significant problem in robot mapping. Inaccurate sensing and robot pose information cause the distortion of features in the constructed map. Furthermore, errors in odometry tend to propagate through subsequent measurements, quickly leading to the accumulation of substantial error (Piedrahita et al., 2006).

The Lego NXT is provided with several sensors. The most notable in a mapping context are the touch and ultrasonic sensors. Touch sensors are prone to such small error that readings can be taken as truth.
Contrary to this, inexpensive sonar range finders such as the NXT ultrasonic sensor have limited resolution, large error and are prone to noise. Three servo motor actuators with incorporated odometers are included with the NXT. Actuators, due to gearing constraints and slippage, also exhibit large error: commonly for the NXT, errors of up to five degrees can occur on a ninety-degree turn.

Motion Modelling

Odometers (or rotary encoders) generate a digital signal at a rate proportional to the angular movement of a motor. Robot pose can be estimated from odometry using the wheel configuration (relative distances and size), the steering configuration, and simple trigonometry. Pose calculations often incur error due to poor descriptions of such configurations. Furthermore, errors occur due to gearing constraints, wheel slippage, deformation of wheels, measuring inaccuracies and signal noise. Thrun et al. (2005) present a model whereby any robot action is decomposed into a rotation-translation-rotation triplet. Each movement action is then modelled independently by a probability density function. Gaussian distributions are shown to effectively characterise uncertainty in odometry whilst maintaining low computational complexity.

Perception Modelling

Sonar range finders emit an ultrasonic sound wave in a cone-like spread from the sensor aperture. Obstacles in the sonar cone reflect the wave back to the sensor, allowing the distance to the object to be calculated. Whilst the exact location of an object cannot be determined within this cone, Blahut et al. (1991) describe sonar energy as being stronger closer to the emitter. As a stronger signal is more likely to be reflected, near objects are more likely to be detected. Furthermore, an object is shown to be more likely to be sensed along the acoustic axis (the facing of the sensor), as objects at angles far from the acoustic axis are likely to reflect sonar in unfavourable directions for detection. Murphy (2000) describes another characteristic of range finders in general: the area sensed by a range finder can be split into three independent regions.
These are the region between the sensor and the detected object, which is likely unoccupied (referred to as the free space); the region beyond the detected object, about which no information can be discerned; and a further region in which the object is most likely to exist (referred to as the surface). The size of the region in which an object is likely to exist is described by the instrument error: a larger region represents greater inherent sensor error. Gaussian distributions are well documented in their use to model such characteristics. Moravec and Elfes (1985) detail early methodologies using a two-dimensional Gaussian over the sensor cone. Konolige (1997) further develops such methodologies, presenting

Bayes' rule as a means to translate sensor-given-model into model-given-sensor probabilities. Such probabilities are then used in low-complexity map update functions over occupancy-based grids.

2.5 Simultaneous Localisation and Mapping

In an effort to reduce the impact of the previously described propagation of errors in motion, a method of localising the robot while mapping is necessary. The Lego NXT is a crude platform in comparison to the target hardware usually considered for SLAM. However, the use of a centralised server application to process sensor data does allow much higher-complexity implementations to be considered than the NXT alone would allow. Techniques which trade complexity for the ability to cope with approximate environment models and low-resolution range finders will likely yield better results.

A suitable metric localisation technique is demonstrated by Fox et al. (1999). A probability distribution over the robot location belief is maintained: each robot movement introduces uncertainty, while updates to the location belief from sensor data reduce uncertainty. The method's use of occupancy-based maps increases the complexity of the location belief but allows the incorporation of raw sensor data. The environment approximations needed for the grid representation act to reduce the impact of noisy sensor readings. Thrun et al. (2005) build further on this, describing modern, computationally involved SLAM algorithms which maintain a belief over a large set of maps. Similarly to Fox et al., motion models are used to introduce uncertainty in robot movements on each map, while new sensor readings are correlated with previous sensor readings to reduce uncertainty. The map set can then be filtered, removing maps (predicted robot locations and sensor readings) which are less likely.
While many of the components of these algorithms are too high-resolution for the hardware provided with the NXT, the use of motion models to introduce variation in robot location across a large map set, followed by the correlation of sensor readings to remove maps where the robot location is unlikely, is a good fit for the requirements.
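The filtering step described above, removing less likely maps and duplicating more likely ones, is typically realised by weight-proportional resampling. As a minimal sketch (illustrative names, not taken from the works cited), map indices can be drawn with replacement, with probability proportional to their weights:

```java
import java.util.Random;

// Roulette-wheel resampling sketch: draws N indices with replacement,
// each index chosen with probability proportional to its weight. Maps
// with low weight tend to disappear; high-weight maps are duplicated.
public class Resampler {
    public static int[] resample(double[] weights, Random rng) {
        double total = 0;
        for (double w : weights) total += w;

        int[] chosen = new int[weights.length];
        for (int i = 0; i < weights.length; i++) {
            double r = rng.nextDouble() * total;  // point on the wheel
            double acc = 0;
            int pick = weights.length - 1;        // fallback for rounding
            for (int j = 0; j < weights.length; j++) {
                acc += weights[j];
                if (r <= acc) { pick = j; break; }
            }
            chosen[i] = pick;
        }
        return chosen;
    }
}
```

The returned indices identify which maps to copy into the new set; a map whose index never appears is effectively removed.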

2.6 Path Planning

Path planning is a highly diverse area in robotics. Most advanced methods of path planning rely on a complete, reliable map to traverse. However, when no prior map is known, cruder methods of planning are required. One common method used in a simultaneous localisation and mapping context is frontier-based exploration (Yamauchi et al., 1998). A small area is initially explored around the robot starting location. The edge of this area (the frontier) is then iteratively expanded: the robot essentially moves in circles of increasingly large radius. A drawback of the method is the deviation from the circular path often caused by large obstacles; complex routines must be implemented to avoid obstacles while bringing the robot back on route.

A more readily realisable method of exploration is simple wall-following (Katsev et al., 2011). The robot moves until a wall (or obstacle) is found and then attempts to follow the wall at a constant distance. In a mapping context, such a technique could be used where the robot moves in a given direction from the start point until a wall is found. The wall can then be followed until it extends beyond the area being mapped, at which time the robot returns to the starting position and inspects a different direction. This can be repeated over several directions to build a small-area map.

3 Design

This section describes the design of several system components: the internal map and pose representation, the SLAM algorithm, and the exploration mechanism. Relevant algorithms, which were adapted from the literature or designed from scratch, are presented in isolation in order to reduce complexity. Section 4 details the interaction between the localisation and mapping components and the exploration components presented here.

The occupancy grid method of internal map representation is described in section 3.1.
Such a representation was chosen due to the ease of integration with motion and perception error models, and its suitability for use with low-resolution sensors; in particular, it allows low-cost, direct incorporation of sensor readings with little preprocessing. Furthermore,

the occupancy grid resolution (the area represented by a single grid cell) is easily variable: the resolution of the grid may be tailored to the accuracy of the sensor being used.

A SLAM algorithm, inspired by those presented by Thrun et al. (2005), was adapted to the NXT hardware and is described in section 3.2. The algorithm maintains a large set of maps. A motion model introduces variation in each of the maps given a robot movement. A method of correlating new and old sensor readings was designed to allow the likelihood of each map to be determined. A perception model was designed to update a map with a given sensor reading. Finally, a method of resampling the map set replaces unlikely maps with those which are more likely.

The exploration algorithm created is described in section 3.3. Points of interest, connected by paths, are defined over the area being mapped. The robot attempts to follow these paths in order to visit all the points of interest, fully exploring the area being mapped.

3.1 Map and Robot Pose Representation

The occupancy grid method of map representation requires a simplification: the area being mapped is always considered two-dimensional, defined by a width and a height. In accordance with this, data structures for a single map and robot pose were designed.

Map {
    double[][] occupancy_grid;
    double weight;
    Pose robot_pose;
}
(a) Map data structure.

Pose {
    double x;
    double y;
    double θ;
}
(b) Pose data structure.

Figure 3.1: Data structures for a map and a robot pose.

Figure 3.1a shows the data structure for a single Map. The occupancy_grid is a two-dimensional array. Notionally, the grid can be thought of as being placed over the area being mapped, as shown in figure 3.2. Each grid cell value represents the

Figure 3.2: Notional representation of placing the occupancy grid over an area to be mapped.

likelihood of the area underneath it being occupied by an obstacle: a value of one represents the belief that the cell is certainly occupied, through to a value of zero representing the belief that the cell is certainly unoccupied. Each grid value corresponds to a fixed physical location. The coordinates of a cell in the grid (taken from its centre) are used by the mapping algorithm in section 3.2. The resolution of the grid is variable; typical cell sizes of two to five centimetres square are used. For example, a metre-square area would require a fifty-by-fifty grid at a resolution of two centimetres per cell.

It is noteworthy that the probability of an area being occupied is not directly stored in the occupancy grid. The value of a grid cell must be updated every time a sensor reading is recorded over the cell, and performing calculations involving probabilities directly can lose much of the precision required to maintain the map. To avoid this, probabilities are stored in their log odds ratio form, as described by Thrun et al. (2005). Log odds ratios are simply an alternative way of expressing probabilities which simplifies the process of updating them. Odds are defined as the ratio of a probability to one minus that probability; log odds are the natural logarithm of the odds. For example, each occupancy grid cell is initialised with a value P(occ) of 0.5: a cell is initially considered equally likely to be empty or occupied. Equation 3.1 below shows the calculation of the initialisation value in terms of log odds.

    Log Odds Cell Value = log( P(occ) / (1 - P(occ)) ) = log( 0.5 / 0.5 ) = 0    (3.1)
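The log-odds bookkeeping described above, and the inverse mapping from a stored log-odds value back to a probability, can be sketched in Java as follows (the class and method names are illustrative, not taken from the implementation):

```java
// Log-odds form of an occupancy value, as in equation 3.1: a cell stores
// log(p / (1 - p)) rather than the probability p itself, so repeated
// updates reduce to additions instead of products of small probabilities.
public class LogOdds {
    // Probability (0 < p < 1) -> log odds.
    public static double toLogOdds(double p) {
        return Math.log(p / (1.0 - p));
    }

    // Log odds -> probability, the reversal used when rendering the map.
    public static double toProbability(double logOdds) {
        double e = Math.exp(logOdds);
        return e / (1.0 + e);
    }
}
```

Note that the prior P(occ) = 0.5 maps to a log odds of exactly zero, so a freshly initialised grid is simply an array of zeros.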

Actual probability values can be retrieved from the occupancy grid by simply reversing the log-odds ratio, as shown in equation 3.2:

    P(occ) = e^(Log Odds Cell Value) / (1 + e^(Log Odds Cell Value))    (3.2)

A log-odds Bayesian update rule is used to incorporate new sensor data into the occupancy grid; this is detailed alongside the perception model in section 3.2.

The weight value represents how well the map correlates with new sensor readings. If the map shows an area to be free space where an obstacle is detected, the map is less likely to be a good representation. A map which correlates with all sensor readings will have a weighting of one, while maps which correlate less have lower weightings. The method of calculating this weight is further discussed in section 3.2.

The robot_pose describes the robot location and bearing within the area covered by the occupancy grid. Due to the error in robot movement, this pose is likely not the same as that reported by the robot odometry. Furthermore, the error model will introduce variation in the robot pose of each map, as discussed in section 3.2. Figure 3.1b shows the data structure for a robot Pose. The pose is a high-level description of robot location, and any robot movement is fully defined by two poses: the initial and final pose. The pose is defined by Cartesian x- and y-coordinates within the area bounded by the occupancy grid, and a bearing (or heading) θ.

3.2 Localisation and Mapping Algorithm

A mapping algorithm adapted from those presented by Thrun et al. (2005) was designed. The algorithm maintains a collection of maps. When a movement is made by the robot, the potential error is captured by a motion model. The model serves to add variation in the robot pose of each map.
If the model is descriptive of the potential error in robot motion, and the map set is large enough (typically fifty to one hundred maps), some of the movements applied to the maps will likely be a good representation of the actual movement taken by the robot: some maps will accurately represent the physical robot location rather than that reported by the inaccurate odometry.

A method of discerning the likelihood of each map accurately representing the robot position and environment (referred to as the map weight) is used. New sensor readings

are correlated with obstacles present in the map. If new sensor readings disagree with obstacles described in the map, the map is likely a poor representation. A perception model is used to incorporate sensor readings into the occupancy grid; the uncertainty inherent to the sonar range finder is captured by the model. Finally, a method of resampling the map set using the calculated map weights removes unlikely maps from the set, replacing them with maps of a higher weight. Effectively, resampling removes maps to which a movement was applied that was not descriptive of the actual robot movement made.

Algorithm 1 shows a single loop of the mapping algorithm. Each loop requires a set of maps M; a movement control defined by two poses, the last and current odometry readings u_l and u_c; and a sonar sensor range reading v. Lines 2-6 show the update of each map m within the collection of maps M. Line 3 uses the motion model to update each map's robot pose m[p]. Line 4 uses the map-weighting method to update each map's weight m[w] according to how well the map correlates with the sensor reading. Line 5 uses the perception model to update each map's occupancy grid m[o] with the sonar sensor reading. Finally, on lines 7 and 8, the map collection M is resampled, removing less likely maps (those with lower weights) and copying maps which are more likely (those with higher weights), and the new updated map collection M is returned. It is noteworthy that any number of movements could be performed before recording

Algorithm 1 Probabilistic mapping algorithm.
1: procedure MAPPING(M, u_l, u_c, v)
2:     for each map m of the collection of maps M do
3:         m[p] = APPLY_MOTION(m[p], u_l, u_c)
4:         m[w] = WEIGHT_READING(m[o], m[p], v)
5:         m[o] = APPLY_READING(m[o], m[p], v)
6:     end for
7:     M = RESAMPLE(M)
8:     return M
9: end procedure
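A single loop of Algorithm 1 can be rendered in Java roughly as follows. The MapParticle and Models names are illustrative only, and the four model steps are supplied as hooks, since their concrete designs appear in the sections that follow:

```java
import java.util.List;

// Java sketch of Algorithm 1. One MapParticle corresponds to one map m,
// with m[o] = grid, m[w] = weight and m[p] = pose.
public class Mapping {
    public static class MapParticle {
        double[][] grid;   // occupancy grid, log-odds values
        double weight;     // correlation with recent sensor readings
        double[] pose;     // {x, y, theta}
    }

    // Hooks for the motion model, weighting, perception model and
    // resampling steps designed in the following sections.
    public interface Models {
        double[] applyMotion(double[] pose, double[] ul, double[] uc);
        double weightReading(double[][] grid, double[] pose, double range);
        double[][] applyReading(double[][] grid, double[] pose, double range);
        List<MapParticle> resample(List<MapParticle> maps);
    }

    // One loop of the probabilistic mapping algorithm.
    public static List<MapParticle> mapping(List<MapParticle> M,
                                            double[] ul, double[] uc,
                                            double v, Models models) {
        for (MapParticle m : M) {
            m.pose   = models.applyMotion(m.pose, ul, uc);   // line 3
            m.weight = models.weightReading(m.grid, m.pose, v); // line 4
            m.grid   = models.applyReading(m.grid, m.pose, v);  // line 5
        }
        return models.resample(M);                           // lines 7-8
    }
}
```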

any readings. Similarly, several movements, readings and weightings could be made before resampling. The following sections present the design of the motion model, the method of map weighting, the perception model and the method of map collection resampling.

Motion Model

The motion model is designed and implemented as directly described by Thrun et al. (2005); only the tuning of the model is tailored to the specific NXT hardware. The motion model attempts to describe the error in robot motion. There are a huge number of error sources to consider, including slippage and movement between the teeth of gears in the actuators, slippage between the robot wheels and the ground surface, deformation of the tyres, and signal noise and resultant measuring inaccuracies. These sources of error cause the odometry to misreport the actual robot location, as robot movements and rotations may be greater or smaller than those recorded by the robot. Research showed that the errors in robot movement can be effectively modelled with Gaussian distributions.

The motion model is used to apply a movement described by the robot odometry to the robot pose of each map of the set. The movement applied to each map is different, as random variation is added. Ideally, after a movement is applied over the map set, a number of the maps will contain a robot pose which is representative of the actual movement made by the robot rather than that reported by error-prone odometry. All robot movements for a two-wheeled, differential-steered robot can be decomposed

Figure 3.3: Decomposition of a complex robot movement into a rotation-translation-rotation triplet (initial pose, Δrot1, Δtran, Δrot2, final pose).

into a primitive rotation-translation-rotation triplet, shown in figure 3.3. This is how motion is handled by the robot motor control methods. When moving to a given pose the robot first rotates to face the location, then moves forwards until it reaches that location, and finally rotates again to orient itself in the required direction. Algorithm 2 shows the motion model, which applies a robot movement defined by starting and resultant poses u_l and u_c to a given starting pose p. On lines 2-4 simple trigonometry is used to decompose the movement between the two poses u_l and u_c into a rotation-translation-rotation triplet. Lines 5-7 apply noise to each of the primitive movements. The method gaussian generates a pseudorandom, Gaussian-distributed value with mean of zero and standard deviation of one, multiplied by the passed argument. For the rotations, the amount of Gaussian noise applied is a function of their respective magnitudes and the magnitude of the translation. For the translation, the amount of Gaussian noise applied is a function of the translation

Algorithm 2 An algorithm to apply a robot movement with Gaussian variance.
1: procedure APPLY_MOTION(p, u_l, u_c)
2: rot1 = atan2(u_c[y] - u_l[y], u_c[x] - u_l[x]) - u_l[θ]
3: tran = sqrt((u_l[x] - u_c[x])^2 + (u_l[y] - u_c[y])^2)
4: rot2 = u_c[θ] - u_l[θ] - rot1
5: rot1' = rot1 - gaussian(A·rot1 + B·tran)
6: tran' = tran - gaussian(C·tran + D·rot1 + D·rot2)
7: rot2' = rot2 - gaussian(A·rot2 + B·tran)
8: x = p[x] + tran'·cos(p[θ] + rot1')
9: y = p[y] + tran'·sin(p[θ] + rot1')
10: θ = p[θ] + rot1' + rot2'
11: p = (x, y, θ)
12: return p
13: end procedure

and rotation magnitudes. Lines 8-10 calculate the resultant pose components x, y, and θ after applying the noisy movement to the given starting pose p. Finally, the resultant pose p is returned. There are four noise parameters within the algorithm: A, B, C and D. These parameters are used to tune the amount of noise which is added to each primitive movement. The value of A represents the magnitude of expected error in rotational movements. B represents the influence of errors in translation movements on the bearing of the robot. C represents the magnitude of expected errors in translation movements. D represents the influence of errors in rotation on the x/y position of the robot. The tuning parameters were learned through a brute-force technique: performing physical movements with the robot and attempting to tune the model to represent the variation found in these movements. Starting from the same position, the robot performed a straight line translation, and a set of turns followed by a straight line translation, thirty times each. The physical end pose of the robot was recorded. A test harness

(a) Straight line translation. (b) Ninety-degree turn and straight translation. (c) One hundred samples (orange) from a straight line translation. (d) One hundred samples (orange) from a ninety-degree turn and straight translation. Figure 3.4: Resultant positions from the motion model for two different movements.

was then created to sample the motion model one hundred times for both the straight line translation and the set of turns followed by a straight line translation, and to plot the resultant positions. The model parameters were tuned so that the output distribution of resultant positions from the model test harness was representative of the error found in the actual robot motion. Figure 3.4 shows the results of applying the motion model test harness one hundred times for two different movements. Sampling for the straight line translation shown in (a) produces the tightly clumped resultant poses shown in (c). Sampling for the ninety-degree turn followed by a straight line translation shown in (b) produces a more dispersed distribution of resultant poses shown in (d). This shows the model successfully captures the lower certainty in robot position after a more complicated movement.

Map Weighting

A method of measuring how correlated new sensor readings are with those previously observed was designed. The level of correlation is referred to as a map's weight. A higher weight is indicative of a map which correlates well with new observations and so is more likely to be a good representation of the physical environment. The weighting method considers the origin of a sensor reading with regard to obstacles already recorded on the map. The closer a sensor reading is to an obstacle present on the map, the higher the weight. Similarly, the greater the certainty of the obstacle being present (the higher the occupancy value), the higher the weight. Algorithm 3 shows the method of measuring the weight from a map's occupancy grid m[o] and current robot pose m[p] using a sonar sensor distance reading v. The distance component of weight is modelled by a Gaussian distribution. This promotes correlation between sensor reading and known obstacles more than a linear relationship. Lines 2-3 show the calculation of the x and y coordinates of the centre of the obstacle detected.
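The motion model of Algorithm 2 can be sketched in Python (the project itself is implemented in Java for lejos; the pose representation and the default noise parameters here are illustrative, not the tuned project values):

```python
import math
import random

def apply_motion(p, u_l, u_c, A=0.05, B=0.001, C=0.05, D=0.01):
    """Apply the odometry movement u_l -> u_c to pose p with Gaussian noise.

    Poses are (x, y, theta) tuples; A-D are illustrative noise parameters.
    """
    # Decompose the movement into a rotation-translation-rotation triplet.
    rot1 = math.atan2(u_c[1] - u_l[1], u_c[0] - u_l[0]) - u_l[2]
    tran = math.hypot(u_c[0] - u_l[0], u_c[1] - u_l[1])
    rot2 = u_c[2] - u_l[2] - rot1
    # Perturb each primitive movement with zero-mean Gaussian noise whose
    # spread grows with the magnitudes of the movements involved.
    n_rot1 = rot1 - random.gauss(0, A * abs(rot1) + B * tran)
    n_tran = tran - random.gauss(0, C * tran + D * (abs(rot1) + abs(rot2)))
    n_rot2 = rot2 - random.gauss(0, A * abs(rot2) + B * tran)
    # Replay the noisy triplet from the starting pose.
    x = p[0] + n_tran * math.cos(p[2] + n_rot1)
    y = p[1] + n_tran * math.sin(p[2] + n_rot1)
    return (x, y, p[2] + n_rot1 + n_rot2)
```

Sampling this function once per map in the set produces pose clouds like those plotted in figure 3.4.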
Simple trigonometry is used to decompose the distance of the reading v and the robot heading m[p][θ] into x and y components, which are added to the robot position (m[p][x] and m[p][y] respectively). The weight is initialised to a value of zero on line 4. The occupancy grid is then searched to find the highest weight corresponding to the sensor reading on lines 5-11.
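This highest-weight search can be sketched as follows; the dictionary-based grid, cell size and Gaussian width are illustrative assumptions rather than the project's actual data structures:

```python
import math

def weight_reading(grid, pose, v, cell_size=5.0, sigma=5.0):
    """Weight a map's correlation with a sonar range reading v.

    grid maps (col, row) -> occupancy probability; pose is (x, y, theta).
    cell_size and sigma (the Gaussian spread, in cm) are illustrative.
    """
    # Project the reading origin out from the robot pose along its heading.
    ox = pose[0] + v * math.cos(pose[2])
    oy = pose[1] + v * math.sin(pose[2])
    weight = 0.0
    for (col, row), occ in grid.items():
        # Distance from the reading origin to the centre of this cell.
        cx = (col + 0.5) * cell_size
        cy = (row + 0.5) * cell_size
        d = math.hypot(ox - cx, oy - cy)
        # Gaussian fall-off with distance, scaled by the cell's occupancy.
        w = math.exp(-d * d / (2 * sigma * sigma)) * occ
        weight = max(weight, w)
    return weight
```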

Each cell of the occupancy grid is iterated over and the weight with regard to that particular cell, w, is calculated. First, the distance between the grid cell centre and the sensor reading origin is calculated. This distance is then used to sample a value from a Gaussian distribution, which is multiplied by the occupancy value of the cell on line 7. If the cell weight w is higher than the highest previously encountered, weight is given the value w, shown on lines 8-9. Once all cells have been assessed, the highest weight found is returned on line 12.

Algorithm 3 An algorithm to weight a map's correlation with a new sensor reading.
1: procedure WEIGHT_READING(m[o], m[p], v)
2: x = m[p][x] + v cos(m[p][θ])
3: y = m[p][y] + v sin(m[p][θ])
4: weight = 0
5: for each cell c of the occupancy grid m[o] do
6: d = distance from (x, y) to centre of c
7: w = gaussian_value(d) · value of c
8: if w > weight then
9: weight = w
10: end if
11: end for
12: return weight
13: end procedure

Figure 3.5 illustrates the correlation of new and old sensor readings. Old sensor readings are not stored directly but are incorporated as obstacles within the occupancy grid. Visually, the obstacles and robot pose shown in figure 3.5a agree more with the sensor reading than those shown in figure 3.5b. Therefore, the map containing the obstacles and robot pose shown in figure 3.5a is likely to be a more accurate representation of the surrounding environment and so is given a higher map weight. It is noteworthy that when exploring a new area or when mapping first begins, the

(a) A well correlated sensor reading. Occupancy grid (orange squares), robot pose (large circle and sector) and range reading (small circle and dotted line). (b) A poorly correlated sensor reading. Occupancy grid (orange squares), robot pose (large circle and sector) and range reading (small circle and dotted line). Figure 3.5: Graphic example of the correlation of new and old sensor readings.

map weight calculated will be very low, as no previous readings will correlate with new ones; this is discussed further alongside map resampling below.

Perception Model

A perception model is used to capture the uncertainty in sonar sensor readings and update an occupancy grid. Research showed that two-dimensional Gaussian distributions are commonly used to model sonar sensor uncertainty. The distribution represents the location of an obstacle (or occupancy) within a sensor cone. However, the NXT sonar sensor is too inaccurate to necessitate the use of Gaussians to model this distribution. Instead, a less computationally involved method was designed which considers a probability distribution of free space belief as well as occupancy. Sonar is an inaccurate method of range detection when compared with laser. Sonar range finders emit a cone of ultrasonic sound from the sensor aperture, and obstacles within the sonar cone reflect the sound waves back, which are then detected by the sensor. The sensor is unable to distinguish where in the cone the obstacle is situated, only the approximate distance it is from the sensor aperture. The lejos range finder

26 3 Design API allows the retrieval of a single value of distance corresponding to the closest object detected during a range scan. Several parameters define the nature of a sonar sensor: the maximum range, the angle of the cone and the error margin. These parameters were not documented for the sonar sensor provided with the NXT and so, had to be found. The range was estimated by increasing the distance of an object from the sensor until it was no longer detected. Similarly, the angle was estimated by increasing the angle of an object until it was no longer detected. The error was estimated by varying the distance and angle of an object to the sensor and comparing the sensor value with the physical distance. Rough working parameters of 50cm for maximum reliable range, 40 degrees for the angle of the cone (20 degrees in both directions from the acoustic axis) and ±5cm for the measurement error margin were found. Other sources of error are also found in sonar range finders. Obstacles may not be detected due to two shortcomings of sonar: Smooth surfaced objects at an angle reflect sound waves away from the sensor while surfaces which acoustically dampen sound waves reflect too little for the sensor to detect. These sources of error are not able to be modelled and are a significant drawback of using sonar range finders. However, the properties of sonar energy can be exploited to gain more information from a reading than the sensor provides directly. Sonar energy is characterised as being much stronger closer to the emitter. As a stronger signal is more likely to be reflected, near objects are more likely to be detected. Furthermore, an object is shown to be more likely to be sensed along the acoustic axis (facing of the sensor) as objects at angles far from the acoustic axis are likely to reflect sonar in unfavourable directions for detection. 
In order to model this, the sonar field (or sensor cone) is decomposed into two regions calculated from the sensor measurement error and the distance of a positive reading: surface and freespace, shown on figure 3.6a. Given a sensor error of ±e and a range reading at distance d, the surface is defined as the section of the cone between d +e and d e. The freespace is the area between the sensor aperture and d e. The detected obstacle is very likely within the surface while the freespace is very likely empty. A method was devised for calculating the probability of any point being occupied in the surface or empty in the freespace, shown in equation 3.3. The value of distance is Reg: 99 21

(a) Labelled sonar cone, showing the freespace region up to d − e and the surface band of width 2e around distance d. (b) Perception model probability distribution. Figure 3.6: Labelled sonar cone and perception model probability distribution.

the distance of the point from the sensor aperture, and the value of angle is the angle of the point from the sensor aperture. The value range describes the maximum range of the sensor and the value of beta is half the angle of the cone (50cm and 20 degrees respectively for the NXT sonar sensor).

probability = ((range − distance)/range + (beta − angle)/beta)/4 + 0.5    (3.3)

This captures the properties of sonar energy described above. Effectively, the closer a point is to the sensor and the closer to the acoustic axis, the higher the probability of it being occupied in the surface or empty in the freespace. Furthermore, the probability of being occupied will always be greater than 0.5 in the surface, while the probability of being free space will always be greater than 0.5 in the freespace. The probability

distribution created by this method is visualised in figure 3.6b. The darker an area, the higher the probability of an obstacle; the lighter an area, the higher the probability of free space. Information about the area outside the cone is unknown: the probability of this area is unchanged from the initial probability of 0.5. Algorithm 4 shows the pseudocode for the perception model designed. A range reading v is used to apply a probability distribution, originating from the robot position defined by the pose m[p], over an occupancy grid m[o]. Lines 2-3 show the calculation of the probability distribution bounds for surface (surf) and freespace (free) using the distance from the range reading v and the sensor measurement error e. Each cell of the occupancy grid is then checked to see if it is within the sensor cone.

Algorithm 4 An algorithm to apply a robot range reading over an occupancy grid.
1: procedure APPLY_READING(m[o], m[p], v)
2: surf = v + e
3: free = v - e
4: for each cell c of the occupancy grid m[o] do
5: dis = distance from (m[p][x], m[p][y]) to centre of c
6: ang = angle from m[p][θ] to centre of c
7: if ang <= beta and dis <= surf then
8: if dis < free then
9: pemp = ((range - dis)/range + (beta - ang)/beta)/4 + 0.5
10: pocc = 1 - pemp
11: else
12: pocc = ((range - dis)/range + (beta - ang)/beta)/4 + 0.5
13: pemp = 1 - pocc
14: end if
15: value of c += log(pocc) - log(pemp) - log(iocc) + log(1 - iocc)
16: end if
17: end for
18: end procedure
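Equation 3.3 and the update loop of Algorithm 4 can be sketched together in Python; the sensor parameters are the rough NXT estimates from above, the dictionary grid layout is an illustrative assumption, and cells here store log-odds directly:

```python
import math

RANGE = 50.0               # estimated maximum reliable range, cm
BETA = math.radians(20)    # half-angle of the sonar cone
E = 5.0                    # estimated measurement error, cm
IOCC = 0.5                 # initial occupancy probability of the grid

def sonar_probability(dis, ang):
    """Equation 3.3: confidence that a point is occupied (surface band) or
    empty (freespace); highest near the aperture and the acoustic axis."""
    return ((RANGE - dis) / RANGE + (BETA - abs(ang)) / BETA) / 4 + 0.5

def apply_reading(grid, pose, v, cell_size=5.0):
    """Log-odds update of a dict-based occupancy grid from range reading v."""
    surf, free = v + E, v - E
    for (col, row), logodds in grid.items():
        cx = (col + 0.5) * cell_size
        cy = (row + 0.5) * cell_size
        dis = math.hypot(cx - pose[0], cy - pose[1])
        ang = math.atan2(cy - pose[1], cx - pose[0]) - pose[2]
        if abs(ang) <= BETA and dis <= surf:
            if dis < free:                 # freespace: likely empty
                pemp = sonar_probability(dis, ang)
                pocc = 1 - pemp
            else:                          # surface band: likely occupied
                pocc = sonar_probability(dis, ang)
                pemp = 1 - pocc
            grid[(col, row)] = (logodds + math.log(pocc) - math.log(pemp)
                                - math.log(IOCC) + math.log(1 - IOCC))
    return grid
```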

Lines 5-6 calculate the distance and angle of each grid cell relative to the robot position (m[p][x], m[p][y]) and facing (m[p][θ]). The check for distances and angles within the cone is performed on line 7. If the cell is within the cone, a further check is made to classify the cell as within surface or freespace on line 8. If the cell is within freespace then the probability of the cell being empty, pemp, is calculated using equation 3.3, and the probability of the cell being occupied, pocc, is one minus pemp (lines 9 and 10). Conversely, if the cell is within surface then the probability of the cell being occupied, pocc, is calculated using equation 3.3, and the probability of the cell being empty, pemp, is one minus pocc (lines 12 and 13). Finally, if the cell is within the cone and the probabilities calculated, the occupancy probability of the grid cell is updated, shown on line 15. A recursive form of Bayes' rule in log-odds form is used for the update (Konolige, 1997; Thrun et al., 2005). The value of iocc is the initialisation probability of the occupancy grid, 0.5.

Map Resampling

A simple method of resampling the set of maps was devised. Maps with a higher weight correlate more with the robot's actual position and surrounding environment, while those with lower weights are likely poor representations. Essentially, the resampling method replaces low weight maps with those of a higher weight, making the map set as a whole more representative of the environment and actual robot pose. Algorithm 5 shows the pseudocode for the resampling method applied over a set of maps M. Initially, the set of maps is sorted in descending order by the weight of each map on line 2. A pointer into the map set, currPointer, is created and incremented until it points to the first map which has a weight lower than an arbitrarily defined threshold, or until it points beyond the map set, on lines 3-6. A threshold of 0.7 was found to be effective.
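A sketch of this threshold-based resampling, assuming each map is a dict carrying a 'weight' entry (an illustrative stand-in for the project's Map class):

```python
def resample(maps, threshold=0.7):
    """Replace below-threshold maps with copies of above-threshold ones,
    cycling through the survivors. A shallow dict copy stands in for a
    full deep copy of a map's occupancy grid and pose."""
    maps.sort(key=lambda m: m['weight'], reverse=True)
    curr = 0
    # Advance to the first map below the threshold (if any).
    while curr < len(maps) and maps[curr]['weight'] > threshold:
        curr += 1
    if 0 < curr < len(maps):   # some maps survive, the rest are replaced
        copy_max, copy_ptr = curr, 0
        while curr < len(maps):
            # Cycle through the surviving maps when copying.
            maps[curr] = dict(maps[copy_ptr % copy_max])
            curr += 1
            copy_ptr += 1
    return maps
```

Note that if every map is below the threshold (a freshly entered, unexplored area) or every map is above it, the set is returned unchanged, matching the behaviour described in the text.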
If the pointer was incremented and does not exceed the index of the last map of the set, resampling must be performed. A pointer copypointer which holds the number of copies performed and a pointer to the first map below the threshold copymax are initialised on lines 8-9. Each map below the threshold is replaced by a map that is above the threshold. Line 11 shows the logically circular operation of replacing maps using Reg: 99 24

the three pointers. Finally, the map set is returned on line 16. It is noteworthy that no resampling will be performed when the robot first moves into a previously unexplored area. This is because all sensor readings in unexplored areas will be weighted lower than the threshold.

Algorithm 5 An algorithm to resample a set of maps with bias to higher map weight.
1: procedure RESAMPLE(M)
2: sort M by descending map weight
3: currPointer = 0
4: while currPointer < length(M) and M[currPointer][weight] > threshold do
5: currPointer = currPointer + 1
6: end while
7: if currPointer > 0 and currPointer < length(M) then
8: copyPointer = 0
9: copyMax = currPointer
10: while currPointer < length(M) do
11: M[currPointer] = copy(M[copyPointer % copyMax])
12: currPointer = currPointer + 1
13: copyPointer = copyPointer + 1
14: end while
15: end if
16: return M
17: end procedure

3.3 Exploration Algorithm

Several methods of exploration (or path planning) were considered. During research, a diverse range of algorithms was found. Variations of wall (or obstacle) following methods,

commonly used across the robotic mapping domain, were thought to be suitable for use with the mapping algorithm. Simple wall following methods were trialled but did not perform as well as expected. The robot would often follow a cyclic path (following a single, isolated obstacle), and areas of the environment which were less obstacle-dense were unexplored or less explored. These problems persisted to some degree despite complex methods being employed to reduce such behaviour. Furthermore, the SLAM algorithm designed relies on correlating old sensor readings with new ones. Essentially, obstacles which are found must be viewed again from different robot positions in order to relocalise. The wall-following routine simply did not re-sense obstacles reliably enough. Instead, a method of exploration inspired by graph search algorithms was designed: a uniformly-spaced grid of poses is defined over the area to be mapped, and exploration of each node is attempted by repeated application of a breadth-first search to find the nearest unexplored node. Coordinates are used to define points of interest within the map area, each of which the robot should attempt to visit and record range readings; these are referred to as search nodes. A list of traversable connections between search nodes is maintained. If a connection does not appear in the list then the path between the two nodes has been found to be blocked by an obstacle and the connection removed. The exploration is split into three components: the creation of the search nodes and connections, the generation of a route through path-planning, and the traversal of that route by the robot.

Search Node Initialisation

The distribution of search nodes is defined by the area being mapped and two further parameters: the border parameter specifies the minimum distance between the robot and the edge of the map area, while the spacing parameter defines the space between search nodes.
The area to be mapped is simply defined by the width and height of a rectangle. If the robot is too close to the edge of the area to be mapped, some sonar sensor readings will fall outside of this area. Range readings outside this area are unnecessary as they are not Reg: 99 26

32 3 Design (a) Area in which search nodes will be distributed (orange), the mapping area (dotted line) and obstacles (light grey). (b) Initialised grid of search nodes (orange circles) and paths (dark grey lines), and obstacles (light grey). (c) Grid of search nodes (orange circles) and paths (dark grey lines), and obstacles (light grey) after the bottom left node is found to be unreachable. Figure 3.7: Grid of search nodes over an area to be mapped. Reg: 99 27

incorporated into the map data. Therefore, nodes are not generated near the edge of the map. Figure 3.7a shows the border parameter being used to reduce the mapping area to a smaller area considered for the placement of search nodes. A node at the centre of the map is placed first. This is the node at which the robot always starts. Further nodes are then placed outward from this centre node with a distance of the spacing parameter between them. The spacing between nodes is tuned to trade off the overlap in sensor readings required by the SLAM algorithm against mapping time. More overlap means more correlated sensor data for SLAM, but slower mapping due to more search nodes. Initially, all search nodes are defined as connected by direct paths to the horizontal and vertical neighbouring nodes. Figure 3.7b shows the placed search nodes and the paths connecting them after initialisation.

Route-Planning Through Nodes

This section describes the method of generating a single route to the nearest, connected but unexplored search node. Algorithm 6 shows the pseudocode for the generation of a route given the node the robot is at, currNode, and a set of nodes which have previously been explored, exploredNodes. The algorithm uses a breadth-first search over the nodes to find the nearest unexplored node. The shortest path to the node found is then constructed using the list of node to parent-node relationships. Lines 2-5 show the instantiation of several data structures. A queue data structure queue holds nodes to be opened. A set structure opened tracks which nodes have previously been opened. A map parents describes the relationship of a node to the node it was expanded from, allowing a route to be constructed. A list of nodes route describes the route generated. Lines 6-27 implement a breadth-first search.
While there are nodes in the queue, the first node (initially the node the robot occupies currnode) is expanded using the helper method connectednodes on line 10, which returns a list of neighbouring search nodes currently connected to the given node (varying from zero to four nodes). The returned nodes are then each inspected. If the node has previously been expanded it is ignored. Nodes that have been explored are added to the back of the queue, added to the opened set and added to the parents map along with the node they were expanded from. Reg: 99 28

34 3 Design Algorithm 6 An algorithm to generate a route to an unexplored node from the current. 1: procedure GET_NEXT_ROUTE(currNode, explorednodes) 2: queue: empty queue of nodes to open 3: opened: empty set of opened nodes 4: parents: empty map of node to parent node relationships 5: route: empty list of nodes 6: queue.add(currnode) 7: opened.add(currnode) 8: while queue is not empty do 9: current = queue.remove 10: nodes = connectednodes(current) 11: for each node of nodes do 12: if opened does not contain node then 13: if explorednodes does not contain node then 14: route.addtofront(node) 15: while current does not equal currnode do 16: route.addtofront(current) 17: current = parents.parentof(current) 18: end while 19: return route 20: else 21: queue.add(node) 22: opened.add(node) 23: parents.add(node, current) 24: end if 25: end if 26: end for 27: end while 28: return route 29: end procedure Reg: 99 29
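A Python sketch of this breadth-first route search follows; the connected function and the node representation are illustrative stand-ins for the project's search-node grid:

```python
from collections import deque

def get_next_route(curr_node, explored, connected):
    """Breadth-first search for the nearest unexplored search node.

    connected(node) returns the neighbours currently linked to node;
    explored is the set of nodes already visited by the robot.
    """
    queue = deque([curr_node])
    opened = {curr_node}
    parents = {}
    while queue:
        current = queue.popleft()
        for node in connected(current):
            if node in opened:
                continue
            if node not in explored:
                # Found an unexplored node: reconstruct the shortest path
                # back to the start using the parent relationships.
                route = [node]
                while current != curr_node:
                    route.insert(0, current)
                    current = parents[current]
                return route
            queue.append(node)
            opened.add(node)
            parents[node] = current
    return []   # empty route: exploration is complete
```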

Otherwise, if the node has not previously been explored, the route through the search nodes is reconstructed using the parents map on lines 14-18. The completed route is then returned on line 19. If no connected, unexplored nodes are found in the set, the empty route is returned on line 28. This is the point at which the exploration (and therefore the map) is complete.

Route Traversal

To fully explore a map area the robot must visit all the search nodes which are reachable. In order to do this the robot must retrieve routes and follow them. To follow a route, the robot simply moves between each search node on the route in order. The move between each node can either succeed or be blocked by an obstacle. If the move succeeds the robot continues along the route. When the end of the route is reached, the robot has reached the target search node. A new route is then retrieved. If the move is blocked (an obstacle is detected within a threshold), the robot stops and returns to the previous search node. The entry for these two nodes is removed from the list of connections between search nodes such that it is not used in any future routes. A new route is then retrieved. The search is complete when a new route is requested and found to be empty.

4 Implementation

This section describes the high-level implementation of the system. The implementation of client-server communication, and robot movement and sensor reading, is detailed with regard to the relevant lejos interfaces used. The construction of the simple graphical user interface is presented. Finally, an overview of the mapping application is discussed, which combines the algorithms described in design.

4.1 Communication, Robot Movement and Sensor Reading

Several useful interfaces packaged with the lejos API are built upon to provide communication between the desktop computer (server) and the robot (client), and the required robot movements. The server application runs all sensor data processing and path planning, while the client simply actions commands from the server, received over a Bluetooth connection.

Communication

The pc.comm and nxt.comm packages provided by lejos are used to communicate across a Bluetooth connection by the desktop computer and robot respectively. The packages provide stream-based data transfer of several Java primitive types through methods which establish a Bluetooth connection between the computer and robot, open data input and output streams and allow primitive data types to be sent on these streams. Simple methods were written to communicate movement actions and sensor readings using these packages. The methods used by the server are: sending movement orders to the robot, requesting sensor readings and updating the robot pose. The methods used by the client are: sending the state (success or failure) in response to a movement order, sending the current robot pose and sending a sensor reading. Each of these methods initially sends an identifier (an integer value) representing the type of communication, followed by any data relating to that communication.

Movement Command Example

For the desktop computer to send a movement request to the robot, the writeCommand(int id, Pose pose) method is used. Initially the identifier representing the movement request (id) is sent over the data output stream. Following this, three values of type float are sent: the x-coordinate (pose.x), y-coordinate (pose.y) and heading (pose.heading) which the robot should move to. The robot receives this communication using two methods. The first method, readCommand(), reads a communication identifier from the data input stream which determines the communication type. A switch is then used to call a further method to receive that specific communication type.
In this case, the identifier is that of a movement request, so the readPose() method is called: three values of type float are read from the input stream and used to construct a Pose object, which is returned.
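The identifier-then-payload framing used by these methods can be sketched in Python; Java's data streams write big-endian primitives, which the struct module can mirror (the identifier value and function names here are hypothetical, not the project's):

```python
import io
import struct

MOVE_TO = 1  # illustrative identifier; actual values are project-defined

def write_move_command(stream, x, y, heading):
    """Send a movement request: an int identifier, then three floats,
    mirroring Java DataOutputStream's big-endian framing."""
    stream.write(struct.pack('>i', MOVE_TO))
    stream.write(struct.pack('>fff', x, y, heading))

def read_command(stream):
    """Read the identifier first, then dispatch on the communication type."""
    (ident,) = struct.unpack('>i', stream.read(4))
    if ident == MOVE_TO:
        return ident, struct.unpack('>fff', stream.read(12))
    raise ValueError('unknown identifier %d' % ident)
```

A round trip through an in-memory buffer shows the framing:

```python
buf = io.BytesIO()
write_move_command(buf, 10.0, 5.0, 1.5)
buf.seek(0)
ident, pose = read_command(buf)
```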

Sonar Data Example

Similarly, the robot can send the results of a sonar range scan to the desktop computer using the writeCommand(int id, float dist) method. First, the identifier representing the sensor data communication (id) is sent over the data output stream. Following this, a value of type float is sent: the distance to an obstacle from polling the sonar sensor. The desktop computer also receives this communication using two methods. The readCommand() method once again reads a communication identifier from the data input stream which determines the communication type. The readReading() method which corresponds to the communication type is then called: a single value of type float is read from the input stream and returned.

Movement

The navigation and localization packages provided by lejos are used to perform robot motor movements and maintain an estimate of the robot's pose. The DifferentialPilot class provides the control of the robot in terms of rotations and straight line movements using the two methods rotate(double angle) and travel(double distance), which use the distance between the wheels and their diameters to calculate and perform the actual motor rotations for the desired movement. The PoseProvider class listens for the movement commands above and updates an internal representation of the robot's pose accordingly. The class provides set and get methods for the robot pose which are used to retrieve or update the robot pose while mapping. Movement commands are received by the robot as a specified destination pose. A method was written to decompose a movement defined in this way into a rotation-translation-rotation triplet and perform each using the DifferentialPilot. The method of decomposition is the same as that used by the motion model. The movement methods used do not block program flow. A movement method was written which polls the robot's sonar sensor while performing the rotation-translation-rotation movement.
If an obstacle is encountered at any time during movement the robot is stopped. The method returns true if a robot movement was not blocked by an obstacle Reg: 99 32

and false otherwise. This allows the application running on the robot to quickly detect and react to the presence of an obstacle.

Sonar Reading

The UltrasonicSensor and FeatureDetector interfaces provided by lejos are used to poll the sonar range sensor. The scan() method of the latter returns the distance result of a single scan. A method was written to reduce the effect of spurious readings: the sensor is polled multiple times and a mean reading is calculated. Outlying readings (those which deviate greatly from the mean) are disregarded. The mean is then recalculated and returned.

4.2 Graphical User Interface

Figure 4.1: The GUI during mapping.
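The outlier-rejecting sonar polling described in the Sonar Reading section above can be sketched as follows (the sample count and deviation threshold are illustrative; poll stands in for a single sensor scan):

```python
def filtered_range(poll, samples=5, max_dev=20.0):
    """Poll the sensor repeatedly, discard readings far from the mean,
    and return the mean of the remainder.

    poll() performs one range scan; samples and max_dev (in cm) are
    illustrative tuning values, not the project's actual settings.
    """
    readings = [poll() for _ in range(samples)]
    mean = sum(readings) / len(readings)
    # Drop readings that deviate greatly from the first-pass mean.
    kept = [r for r in readings if abs(r - mean) <= max_dev]
    # Recalculate over the kept readings (fall back if all were outliers).
    return sum(kept) / len(kept) if kept else mean
```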

39 4 Implementation Figure 4.1 shows the simple graphical user interface (GUI) created to display a map occupancy grid (from the map set) as it is constructed in near real-time to the user. The only controls provided are to allow the user to scroll through the occupancy grid of each map of the set. The GUI is implemented using the Java Swing GUI widget toolkit. The Panel class is extended to provide a MappingPanel which displays the search nodes and connections state, robot pose and occupancy grid. Each grid cell is represented by a colour: Orange cells are likely occupied by obstacles while white cells are free space. The Swing ActionListener classes are used to provide zooming and panning of the frame with the mouse (wheel and dragging), and the button actions for selecting which map is viewed. 4.3 Mapping Application Overview This section gives a high-level description of both the client and server components of the mapping application. Particular emphasis is given to the application flow of the mapping and exploration procedure Client Application The simple client application runs on the LEGO NXT. The application consists of initialisation and command following. Upon starting, the application initialises the required Bluetooth connection and data streams to the server. The lejos classes required for movement and sensor readings are then instantiated with the robot metrics: Wheel and motor configurations for the movement classes and maximum sensor range for the range finding classes. After initialisation the client begins command following. The program blocks until a communication identifier is received over Bluetooth. The relevant action is then performed using a switch statement on the identifier. After a command has been actioned and any required information transmitted back to the server, the program once again blocks until a new command is received. The action-block cycle continues until the Bluetooth connection is closed. Reg: 99 34
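The client's action-block cycle described above can be sketched as a dispatch loop (the handler table and identifier scheme are illustrative, not the project's actual command set):

```python
def client_loop(read_id, handlers):
    """Block on an identifier, dispatch to the matching handler, repeat
    until the connection closes.

    read_id() returns the next communication identifier, or None once
    the Bluetooth connection is closed; handlers maps identifiers to
    callables which action the command and send any reply themselves.
    """
    while True:
        ident = read_id()
        if ident is None:      # connection closed: stop the cycle
            break
        handlers[ident]()      # action the command for this identifier
```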

4.3.2 Server Application

The server application performs the map processing, display and path-planning on the desktop computer. A model-view-controller pattern is implemented to provide logical separation of the GUI from the mapping and exploration logic: mapping and exploration (the model) run on one thread while the user interface (the view) runs on another. Communication between the two is implemented through an event-listener pattern (the controller). All manipulation of the maps is handled by the controller, which calls the relevant methods on the mapping and exploration classes. The GUI is registered as a listener of mapping and exploration events. When an event occurs (such as an update of the maps with a movement), the event is actioned, and the GUI is notified by the controller and passed the updated map data (such as the new robot pose).

The mapping and exploration component combines the algorithms presented in the design (section 3). The class representing a single map (Map) contains the methods to apply a movement, weight a map and apply sensor readings. The class representing a set of maps (MapSet) contains the method to resample the maps. The class representing the search nodes (Grid) contains the methods to get a new route and to remove a connection between two nodes. These methods are implemented exactly as the algorithms were presented in the design.

A set of sensor measurements is recorded at the initial robot location and at every search node the robot travels through, including nodes which have previously been explored, to aid robot localisation. A reading set consists of twenty-three sonar range readings, fifteen degrees apart (covering three hundred and sixty degrees with an overlap between readings). The map weight for each reading of the set can then be calculated and the mean recorded in each map. Each reading is then applied to the occupancy grid of each map.
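The MapSet resampling step can be sketched in terms of map weights alone. This is a generic weight-proportional (roulette-wheel) scheme, shown under the assumption that it matches the design; the report's design section specifies the actual resampling rule, and a real Map would also carry its occupancy grid and robot pose.

```java
import java.util.*;

// Sketch of weight-proportional resampling for a set of maps. Each map is
// reduced to its weight; the returned array gives, for each of the n new
// maps, the index of the old map it is copied from (likely maps duplicate,
// unlikely maps die out).
class Resampler {
    public static int[] resample(double[] weights, Random rng) {
        int n = weights.length;
        // Build the cumulative distribution of normalised weights.
        double total = 0;
        for (double w : weights) total += w;
        double[] cumulative = new double[n];
        double running = 0;
        for (int i = 0; i < n; i++) {
            running += weights[i] / total;
            cumulative[i] = running;
        }
        // Draw n samples; each draw selects the first index whose
        // cumulative weight exceeds the random value.
        int[] chosen = new int[n];
        for (int s = 0; s < n; s++) {
            double u = rng.nextDouble();
            int i = 0;
            while (i < n - 1 && cumulative[i] <= u) i++;
            chosen[s] = i;
        }
        return chosen;
    }
}
```

A map whose weight dominates the set is therefore selected for almost every slot, which is what allows the filter to discard maps built from poorly matching pose hypotheses.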
Figure 4.2 shows the execution flow of the server application's mapping and exploration thread. The application begins by initialising the Bluetooth connection and data streams, the set of maps, the grid of search nodes and the initial robot pose. An initial set of range readings is recorded around the start location of the robot and applied to each map using the perception model. The exploration of the map then begins. A route to the nearest reachable but unexplored node is calculated. If an empty route is returned there are no more routes available and the mapping is complete. Otherwise, the route is followed.

Figure 4.2: Execution flow of the server application mapping and exploration thread.

A move order to the next node in the route is sent to the robot and actioned. The robot reports the result (success or failure due to obstruction) of the movement. If the movement succeeded, the pose is requested and used to update the robot position in each map. If the movement failed, the server requests the current robot pose; this pose and the previous one are used to update the robot's position in each map, and the robot is ordered to return to the previous search node. Again, the robot pose is then requested and used to update the robot position in each map.

A set of twenty-three range readings is then measured as described above. The readings are used to calculate the weight of, and to update, each map, and the map set is resampled. The pose maintained by the robot is then synchronised with that of the map with the highest weight (the map most likely to be the best representation of the environment and actual robot location). This is not strictly necessary for mapping, as only the change in pose is used (not the pose itself), but it prevents errors in robot odometry from desynchronising the robot's motion with the search nodes.

If the previous movement succeeded and the route is not complete, another move to the next node in the route is attempted. If the previous movement failed or the route is complete, a new route to the nearest reachable but unexplored node is calculated.

5 Evaluation

This section evaluates system performance by mapping several test areas with different obstacle configurations. Each map has a resolution of two-by-two centimetres per occupancy grid cell. The area of exploration is sixty centimetres square. As sensor readings extend up to fifty centimetres beyond the exploration area in all directions, the potential mapped area is one-hundred-and-sixty centimetres square.
Areas of this size take approximately ten minutes to explore and map. However, mapping time varies with the number of obstructions encountered while moving between search nodes, owing to the need to back-track to the last safe node, take repeat measurements and plan a new route.

Figure 5.1: The first area configuration and two maps produced by the system. (a) Photo of the physical area configuration to be mapped. (b) Map produced by the first exploration; darker grid cells are more likely occupied. (c) Map produced by the second exploration; darker grid cells are more likely occupied.

Figure 5.2: The second area configuration and two maps produced by the system. (a) Photo of the physical area configuration to be mapped. (b) Map produced by the first exploration; darker grid cells are more likely occupied. (c) Map produced by the second exploration; darker grid cells are more likely occupied.


More information

5. Tests and results Scan Matching Optimization Parameters Influence

5. Tests and results Scan Matching Optimization Parameters Influence 126 5. Tests and results This chapter presents results obtained using the proposed method on simulated and real data. First, it is analyzed the scan matching optimization; after that, the Scan Matching

More information

Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation

Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation 242 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 3, JUNE 2001 Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation José E. Guivant and Eduardo

More information

Simultaneous Localization

Simultaneous Localization Simultaneous Localization and Mapping (SLAM) RSS Technical Lecture 16 April 9, 2012 Prof. Teller Text: Siegwart and Nourbakhsh S. 5.8 Navigation Overview Where am I? Where am I going? Localization Assumed

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

Robotics. Haslum COMP3620/6320

Robotics. Haslum COMP3620/6320 Robotics P@trik Haslum COMP3620/6320 Introduction Robotics Industrial Automation * Repetitive manipulation tasks (assembly, etc). * Well-known, controlled environment. * High-power, high-precision, very

More information

Mobile Robots: An Introduction.

Mobile Robots: An Introduction. Mobile Robots: An Introduction Amirkabir University of Technology Computer Engineering & Information Technology Department http://ce.aut.ac.ir/~shiry/lecture/robotics-2004/robotics04.html Introduction

More information

Analysis, optimization, and design of a SLAM solution for an implementation on reconfigurable hardware (FPGA) using CńaSH

Analysis, optimization, and design of a SLAM solution for an implementation on reconfigurable hardware (FPGA) using CńaSH December 12, 2016 MASTER THESIS Analysis, optimization, and design of a SLAM solution for an implementation on reconfigurable hardware (FPGA) using CńaSH Authors: Robin Appel Hendrik Folmer Faculty: Faculty

More information

Robotics. Chapter 25. Chapter 25 1

Robotics. Chapter 25. Chapter 25 1 Robotics Chapter 25 Chapter 25 1 Outline Robots, Effectors, and Sensors Localization and Mapping Motion Planning Chapter 25 2 Mobile Robots Chapter 25 3 Manipulators P R R R R R Configuration of robot

More information

Robotics. Lecture 8: Simultaneous Localisation and Mapping (SLAM)

Robotics. Lecture 8: Simultaneous Localisation and Mapping (SLAM) Robotics Lecture 8: Simultaneous Localisation and Mapping (SLAM) See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College

More information

Introduction to Mobile Robotics Bayes Filter Particle Filter and Monte Carlo Localization. Wolfram Burgard

Introduction to Mobile Robotics Bayes Filter Particle Filter and Monte Carlo Localization. Wolfram Burgard Introduction to Mobile Robotics Bayes Filter Particle Filter and Monte Carlo Localization Wolfram Burgard 1 Motivation Recall: Discrete filter Discretize the continuous state space High memory complexity

More information

Statistical Techniques in Robotics (16-831, F10) Lecture#06(Thursday September 11) Occupancy Maps

Statistical Techniques in Robotics (16-831, F10) Lecture#06(Thursday September 11) Occupancy Maps Statistical Techniques in Robotics (16-831, F10) Lecture#06(Thursday September 11) Occupancy Maps Lecturer: Drew Bagnell Scribes: {agiri, dmcconac, kumarsha, nbhakta} 1 1 Occupancy Mapping: An Introduction

More information

An actor-critic reinforcement learning controller for a 2-DOF ball-balancer

An actor-critic reinforcement learning controller for a 2-DOF ball-balancer An actor-critic reinforcement learning controller for a 2-DOF ball-balancer Andreas Stückl Michael Meyer Sebastian Schelle Projektpraktikum: Computational Neuro Engineering 2 Empty side for praktikums

More information

Robot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1

Robot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1 Robot Mapping SLAM Front-Ends Cyrill Stachniss Partial image courtesy: Edwin Olson 1 Graph-Based SLAM Constraints connect the nodes through odometry and observations Robot pose Constraint 2 Graph-Based

More information

SIMULATED LIDAR WAVEFORMS FOR THE ANALYSIS OF LIGHT PROPAGATION THROUGH A TREE CANOPY

SIMULATED LIDAR WAVEFORMS FOR THE ANALYSIS OF LIGHT PROPAGATION THROUGH A TREE CANOPY SIMULATED LIDAR WAVEFORMS FOR THE ANALYSIS OF LIGHT PROPAGATION THROUGH A TREE CANOPY Angela M. Kim and Richard C. Olsen Remote Sensing Center Naval Postgraduate School 1 University Circle Monterey, CA

More information

OCCUPANCY GRID MODELING FOR MOBILE ROBOT USING ULTRASONIC RANGE FINDER

OCCUPANCY GRID MODELING FOR MOBILE ROBOT USING ULTRASONIC RANGE FINDER OCCUPANCY GRID MODELING FOR MOBILE ROBOT USING ULTRASONIC RANGE FINDER Jyoshita, Priti 2, Tejal Sangwan 3,2,3,4 Department of Electronics and Communication Engineering Hindu College of Engineering Sonepat,

More information

Monte Carlo Localization

Monte Carlo Localization Monte Carlo Localization P. Hiemstra & A. Nederveen August 24, 2007 Abstract In this paper we investigate robot localization with the Augmented Monte Carlo Localization (amcl) algorithm. The goal of the

More information

Loop detection and extended target tracking using laser data

Loop detection and extended target tracking using laser data Licentiate seminar 1(39) Loop detection and extended target tracking using laser data Karl Granström Division of Automatic Control Department of Electrical Engineering Linköping University Linköping, Sweden

More information

6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION

6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION 6 NEURAL NETWORK BASED PATH PLANNING ALGORITHM 61 INTRODUCTION In previous chapters path planning algorithms such as trigonometry based path planning algorithm and direction based path planning algorithm

More information

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING By Michael Lowney Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Minh Do May 2015

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics Sebastian Thrun Wolfram Burgard Dieter Fox The MIT Press Cambridge, Massachusetts London, England Preface xvii Acknowledgments xix I Basics 1 1 Introduction 3 1.1 Uncertainty in

More information

Encoder applications. I Most common use case: Combination with motors

Encoder applications. I Most common use case: Combination with motors 3.5 Rotation / Motion - Encoder applications 64-424 Intelligent Robotics Encoder applications I Most common use case: Combination with motors I Used to measure relative rotation angle, rotational direction

More information

Practical Robotics (PRAC)

Practical Robotics (PRAC) Practical Robotics (PRAC) A Mobile Robot Navigation System (1) - Sensor and Kinematic Modelling Nick Pears University of York, Department of Computer Science December 17, 2014 nep (UoY CS) PRAC Practical

More information

number Understand the equivalence between recurring decimals and fractions

number Understand the equivalence between recurring decimals and fractions number Understand the equivalence between recurring decimals and fractions Using and Applying Algebra Calculating Shape, Space and Measure Handling Data Use fractions or percentages to solve problems involving

More information

Auto-Digitizer for Fast Graph-to-Data Conversion

Auto-Digitizer for Fast Graph-to-Data Conversion Auto-Digitizer for Fast Graph-to-Data Conversion EE 368 Final Project Report, Winter 2018 Deepti Sanjay Mahajan dmahaj@stanford.edu Sarah Pao Radzihovsky sradzi13@stanford.edu Ching-Hua (Fiona) Wang chwang9@stanford.edu

More information