Parameterized Sensor Model and Handling Specular Reflections for Robot Map Building


Parameterized Sensor Model and Handling Specular Reflections for Robot Map Building

Md. Jayedur Rashid

May 16, 2006

Master's Thesis in Computing Science, 20 credits
Supervisor at CS-UmU: Thomas Hellström
Examiner: Per Lindström

Umeå University, Department of Computing Science, SE-901 87 UMEÅ, SWEDEN


Abstract

Map building is one of the classical problems in mobile robotics, and a sensor model is the means of interpreting raw sensory information spatially, especially for building maps. For mobile robots working in indoor environments, ultrasonic sonar (US) is popular because of its availability, wide angular coverage and long sensing range, in spite of some problems, of which specular reflection is the most prominent.

A parameterized sensor model for map building using an occupancy grid with Bayesian updating is proposed in the first part of this MS thesis project. To optimize the parameters of the proposed model, a measure of map goodness is applied. A new approach to measuring the goodness of maps, one that does not require a handcrafted map of the actual environment, has been tested and implementation results are reported. Three different processes have been studied for this measure: statistical analysis, a map scoring technique over binary maps, and the derivative of the map image using the 8-connected neighbor concept borrowed from image processing. Although no satisfactory result was achieved for measuring the goodness of maps without a ground map, this study shows that the proposed sensor model generates better maps than the standard sensor model described in R. Murphy [27] with the same parameter set.

Dealing with the specular reflection problem while using ultrasonic sonar is investigated in the second part of this MS thesis project. The robot has a number of ultrasonic sonars whose fields of view partly overlap, especially over time when the robot is moving. Based on this redundancy, a data processing algorithm, HSR (Handling Specular Reflection), has been developed; it uses a linear feature called the Estimated Wall in this project. HSR is an off-line data processing algorithm operating in batch mode. The HSR algorithm is shown to improve the quality of maps in two different experimental environments.


Contents

1 Introduction
  1.1 Motivation
  1.2 Goals
  1.3 Organization of This Paper
2 Literature Review
  2.1 Map Building and Sensor Model
  2.2 Measurement of Map Goodness
  2.3 Specular Reflection
3 Ultrasonic Sensors
  3.1 Physical Description
  3.2 Advantages and Problems
4 Sonar Based Map Building
  4.1 Map Building
  4.2 Evidence Grid
  4.3 Sonar Sensor Model
  4.4 Bayesian Updating / Bayesian Sensor Fusion
    4.4.1 Brief Theoretical Review of Bayesian Method
    4.4.2 Getting Numerical Value
    4.4.3 Updating With Bayes Rule
5 Proposed Sensor Model and Measurement of Map Goodness
  5.1 Proposed Parameterized Sensor Model
  5.2 Parameter Optimization of the Proposed Model
  5.3 Measurement of Map Goodness
  5.4 Result, Discussion and Conclusion
6 Dealing with Specular Reflection
  6.1 HSR Algorithm Description with Pseudo-Code
    Processing Sensor Readings (Step 1)
    Estimating Variant Walls (Step 2)
    Finding and Updating Specularly Reflected Readings (Step 3)
  6.2 Parameters of HSR Algorithm
  6.3 Relation between Openings and Sensor's Distance
  6.4 Implementation
    Experiment 1
    Experiment 2
  6.5 Discussion, Conclusion and Recommendation
7 Acknowledgements
References
A Pseudo Code of HSR Algorithm

List of Figures

3.1 Polaroid ultrasonic transducer. The membrane is the disk. This figure is adopted from [27].
3.2 Typical beam pattern of a Polaroid ultrasonic sensor (adapted from [15], Thomas Hellström's Intelligent Robotics class lecture, Spring 2005).
3.3 Sonar cone with a wide angle of 30° and range response R.
3.4 Angular uncertainty of a sonar reading. 2α is the VOF and R is the sonar response.
3.5 Specular reflection.
3.6 Cross-talk problem, adapted from Introduction to AI Robotics, R. Murphy [27].
3.7 Cross-talk problem, adapted from [3]. A critical path is any path of ultrasound that causes cross-talk.
4.1 An example of an occupancy grid (taken from Thomas Hellström's class lecture [15]). Red, blue and green cells represent occupied, empty and unknown respectively.
4.2 Two-dimensional representation of a sonar beam projected onto an occupancy grid. β is the half angle of the sensor cone width and R is the maximum range of the sonar. Grid elements are divided into four regions (1, 2, 3 and 4).
4.3 Three-dimensional representation of a sonar beam (taken from R. Murphy [27]).
4.4 An example of calculating (α, r) for elements of Region 1. Here the sensor reading s = 6, the cell of interest is the dark one, R = 10 and β = 15° (taken from R. Murphy [27]).
4.5 An example of calculating (α, r) for elements of Region 2. Here the sensor reading s = 6, the cell of interest is the dark one, R = 10 and β = 15° (taken from R. Murphy [27]).
4.6 (a) Two-dimensional and (b) three-dimensional representation of a sonar beam projected onto an occupancy grid using equations (4.9), (4.10), (4.11) and (4.12).
5.1 Probability trend for a sensor readout along the acoustic axis, α = 0. Sensor readout s = 5, Max_occupied = 0.9 and tolerance = ±0.5 in Region 1. r is the distance of grid elements from the sensor origin. The graph is generated by simulating equations (4.9), (4.10), (4.11) and (4.12).
5.2 Proposed belief (probability) trend for Region 1 along the acoustic axis, α = 0. This is the generic model, without specifying the parameters. Here s is the assumed exact sensor readout obtained by a sensor and tol is the tolerance for Region 1.
5.3 Proposed belief (probability) trend for Region 2 along the acoustic axis, α = 0. Like Region 1, this is the generic model without specifying the parameters.
5.4 Updated version of Figure 5.2.
5.5 Proposed parameterized sensor model.
5.6 2D view of the proposed parameterized sensor model with sensor readout s = 7.5 units. The x-axis is the sensor readout.
5.7 3D view of the proposed parameterized sensor model with sensor readout s = 7.5 units. The x-axis is the sensor readout. No spikes point in the opposite direction in Region 1 and Region 2.
5.8 Map built (a) using the standard sensor model (described in Chapter 4) with Beta = 15, Tolerance = 1.0, Max Occupied = , and (b) using the proposed parameterized sensor model with some arbitrary parameter values: P1 = 0.02 (P1 = 1 - P4), P2 = 0.15, P3 = 1.0, P4 = 0.98, P5 = 15. For data collection an Amigobot (with 8 ultrasonic sensors) was used.
5.9 (a) Histogram generated from the mean of standard deviations. (b) Table containing the values of the histogram.
5.10 Finding the optimal parameter set by the mean-of-standard-deviations method.
5.11 Optimal map by the derivative-of-maps and comparison method.
5.12 Optimal map with parameter set obtained by comparing two different maps of the same environment.
6.1 Amigobot's geometry and the direction of a reading sensed by the specific sensor S (figures adopted from the Amigobot ActiveMedia manual).
6.2 Translating sensor readings into Cartesian (x, y) format by our program. The table in the upper right-hand corner represents the robot odometry and the sensed sensor reading at one instance at that odometry. The (colored) figure is the visualization of those readings after translation, with origin (500, 500).
6.3 Actual environment. For the discussion of this section, data were collected while wall 4 was white cardboard (used for making posters), which was very smooth.
6.4 Map S, obtained by plotting the data set of the environment described in Figure 6.3 after translation into Cartesian form. The blue points are the robot's trajectory and the red dots represent sensed data. 1, 2, 3, 4 denote the boundary walls of the environment.
6.5 Estimated Wall concept for a single sensor reading r. Point A represents the data point for reading r as well as the middle point of estimated wall CB.
6.6 After executing the estimateVariantWalls function for some readings with minWallSize = 40 mm, maxWallSize = 150 mm and threshold distance Rth = Rmax = 3000 mm. This map is called ES.
6.7 Representation of searching points and direction for Step 3 of the HSR algorithm for a sensor reading r.
6.8 For reading r (represented by point B), for some instance of C(x, y), ES[xr][yr] is occupied by an estimated wall and that instance of C(x, y) satisfies all three conditions mentioned above. So C(x, y) becomes Cs, and reading r is shown to be the result of specular reflection or cross-talk. As a result, reading r should be updated by Distance(ACs).
6.9 Explanation of Condition 1. When the robot is at PosB, the obstacle (the wall represented by the long black line on which B stands) is sensed by the sensor in a normal way. But when the robot is at PosA, the obstacle is out of range, so the sensor returns infinity values. We may also get some readings as infinity because of specular reflection. Point C represents the maximum sensor range from PosA.
6.10 The actual environment, wall AB, is represented by sensor readings obtained at different times with different odometry positions. In the map, wall AB can be shown by the points near the wall (d, c, e, f, b etc.), but not point a. Point a is either the result of cross-talk or specular reflection, or not noise at all; TDTh gives this decision.
6.11 Determining the value of TDTh. This is one example of the experiment when d = 2000 mm. Several readings with different d values have been taken, and the average of (r1 - r2) gives the value of TDTh.
6.12 The relation between the hole/opening size and the distance at which this opening/hole can be sensed almost perfectly by the sonar mounted on the Amigobot.
6.13 Amigobot ActiveMedia robot.
6.14 Physical environment for the experiment.
6.15 Comparison among different values of parameters, as well as without the HSR algorithm, with no opening/hole in the environment.
6.16 Comparison among different values of parameters, as well as without the HSR algorithm, with different opening sizes in the environment. Subfigure (a) describes the actual environment. D1 and D2 are openings with a minimum size of 80 cm. Walls E, T and A are not sensed by the robot, so readings from that side are not interesting.

List of Tables

5.1 Parameter set used, with minimum (2nd column) and maximum (3rd column) range. The 4th column denotes the optimal parameter set obtained using the lowest column of the histogram mentioned in Figure 5.9. The 5th column contains the set generating the lowest VSDNS, and the 6th column is obtained by taking the arithmetic mean over 1440 VSDNS values. *P1: the relation P1 = 1 - P4 has been used.
5.2 All parameter sets obtained from the first bar of the histogram mentioned in Figure 5.9. *P1: the relation P1 = 1 - P4 has been used.
6.1 Solution area for the minWallSize and maxWallSize parameters. The first column of the table represents the minimum wall size, the minWallSize parameter. The 2nd and 3rd columns represent the minimum and maximum values of maxWallSize respectively for a given minWallSize.

Chapter 1 Introduction

1.1 Motivation

Service robots are gaining popularity day by day, so they have become attractive to researchers and developers. One of the primary requirements for a service robot is that it work as reliably as possible in an indoor environment. To achieve this goal, mobile robots usually use a spatial representation of the actual environment called a map. Robotic mapping addresses the problem of acquiring spatial models of the physical environment through a mobile robot [35]. Correct mapping of the working environment is quite helpful for navigating mobile robots autonomously [20, 9]. Indeed, sensing the environment and interpreting it as a map is one of the fundamentals of autonomous robotics [1, 17, 18]. Among the several problems in building autonomous robots, the mapping problem is considered one of the leading ones, even though several map building methods already exist [35].

In an unknown environment, sensors provide the information that allows an autonomous agent, whether physical or software, to behave intelligently like a natural being [6]. So sensors work as the sense organs of a robot or an autonomous agent. But it is not convenient to use this sensory information directly for building a map of the environment, because a model of the world built from raw sensory data may be incorrect even though the sensors are functioning correctly [6]. Generally, a map is built from raw sensory information through one or more interpreting methods, which are called sensor models. A detailed discussion of sensor models is available in Chapter 4.

Ultrasonic sensors are popular for robotic mapping. Ultrasonic sonar sensors detect obstacles by the reflection of sound. They emit ultrasonic sound that hits obstacles and spreads out in several directions. Some energy comes back to the sonar sensor because of reflections, and after reception a simple calculation is done with the time of flight of the sound [37, 34]. With this calculation, the sensors give the readings that are used for an internal representation of the environment, e.g. a map [34]. Ultrasonic sensors are inexpensive, fast and have a long operating range with wide angular coverage [27]. But they have several shortcomings as well. According to Robin Murphy [27], the three major problems with sonar range readings are as follows (for a detailed description see Chapter 3): 1) foreshortening, 2) specular reflection, 3) cross-talk.

Foreshortening: According to Robin R. Murphy [27], "A sonar has a 30 degree field of view. This means that sound is being broadcast in a 30° wide cone. If the surface is not perpendicular to the transducer, one side of the cone will reach the object first and return a range first. Most software assumes the reading is along the axis of the sound wave. If it uses the reading (which is really the reading for 15°) the robot will respond to erroneous data. This problem is called foreshortening." So far there is no way to solve the foreshortening problem [27]. That is why we overlook this problem and assume that ranges come from the acoustic axis¹ of the sensor.

Specular reflection: Specular reflection occurs when the wave form hits a surface at an angle sufficiently far from the perpendicular of that surface, so that most of the energy is reflected away and only a very small amount, too little to be reliable for calculation, is received [9, 11]. This happens for several reasons.

Cross-talk: Robin R. Murphy [27] writes: "Consider a ring of multiple sonars. Suppose the sonars fire (emit a sound) at about the same time. Even though they are each covering a different region around the robot, some specularly reflected sound from a sonar might wind up getting received by a completely different sonar. The receiving sonar is unable to tell the difference between sound generated by itself or by its peers. This source of wrong reading is called cross-talk, because the sound waves are getting crossed." This problem occurs mainly in the corners of a room, or in places where several walls or obstacles stand side by side and create consecutive reflections. There are some solutions to this problem, but it is not the main focus of this thesis.

1.2 Goals

The goal of this master's thesis project can be divided into two parts.

The first part is to propose a new parameterized sensor model for map building using ultrasonic sonar sensors. The model depends on some parameters, and the best model is found from the optimal values of those parameters, so parameter optimization is also included in this part. For finding the optimal parameter set, a map goodness strategy has been used; this requires defining a measure of goodness for the map. A new approach, measuring the goodness of maps without using a ground map (the original map of the environment), has also been tried in this part.

The second part of this master's project is to analyze when and why specular reflections occur and to try to find methods to reduce the resulting problems. In this part we propose a technique/algorithm for handling specular reflections while using ultrasonic sensors, especially for map building. The method is an off-line batch process.

¹ Acoustic axis: the axis along which the sound wave travels.

1.3 Organization of This Paper

The motivation and goals of this paper have already been described in this chapter. Chapter 2 contains a review of existing related work. We have used Polaroid ultrasonic sonar in this project; its physical and theoretical description is available in Chapter 3. A detailed discussion of map building, and of a standard sensor model adapted from [27], is given in Chapter 4; only the evidence grid with the Bayesian updating approach is covered in detail for the map building problem. In the next chapter, Chapter 5, we propose a parameterized sensor model and, for finding the optimal parameter set for the proposed model, we also propose a new optimization technique: a measure of map goodness that does not use a ground map. Implementation results, conclusions and discussion for this part, together with future work, are given at the end of that chapter. In Chapter 6, an algorithm for dealing with the specular reflection problem, HSR, is proposed and described in detail with its necessary parameters. The HSR algorithm has been tested in two different environments; the implementation, test results and recommended future work for this part are reported in the same chapter. Acknowledgements are given in Chapter 7.


Chapter 2 Literature Review

In this chapter, the existing literature on robotic map building and sensor models, measurement of map goodness, and the specular reflection problem is reviewed in that order.

2.1 Map Building and Sensor Model

Many researchers are interested in ultrasonic-sensor-based mobile robots. Some of their work (related to robot evidence grids) is mentioned in the annotated bibliography of [23]. Most of the discussion concerns two issues: map making and localization. According to R. Murphy [27], the two are closely related, i.e., conjugates of each other: without localization an accurate map is not possible, and vice versa. But our concentration is mainly on map building and the sensor model for sonar sensors; localization is assumed correct.

There are two types of map building methods: grid based and feature based [27, 7, 36, 14]. The grid based map building concept was introduced by Moravec and Elfes [23, 25, 11]. In this approach, a two- or three-dimensional grid is used for representing the robot's working environment. So far the two-dimensional grid is the most popular, and each small square of the 2D grid is called a cell. Each cell represents the status of the physical environment: either the corresponding place is empty or it is occupied by an obstacle. Many sensors report the distance, called the range, to the nearest object in a given direction. Given the robot's position, the probabilities of the cells near the indicated object are increased and the probabilities between the sensor and the sensed object are decreased (since it is the first object in that direction). The exact amount of increase or decrease applied to the various cells forms the sensor model. So, in the resulting grid, the value of a cell depends not only on the range values of the sensor but also on a probabilistic estimation from the sensory information, and sensor responses are projected onto the grid through a proper sensor model [23]. The three most common methods for using sonar sensor models to update a grid when integrating multi-sensory information are Bayesian, Dempster-Shafer and HIMM [27].

The occupancy grid or grid based method is also known as certainty and evidence grids [27]. The grid method was first introduced to turn wide angle range measurements into detailed spatial maps [37, 23]. It is therefore especially effective for ultrasonic sensors

with sparse responses [36], and this approach is quite useful in ultrasonic based robotic applications [32, 7, 24], usually for localization and map building.

Feature-based methods, demonstrated by [30], first try to locate features in the environment, then localize them, and finally use them as known landmarks by which to localize the robot as it searches for the next landmarks. Several researchers have used this method. In [28], Don Murray and Cullen Jennings describe a visually guided robot that can plan paths, construct maps and explore an indoor environment.

For representing the sonar beam, a sonar sensor model is needed. The sensor model is the technique that processes the sonar reading for visual representation, so it controls the visual map representation of the physical environment. Robin Murphy describes a general sensor model [27], which is presented in Chapter 4. For building high resolution maps, Moravec and Elfes give a probabilistic sensor model in [26]. Matthias Fichtner and Axel Grobmann design a probabilistic sensor model for camera pose estimation in hallways and cluttered spaces, using a 3D geometrical model of the environment obtained from images taken by the camera, and they use this model for probabilistic robot localization [13]. In this paper, we propose a parameterized probabilistic sensor model for mobile robot mapping. It is a modified version of the general sensor model described in [27] and in Chapter 4 of this paper.

2.2 Measurement of Map Goodness

Maps are used for robot navigation in both known and unknown environments. As the map is the visual representation of the physical environment, it is desirable to develop a map for a mobile robot that is as good as possible. Measuring the goodness of maps is important for several reasons: for example, to provide merit values for automatically learning a sensor model [23], or for building a Self-Organizing Map [38]. In [23], some processes for finding map goodness, such as matching, scoring and entropy, are described. Moravec uses formulae from information theory for scoring between two maps. Ypma [38] concentrates on failure detection in process monitoring, which involves a classification mainly on the basis of data from normal operation. When a Self-Organizing Map is used for the description of normal system behavior, a compatibility measure is needed for declaring a map and a data set as matching. They propose a novel variant of one such measure and investigate the usefulness of existing and novel measures, both with synthetic data and in two real world applications. As the Self-Organizing Map involves both a vector quantization and a nonlinear projection of the input space to some (usually lower-dimensional) output space, the average quantization error is one measure of map goodness [38, 19]. Moravec and Elfes [26] propose some different processes for measuring map goodness, or for matching maps, by scoring over grid cell similarities: if an empty cell of one map overlaps an empty cell of the other map, the positive score is increased, otherwise the negative score is increased, and the same process applies to occupied cells. For finding the optimal parameter set of the sonar model, we have tried a new approach based on statistical measurement; it is described in Chapter 5.

2.3 Specular Reflection

The definition of specular reflection was given in Chapter 1 (Introduction). When using ultrasonic sonar, it is assumed that ideally all objects would have a flat surface perpendicular to the transducer¹, but of course this rarely happens [27]. The situation can be worse if the reflected signal bounces off several objects until, by coincidence, some of the source energy returns to the transducer. Specular reflection is not only a problem in itself but also generates another problem, cross-talk [27]. According to the researcher Monet Soldo [27], specular reflection and cross-talk can cause problems in different ways each time. Specular reflection is really a problem with the environment, not with the program or the physical characteristics of the sonar sensor. But sometimes, because of low operating power levels, the source signal is not strong enough and the returned signal is worthless; this problem is difficult to distinguish from specular reflection returns [27]. According to Robin Murphy [27], taking the average of three readings from each sensor, the current one plus the last two, is one method of eliminating spurious readings. This ad hoc strategy is common on purely reactive robots.

A good study of the physical description and properties of ultrasonic sonar is always helpful for a better understanding of specular reflection and the other problems with ultrasonic sonar, as well as their solutions. The physical principles of ultrasonic sensors can be found in [29]. Johann Borenstein and Alex Cao have experimentally characterized ultrasonic sonar in [4]. Michael Drumheller discusses specular reflection and other sonar response problems in [8] in his analysis of the Polaroid sonar. In [3], a successful method for error elimination, Error Eliminating Rapid Ultrasonic Firing (EERUF), is proposed by Johann Borenstein and Yoram Koren for eliminating cross-talk. Zou, in Chapter 5 of his master's thesis [37], gives a detailed description of the Range Confidence Factor (RCF), introduced by John Hwan Lim and Dong Woo Cho [21] to reduce unreliable ultrasonic readings, mainly specular reflections. He also proposes a new method that integrates the conflict factor from evidence theory into an ultrasonic sensor model to obtain an adaptive model, which reduces errors caused by specular reflections. A novel algorithm called the outlier rejection algorithm is also proposed in the same work for further eliminating unreliable readings. In the Feature Prediction paper [34], a method called the Feature Prediction Algorithm is introduced for detecting and handling specularly reflected readings.

In this paper, we propose an approach for handling specular reflection in Chapter 6. It is an off-line data analysis performed in a batch processing manner. It has some similarities with the Feature Prediction Algorithm, but is not the same. Our algorithm is also able to handle the cross-talk problem in some cases.

¹ Transducer: the mechanism, or element, of the sensor that transforms the energy associated with what is measured into another form of energy. The term is often used interchangeably with sensor [27].


Chapter 3 Ultrasonic Sensors

Ultrasonic sensors have been used in our project, so their physical and theoretical description follows as a consequence; this chapter contains that description, together with some advantages and disadvantages in a nutshell. The ultrasonic sensor is a proximity sensor, which measures the relative distance between the sensor and an object in the environment. Since the sensor is mounted on the robot, it is a straightforward computation to translate a range relative to the sensor into a range relative to the robot at large. Most proximity sensors are active [27], i.e., they put energy out into the environment to either change the energy or enhance it. The ultrasonic sensor is also called sonar, and sonar is the most popular proximity sensor.

3.1 Physical Description

Sonar refers to any system that uses sound, in either an air or water medium, to measure range. In air, ultrasonic sound, just at the edge of human hearing, is generally used; as a result the terms sonar and ultrasonics are used interchangeably when discussing the extraction of range from acoustic energy. Ultrasonic sensors are possibly the most common sensors on commercial robots operating indoors and on research robots. They emit a sound and measure the time it takes to bounce back, so they are active like most proximity sensors. The time of flight (the time from emission to the return of the echo), together with the speed of sound in the environment (even though air changes density with altitude), is sufficient to compute the range of the object from the equation S = V·t, where V is the velocity of sound, t is half of the total traveling time and S is the calculated distance.

A robotic sonar transducer is shown in Figure 3.1. It consists of a thin metallic membrane and is like a dollar coin in size and thickness. A very strong electrical signal generates a waveform, causing the membrane on the transducer to produce sound; a timer is then set and the membrane becomes stationary. The reflected sound, or echo, vibrates the membrane; the signal is amplified and then thresholded on returned signal strength. If too little sound is received, the sonar assumes the reflected sound is noise and ignores it; but if the received signal is strong enough to be valid, the timer is tripped and the time of flight is obtained. From the above equation we know V and t, so we can calculate S, the distance between the sensor and the object in the environment.

In reality, when the sound wave is generated by the Polaroid transducer, the sound beam produces multiple secondary sound waves that interact over different regions of space around the transducer

before dissipating. These secondary sound waves are called side lobes. Figure 3.2 gives the sound intensity pattern of the Polaroid ranging module.

Figure 3.1: Polaroid ultrasonic transducer. The membrane is the disk. This figure is adopted from [27].

Figure 3.2: Typical beam pattern of a Polaroid ultrasonic sensor (adapted from [15], Thomas Hellström's Intelligent Robotics class lecture, Spring 2005).

Most robotic systems, however, assume that only sound from the main, or centermost, lobe is responsible for a range measurement. It is generally accepted that the range response of a sonar forms a 30° wide cone at about 5 meters away, as shown in Figure 3.3. R is the range response produced by the sonar and α = 15° is the half angle of the sonar cone.

Figure 3.3: Sonar cone with a wide angle of 30° and range response R.
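To make the time-of-flight relation S = V·t concrete, here is a minimal sketch in Python. The function name, the default speed of sound (343 m/s in dry air at about 20 °C) and the example timing are our own illustrative assumptions, not values from the thesis.

```python
# Range from sonar time of flight, S = V * t, where t is half of
# the round-trip time from emission to echo.

def range_from_tof(round_trip_time_s: float,
                   speed_of_sound_mps: float = 343.0) -> float:
    """Distance (m) to the object that produced the echo."""
    t = round_trip_time_s / 2.0   # one-way travel time
    return speed_of_sound_mps * t

# An echo received 17.5 ms after firing corresponds to an object
# roughly 3 m away: 343 * 0.0175 / 2 = 3.0 (approximately).
print(range_from_tof(0.0175))
```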

3.2 Advantages and Problems

Besides providing direct range measurements, ultrasonic sensors are cheap, fast and have terrific coverage. They are easy to maintain, which is a big advantage in practical use. The high range detection accuracy of sonar is another reason for its popularity: for the Polaroid sensor, a typical accuracy is 1% of the reading over the entire range. Ultrasonic sensors are able to provide range information from about 0.15 m (6 inches) to about 10.7 m (35 feet) (but the range of the Amigobot used in our project is from about 0.15 m (6 inches) to 3 m (10 feet)). Despite these advantages, ultrasonic sensors have many shortcomings and limitations, as follows (they have already been mentioned in Chapter 1, with definitions only).

Angular uncertainty or foreshortening: Angular uncertainty means uncertainty in the angular information of a sonar response when detecting an object at a certain range. When an ultrasonic sensor gets a range response of R meters, the response simply represents a cone of width 2α (see Section 3.1, Figure 3.3) within which the object may be present. There is no way to find out exactly where the object is. If the surface is not perpendicular to the acoustic axis of the sonar, one side of the cone may reach the object first and return a range first. As shown in Figure 3.4, the VOF (view of focus) of the ultrasonic sensor is 2α, and the object can be anywhere in the shaded region for the response R. This angular uncertainty is also known as foreshortening. So far there is no solution to this problem, but according to most researchers, for practical purposes it is assumed that the reading lies along the axis of the sound wave, i.e. along the acoustic axis [27].

Specular reflection: Specular reflection refers to a sonar response that is not reflected directly back from the object, or is not reflected back to the sonar at all. In specular reflection, the ultrasound wave emitted from the sonar hits the surface at an angle sufficiently far from its perpendicular, and the sound is reflected away from the reflecting surface, which results in a longer range report or in missing the detection of the object altogether.

Figure 3.4: Angular uncertainty of a sonar reading. 2α is the VOF and R is the sonar response.

Figure 3.5: Specular reflection.

Specular reflection is due to the relative positions of the ultrasonic transducer and the reflecting surface. Figure 3.5 shows sonar responses in two different situations. In Figure 3.5a, the sonar transducer's axis is perpendicular to the reflecting surface, i.e., the inclined angle is zero, so most of the sound energy is reflected directly back to the ultrasonic sensor and only a very small percentage of the energy is scattered in other directions. In Figure 3.5b, however, because the sonar transducer's axis is not perpendicular to the surface, much energy is reflected away, and this may produce a longer range report or even no response at all. The amount of reflected sound energy depends strongly on the surface structure

of the obstacle and the inclined angle θ (in Figure 3.5).

Cross-talk: When more than one sensor or a sonar ring is used and one sensor receives the sound emitted by another sensor, the problem is called cross-talk. This is an undesirable phenomenon. Figure 3.6 explains the cross-talk problem in detail.

Figure 3.6: Cross-talk problem, adapted from Introduction to AI Robotics, R. Murphy [27].

Figure 3.7: Cross-talk problem, adapted from [3]. A critical path is any path of ultrasound that causes cross-talk.

The cross-talk problem is the result of specular reflection [27] in most cases. But according to Johann Borenstein, it can also occur without specular reflection; Figure 3.7, adapted from his article [3], describes such a situation. In the same article he proposes a method, Error Eliminating Rapid Ultrasonic Firing (EERUF), which can almost eliminate the cross-talk problem.

In this MS thesis, we have tried to develop an algorithm, HSR (Chapter 6),

which can handle specular reflection, and also cross-talk caused by specular reflection.

Chapter 4 Sonar Based Map Building

In this chapter, the theory behind grid based map building is described. The chapter begins by focusing on map representation with the popular evidence grid, followed by a discussion of the sensor model. Bayesian sensor fusion, the most popular sensor fusion method for mapping sensor model values onto the evidence grid and fusing them with existing values, is then discussed in detail.

4.1 Map Building

The robotic mapping problem is to acquire a spatial representation of the robot's working environment. In other words, it is a process of constantly updating the representation of knowledge about the robot's surroundings [7]. To collect information about the environment, i.e. to perceive the outside world, the robot needs devices called sensors. Cameras, range finders using sonar, laser and infrared technology, radar, tactile sensors, compasses and GPS are sensors commonly brought to bear on this task [35]. These sensors directly provide environmental information as images or range readings. We have used ultrasonic sonar sensors for map building, as they are easily available, inexpensive and easy to control [36, 27]. Generally, there are two approaches to sonar-based map building, according to the form of data representation:

Feature-based world model: The physical world is represented by points, lines or surface descriptions of the environment derived from the sensed data. The Kalman filter and the extended Kalman filter have been widely used in the feature-based world to integrate the sensory information and maintain the map [20].

Grid-based world model: Sensory information is represented on a two- or three-dimensional grid; each cell of the grid is either occupied or empty. This grid is also called an occupancy grid [27], as each element in the grid holds a value representing whether the corresponding location in space is occupied or empty. For ultrasonic sensors, the grid based model is more suitable than the other approach because of their unreliable readings [36]. This approach is based on a probabilistic model, Bayes rule [35]. It allows a statistical expression of confidence in the correctness of the data by projecting the sonar response onto the grid.
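As a minimal sketch of the grid-based world model, the structure below stores one occupancy probability per cell. The grid dimensions, the 10 cm resolution and the helper name world_to_cell are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

CELL_SIZE_MM = 100           # assumed resolution: one cell = 10 cm
GRID_W, GRID_H = 100, 100    # assumed 10 m x 10 m workspace

# P(Occupied) for every cell; 0.5 means "unknown", which is also
# the prior used by the Bayesian updating of Section 4.4.
grid = np.full((GRID_H, GRID_W), 0.5)

def world_to_cell(x_mm: float, y_mm: float):
    """Map a world coordinate (mm) to (row, column) grid indices."""
    return int(y_mm // CELL_SIZE_MM), int(x_mm // CELL_SIZE_MM)
```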

We are using ultrasonic sound sensors in our project, and they have a wide angle of view (most commonly 30 degrees). The evidence grid representation is quite useful for turning wide angle range measurements from ultrasonic sensors into detailed spatial maps. Moreover, diffuse readings from multiple sonars can be integrated on the evidence grid into an increasingly confident and detailed map of the robot's surroundings [37], and fusing the grid with other sensor readings increases the confidence values of the grid. Though the evidence grid approach is computationally expensive, it provides more robust and reliable navigation than other methods [23], and it has been used in a great number of robotic systems [35]. For these reasons, the evidence grid approach has been used for representing the physical world in our project too.

4.2 Evidence Grid

In 1983, Martin C. Martin and Hans P. Moravec first formulated the evidence grid at the CMU (Carnegie Mellon University) Mobile Robot Laboratory, to turn wide angle range measurements from cheap ultrasonic sonars mounted on a robot into spatial maps [25, 23]. It accumulates diffuse evidence about the occupancy of a grid of small volumes of nearby space from individual sensor readings into increasingly confident and detailed maps of a robot's surroundings [23]. The evidence grid is a two- or three-dimensional grid (2D is most commonly used, and we use it too) corresponding to the actual world. In the evidence grid, the world is defined as a discrete spatial lattice in which each cell of the grid is a discrete state variable [15, 10]. The evidence grid approach is a spatial world representation based on probability: each cell of the grid gets a value for the belief that it is occupied by an object, empty, or unknown, from each sensor readout and some evidential method¹. The grid can contain sufficient information about the environment, such as occupancy, reachability and danger, which can be used for various activities such as exploration, localization, motion planning, landmark identification and obstacle avoidance for a mobile robot. Using a 2D grid it is impossible to know the 3D shape of an object, but the position of the object can be found. The most common evidential methods for occupancy grids are Bayesian, Dempster-Shafer and HIMM [15, 27]. We have used the Bayesian approach, so only the Bayesian method is discussed.

Figure 4.1 is an example of an occupancy grid. In Figure 4.1, red, blue and green represent high evidence for being occupied, empty and unknown respectively. At the beginning, it is assumed that the environment is unknown. The robot moves along the Y-axis collecting the ultrasonic sensors' responses; no information is available beyond the maximum range, 10 units in the figure. One of the most attractive features of the occupancy grid is the possibility of incorporating information from sensors of different types in a single grid. Very accurate localization using an evidence grid, for a robot mounted with a set of 16 sonar sensors and a triangulation-based structured light range finder, has been demonstrated experimentally by A. C. Schultz and W. Adams [33].

¹ Evidential method: a sensor model together with some process to get numerical values from the sensor model.

4.3 Sonar Sensor Model

In order to incorporate raw sensory information, the sensor readouts, into the occupancy grid, a common representation of sensor information is essential, and the sensor model function is

used for converting sensory information to that common representation. Whatever method of updating uncertainty is used, it requires a sensor model. A sensor model can be generated in a number of ways [27]:

Empirical: based on testing. Test the sonar and collect data to establish the correctness of the sensor model result. The sensor model is formed from the set of beliefs over all possible observations, and the beliefs come from the frequency of correct readings in an observation.

Analytical: focused on the physical properties of the sensor, i.e., the sensor model is generated from an understanding of the physical properties of the device.

Subjective: the result of the designer's experience, an unconscious expression of empirical testing.

Figure 4.1: An example of an occupancy grid (taken from Thomas Hellström's class lecture [15]). Red, blue and green cells represent occupied, empty and unknown respectively.

The Polaroid ultrasonic sensor, or sonar, has been heavily studied, but the theoretical concepts of scoring, fusing beliefs and so on are applicable to any sensor [27]. We have used the Amigobot ActiveMedia robot, which carries 8 ultrasonic sonars [2]. Most roboticists have converged on a model of sonar uncertainty that looks like Figure 4.2; Figure 4.3 is the three-dimensional view of the standard sensor model. Figure 4.2 shows a basic model of a single sonar beam. The sonar beam has a field of view specified by β, the half angle representing the width of the cone, and R, the maximum range the sensor can detect. This field of view can be projected onto the evidence grid. From Figure 4.2, the model can be divided into four different regions:

Region 1: The elements of this region are probably occupied (drawn as a hill in Figure 4.3), since the readout was returned from this region.

Figure 4.2: Two-dimensional representation of a sonar beam projected onto an occupancy grid. β is the half angle of the sensor cone width and R is the maximum range of the sonar. Grid elements are divided into four regions (1, 2, 3 and 4).

Figure 4.3: Three-dimensional representation of a sonar beam (taken from R. Murphy [27]).

Region 2: The affected elements of this region are probably empty (drawn as a valley in Figure 4.3), as this region lies between the sonar and the obstacle.

Region 3: This region is beyond the sensor readout but inside the field of view and within the maximum range of the sonar. So, logically, the condition of the affected elements is unknown (drawn as a flat surface in Figure 4.3).

Region 4: This region is outside the sonar sensor's field of view, so nothing is

known about the elements of this region.

For a given range reading, the elements of Region 2 are more likely to be really empty than the elements of Region 1 are to be really occupied [27]. Elements close to the acoustic axis are more reliable than those towards the edge of the sonar cone, regardless of whether they are empty or occupied. In a nutshell: the closer to the acoustic axis, the higher the belief; and the nearer to the origin of the sonar beam, the higher the belief.

The sensor model discussed above reflects a general concept. There are several different ways to convert the discussed model into a numerical value for belief. Bayesian sensor fusion, or the Bayesian approach, is one of the most popular [25, 27]; two other methods, Dempster-Shafer theory and HIMM, are also well known. Each of the three methods does the translation slightly differently [27]. In our project we have used the Bayesian approach, so only the Bayesian approach is discussed in detail in the next section.

4.4 Bayesian Updating / Bayesian Sensor Fusion

The most popular method for fusing evidence is to translate sensor readouts into probabilities and to combine the probabilities using Bayes rule. In the early 1980s, Moravec and Elfes at CMU (Carnegie Mellon University) pioneered this probabilistic approach; later Moravec turned to a form of Bayes rule that uses probabilities expressed as likelihoods and odds, which has some computational advantages as well as some problems with priors. In the Bayesian approach, the sensor model generates conditional probabilities of the form P(s|H). These are then converted into the desired P(H|s) using Bayes rule. It is also possible to fuse two probabilities, either from two different sensors sensing at the same time or from two different times, using Bayes rule. The following subsections describe this step by step; this part has been adapted from R. Murphy's Introduction to AI Robotics [27].

4.4.1 Brief Theoretical Review of Bayesian Method

Let H stand for a hypothesis obtained from an experiment. In our case (updating an occupancy grid using ultrasonic sonar), the experiment is sending the acoustic wave out and measuring the time of flight, and the outcome of the range reading reports whether the region being sensed is empty or occupied. So the hypothesis for an element grid[i][j] is Empty or Occupied; this can be written H = {H, ¬H} or H = {Occupied, Empty}. A probability function scores evidence on a scale of 0 to 1 for a particular event H. The probability that H has really occurred is denoted P(H):

0 ≤ P(H) ≤ 1

One of the most useful features of probabilities is that if the probability of an event happening is known, then the probability of it not happening can be found easily. As we know

that the probabilities of an event and its complement sum to 1, if P(H) is known then P(¬H) can be computed from the formula

P(¬H) = 1 - P(H)

So far we have discussed unconditional probabilities. As these probabilities give us only a priori information and are quite independent of the sensor readings s, they are not very interesting. It is more useful for a robot to have a function computing the state, or probability, of an element grid[i][j] for a particular sensor readout s, where the state is either Occupied or Empty, and conditional probabilities give us this sort of information. P(H|s) represents the probability that H has really occurred given a particular sensor reading s; the | denotes "given". Conditional probabilities² also have the property

P(H|s) + P(¬H|s) = 1.0

For the occupancy grid, P(Occupied|s) and P(Empty|s) are calculated for each element grid[i][j] covered by the sensor scan s, and at each grid element the tuple of the two probabilities for the region is stored. But we still need a transfer function that can turn sensor readings into probabilities for each element, in the manner of Figure 4.2. This transfer function is discussed in Subsection 4.4.2, "Getting Numerical Value"; for now we can proceed assuming that we have these numerical values, since they are not essential for what follows.

From the sensor model we get P(s|H): the probability that the sensor would return the value being considered given that the cell was really occupied. But we need the probability that the area at grid[i][j] is really occupied given a particular sensor reading s: P(H|s). So we need a relation to obtain P(H|s) from P(s|H), and Bayes rule is the tool that specifies the relationship between P(s|H) and P(H|s). The following is a short note on Bayes rule, adapted from [22, 31].

Bayes rule: Let A and B be two events with P(A) > 0 and P(B) > 0, and let P(A|B) be the conditional probability of event A when event B is given. Then

P(A|B) = P(A ∩ B) / P(B)    (4.1)

where P(A ∩ B) is the probability of the intersection of A and B, with

A ∩ B = {x : x ∈ A and x ∈ B}    (4.2)

We can rewrite (4.1) as

P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A)    (4.3)

If {B_i} is a countable set of exhaustive events and A is an arbitrary event, then according to the total probability rule

P(A) = Σ_{i∈I} P(B_i)P(A|B_i)    (4.4)

² In [27], page 381, this is stated as "Unconditional probabilities also have the property". We believe this is a printing mistake and it should read "Conditional probabilities also have the property".

where P(B_i) > 0, B_i ∩ B_j = ∅ (i ≠ j; i, j ∈ I), A = ∪_{i∈I} (A ∩ B_i), and I ⊆ {1, 2, 3, ..., n}.

Based on the conditional probability and total probability rules described in equations (4.1) to (4.4), the Bayes formula can be generalized as [22]

P(B_j|A) = P(A|B_j)P(B_j) / Σ_{i∈I} P(B_i)P(A|B_i)    (4.5)

where {B_i} is a countable set of exhaustive events such that P(B_i) ≠ 0 and A is an event with P(A) ≠ 0. Bayes formula (4.5) can be expressed informally in English [31] as

posterior = (likelihood × prior) / evidence    (4.6)

From equation (4.5) it is clear that by observing the value of A we can convert P(A|B_j) to the posterior probability P(B_j|A), the probability of the state of the element being B_j given that A has been measured, with the help of the prior probability P(B_j). The term P(A|B_j) is called the likelihood of B_j with respect to A. The posterior probability is the product of the likelihood and the prior probability; the evidence factor, Σ_{i∈I} P(B_i)P(A|B_i), works as a scaling factor which ensures that the posterior probabilities sum to one, as all good probabilities must [31].

So, from the above short note on Bayes rule, we can obtain our expected probability P(H|s) from P(s|H) by using relation (4.5) as follows:

P(H|s) = P(s|H)P(H) / [P(s|H)P(H) + P(s|¬H)P(¬H)]    (4.7)

Substituting Occupied for H, equation (4.7) becomes

P(Occupied|s) = P(s|Occupied)P(Occupied) / [P(s|Occupied)P(Occupied) + P(s|Empty)P(Empty)]    (4.8)

P(s|Occupied) and P(s|Empty) are obtained from the sensor model. The other terms, P(Occupied) and P(Empty), are unconditional or prior probabilities, usually called priors. If the prior probabilities are known, it is straightforward to convert the probabilities obtained from the sensor model into the form of the occupancy grid. In some cases the prior probabilities are known, but in most cases they are not. If this knowledge is not available, it is assumed that P(Occupied) = P(Empty) = 0.5, according to [27].
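A small sketch of equation (4.8) in Python: with the uninformative priors P(Occupied) = P(Empty) = 0.5 the priors cancel, so the posterior simply renormalizes the two sensor model values. The function name and the example numbers are ours, not from the thesis.

```python
def p_occupied_given_s(p_s_occ: float, p_s_emp: float,
                       prior_occ: float = 0.5) -> float:
    """Equation (4.8): P(Occupied|s) from the sensor model values
    P(s|Occupied) and P(s|Empty) and the prior P(Occupied)."""
    num = p_s_occ * prior_occ
    return num / (num + p_s_emp * (1.0 - prior_occ))

# With P(s|Occupied) = 0.62, P(s|Empty) = 0.38 and 0.5 priors,
# P(Occupied|s) is also 0.62.
print(p_occupied_given_s(0.62, 0.38))
```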

4.4.2 Getting Numerical Value

How to get the numerical values of P(s|Occupied) and P(s|Empty) from the sensor model described in Section 4.3 is explained in detail here. This section is one of the most important parts of this MS thesis.

According to R. Murphy [27], one set of functions that can quantify the sensor model into probabilities is as follows. Each grid element falling in Region 1 should be updated using the equations

P(s|Occupied) = [ (R - r)/R + (β - α)/β ] / 2 × Max_occupied    (4.9)

P(s|Empty) = 1.0 - P(s|Occupied)    (4.10)

where r is the distance of the grid element from the sensor position, α is the angle of the grid element relative to the acoustic axis of the sensor, R is the maximum range the sensor can sense, and β is the half angle of the sensor cone. Max_occupied reflects the assumption that a reading of "occupied" is never completely reliable [27].

From the above equations we can make some observations that any sensor model should satisfy: the term (β - α)/β represents the idea that the closer the grid element is to the acoustic axis, the higher the belief, and the term (R - r)/R captures the idea that the nearer the grid element is to the origin of the sonar beam, the higher the belief. Generally the value of Max_occupied is 0.98, which means that a grid element can never have a belief of being occupied greater than 0.98.

Another important quantity is the thickness or width of Region 1, generally called the tolerance, ε. The tolerance arises from the resolution of the sonar. One common value of the tolerance is ±0.5, but it may take other values too. The width of Region 1 depends on this parameter, so the tolerance has a significant impact on how many elements of the grid are covered; in other words, Region 1 has a finite thickness because of the tolerance. For a pictorial explanation see Figure 4.4; the dark cell represents the element of interest in the grid.

Figure 4.4: An example of calculating (α, r) for elements of Region 1. Here the sensor reading s = 6, the cell of interest is the dark one, R = 10 and β = 15° (taken from R. Murphy [27]).
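Equation (4.9) can be checked numerically against the setting of Figure 4.4 (s = 6, R = 10, β = 15). The α = 5 chosen for the cell below is our own illustrative value; the figure's exact cell may lie at a different angle.

```python
MAX_OCCUPIED = 0.98

def region1_p_occupied(r: float, alpha: float,
                       R: float = 10.0, beta: float = 15.0) -> float:
    """Equation (4.9) for a Region 1 cell at distance r and
    angle alpha from the acoustic axis."""
    return ((R - r) / R + (beta - alpha) / beta) / 2.0 * MAX_OCCUPIED

# A cell on the range arc of the reading (r = s = 6), 5 degrees
# off the acoustic axis:
p_occ = region1_p_occupied(r=6.0, alpha=5.0)
print(round(p_occ, 2))        # ((0.4 + 0.667) / 2) * 0.98 = 0.52
print(round(1.0 - p_occ, 2))  # equation (4.10): P(s|Empty)
```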

Every grid element falling in Region 2 is updated according to the formulae

P(s|Occupied) = 1.0 - P(s|Empty)    (4.11)

P(s|Empty) = [ (R - r)/R + (β - α)/β ] / 2    (4.12)

All parameters have the same meaning as for Region 1, except that Max_occupied does not appear, because in Region 2 a grid element can have full probability of being empty [27]. Figure 4.5 explains the generation of probabilities for Region 2 with an example calculation.

Figure 4.5: An example of calculating (α, r) for elements of Region 2. Here the sensor reading s = 6, the cell of interest is the dark one, R = 10 and β = 15° (taken from R. Murphy [27]).

The sensor model obtained from equations (4.9), (4.10), (4.11) and (4.12) looks like Figure 4.6.
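Putting equations (4.9)-(4.12) together with the four regions of Figure 4.2 gives a single transfer function. The ordering of the region tests below is our own reading of the model, not code from the thesis; Regions 3 and 4 return None to signal "no update".

```python
def sensor_model_p_occ(r: float, alpha: float, s: float,
                       R: float = 10.0, beta: float = 15.0,
                       tol: float = 0.5,
                       max_occupied: float = 0.98):
    """P(s|Occupied) for one cell, or None outside Regions 1-2."""
    if alpha > beta or r > R:
        return None                      # Region 4: outside the cone
    if r > s + tol:
        return None                      # Region 3: unknown, no update
    common = ((R - r) / R + (beta - alpha) / beta) / 2.0
    if abs(r - s) <= tol:
        return common * max_occupied     # Region 1, eq. (4.9)
    return 1.0 - common                  # Region 2, eqs. (4.11)-(4.12)
```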

Figure 4.6: (a) Two-dimensional and (b) three-dimensional representation of a sonar beam projected onto an occupancy grid using equations (4.9), (4.10), (4.11) and (4.12).

So far we have discussed only one reading; the fusion of more than one reading remains. The following subsection explains how to fuse one reading with other readings on the evidence grid using the Bayesian method.

4.4.3 Updating With Bayes Rule

As we do not have any prior knowledge about the environment, we can assume the priors to be 0.5, i.e., P(H) = P(¬H) = 0.5, according to the Brief Theoretical Review of the Bayesian Method (Subsection 4.4.1) of this report. So the grid is initialized with 0.5, and as a result the first update is quite simple: for the first observation, Bayes rule is applied to compute the new probability, and the prior P(H) is replaced with the new value.³

A problem arises at the time of the second or later observations, or for observations made by other sensors at the same time, because then the priors are not known directly. But we can use Bayes rule iteratively, so that the probability at time t_{n-1} is used as the prior for calculating the probability of the current observation at t_n. With this concept, Bayes rule becomes

P(H|s_1, s_2, ..., s_n) = P(s_1, s_2, ..., s_n|H)P(H) / [P(s_1, s_2, ..., s_n|H)P(H) + P(s_1, s_2, ..., s_n|¬H)P(¬H)]    (4.13)

where s_1, s_2, ..., s_n are the n observations. From equation (4.13) we see that we would need a sensor model providing P(s_1, s_2, ..., s_n|H), the occupied or empty value for grid element grid[i][j] for every combination of n sensor readouts, instead of generating only P(s|H), the evidence value for a single sensor readout. We can simplify P(s_1, s_2, ..., s_n|H) to P(s_1|H)P(s_2|H)...P(s_n|H) if we consider s_1 to be the result of a different experiment than s_2 and the others. But then we would need to know all previous (n-1) readings, and it is impossible to predict how many times an element of the grid will be sensed. To implement this idea, one would need a linked-list data structure for each element of the grid, and the lists could become very long, even hundreds of elements. Moreover, equation (4.7) needs only 3 multiplications, whereas this update needs 3(n-1) multiplications. So from a programming point of view it is not realistic. Fortunately, a recursive version of Bayes rule can be derived using the property P(H|s)P(s) = P(s|H)P(H). The recursive Bayes rule is

P(H|s_n) = P(s_n|H)P(H|s_{n-1}) / [P(s_n|H)P(H|s_{n-1}) + P(s_n|¬H)P(¬H|s_{n-1})]    (4.14)

With each new observation, equation (4.14) is calculated and the result is stored at grid[i][j]. As the rule is commutative, the order of the simultaneous readings used to update the grid does not affect the result at all.

³ The use of 0.5 as priors can give the impression that P(Occupied|s) and P(s|Occupied) are interchangeable, but in general P(H|s) ≠ P(s|H) [27].
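A sketch of the recursive update (4.14): the value stored in the cell, P(H|s_{n-1}), acts as the prior when the next reading arrives. The function name and example values are ours.

```python
def bayes_update(cell_p_occ: float, p_s_occ: float) -> float:
    """Equation (4.14): fuse one new reading into a cell.

    cell_p_occ: stored P(Occupied | s_1..s_{n-1}).
    p_s_occ:    P(s_n | Occupied) from the sensor model.
    """
    p_s_emp = 1.0 - p_s_occ
    num = p_s_occ * cell_p_occ
    return num / (num + p_s_emp * (1.0 - cell_p_occ))

# Two consistent "occupied" readings raise the belief, and the
# order does not matter (the rule is commutative):
p = bayes_update(0.5, 0.62)    # 0.62 after the first reading
p = bayes_update(p, 0.70)      # about 0.79 after the second
```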

We now have all the information necessary for using the Bayesian method on an evidence grid for robotic map building. All maps in this report have been developed using the method described above, except for the equations converting the sensor model into numerical values: we propose a new sensor model, the parameterized sensor model, with different transfer equations. Chapter 5 contains the details of our proposed model.
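For orientation, the sketch below shows how the pieces of this chapter might be stitched together for one reading: sweep the cells, query the sensor model, and fuse with the recursive Bayes rule. It reuses the sensor_model_p_occ and bayes_update sketches above and simplifies the geometry (sensor at the grid origin, pointing along the +x axis); the thesis implementation handles the full robot pose.

```python
import math

def integrate_reading(grid, s: float, R: float = 10.0,
                      beta: float = 15.0) -> None:
    """Fuse a single sonar reading s into the occupancy grid."""
    rows, cols = grid.shape
    for i in range(rows):          # i: y index
        for j in range(cols):      # j: x index
            r = math.hypot(j, i)
            alpha = abs(math.degrees(math.atan2(i, j)))
            p_s_occ = sensor_model_p_occ(r, alpha, s, R, beta)
            if p_s_occ is not None:
                grid[i][j] = bayes_update(grid[i][j], p_s_occ)
```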


Chapter 5 Proposed Sensor Model and Measurement of Map Goodness

In this chapter, we first describe our proposed parameterized sensor model, with a description of its parameters, in Section 5.1. Sections 5.2 and 5.3 cover the parameter optimization and map goodness measurement methods tried in this project, together with their implementations. Results, discussion and conclusions for this part of the project, including future work, are given in Section 5.4.

5.1 Proposed Parameterized Sensor Model

We¹ have tried to improve the sensor model described in detail in Chapter 4. There, the Bayesian approach for building sonar based maps was the focus, with equations to convert sensor model values into probabilities. But if we look closely at Figure 4.6 (page 24), we find some discrepancies with the expected sensor model of Figure 4.3 (page 18). From a theoretical point of view, in a good sensor model Region 1 should not contain beliefs that suggest empty space to the robot, and likewise Region 2 should not contain occupied information. In the 2D part of Figure 4.6 of the standard sensor model, there are some red cells at the corners of the sensor cone in Region 2, as well as some blue cells at the corners of Region 1. From the description of the evidence grid, we know that red cells and blue cells stand for high probability of being occupied and of being empty respectively. In the 3D view of the sensor model, these cells (red in Region 2, blue in Region 1) appear as spikes in the opposite direction, i.e., upward peaks in Region 2 and downward spikes in Region 1.

Again, because of the sensor's sensitivity (the resolution of the sonar), the exact distances or ranges of the obstacle cannot be believed one hundred percent [27]. That is why the sensor model includes a tolerance value in Region 1; the tolerance is the parameter that tells us the error range of the sensor model. If we take the sensor model described in Chapter 4 and plot the trend of the beliefs (probabilities), both empty and occupied, along the acoustic axis, where the angle between the grid elements and the sensor origin is zero, i.e., α = 0 (α is the notation from Chapter 4), we get a graph like Figure 5.1.

¹ In this report, I write "we" instead of "I" in several sections and chapters, especially in the chapters on our proposed models and methods. To me, this is more logical.

Figure 5.1: Probability trend for a sensor readout along the acoustic axis, α = 0. Sensor readout s = 5, Max occupied = 0.9 and Tolerance = ±0.5 in Region 1. r is the distance of the grid elements from the sensor origin. The graph was generated by simulating equations (4.9), (4.10), (4.11) and (4.12).

In Figure 5.1, the following features are noticed in Region 1:

- at the beginning of the region, the belief is the highest;
- towards the end of the region, the belief decreases gradually.

But it is more logical to give the highest belief to the point that was obtained as the return of the sensor reading, and both sides of that point should carry lower beliefs. For example, if the sensor reading is 500 units, the point that lies exactly 500 units away from the origin of the sensor should be more reliable than the others in Region 1. As a result, the belief or probability trend for Region 1 should look like Figure 5.2.

Figure 5.2: Proposed belief/probability trend for Region 1 along the acoustic axis, α = 0. This is the generic model, without saying anything about parameters. Here s = assumed exact sensor readout obtained by a sensor, tol = tolerance for Region 1.

This figure (Figure 5.2) gives the model only for Region 1 on the acoustic axis, α = 0. It represents just the generic model of Region 1 without saying anything about

the parameters, such as how much higher the probability at s should be with respect to the other points of Region 1, or with respect to the horizontal line shown in the figure. Not even the names or symbols of the parameters are given here. The detailed description comes later in this section, with the complete proposed model.

If we look at Region 2 in Figure 5.1, it is noticeable that the lowest belief of this region is zero (Pr = 0) at the beginning of the region. And by analyzing equations (4.11) and (4.12) (on page 23), which convert sensor model values to probabilities or beliefs for Region 2, the highest belief of this region goes up to 0.5 (if r = R and α = 0). But a belief of 0.5 states that the status of a grid element, grid[i][j], is unknown, i.e., neither occupied nor empty. That (a 0.5 belief in Region 2) is not the correct interpretation for a good sensor model. So we have the opportunity to modify the belief of Region 2 with a parameter that controls the highest belief of this region. Again, if the belief of a grid element, grid[i][j], is zero or very small (close to zero), and the same element is later judged by another sensor to be occupied, then the updating of that element can get stuck at zero or become very slow. This happens when the sensor reports readings shorter than a threshold value. For example, with the Amigobot ActiveMedia robot, objects that are closer than 10 cm are reported as 10 cm away [2]. But the same places can be registered as occupied at values beyond 10 cm by another sensor, or by the same sensor at another time. So it is not wise to let the lowest probability be zero; it should be greater than zero, and we can control it with another parameter. As a result, the generic model of Region 2 looks like Figure 5.3.

Figure 5.3: Proposed belief/probability trend for Region 2 along the acoustic axis, α = 0. Like Region 1, this is the generic model without saying anything about parameters. Here s = assumed exact sensor readout obtained by a sensor, tol = tolerance for Region 1.

Similarly, for Region 1 we can say that this region should not express an opinion about a grid element as unknown. So another parameter, which controls the lowest belief value, can be introduced in Region 1, and Figure 5.2 can be modified into Figure 5.4.

From the above discussion we can derive a complete modified probability trend for the acoustic axis, where α = 0. Including the parameters, the proposed model looks like Figure 5.5.

Actually, we have defined the sensor model in two phases.

Figure 5.4: Updated version of Figure 5.2.

Figure 5.5: Proposed Parameterized Sensor Model.

One phase is for the acoustic axis, i.e., for α = 0, and the other phase is for 0 < α ≤ P5, the view of focus of the sensor. Our concept is to develop the model for the acoustic axis and then distribute the probabilities/beliefs to both sides of the acoustic axis. So the highest priority is given to the acoustic axis. In Figure 5.5, the notation $P(S \mid O_{r,\alpha})$ represents the probability/belief of the grid element that forms an angle α and lies at straight-line distance r from the sensor origin. The belief graph $P(S \mid O_{r,\alpha=0})$ versus r is thus the model for the acoustic axis, and equation (5.1) distributes the probability values over the whole area covered by the sensor's

view of focus. If the view of focus of the sensor is P5, the distribution equation for the range 0 < α ≤ P5 is as follows:

$$P(S \mid O_{r,\alpha}) = 0.5 + \frac{P_5 - \alpha}{P_5}\left(P(S \mid O_{r,\alpha=0}) - 0.5\right) \quad (5.1)$$

This equation ensures the following:

- The probability of any grid element, grid[i][j], in Region 2 off the acoustic axis will not be more than 0.5, so no spikes show up in this region.
- There will be no valley in Region 1, as the probability on the axis is at least $(0.5 + P_2) \geq 0.5$, where $P_2 \geq 0$.
- With increasing values of α, the $P(S \mid O_{r,\alpha})$ values move towards 0.5: the farther from the acoustic axis, the weaker the belief. This is one of the most important features of a good sensor model according to many researchers [27, 15].

Our proposed sensor model is shown in 2D in Figure 5.6 and in 3D in Figure 5.7.

Figure 5.6: 2D view of the Proposed Parameterized Sensor Model with sensor readout s = 7.5 units. The x-axis is along the sensor readout.

Figure 5.7: 3D view of the Proposed Parameterized Sensor Model with sensor readout s = 7.5 units. The x-axis is along the sensor readout. No spikes point in the opposite direction in Region 1 or Region 2.

In our proposed Parameterized Sensor Model we use a number of parameters to control the probabilities/beliefs. They are described as follows:

Parameter 1 (P1): This parameter works as the threshold for the minimum probability value in Region 2 on the acoustic axis. Its purpose has already been described briefly above. Using P1 = 1 − P4 is a good idea, for the reason given in the description of P4, but it is also interesting to assign this parameter its own individual value.

Parameter 2 (P2): This parameter controls the highest probability in Region 2 and the lowest probability in Region 1 along the acoustic axis. In another sense,

this parameter is the threshold between known (empty or occupied) and unknown (probability = 0.5) elements along the acoustic axis. If someone wants to allow the transition probability 0.5 between Region 1 and Region 2, setting P2 = 0 is enough.

Parameter 3 (P3): P3 denotes the tolerance for Region 1. Several researchers use ε (epsilon) to represent this tolerance, and we also used epsilon for this parameter in Chapter 4. For consistency of notation, we use the P notation from now on.

Parameter 4 (P4): The maximum probability of occupancy for a grid element is controlled by the parameter P4. The reason has already been given above in this section and also in Chapter 4. As P1 is the maximum probability of emptiness of a grid element, one can consider P1 = 1 − P4, so that Occupied Maximum + Empty Maximum = 1.² But this is not a mandatory rule, just an interesting relation between P1 and P4.

² Empty Maximum means the lowest probability.

Parameter 5 (P5): The view of focus of the sensor is one of the most important parameters in sonar-based map building, and P5 stands for it. P5 is not used in the acoustic-axis part of our proposed model.

We convert sensor model values to probabilities using linear (straight line) equations along the acoustic axis. In Figure 5.5 we have used line1, line3 and line4 to represent these linear equations:

$$P(S \mid O_{r,\alpha=0}) = \begin{cases} \text{line1} & r < (S - P_3) \\ \text{line3} & (S - P_3) \leq r < S \\ \text{line4} & S \leq r \leq (S + P_3) \end{cases} \quad (5.2)$$

We can also express these as explicit straight-line equations:

line1: $y = P_1 + \dfrac{0.5 - P_1 - P_2}{S - P_3}\, r$, if $r < (S - P_3)$  (5.3)

line3: $y = (0.5 + P_2) + \dfrac{P_4 - 0.5 - P_2}{P_3}\,(r - S + P_3)$, if $(S - P_3) \leq r < S$  (5.4)

line4: $y = P_4 + \dfrac{0.5 + P_2 - P_4}{P_3}\,(r - S)$, if $S \leq r \leq (S + P_3)$  (5.5)

In our sensor model, we have divided the working area into two regions, like the existing model discussed in Chapter 4:

Region 1 (R-I): This region is represented by line3 and line4 (see Figure 5.5). line3 describes the increasing probabilities, whereas line4 stands for the decreasing beliefs in Region 1. The intersection of line3 and line4 is the peak value, the probability or belief at the sensor readout s, which is the highest belief in Region 1.

Region 2 (R-II): line1 (see Figure 5.5) is the sensor model for this region along the acoustic axis.

So far we have described all regions and parameters, the probability conversion functions of the sensor model along the acoustic axis (α = 0), and the equation (equation (5.1)) for distributing the probabilities of the acoustic axis over the view of focus of the sensor. Now we can take a comparative look at the output of our sensor model and the model discussed in Chapter 4. Maps of the same environment have been generated with the existing and the proposed sensor models in Figure 5.8, using the same parameter values as far as possible.
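The following is a minimal MATLAB sketch of the proposed model, evaluating the belief at a cell at distance r and angle alpha from the sensor. It is only an illustration under our reading of equations (5.1)-(5.5), in particular the 0.5 offset in (5.1) as reconstructed above; the function name is ours:

function p = psmBelief(r, alpha, S, P1, P2, P3, P4, P5)
% Belief P(S|O_{r,alpha}) of the proposed Parameterized Sensor Model.
% On the acoustic axis: equations (5.2)-(5.5); off-axis: equation (5.1).
    if alpha > P5 || r > S + P3
        p = 0.5;                                       % outside the sonar cone: unknown
        return;
    end
    if r < S - P3                                      % Region 2: line1
        pAxis = P1 + (0.5 - P1 - P2)/(S - P3) * r;
    elseif r < S                                       % Region 1, rising: line3
        pAxis = (0.5 + P2) + (P4 - 0.5 - P2)/P3 * (r - S + P3);
    else                                               % Region 1, falling: line4
        pAxis = P4 + (0.5 + P2 - P4)/P3 * (r - S);
    end
    % Equation (5.1): fade the belief towards 0.5 away from the acoustic axis
    p = 0.5 + (P5 - alpha)/P5 * (pAxis - 0.5);
end

For example, psmBelief(7.5, 0, 7.5, 0.02, 0.15, 1.0, 0.98, 15) returns P4 = 0.98, the peak belief at the readout on the acoustic axis.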

Figure 5.8: Maps built (a) using the standard sensor model (described in Chapter 4) with Beta (P5) = 15, Tolerance (P3) = 1.0, Max Occupied (P4) = 0.98, and (b) using the Proposed Parameterized Sensor Model with some arbitrary parameter values: P1 = 0.02 (P1 = 1 − P4), P2 = 0.15, P3 = 1.0, P4 = 0.98, P5 = 15. For data collection, an Amigobot (with 8 ultrasonic sensors) has been used.

In Figure 5.8b, the proposed sensor model has been used with some arbitrary parameter values (P1 = 0.02 (P1 = 1 − P4), P2 = 0.15, P3 = 1.0, P4 = 0.98, P5 = 15). But the proposed model works better with one or more specific sets of parameters. We call such a set an optimal parameter set (in this report we speak of the set of parameters, though several sets are also possible). In Section 5.2 of this chapter we describe the methods for finding the optimal parameter set.

5.2 Parameter Optimization of the Proposed Model

In order to get the best result with the proposed sensor model described in the section above, an optimal parameter set is needed. Such a set can be found by measuring the goodness of the maps generated by the sensor model. Some existing methods for this measurement were described in Section 2.2 of the Literature Review chapter. In this project, however, we have tried some new approaches for measuring the goodness of maps. All existing methods so far use an original handcrafted map of the working environment (ground map) for this purpose, whereas we have tried approaches that give this measurement without any handcrafted map of the working environment. The next section explains these approaches in detail, together with their implementations.

5.3 Measurement of map goodness

In this MS Thesis project, we have tried to find a statistical method for measuring the goodness of maps without using a ground map³, in several variant approaches. The basis of our first approach is that a good sensor model should give opinions about a cell of the occupancy grid that are as uniform as possible, whether the cell denotes an empty or an occupied place in the environment. As a result, the standard deviation of the values assigned to a cell while generating the map should be as small as possible. For example, let grid[x][y] represent an occupied cell covered by several sensor readings. Then grid[x][y] is updated with several (different or equal) values obtained from the sensor model via the Bayesian updating process. The more similar the values obtained from the sensor model are, the more optimal the sensor model is.

Another approach is to find the noise in a map by using the derivative of the image (described in detail under "Derivative of maps and Comparison" below) and comparing the noise among all maps; the map containing the lowest noise is the best one. The third approach is to run the robot more than once in the same environment with the same odometry⁴ position and even the same odometry heading, compare the resulting maps for different sets of parameters, and assign scores based on the matching method of the first method of [26]. The highest correct matching combined with the lowest incorrect matching denotes the best map. This approach is described under "Comparing two different maps of same Environment" below.

Mean of Standard Deviations (First Approach): This is our first approach, based on the standard deviation. For each cell of the mapping grid of a single map, the numerical values generated by our proposed model are saved and their standard deviation is calculated. If the grid's dimension is $L \times M \times N$, then for each cell $X_{i,j}$ in the $L \times M$ plane the standard deviation over the third dimension (towards N) is

$$\sigma_{i,j} = \sqrt{\frac{1}{N-1}\sum_{k=1}^{N}\left(X_{i,j,k} - \bar{X}_{i,j}\right)^2}, \qquad i \in \{1,\dots,L\},\; j \in \{1,\dots,M\}, \qquad \bar{X}_{i,j} = \frac{1}{N}\sum_{k=1}^{N} X_{i,j,k}.$$

So we have an $L \times M$ matrix MSD containing the standard deviation σ for each element:

$$MSD = \begin{pmatrix} \sigma_{1,1} & \sigma_{1,2} & \dots & \sigma_{1,M} \\ \vdots & \vdots & & \vdots \\ \sigma_{L,1} & \sigma_{L,2} & \dots & \sigma_{L,M} \end{pmatrix}$$

But we are only interested in the non-zero elements of MSD. A vector consisting of the non-zero $\sigma_{i,j}$ can be formed from MSD; in this analysis it is called the vector of standard deviations of non-zero sigma, VSDNS. So $VSDNS = [\sigma_1\; \sigma_2 \dots \sigma_P]$, where $\sigma_p = \sigma_{i,j} \neq 0$, $p = 1, 2, \dots, P$, and P is the total number of non-zero elements of the matrix MSD. Each VSDNS can be summarized by its arithmetic mean, $\overline{VSDNS} = \bar{\sigma} = \frac{1}{P}\sum_{p=1}^{P}\sigma_p$.

For our project, we generated 1440 maps with different combinations of parameters and calculated $\overline{VSDNS}_l$, $l = 1, \dots, 1440$. After generating a histogram⁵ of the 1440 VSDNS means with 14 bins, we found that the histogram resembles a normal distribution, as shown in Figure 5.9.

³ Ground Map: map of the actual environment drawn by hand or with any other painting tool, not generated by the robot.
⁴ Odometry: the position of the robot with respect to the global coordinate system at any moment. In a 2D coordinate system (x-y), a point's position needs only (x, y), and for denoting direction we need the angle θ with respect to the x-axis. So odometry means (x, y, θ) of the robot at that time. The θ is also called heading, heading angle or robot heading.
⁵ Histogram: a bar graph that shows frequency data.
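As an illustration of the first approach, the following minimal MATLAB sketch computes the VSDNS mean, assuming the per-cell sensor-model values collected during mapping are stored in a cell array vals, with vals{i,j} holding the vector of values with which grid cell (i,j) was updated (the storage layout and function name are our assumptions, not the thesis code):

function m = vsdnsMean(vals)
% Mean of standard deviations over all cells with a non-zero sigma.
    [L, M] = size(vals);
    sigmas = [];
    for i = 1:L
        for j = 1:M
            v = vals{i,j};
            if numel(v) > 1
                s = std(v);               % sample standard deviation, 1/(N-1) normalization
                if s ~= 0
                    sigmas(end+1) = s;    %#ok<AGROW> collect non-zero sigma into VSDNS
                end
            end
        end
    end
    m = mean(sigmas);                     % the arithmetic mean over VSDNS
end

One such mean is computed per generated map; the parameter sets giving the smallest means are the candidates for the optimum.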

The table in Figure 5.9(b) shows the count in each bin.

Figure 5.9: (a) Histogram generated from the means of standard deviations (14 bins). (b) Table containing the values of the histogram.

As lower standard-deviation vectors (VSDNS) represent better parameter sets, we can base our decision about the parameters on the lowest bin of this histogram. The width of the bin containing the lowest values can be called the Expected Interval or Flat Interval for our parameter sets in this project, and we can find all maps, as well as all parameter sets, that fall into this expected interval. If we take the average of these parameter sets for each individual parameter, we get the best value of each parameter and thereby the optimum parameter set. Table 5.1 shows the ranges of the different parameters used for testing this method (2nd and 3rd columns) and the optimum parameter set obtained by this approach (4th column). In Figure 5.10, subfigure (d) shows the map for the optimal parameter set found by this method. If instead the minimum VSDNS is considered, the obtained parameter set is given in the 5th column of the table and the corresponding map is subfigure (e). If, rather than using the histogram, we consider the mean over all VSDNS, we get subfigure (f), which is remarkably better than subfigures (d) and (e) in Figure 5.10, though this is a crude method; the corresponding parameter values are in the 6th column of Table 5.1.

If we look at Table 5.2, which gives a deeper insight into the first bar of the histogram in Figure 5.9, some tendencies can be observed for the individual parameters. Lower values of P3 and P5 and higher values of P2 within the search space give smaller VSDNS, whereas P1 and P4 do not show any specific pattern. We can call this observation a stability test for the parameters.

But there are some flaws in this analysis method. Taking the decision from a histogram is not very sound, because the number of bins is arbitrary. The method also needs a lot of memory and a long running time: about 50 hours were spent generating the 1440 maps for the VSDNS computation on a Pentium 3 (1.5 GHz) machine with Windows XP and 1.0 GB of memory.

Table 5.1: Parameter ranges used, with minimum (2nd column) and maximum (3rd column) values per parameter (P1 through P5). The 4th column gives the optimal parameter set obtained from the lowest bin of the histogram in Figure 5.9, the 5th column the set with the lowest VSDNS, and the 6th column the set obtained by taking the arithmetic mean over all 1440 VSDNS. *P1: the relation P1 = 1 − P4 has been used.

Derivative of maps and Comparison (Second Approach): Here the basic concept is that each cell of the grid should contain a value similar to those of its nearest neighbors. This can be quantified using the concept of image derivatives [16]. If we calculate this value for all cells of the grid and apply some statistical operation such as the mean or the sum (we have tested with the sum), we get a measurement for the whole mapping grid. In this project we call this measurement the noise of a map, and it is calculated as follows. Let the 2D mapping grid be $M \times N$ and the window be $W = m \times n$. Then for any cell (x, y) of the mapping grid, the derivative $D_{x,y}$ is

$$S_{x,y} = \sum_{i=x-\lfloor m/2 \rfloor}^{x+\lfloor m/2 \rfloor} \;\sum_{j=y-\lfloor n/2 \rfloor}^{y+\lfloor n/2 \rfloor} \left| W_{i,j} - W_{x,y} \right|, \qquad D_{x,y} = \frac{S_{x,y}}{m \cdot n - 1},$$

for $x = 2, 3, \dots, M-1$ and $y = 2, 3, \dots, N-1$. The derivative of a single map, or the noise of the map, $D_{map}$, is then

$$D_{map} = \sum_{x=2}^{M-1} \sum_{y=2}^{N-1} D_{x,y}.$$

The lower the noise content, the better the map, and hence the better the parameter set of the sensor model that generated it. We have tried this method with 8-connected neighbors for 1440 maps, with the same parameter ranges as in Table 5.1 (under "Mean of Standard Deviations"). For each map, the derivative of all cells has been calculated, and their sum denotes the derivative of that map. Since the size of the map may change when the parameters change, normalization is required before finding the optimum parameter set; for normalization, only updated cells have been considered. By this method we found P1 = 0.05 (1 − P4), P2 = 0.2, P3 = 0.5, P4 = 0.95, P5 = 40 as the optimal parameter set, and the map with this set is shown in Figure 5.11. Though the method yields a parameter set as optimal, it does not reflect the actually optimal map: it is worse than most of the maps in Figure 5.10. So it is not possible to find an actually optimal set using this approach.
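A minimal MATLAB sketch of the noise measure above, with a fixed 3x3 window (the 8-connected neighbors used in the project); the function name and the assumption that the map is a plain numeric matrix of cell beliefs are ours, not the thesis code:

function noise = mapNoise(map)
% Noise of a map: mean absolute difference between each interior cell and
% its 8-connected neighbors, summed over the grid (second approach).
    [M, N] = size(map);
    noise = 0;
    for x = 2:M-1
        for y = 2:N-1
            w = map(x-1:x+1, y-1:y+1);          % 3x3 window around (x,y)
            S = sum(abs(w(:) - map(x,y)));      % total deviation within the window
            noise = noise + S / (3*3 - 1);      % D_{x,y}: averaged over the 8 neighbors
        end
    end
end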

Figure 5.10: Finding the optimal parameter set by the mean of standard deviations method. Subfigure parameter sets as printed: (a) P1=0.04, P2=0.13, 2P3=1.9, P4=0.96, P5=6; (b) P1=0.02, P2=0.13, 2P3=1.0, P4=0.98; (c) P1=0.02, P2=0.2, 2P3=3, P4=0.98, P5=40; (d) P1=0.03, P2=0.17, 2P3=0.78, P4=0.97; (e) P1=0.02, P2=0.2, 2P3=0.5, P4=0.98, P5=6; (f) P1=0.02, P2=0.05, 2P3=2.0, P4=0.98.

Comparing two different maps of the same Environment (Third Approach): If the environment is the same, the map should be similar every time. If the robot is driven twice from the same position with the same direction, the same exploring behavior and the same speed, we get two different maps of the same environment; let them be A and B, respectively. If a threshold value is used for deciding which cells of the map count as occupied, binary maps A and B containing only the obstacles of the real environment can be obtained. Now we can compare the binary maps A and B and assign scores based on [26]. If a specific cell, grid[x][y], represents the same status in both map A and map B, either occupied or empty, the cell can be called a correct cell or point and the positive score is increased. Otherwise that cell is considered a provider of wrong information and the negative score is increased. (This concept is used only for measuring map goodness in this project; such a cell may provide correct information in both maps even though the maps are considered wrong with respect to each other in this

comparison.)

Table 5.2: All parameter sets obtained from the first bar of the histogram in Figure 5.9, with columns P5, P4, 2P3, P2 and VSDNS. *P1: the relation P1 = 1 − P4 has been used.

Intersection(A, B) and Exclusive-OR(A, B)⁶ provide the total positive score and the total negative score, respectively, for one combination of parameters. For different combinations of parameters, we can calculate positive and negative scores from the two binary maps. The optimum parameter set is the pair with the maximum positive score and the minimum negative score. We have tested 1440 different combinations of parameters of our proposed sensor model and found P1 = 0.02 (1 − P4), P2 = 0.05, 2·P3 = 3, P4 = 0.98, P5 = 35 as the optimal set; Figure 5.12 shows the map. It is better than several sets, but not the best one, so this method does not work as expected from a theoretical point of view. Moreover, having to run the robot twice is itself a drawback.

⁶ Intersection (∩) is a set operation and Exclusive-OR is the logical XOR operation on two data sets.
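A minimal MATLAB sketch of this scoring, following the stated Intersection/Exclusive-OR operations and assuming A and B are equally sized logical (binary) occupancy maps; the function name is ours:

function [posScore, negScore] = scoreBinaryMaps(A, B)
% Third approach: score two binary maps of the same environment.
    posScore = sum(sum(A & B));        % Intersection: cells occupied in both maps
    negScore = sum(sum(xor(A, B)));    % Exclusive-OR: cells on which the maps disagree
end

The parameter set that maximizes posScore while minimizing negScore would then be selected as the optimum.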

Figure 5.11: Optimal map found by the derivative-of-maps-and-comparison method, with parameter set P1=0.05, P2=0.2, 2P3=1.0, P4=0.95.

Figure 5.12: Optimal map found by comparing two different maps of the same environment, with parameter set P1=0.02, P2=0.05, 2·P3=3.0, P4=0.98.

5.4 Result, Discussion and Conclusion

In this MS Thesis project, we have tried to find statistical approaches for measuring the goodness of maps without using the ground map, i.e., the original map of the environment. Such a measurement can then be used to find an optimal parameter set for a parameterized sensor model. We have proposed one parameterized sensor model in this chapter and intended to use the ground-map-free measurements of map goodness (the various methods described in Section 5.3) to determine the optimal parameter set. The fact is that none of the measurements of map goodness without a ground map that we tried gave satisfactory results, though some of the methods described above did yield somewhat better parameter sets. So we did not find the best parameter set of our proposed parameterized

sensor model. But our Mean of Standard Deviations measurement (the mean-VSDNS process) gives better results than the other ideas, so this method can be applied for measuring map goodness without a ground map, and the parameter set it yields for our proposed sensor model can be used. Moreover, we have found that our proposed sensor model gives better results than the standard sensor model (described in Chapter 4, Figure 4.6) for the same parameter set, whether optimal or not (see Figure 5.8). In that sense, the proposed model is better than the sensor model described in Chapter 4.

All of our methods for measuring the goodness of maps depend on a search space; this is one of their limitations. Long running time is another problem shared by all approaches. In our proposed sensor model, the computational complexity is higher than for the standard sensor model equations (described in Chapter 4), though the proposed model gives better results.

Future Works: Finding an optimal parameter set for the proposed sensor model by using a ground map can be mentioned as future work; a genetic algorithm could be applied for this purpose. Reducing the computational complexity as well as the running time is an important next step. Though the map building algorithm is not the main concern of this project, building a combined map using different types of sensors would be really interesting, especially when sonar is used. It would also be nice to represent the proposed sensor model by a probability density function (PDF). A neural network could be used to convert the sensor model information to numerical values, though collecting sample data is not easy. We have tried the different map goodness methods with a short sensor range (the maximum considered range was 1 meter); maybe that is why we did not get maps as distorted by the parameter changes as we expected from Figure 7 in [23], and all our maps (see Figure 5.10) are nearly similar. Trying with long-range sonar can therefore also be mentioned as future work.


Chapter 6

Dealing with Specular Reflection

In this chapter, an algorithm that can handle the noisy data caused by specular reflection when using ultrasonic sonar sensors is developed and described in detail. The algorithm is called HSR, for Handling Specular Reflection. The specular reflection problem, including why and when it occurs, has already been described in Section 3.2 of Chapter 3.

6.1 HSR Algorithm Description with Pseudo-Code

The HSR algorithm takes raw sensory data together with robot odometry¹ and tries to update the specularly reflected readings in the raw data set. Though the updated data set can later be used for various purposes, the main concern of this project is indoor robotic map building. HSR is an offline data processing method that works in batch mode. We use the global coordinate system of the odometry for all calculations, and a grid for representing those calculations as temporary maps at different stages. HSR can be applied once or iteratively; in this project, HSR has been implemented iteratively.

If we get any reflected sensor reading, at least one obstacle is present, and it is assumed that the obstacle is perpendicular to the direction of the sensor's acoustic axis (see Figure 6.5) [27]. We can assume that the obstacle is linearly shaped with a fixed length, as any curved shape is piecewise linear. This type of linear obstacle, estimated from the sensor reading based on the above assumption, is called an Estimated Wall in our project (for a detailed explanation see Section 6.1.2). If the size of the Estimated Wall varies with some parameters (especially with the sensor reading), we call it an Estimated Variant Wall. By using this estimated wall information, we can identify and update the sensor readings that are the results of specular reflection or cross-talk. If any reading intersects such an estimated wall, we can consider the reading as noise (subject to the conditions described in Section 6.1.3) and update it with the distance between the Estimated Wall and the relevant sonar. A temporary

¹ Odometry: the position of the robot with respect to the global coordinate system at any moment. In a 2D coordinate system (x-y), a point's position needs only (x, y), and for denoting direction we need the angle θ with respect to the x-axis. So odometry means (x, y, θ) of the robot at that time. The θ is also called heading, heading angle or robot heading.

map interpreting all sensor readings as points is called S; it is used for generating the Estimated Walls. But we do not need to consider an Estimated Wall for every reading. As shorter-range readings are more reliable than longer-range readings in the case of sonar, readings below a certain range, called the Threshold Reading in our project, are sufficient for constructing Estimated Walls, and these readings are preliminarily considered correct, though they too can be updated. The modified version of map S is called Extended S, or ES. In order to apply this idea, we need to process the raw sensor data before estimating the walls. The HSR algorithm can therefore be divided into three steps:

- processing the sensor readings for the next steps;
- estimating Variant Walls for the relevant readings;
- finding and updating specularly reflected readings using the Estimated Wall information and other conditions.

Some parameters are essential for applying this idea; they are described in the sections of this chapter where they are needed. The pseudo-code of the HSR algorithm is as follows:

function HSR(data, pose, minWallSize, maxWallSize, Rth, TDTh)

Input parameters: data, the sensor readings; pose, the robot's odometry at data collection; minWallSize and maxWallSize, the minimum and maximum size of the estimated wall, respectively; Rth, the threshold value for considering readings reliable; TDTh, the targeted distance threshold or cross-talk tolerance. (See Section 6.1.2 for minWallSize, maxWallSize and Rth, and Section 6.1.3 for TDTh.)
Output parameter: udata, the updated data set.

Step-1: Processing Sensor Readings
Convert all sensor readings, data, into global Cartesian coordinates (x, y) by using the odometry position of each data sample and the robot's geometry, and save all translated information (odometry positions, sensor positions and data points) in a database DBR.
  DBR ← [translated odometry, sensor positions and data points];
  S ← plot of the raw sensory information after translation;  // S is a temporary map
(For details see Section 6.1.1, "Processing Sensor Readings".)

Step-2: Estimating Variant Walls
For each reading data_i ≤ Rth of data, estimate a variant wall and save this information as a map, ES. The function estimateVariantWalls does this step.
  ES ← estimateVariantWalls(data, pose, DBR, S, minWallSize, maxWallSize, Rth);
(See Section 6.1.2 for a detailed description with parameters.)

Step-3: Finding and Updating Specularly Reflected Readings
Update the sensor readings for which this is possible, using the information obtained from Step-2 and the conditions stated in the update function.
  udata ← update(data, pose, DBR, ES, TDTh);
(This step is described in detail, with parameters, in Section 6.1.3.)

End HSR

Each step is described below in this chapter, together with the relevant parameters. The complete pseudo-code of HSR, including the various steps, is available in Appendix A.

6.1.1 Processing Sensor Readings (Step-1)

A sonar sensor reading denotes the distance between the sensor and the obstacle by which the ultrasonic sound was reflected back to the receiver of the sonar. The sensors are mounted on the robot at fixed positions, with fixed distances and directions with respect to a specific point of the robot. Let this point be C and the position of one of the sensors be S. C can be the center or any other point, depending on the shape of the robot. From the physical structure of the robot, we can calculate the direction of CS (from the specific point to the sensor position). In this project we have used the Amigobot ActiveMedia robot [2], and from the Amigobot manual we get the physical structure of the robot, shown in Figure 6.1. We call this physical shape the robot geometry.

Figure 6.1: Amigobot's geometry and the direction of a reading sensed by a specific sensor, S (figures adopted from the Amigobot ActiveMedia manual).

The direction of the data sensed by a sensor is the same as the direction of the sensor from the physical point C of the robot (see Figure 6.1). From the robot geometry, the direction of the reading can be calculated for each specific sensor. If the direction of

the robot changes, it is still possible to calculate the direction and position of all sensors, as well as of all readings, by simple geometry. So we can use the odometry information without any problem. With the odometry information we have the global coordinate system of the odometry, and we can convert any reading into Cartesian format (x, y) with respect to that coordinate system. So we can consider each reading as a point in the global coordinate system of the odometry, and we call each sensor reading a data point. The functions sensorPos and dataPos perform these tasks in our project (Figure 6.2 demonstrates this). All these data points are saved in a database, DBR (in the program, DBR is the combination of DBXR and DBYR, which contain the x and y coordinates, respectively, of all necessary information).

Figure 6.2: Translating sensor readings into Cartesian (x, y) format by our program. The table in the upper right corner represents the robot odometry and the sensor readings sensed at one instant at that odometry. The (colored) figure is the visualization of those readings after translation, with origin (500, 500).

After translating all data readings into Cartesian format with the corresponding odometry poses, we can get a map of the environment by plotting these translated data points, together with the odometry, onto a grid. Let us call this temporary map S. Figure 6.4 is an example of map S; it represents the environment shown in Figure 6.3.

Figure 6.3: Actual environment. For the discussion in this section, data were collected while wall 4 was a white cardboard (of the kind used for making posters) that was very smooth.

Figure 6.4: Map S, obtained by plotting the data set of the environment of Figure 6.3 after translation into Cartesian form. The blue points are the robot's trajectory and the red dots represent sensed data. 1, 2, 3, 4 denote the boundary walls of the environment.
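A minimal MATLAB sketch of this Step-1 translation for one reading, under the assumption that each sensor's mounting is given by an offset (sx, sy) and heading sa relative to the robot's reference point; the function name and interface are ours, not the exact thesis functions:

function [sensorXY, dataXY] = toGlobal(pose, sx, sy, sa, r)
% pose = [x y theta] is the robot odometry, r the range reported by the sensor.
    x = pose(1); y = pose(2); th = pose(3);
    % sensor position in the global frame (mounting offset rotated by theta)
    sensorXY = [x + sx*cos(th) - sy*sin(th), ...
                y + sx*sin(th) + sy*cos(th)];
    % data point: r along the sensor's acoustic axis (global heading th + sa)
    dataXY   = [sensorXY(1) + r*cos(th + sa), ...
                sensorXY(2) + r*sin(th + sa)];
end

Applying this to every sample and plotting the resulting points yields the temporary map S.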

6.1.2 Estimating Variant Walls (Step-2)

The temporary map S, described in the section above, contains all sensor readings, whether they denote empty or occupied space, i.e., whether or not an echo was received by the sonar. But we do not need to consider all readings on the grid in map S; only those readings that are less than or equal to a threshold distance, Rth, are needed, because only those readings will be used for detecting the specularly reflected readings. In other words, we expect readings below Rth to be less noisy and more reliable. In general, one can use the maximum range of the sensor, Rmax, as Rth. But if readings shorter than Rmax still do not reflect the exact contours of the real environment, then Rth = Rmax is not a wise choice (this concept is described in detail in the "Parameters of HSR Algorithm" section of this chapter).

So Rth is a threshold parameter that can take values other than Rmax, as well as Rmax in some cases. Though map S now contains only the readings that are less than or equal to Rth, all data points are still saved in DBR as before.

If we get a reading r less than Rmax from a sonar s, there is at least one obstacle at distance r from the sensor. But it is not possible to know the exact position of the obstacle within the focus of the sonar (the foreshortening problem). Though the sonar's ultrasound propagates as a cone, it is assumed that the obstacle stands on the acoustic axis (along the axis of the sound wave) [27] and that the orientation of the obstacle is perpendicular to the acoustic axis. Based on this concept, we can assume that there is some obstacle at the data point A (from which the sound was reflected back to the sonar) corresponding to the reading r of sonar s (see Figure 6.5), and that it is shaped like a straight line segment, which can be taken as the wall for that reading, perpendicular to the direction of the sound wave (see Figure 6.5, "Estimated Wall Concept"). We call this wall the Estimated Wall in our project.

Figure 6.5: Estimated Wall concept for a single sensor reading r. Point A represents the data point for reading r as well as the middle point of the estimated wall CB.

So it is possible to augment the map S with these lines/walls for the sensor readings that are less than or equal to Rth. We call this new map ES (S with Estimated Walls, or Extended S). As the sensor's coverage is wider at longer distances, letting the wall size vary in proportion to the distance is more logical. For the size of the wall we therefore have two parameters: the minimum wall size (minWallSize) and the maximum wall size (maxWallSize). Every sensor has a sensitivity range, whose limits we call the minimum and maximum range of the sonar, Rmin and Rmax, respectively. The wall size is minWallSize at Rmin and maxWallSize at Rmax. The size for readings in between, i.e., for any reading r with Rmin < r < Rmax, is calculated by linear interpolation. The formulas

are as follows:

  dR ← Rmax − Rmin;
  dr ← r − Rmin;
  dWall ← maxWallSize − minWallSize;
  estWall ← minWallSize + (dWall/dR) · dr;   // estWall denotes the estimated wall size

The data point corresponding to the reading should always be taken as the middle of the estimated wall, because of the angular uncertainty of the sonar and the assumption that the obstacle stands on the acoustic axis (see Figure 6.5; A is the middle point of CB). So estWall/2 is the length of each side of the wall measured from the data point. The wall size is still calculated in the real-world coordinate system, i.e., the global coordinate system used for the odometry. For plotting the estimated wall, the grid is used: all points on the wall are calculated in the global coordinate system and then mapped onto the grid. This approach reduces the rounding error. This process is applied to all readings less than or equal to Rth. But no point of an estimated wall may overwrite any actual information, such as an original data reading point, points acquired by the robot during exploration, or a sensor's position (in Figure 6.4, the blue and dark red points must not be overlapped by an estimated wall); the estimated wall points have the lowest priority, i.e., physical information must be kept unchanged when adding estimated information. A cell of the grid, grid[x][y], at time t is marked as an estimated wall cell only if grid[x][y] does not contain any of the following information from time t or (t − n):

1. the robot's position;
2. any sensor's position;
3. a sensor reading physically sensed by a sensor;
4. estimated wall information already present;

where n is any non-zero positive number (see Figure 6.6). The function estimateVariantWalls performs the Step-2 operations described so far in this section; its pseudo-code is available in Appendix A. After executing estimateVariantWalls, we get the map ES with the estimated wall information. ES looks like Figure 6.6, which shows some odometry positions with the corresponding readings and explanations for a single sensor. In the figure, the dark blue (R) and light blue (S) dots represent the robot's trajectory and the sensor's positions, respectively. Dark red (D) marks the exact readings sensed by that sensor, and light red (W) the estimated wall for the corresponding D point. The longer the reading, the larger the estimated wall corresponding to it.

Figure 6.6: Map ES after executing the estimateVariantWalls function for some readings, with minWallSize = 40 mm, maxWallSize = 150 mm and threshold distance Rth = Rmax = 3000 mm.
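The following minimal MATLAB sketch computes the endpoints of the Estimated Wall for one reading, under the assumptions above (wall perpendicular to the acoustic axis, centered on the data point A); function and variable names are ours for illustration:

function [cXY, bXY] = estimatedWall(aXY, heading, r, Rmin, Rmax, minWallSize, maxWallSize)
% aXY     : data point A in the global frame
% heading : global direction of the acoustic axis (radians)
% r       : sensor reading
    dR    = Rmax - Rmin;
    dWall = maxWallSize - minWallSize;
    w     = minWallSize + (dWall/dR) * (r - Rmin);  % linearly interpolated wall size
    half  = w / 2;                                   % A is the wall's midpoint
    perp  = heading + pi/2;                          % wall is perpendicular to the axis
    cXY   = [aXY(1) + half*cos(perp), aXY(2) + half*sin(perp)];
    bXY   = [aXY(1) - half*cos(perp), aXY(2) - half*sin(perp)];
end

The cells between cXY and bXY would then be rasterized onto ES, skipping any cell that already holds physical information, per the priority rules above.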

6.1.3 Finding and Updating Specularly Reflected Readings (Step-3)

From the section above, we have a temporary map ES, consisting of points and lines, with estimated wall information for the readings below Rth. Now we want to find erroneous readings in the data set provided to the HSR algorithm, and to update them if possible by using the estimated variant wall information. This section is responsible for this task, Step-3 of the HSR algorithm.

Without knowing the environment, it is not possible to tell from the data set which readings were received because of specular reflection. That is, if we have no information about whether a given place in the environment is empty or not, we cannot comment on a sensor reading, especially one that may run out of range. But short-range data are more reliable than long-range data in the case of ultrasonic sonar. As we do not know whether a given reading is the result of specular reflection or not, we check all the data against the conditions given below in this section. To do this, we sort all readings in descending order (as shorter-range readings are more reliable than longer-range ones), take the readings one by one and apply the updating procedure described below.

From the database DBR, created in Step-1 (Section 6.1.1), we know the odometry position at any time t, the position of the specific sensor that produced a reading, and the Cartesian coordinates of the reading itself at time t. For a specific reading r, let the coordinates of the sensor that sensed it be A(x1, y1), and the coordinates of the reading be B(x2, y2) (see Figure 6.7). These coordinates are obtained in the global odometry frame, not from the grid. Now we search from point A towards B in the global frame along the straight line AB. Let C(x, y) be a point on AB at each search interval for a particular reading r. For each instance of C(x, y) on the search line AB, we check the following conditions (for a detailed explanation of all conditions, see the "Explanation of the Conditions" subsection below):

1. Distance(AC) < Rmax, where Rmax is the maximum range of the sonar sensor and Distance(AC) denotes the Cartesian distance between points A and C.
2. C(x, y) is not outside the temporary map ES.

If these two conditions are satisfied, we check in the map ES whether the cell ES[xr][yr] corresponding to C(x, y), where (xr, yr) are the values of (x, y) scaled by the grid's scaling factor, is occupied by 1) any actual reading physically sensed by a sensor, or 2) any estimated wall information. Because 1) there might be a reading

sensed by another sensor at the same time t, or at another time t' where t ≠ t', or by the same sensor at a different time t'. This can happen because of different angular positions of the odometry, or because one sensor has an angular advantage over another for that part of the environment. And 2) there might be an obstacle very near to this point that also covers the direction of this reading (from the Estimated Wall concept).

Figure 6.7: Representation of the search points and direction in Step-3 of the HSR algorithm for a sensor reading r.

If this cell ES[xr][yr] corresponding to C(x, y) is occupied, we consider the following condition (Condition 3) for making the decision about the reading r, represented by point B(x2, y2), for which we are trying to determine whether it is a correct reading or the result of specular reflection.

3. Distance(CB) > TargetDistanceThreshold (in short, TDTh), where TDTh is a tolerance parameter for cross-talk or the sensitivity of a reading. (TDTh is explained in detail later, in the "Explanation of the Conditions" section of this chapter.)

If any one of the above three conditions does not hold, we go on to the next search point. Let C_s(x, y) be a specific instance of C for the current reading r with which all the above conditions are satisfied. Then point B(x2, y2) should not exist at that position, i.e., reading r should not be as long as it currently is, because in reality it is not possible to get a reading from beyond an obstacle or wall. So we can say that point B(x2, y2) is the effect of specular reflection, i.e., reading r is the result of specular reflection. In this way we can identify all data that are results of specular reflection. If we find that reading r, represented by point B(x2, y2), is wrong data, i.e., if a C_s satisfying the above three conditions exists, then we can update this reading r

by the value Distance(AC_s), because C_s might be the exact position of B(x2, y2) had no specular reflection occurred (see Figure 6.8). Note that we are still using points in the global coordinate system, which should give a better estimate, with less rounding error, than calculating the distance from the grid.

Figure 6.8: For reading r (represented by point B), an instance of C(x, y) for which ES[xr][yr] is occupied by an estimated wall and which satisfies all three conditions mentioned above. So C(x, y) becomes C_s, and reading r is shown to be the result of specular reflection or cross-talk. As a result, reading r should be updated with Distance(AC_s).

In our project, the update function finds specularly reflected readings and updates them as described above. The pseudo-code of the update function is given in Appendix A.
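A minimal MATLAB sketch of this Step-3 check for one reading, assuming ES is a matrix in which occupied cells (real readings or estimated walls) are nonzero, 'scale' converts global millimeters to grid indices, and the search step is one grid cell; names are ours, not the thesis code:

function rNew = checkReading(A, B, ES, scale, Rmax, TDTh)
% A, B : [x y] points (sensor position and data point) in the global frame.
% Returns the corrected reading, or the original Distance(A,B) if no
% specular reflection is detected.
    rNew  = norm(B - A);
    step  = 1/scale;                                 % advance one grid cell per step
    dirAB = (B - A) / rNew;
    for d = step:step:rNew
        if d >= Rmax, break; end                     % Condition 1: stay inside sensor range
        C  = A + d*dirAB;
        xr = round(C(1)*scale); yr = round(C(2)*scale);
        if xr < 1 || yr < 1 || xr > size(ES,1) || yr > size(ES,2)
            break;                                   % Condition 2: C left the map ES
        end
        if ES(xr, yr) ~= 0 && norm(B - C) > TDTh     % occupied cell and Condition 3
            rNew = d;                                % C becomes Cs: update r by Distance(A,Cs)
            return;
        end
    end
end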

Explanation of the Conditions

Condition 1, Distance(AC) < Rmax, where Rmax is the maximum range of the sonar: This condition states that for any reading r (with corresponding data point B(x2, y2)), if the straight-line distance between the starting point A(x1, y1) and an intermediate point C(x, y) on the search line AB (Figure 6.7) is greater than or equal to the maximum sensor range Rmax, then we should not consider the reading r erroneous noise: rather than providing erroneous information, reading r denotes a genuinely empty place in the environment. This condition is applied when finding and updating specularly reflected readings in the HSR algorithm (all notation follows the description of Step-3 of HSR). The reliable readings of a sonar lie between the minimum and maximum range of the sensor, so if we update a reading with a value outside this reliable range, we may add noise to the data set instead of purifying it. And if for reading r a situation arises that contradicts this condition, then reading r denotes an actually empty place, i.e., there really is no obstacle there, and the reading is correct. Figure 6.9 explains this condition.

Figure 6.9: Explanation of Condition 1. When the robot is at PosB, the obstacle (the wall represented by the long black line on which B stands) is sensed by the sensor in the normal way. But when the robot is at PosA, the obstacle is out of range, so the sensor returns an "infinity" value. We may also get such readings because of specular reflection. Point C marks the maximum sensor range from PosA.

In Figure 6.9, the data have been sensed from two different robot positions at different times. When the robot was at position PosA, the reading was d_A, and when the robot was at PosB, the reading was d_B, with Rmax = Distance(PosA, C). After plotting the collected data with the robot's positions (odometry), point A corresponds to reading d_A and point B to reading d_B. As Distance(PosA, Wall) > Rmax, d_A is a very large value, like a specularly reflected reading that runs away from the sonar, even though it is not really specularly reflected; that is why point A lies beyond the actual wall. Point B, on the other hand, is a reasonably correct reading, less than Rmax. According to our concept, if we did not apply Condition 1, point A (reading d_A) could be updated from either point B (reading d_B) or some other point on the estimated wall generated from reading d_B (the line through B perpendicular to PosB-B). But reading d_A is not due to specular reflection; it genuinely denotes an empty place.

Condition 2, C is not outside ES: As in Condition 1, C(x, y) denotes the search point on the line AB from the sensor position A(x1, y1) to the data point B(x2, y2) that represents the reading r (see Figure 6.7). ES is the map containing the actual points as well as the estimated wall information used for finding and updating erroneous readings. An update occurs if C becomes C_s (see Figure 6.8), as described above in Step-3 of the HSR algorithm. As ES

contains the information on the basis of which an instance of C can become C_s, there is no possibility of B being updated from any reliable reading or estimated wall information once the search point C leaves ES. So it is better to stop searching in this case; reading r then represents an actually empty place.

Condition 3, Distance(CB) > TargetDistanceThreshold (in short, TDTh): A(x1, y1), B(x2, y2) and C(x, y) have the same meaning as in Figure 6.7 and the other conditions. As the search starts at A and moves towards the target B via the point C on line AB, we can call the straight-line distance from C to B the target distance. If we get a search point such that C becomes C_s (see Figure 6.8), then we should check whether C_s in effect represents the point B itself, and this is done using TDTh (the Target Distance Threshold). It is meaningless, and moreover distorts the actual environment in the map representation, if a reading is updated by itself. Why this situation can arise, and how we can control it, is described below.

It is quite normal to obtain different readings from the same obstacle or wall at different times and from different odometry positions with a sonar sensor. The readings may not fall at the exact position on the grid or in the real odometry coordinate system, even though they denote the same object in the same environment, as in Figure 6.10.

Figure 6.10: The actual environment, wall AB, represented by sensor readings obtained at different times from different odometry positions. In the map, wall AB may be shown by the points near the wall (d, c, e, f, b, etc.), but not by point a. Whether point a is the result of cross-talk or specular reflection, or not noise at all, is decided by TDTh.

In a situation like Figure 6.10, it is a matter of judgment whether the points a, c, b or f should be updated; c, b and f are quite close to the wall AB. The parameter TargetDistanceThreshold (TDTh) answers this question: TDTh fixes the region within which such points are considered to be the same. This situation can also arise from unavoidable calculation errors, such as rounding or truncation errors introduced by the grid. Again, during the search from A(x1, y1) to B(x2, y2) and the updating of B(x2, y2) by C_s(x, y) (see Figure 6.8), TDTh protects B(x2, y2) from being updated by itself, computed from the global

coordinate value instead of its actual value. Situations like that of Figure 6.10 are very common at the corners of a room or environment because of cross-talk, so this parameter also protects against cross-talk errors. From the above discussion, we can regard TDTh as the sensitivity of a sonar reading, or a tolerance for cross-talk.

6.2 Parameters of HSR Algorithm

Threshold Distance, Rth: Sensor readings lower than or equal to this parameter are considered correct readings for the estimation of walls (in Step-2 of the HSR algorithm). This parameter gives just an initial guess about the readings, because some of them can be updated too. Rth = Rmax is a sufficient guess in general, where Rmax is the maximum range of the sensor. But if there are openings or holes in the environment, such as doors, Rth may need a different value, which depends on the size of the opening and the distance from which the opening can be sensed correctly by the sonar during the robot's exploration. I have found a relation between the opening size and the distance from which the opening can be sensed correctly (see Section 6.3 for this study). Based on this relation, Rth = 800 mm gives better results with an opening size of 80 cm, provided the data are collected sufficiently near the opening (closer than Rth) so that the sonar can detect it clearly. In the second experiment (see Subsection 6.4.2) I have used this value. Note that the data collected by the Amigobot are in millimeters, so Rth = 800 mm.

Estimated Wall Size: This parameter is calculated from the maximum and minimum size of the wall: minWallSize (minimum wall size) is the size of the estimated wall at the minimum range of the sensor, and maxWallSize (maximum wall size) is the size at the maximum range. The parameter depends on the environment, so it is very hard to determine its true value. From our experiments we have found that minWallSize = 20 mm and maxWallSize = … mm usually work well. But this does not hold in all cases, because it depends on the environment as well as on the robot's sensors. With openings or holes in the environment, smaller wall sizes should work better, but then the noise will not be removed as thoroughly as in an environment without openings. Based on the relation between the size of an opening and the distance from which the opening can be sensed correctly, I have found that minWallSize = 20 mm and maxWallSize = 80 mm give a reasonable result when Rth = 800 mm (see Experiment 2 in Section 6.4.2 of this chapter).

TargetDistanceThreshold: This is the threshold value for the sensitivity of a reading or data point (described under Condition 3 in the "Explanation of the Conditions" section of Step-3 of the HSR algorithm). From experiments (trial and error with a data set), I have found that TDTh = 60 mm gives reasonable results with the Amigobot. Another way to obtain this parameter is to calculate it from the physical properties of the sensor. We know that a sonar senses within a cone-shaped area. For a certain distance d, it is therefore possible to measure the sensor reading along the acoustic axis at the point A at distance d, and at the farthest point B that can be sensed by the sonar at that distance d from the center of the cone at A (see Figure 6.11 below). Let the reading at A be r1 and at B be r2. Though the readings are different, theoretically they will be treated as the same point in map building,
because it is not possible to know from which point of the sonar cone we are getting the reading, A or B. So we can say that readings differing by less than (r1 − r2) are the same readings in this case, and B is an instance of A. The maximum range of the sonar

of the Amigobot is 3 meters, and we get much less noisy values below 1 meter, so we considered approximately d = 2 meters (an average reading between 1 and 3 meters) for the TDTh experiment shown in Figure 6.11. One instance with d = 2000 mm gave r1 = 2005 mm and r2 = 2067 mm. Several readings were tested with varying d, and the average TDTh = |r1 − r2| was found to be 60 mm.

Figure 6.11: Determining the value of TDTh. This is one example of the experiment, with d = 2000 mm. Several readings with different d values were taken, and the average of |r1 − r2| gives the value of TDTh.
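As a rough plausibility check of this measurement (our addition, not part of the thesis experiment): for an ideal cone with full opening angle β (β = 30° for the Amigobot's sonar per its specification), the edge reading at axial distance d should be about r2 ≈ d/cos(β/2), so TDTh ≈ d(1/cos(β/2) − 1):

% Ideal-cone estimate of TDTh at d = 2000 mm with a 30-degree cone
beta = 30 * pi/180;          % full opening angle of the sonar cone
d    = 2000;                 % axial distance in mm
r2   = d / cos(beta/2);      % reading at the cone edge: about 2070.6 mm
tdth = r2 - d;               % about 70.6 mm, the same order as the measured 60 mm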

6.3 Relation between Openings and Sensor's Distance

For building a map similar to the physical environment, it is important that the ultrasonic sonar senses open and occupied space as correctly as possible. During this project we have found that if there is a hole or opening in the environment, for example an open door, the sonar cannot detect it beyond a certain distance from the hole. This distance depends on the physical properties of the sonar (such as its view of focus and maximum range) and on the size of the opening. In our algorithm, however, in order to choose the parameter R_th (described in Section 6.2), it is important to know from which distance an opening can be sensed correctly; in other words, the robot should explore at less than this distance to build an accurate picture of the environment. We have carried out one experiment to find the relation between holes in the environment and the distance from which they can be sensed clearly. This section describes the experiment and the obtained results.

The experimental environment is very simple: a hole, playing the role of a door in a real environment, between two smooth poster boards. We again used the Amigobot ActiveMedia robot; the minimum range of the Amigobot's sonar is 165 mm and its VOF is 30 degrees. If we drive the robot straight, with one of its sensors perpendicular to the walls and at less than 165 mm from the walls and the hole, we obtain the number of readings that denote the hole, and this is the maximum number of readings for detecting that hole. Let this number of readings be n for a hole of size h, measured at distance d = d_min < 165 mm. Now increase the distance to d_i and count the number of readings n_i that detect the hole from distance d_i. Select d_j as the maximum distance at which the hole h can still be sensed clearly, namely the largest d_i for which n_j ≥ 60% of n still holds, where j is one instance of i for the hole size h. For different hole sizes h we obtained the relationship between h and d shown in Figure 6.12.

Figure 6.12: The relation between hole/opening size and the distance at which this opening/hole can be sensed almost perfectly by the sonar mounted on the Amigobot. (Sub figure (a): graphical representation; sub figure (b): the collected data for (a).)
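The selection rule of this experiment is easy to state in code. In the sketch below the distance and count arrays are placeholders, not the collected data of Figure 6.12; only the 60% criterion itself comes from the text:

% Sketch of the 60% selection rule; d and n are placeholder measurements.
d = [150 300 450 600 750 900];   % probe distances from the hole (mm)
n = [ 25  22  19  16  14   9];   % readings detecting the hole at each distance
nMax = n(1);                     % count at d_min < 165 mm (the maximum count)
ok   = n >= 0.6 * nMax;          % distances where the hole is still sensed
dMax = max(d(ok));               % farthest distance satisfying the rule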

6.4 Implementation

We have tested the developed HSR Algorithm in two different environments: the first contains no hole or opening (Experiment 1) and the other contains three openings of different sizes (Experiment 2). We run the HSR algorithm iteratively, with the following terminating conditions. If X is the number of readings updated in the first iteration and Y the number updated in any later iteration, the iteration continues until Y falls to 10% of X or less, because an improvement of less than 10% is not cost effective: it no longer improves the quality of the map noticeably compared with the first iteration. We have also noticed that if an iteration updates more readings than its immediate predecessor, the map starts to distort. This must not be allowed, so the condition prevUpdatedReadings − curUpdatedReadings > 0 must hold for the iterative process to continue; its violation is the second terminating condition.

Equipment: An Amigobot ActiveMedia Robot (see Figure 6.13) has been used for collecting data. The Amigobot has the following technical specifications [2]:

- Eight (8) range-finder sonar of the same type, each with 30 degrees of angular coverage; 6 sensors face the front and 2 the rear
- Two wheels with individual motors and one rear caster
- Two 500-tick shaft encoders
- IEEE 802.11b wireless Ethernet communication

Figure 6.13: Amigobot ActiveMedia Robot.

The speed, exploration behavior and sampling rate are described in each experiment section. The program development environment is MATLAB 7.0, and it has been tested on a computer running Microsoft Windows XP Professional with ARIA 2.1 and the ARIA-MATLAB Adapter (developed by J. Borgström in his MS Thesis [5] at Umeå University, Sweden, 2005, in the Computer Science department under the supervision of Thomas Hellstrom) for communicating with the robot.

Map building Algorithm: For map building, the evidence grid method with Bayesian updating has been used (as described in Chapter 4). The iterative application of HSR with the terminating conditions above is sketched below.
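A minimal sketch of this iterative driver, with the two stopping rules stated above, might look as follows. hsrPass is a hypothetical wrapper around Steps 1-3 of the HSR Algorithm that returns the corrected readings and the number of readings it updated; it is not a function from the thesis code:

% Sketch of the iterative HSR driver (hsrPass is hypothetical).
[data, X] = hsrPass(rawData);   % first pass: X readings updated
prevU = X;
while true
    [data, Y] = hsrPass(data);
    if Y <= 0.10 * X            % improvement below 10% of the first pass:
        break;                  % further passes are not cost effective
    end
    if prevU - Y <= 0           % at least as many updates as the previous
        break;                  % pass: the map would start to distort
    end
    prevU = Y;
end
% The cleaned readings then feed the evidence-grid map builder of Chapter 4.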

Figure 6.14: Physical environment for the experiment.

6.4.1 Experiment 1

Environment: The environment for this experiment looks like Figure 6.14. There was no hole or opening in the environment, and no objects inside the room, but most of the walls are smooth enough to cause specular reflection.

Data Collection: Data has been collected using a kind of wall-following behavior that does not depend on sensor data. The speed has been kept constant at speed = [40 75] (speed = [LeftWheel RightWheel]), so the robot moved along a constant trajectory. The sampling time at this speed was 175 ms per sample, i.e., on average 5.7 data samples were collected per second.

Result: We ran the HSR Algorithm iteratively on the collected data with different parameter values, in particular varying the wall-size parameters: minwallsize from 10 mm to 70 mm and maxwallsize from 20 mm to 250 mm. As we found TDTh = 60 mm to be sufficient for the sonar mounted on the Amigobot, this value is the same in every case. As there is no opening in the environment, R_th = R_max is a sufficient guess, and that is what we used. Figure 6.15 shows the effect of the HSR Algorithm with different parameters. In Figure 6.15, sub figure (a) is generated without the HSR Algorithm and contains the specularly reflected readings; getting an idea of the actual environment, shown in Figure 6.14, from this sub figure is quite impossible. With the HSR Algorithm, for the parameter ranges listed in Table 6.1, we obtained output similar to sub figure (c), which is quite close to the actual environment. The first column of the table gives the minwallsize parameter. For each minwallsize, a similar improvement is obtained over a range of maxwallsize values; the 2nd and 3rd columns of the table give the minimum and maximum maxwallsize for a certain minwallsize, respectively. Sub figure (b) is obtained when maxwallsize is smaller than the value in the 2nd column for a specific minwallsize, and sub figure (d) shows the situation when maxwallsize exceeds the value in the 3rd column for the corresponding minwallsize. In any case, every sub figure in Figure 6.15 is better than sub figure (a), which is generated without the HSR Algorithm.
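The parameter study behind Table 6.1 can be sketched as a simple sweep. hsr below is a hypothetical one-shot wrapper around the algorithm, and the visual inspection mirrors how map quality was judged in this experiment:

% Sketch of the wall-size parameter sweep of Experiment 1 (hsr and data
% are hypothetical; the thesis compared the resulting maps visually).
TDTh = 60; Rth = 3000;                       % fixed, as in this experiment
for minWS = 10:10:70                         % minwallsize candidates (mm)
    for maxWS = 20:10:250                    % maxwallsize candidates (mm)
        map = hsr(data, Rth, TDTh, minWS, maxWS);
        imagesc(map);                        % inspect each resulting map
        title(sprintf('minwallsize=%d, maxwallsize=%d', minWS, maxWS));
        drawnow;
    end
end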

Figure 6.15: Comparison among different values of parameters, and without the HSR Algorithm, when there is no opening/hole in the environment. (Sub figures (a) and (b); panel settings: Rth=3000, Rmax=3000, Scale=10.)

Figure 6.15 (Continued): sub figures (c) and (d), after HSR with Rth=3000, Rmax=3000, Scale=10.

minwallsize (mm) | Minimum maxwallsize (mm) | Maximum maxwallsize (mm)

Table 6.1: Solution area for the minwallsize and maxwallsize parameters. The first column of the table gives the minwallsize parameter; the 2nd and 3rd columns give the minimum and the maximum maxwallsize, respectively, for a certain minwallsize.

From this experiment we can say that the HSR Algorithm is able to handle specularly reflected data when there is no opening in the environment, and that it works well with minwallsize from 10 to 50 mm and a corresponding maxwallsize from about 190 down to 60 mm. Using minwallsize = 20 mm with maxwallsize between 110 and 190 mm is preferable, because the solution area is wider and the probability of adding noise within this range is smaller when there are no openings in the environment.

6.4.2 Experiment 2

Environment: We have done this experiment with some openings, or holes, in the environment; sub figure (a) in Figure 6.16 shows the actual environment. A wall consisting of glass is also included. There were no objects inside the room. The measurements of the walls and holes (doors) are given in the figure.

Data Collection: The Amigobot has been used for this experiment with the same kind of wall-following behavior. The speed is kept constant at speed = [40 75] (speed = [LeftWheel RightWheel]), so the robot moves along a constant trajectory, and the sampling time at this speed is 175 ms per sample, i.e., on average 5.7 data samples are collected per second. During data collection, D1 and D2 have been kept open (see Figure 6.16), so there are three holes in total in the environment. Most importantly, the robot should be close enough to the openings that all of them can be detected correctly by the sonar. In this experiment the minimum opening size was 80 cm, so according to Section 6.3 the robot's sensing distance from the openings should not exceed 80 cm.

Result: We have tested the collected data with different estimated wall sizes, to be sure about openings and walls, with our HSR Algorithm. As the same sonar is used, TDTh = 60 mm is kept unchanged from the first experiment, but R_th has been changed; its value is given at the top of each figure. Figure 6.16 shows the maps obtained with and without the HSR Algorithm. After running the HSR Algorithm iteratively, we have found that minwallsize = 10 mm and maxwallsize = 80 mm work better than other combinations when the minimum opening size in the environment is 80 cm (sub figure (e)). maxwallsize < 80 mm (e.g., 40 to 70 mm) works better at the openings but cannot handle the specularly reflected readings very well (see sub figures (c) and (d)). On the other hand, maxwallsize >

Figure 6.16: Comparison among different values of parameters, and without the HSR Algorithm, with different opening sizes in the environment. Sub figure (a) shows the actual environment; sub figures (b)-(e) show the resulting maps. D1 and D2 are openings with a minimum size of 80 cm. Walls E, T and A are not sensed by the robot, so readings from that side are not of interest.

Figure 6.16 (Continued): sub figures (f)-(i).
