MECHENG 706 AUTONOMOUS SLAM ROBOT
Group 15: Sam Daysh, Craig Horide, Karan Purohit, Luke Jennings, Alex Pereyaslavets


Executive Summary

The task given was to create an autonomous SLAM robot to cover an area, avoid obstacles and map the path taken. To achieve this, each sensor was calibrated and its limitations were found. Optimal path planning was considered and an inwards spiral was agreed upon. Sensor limitations and the desired route requirements were taken into account when designing the sensor arrangement. A finite state machine was created to realise the robot's movement. The software was constructed and tested to ensure each state in the finite state machine was working and performing as intended before the robot entered its next state. In the end, the robot attempted to cover the area quickly and avoid obstacles; however, it was not successful on the day. Mapping of the environment was also attempted.

Table of Contents

Executive Summary
Table of Figures
1. Introduction
    1.1 Problem Description
    1.2 Problem Specification
2. Sensors and Calibration
    2.1 Arduino Mega 2560
    2.2 SHARP Infrared Sensors
    2.3 HC-SR04 ITead Studio Sonar
    2.4 InvenSense MPU-9150
    2.5 Sensor Placement
3. Motion and Movement of the Robot
    3.1 Mecanum Wheels
    3.2 Speed and Aligning
    3.3 Rotating vs Strafing
    3.4 Path Planning
        Zig-Zag Method
        Outward Spiralling
        Inward Spiralling
        Full SLAM
4. Software
    4.1 Interface Library
    4.2 General Functions
        Millis
        Ninety Turn
        Scanning
        Get Sensor Data
        Display Sensor Readings
        Align to Wall
    4.3 Finite State Machine
        Initialisation
        Initial Wall Find
        Moving to Wall
        Aligning to Wall
        Moving to Corner
        Running
        Object Detected
        Object Detected Forwards
        Object Passing
        Re-adjusting
        Stopped
    4.4 Mapping
5. Testing and Results
6. Discussion, Limitations and Improvements
    6.1 IR Sensors
    6.2 Number of Sensors
    6.3 Motors
    6.4 Limited Movement Directions
    Servo Motor Usage
    MPU for Turning
    PID Control
    Fuzzy Logic
    Active SLAM
    Finite State Machine
Conclusion
Appendices
    Appendix I - Sonar Mount Drawing
    Appendix II - MPU Mount Drawing
    Appendix III - Ninety Turn Code
    Appendix IV - Scanning Code
    Appendix V - Get Sensor Data Code
    Appendix VI - Align to Wall Code
    Appendix VII - Finite State Machine

Table of Figures

Figure 1 - Arduino Mega 2560
Figure 2 - Medium Range Sensor
Figure 3 - Long Range Sensor
Figure 4 - Long Range Measure versus Real Values
Figure 5 - Medium Range Measure versus Real Values
Figure 6 - Sonar Range Finder
Figure 7 - MPU-9150
Figure 8 - First Arrangement of Sensors
Figure 9 - Second Arrangement of Sensors
Figure 10 - Final Arrangement of Sensors
Figure 11 - Basic Control System
Figure 12 - Mecanum Wheels Kinematics
Figure 13 - Zig-Zag Method
Figure 14 - Zig-Zag Method with Obstacles
Figure 15 - Outward Spiral Method
Figure 16 - Outward Spiral Method with Obstacles
Figure 17 - Inward Spiral Method
Figure 18 - Inward Spiral Method with Obstacles
Figure 19 - Scanning Function
Figure 20 - Basic Finite State Machine
Figure 21 - Initialisation Sequence
Figure 22 - Object Detected
Figure 23 - Robot Strafing Left in Object Detected
Figure 24 - Object Detected Forwards
Figure 25 - Object Passing
Figure 26 - Object Re-adjusting
Figure 27 - Object Continuing on Path
Figure 28 - Final Finite State Machine
Figure 29 - Mapping Implementation
Figure 30 - On Day Results

1. Introduction

Simultaneous Localisation and Mapping (SLAM) is a difficult problem which is key to successful and effective autonomous robotics: it entails generating a map of an unknown environment while tracking the pose of the robot within it. The difficulty lies in the circular dependency of localisation requiring a map to localise against, and mapping requiring a known pose to accurately generate a map. If a SLAM algorithm is successfully implemented, it can be used for many autonomous robotics applications such as vacuum cleaning, generating maps of environments too small for humans, and underwater reef monitoring.

1.1 Problem Description

With the intention of autonomous vacuum cleaning, the task given was to design and create an autonomous SLAM robot using the given sensors and a chassis with mecanum wheels. The robot was to navigate around an unknown area whilst avoiding an unknown number of obstacles of unknown size. It was also required to output a map of the area and the path traversed. The success of the robot's operation would be measured by how fast the robot covered the area, how well it avoided obstacles, and its mapping and localisation capabilities.

1.2 Problem Specification

The robot was given mecanum wheels driven by VEX 2-wire 393 motors, a selection of range-finding sensors including an array of infrared sensors and a sonar, an MPU-9150, and an Arduino Mega 2560 controller board for memory and logic control. It also contained a Bluetooth module to communicate with a computer for troubleshooting and mapping.

2. Sensors and Calibration

2.1 Arduino Mega 2560

Figure 1 - Arduino Mega 2560

The Arduino Mega 2560, shown in Figure 1, is a microcontroller board with 54 digital I/O pins, 16 analogue inputs and its own 16 MHz crystal oscillator. It has 8 KB of SRAM, used for global variables, and 256 KB of flash memory. It is powered by a battery through the USB port. It proves effective for the purposes of SLAM on a small scale; if a more computationally demanding SLAM algorithm or a higher resolution map were required, it would not be sufficient.

2.2 SHARP Infrared Sensors

The infrared (IR) sensors were manufactured by SHARP: the device numbers are GP2D120XJ00F (short range), GP2Y0A21YK (medium range) and GP2Y0A02YK (long range). Because these sensors have been used for similar projects over the past five years, the factory voltage-to-distance conversions were no longer as accurate as those specified in the data sheets. Additionally, the range of the sensors has reduced significantly over time. The medium range IR sensors, shown in Figure 2, have an effective range well below the 80 cm specified in the data sheet, and the long range IR sensors, shown in Figure 3, measure up to 80 cm while the data sheet specifies a range of up to 150 cm.

Figure 2 - Medium Range Sensor
Figure 3 - Long Range Sensor

Each IR sensor had to be calibrated to gather meaningful data. This was done by measuring the output of the sensors over a variety of known distances. These points were then fitted to the curve with the best R² value in Excel to form a conversion equation for each sensor. Calibration also involved accounting for the physical positional offset of each sensor. Each sensor was calibrated separately, as it was found that there were variations between sensors of the same model. An example calibration graph for a long range sensor is shown in Figure 4, and for a medium range sensor in Figure 5. As a result of the limited range of the sensors, no short range IR sensors were used. Three long range and one medium range IR sensors were attached to the robot and wired to analogue ports.

Figure 4 - Long Range Measure versus Real Values
Figure 5 - Medium Range Measure versus Real Values
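The conversion step described above can be sketched as follows. SHARP IR response curves are commonly approximated by a fitted power law; the coefficient values and struct names here are illustrative assumptions, not the per-sensor equations produced in Excel.

```cpp
#include <cmath>

// Hypothetical fitted coefficients for one sensor: distance_cm = a * adc^b.
// The real a and b came from the Excel curve fits and differ per sensor.
struct IrCalibration {
    double a;
    double b;
    double offset_cm;  // physical positional offset of the sensor
};

// Convert a raw 10-bit ADC reading (0-1023) to a calibrated distance in cm.
// Returns -1.0 for an invalid reading.
double ir_distance_cm(const IrCalibration& cal, int adc_value) {
    if (adc_value <= 0) return -1.0;
    return cal.a * std::pow(static_cast<double>(adc_value), cal.b) + cal.offset_cm;
}
```

A larger ADC value corresponds to a stronger reflection and therefore a shorter distance, which is why the fitted exponent b is negative.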

2.3 HC-SR04 ITead Studio Sonar

The HC-SR04 sonar range finder, shown in Figure 6, is manufactured by ITead Studio and was used to detect objects in front of the robot. The sonar was the most accurate of the available sensors and has the longest range, measuring up to 4 m. The sensor takes a trigger signal to initiate a pulse, and the length of the resulting high signal on the echo pin can be used to determine the distance of an object from the sensor. The NewPing library written by Tim Eckel (Eckel, n.d.) was used to interface with the sensor. This library uses the data sheet's relationship between the time that the signal is high and distance. This relationship was very accurate and, unlike the IR sensors, did not need any additional calibration other than defining the mounting offset.

Figure 6 - Sonar Range Finder

2.4 InvenSense MPU-9150

The InvenSense MPU-9150, shown in Figure 7, provides 9 axes of fused data via a digital motion processor. These 9 axes consist of a 3-axis gyroscope, a 3-axis accelerometer and a 3-axis magnetometer. The fused data, however, could not be used because the magnetic field produced by the VEX motors interfered with the magnetometer, giving the fused orientation angle significant drift. The most accurate representation of changes in angle was experimentally found to be in the step before the data was fused, read out of the m_dmpeulerpose variable. The accelerometer data was tested for displacement measurement; however, as this required a double integration, it was found to be too inaccurate due to large noise issues and was not used. The gyroscope was used for the majority of code development in implementing turns and measuring angles. However, upon testing, it was found to have large inaccuracies and lag, so this approach was abandoned; the gyroscope was ultimately used only for counting the number of turns carried out.

Figure 7 - MPU-9150
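The pulse-width-to-distance relationship the sonar (Section 2.3) relies on can be sketched in one line. The HC-SR04 data sheet gives range in cm as the echo pulse width in microseconds divided by 58 (the round trip at roughly 340 m/s); the mounting-offset parameter here is the only addition.

```cpp
// HC-SR04 data sheet relationship: range (cm) ~= echo pulse width (us) / 58.
// The NewPing library applies essentially the same conversion internally;
// this standalone version simply subtracts the sensor's mounting offset.
double sonar_distance_cm(unsigned long echo_pulse_us, double offset_cm) {
    return echo_pulse_us / 58.0 - offset_cm;
}
```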

2.5 Sensor Placement

There were several different sensor placement concepts, each with positives and negatives; ultimately the most practical placement was chosen. The sonar was always placed in the centre facing forward, and the MPU vertically behind the pins. The mounting structure drawings are attached in Appendices I and II. The changes between concepts were driven by IR sensor usage and effective range. All sensors are attached using VEX Robotics components, which offer stable mounting and standard sizes.

The placement shown in Figure 8 was the first arrangement. The idea was to have the Front Short Range (FSR) sensor look for nearby obstacles, the Front Long Range (FLR) sensor look for walls ahead in conjunction with the sonar, the Right Short Range (RSR) sensor align to walls and assist object avoidance, and the Left Long Range (LLR) sensor predict what obstacles the robot would come across in the future.

Figure 8 - First Arrangement of Sensors

After much deliberation, it was decided that it was more important to travel in a straight path than to have probability mapping. Hence, the LLR became the Right Long Range (RLR) and the RSR became a medium range sensor; this ensured walls were seen, while the medium range sensor allowed acceptable object avoidance. This layout is shown in Figure 9.

Figure 9 - Second Arrangement of Sensors

The final arrangement, shown in Figure 10, moved the FSR to the back and made it a long range sensor as well. This was done because the original FLR saw the same obstacles as the FSR, and a long range sensor has the advantage of seeing walls as well. There were flaws in this arrangement due to the limit on the number of sensors; however, their practical impact was negligible.

Figure 10 - Final Arrangement of Sensors

3. Motion and Movement of the Robot

The robot's motion is generated through four mecanum wheels attached to DC motors, which are powered by the 3.7 V, 2000 mAh rechargeable battery. The Arduino Mega 2560 board acts as a digital signal processor and drives the motors individually. The basic control system is shown in Figure 11, with the output being the desired motion.

Figure 11 - Basic Control System

3.1 Mecanum Wheels

Figure 12 - Mecanum Wheels Kinematics

The mecanum wheels have rollers around their circumference, forming an actuator-redundant system that allows motion in 3 degrees of freedom. This allows the robot to travel straight forward, reverse, strafe in both directions and rotate clockwise and counter-clockwise. If the correct wheel speeds are set, the robot can also move in an omnidirectional manner. For simplicity, functions were written which moved the robot forwards and backwards, strafed left and right, and rotated clockwise and counter-clockwise. Each had an overload which could change the speed of the operation. The task did not require more complicated control of the robot's velocity, and the functions written provided full control of the robot's position in the three available degrees of freedom. The movement was set through inverse kinematics: the code provides the overall movement required, and this is interpreted into angular velocities that are sent to each wheel via its respective motor. The kinematics are shown in Figure 12. There are several risks with using mecanum wheels, as they slip while running and the motors are inefficient. This causes issues when relying on the wheels for odometry, which was found in mapping and in spiral control.

3.2 Speed and Aligning

As the speed of covering the area was a major part of the measure of success, it was decided early on that the speed of the robot was essential. The speed of each wheel was set to 200 mm/s, but this was found to cause serious veering off the straight path. After manual tuning, the wheel speeds were adjusted to match each other, allowing the robot to travel straight. When the robot is running, its speed can be adjusted by a multiplier if a lower speed is required for a certain purpose. It was also found through time trials that despite the wheel speed being set to 200 mm/s, the actual speed of the robot was about 150 mm/s, showing that odometry straight from the wheels would not be reliable for this project.

3.3 Rotating vs Strafing

As the robot can both strafe and rotate, the most suitable use of each had to be decided. There were two places where such decisions had to be made: corners and obstacle avoidance. For corners, due to the fixed position of the sensors, turning was decided to be the most suitable solution. For obstacle avoidance, either turning or strafing could be used. The turn-and-avoid method would require a more complex finite state machine, as two types of corner turning would be required: one for turning corners at the end of a length and one for obstacle avoidance.
An alternative method was to rotate at an obstacle to confirm that it is an obstacle and not a wall, find a clear path around it, and then strafe past. This method was judged to be slow and, with speed prioritised, was discarded. The method ultimately chosen was to strafe around obstacles. This allowed the side with the most sensors, the front, to constantly search forward for other obstacles, while the side sensors detected when the object was passed. For keeping aligned with the wall, the side sensors measured the distance from the wall; the robot then strafed to avoid crashing into it. After strafing, the robot would rotate to make sure it was parallel to the wall.
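The wheel-level mixing behind the inverse kinematics of Section 3.1 can be sketched as below. This uses one common sign convention for a mecanum base; the actual signs on a given robot depend on roller orientation and motor wiring, and the function names are illustrative.

```cpp
struct WheelSpeeds { double fl, fr, rl, rr; };  // angular speed per wheel, rad/s

// Inverse kinematics for a mecanum base (one common convention).
// vx: forward velocity (m/s), vy: leftward velocity (m/s),
// wz: counter-clockwise rotation (rad/s),
// r: wheel radius (m), k: half wheelbase length + half track width (m).
WheelSpeeds mecanum_inverse(double vx, double vy, double wz,
                            double r, double k) {
    WheelSpeeds s;
    s.fl = (vx - vy - k * wz) / r;  // front left
    s.fr = (vx + vy + k * wz) / r;  // front right
    s.rl = (vx + vy - k * wz) / r;  // rear left
    s.rr = (vx - vy + k * wz) / r;  // rear right
    return s;
}
```

Pure forward motion commands all four wheels equally, while pure rotation drives the left and right sides in opposite directions, matching the simple forward/strafe/rotate functions the report describes.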

3.4 Path Planning

There are several ways that a robot can navigate through an environment, and each has its own benefits and weaknesses. The primary techniques that were considered are described below.

Zig-Zag Method

The zig-zag method is implemented by the robot initially locating and driving to a corner of the environment. Once the robot knows that it is in a corner, it begins following the wall; upon reaching a second wall crossing its intended path, the robot turns 90°, travels its own length, and turns 90° once again. The robot is then able to traverse the length of the environment again before reaching the end wall, repeating until the entire environment is covered. This is shown in Figure 13. It avoids obstacles simply by going around them and continuing, as shown in Figure 14. Zig-zagging could also be implemented by strafing from wall to wall. This eliminates the need to rotate and keeps the robot always facing forward; however, it requires more sensors, as both sides have to be able to detect the wall. The zig-zag method is valid; however, if the robot starts on a short side of the environment, the time taken to cover the area will be very large due to the constant turning required.

Figure 13 - Zig-Zag Method
Figure 14 - Zig-Zag Method with Obstacles

Outward Spiralling

The outward spiral method finds the centre of the environment by looking in two opposite directions and finding the middle point between them, then rotating 90° and repeating to find the centre in the other axis. The robot then travels while mapping and rotates when it has ventured into an unmapped area. This is shown in Figure 15. Its obstacle avoidance method is simply to go around the obstacle and continue, as shown in Figure 16. This method is good for preventing unnecessary motion as a closed loop is formed. However, this method requires an active SLAM

algorithm, which is undesirable on an Arduino Mega, as it does not have enough memory to handle such a process.

Figure 15 - Outward Spiral Method
Figure 16 - Outward Spiral Method with Obstacles

Inward Spiralling

Inward spiralling is another method for path planning, as shown in Figure 17. It is realised by finding a wall and then following it to a corner. A variable is set which specifies the distance from the wall at which each trajectory runs parallel to it. Once this variable is set, the robot travels along a wall until it reaches a corner, where it turns 90°. The robot repeats this until it approaches the initial corner, from which point onwards the robot's distance from the wall is incremented each lap. Avoiding obstacles is the same as in the previous methods: simply avoid, then continue as though there was no interference, as shown in Figure 18. The inward spiralling method was ultimately chosen as it allows the fastest area coverage with minimal processing power. For simplicity's sake, it was decided to use the right-hand coordinate system, meaning that the robot always turns counter-clockwise.

Figure 17 - Inward Spiral Method
Figure 18 - Inward Spiral Method with Obstacles

Full SLAM

A full SLAM method would be able to begin at any point in the map, assess the area, and decide on the best path to take to map the area. This would require high processing power due to the necessity of running more powerful filtering algorithms, such as a Kalman filter or a particle filter. It would also not be

efficient in area coverage, as there would be parts of the environment that do not need to be entered to be mapped, yet the requirements of the project would require the path to cover those areas. Full SLAM is overly complicated for such a simple project; hence, it was discarded as an option for path planning.

4. Software

Once the electronics were set up and the basic logic developed, the code had to be implemented. The running code was written mostly in C with inbuilt Arduino functions, and the map display was written in C#.

4.1 Interface Library

The interface library is a header file which encapsulates all of the robot's interfacing with the outside world. The intention was to create a series of functions which would incorporate the calibration of the sensors and motors and simplify the tasks of moving the robot into simple and intuitive functions. The library sets the relative speed for the motors and holds the calibrated relationships for each sensor, as well as the input from the MPU.

4.2 General Functions

There are several functions that are used throughout the code. These are described below.

Millis

All timing is done through the function millis(). This is an inbuilt Arduino function that counts from the beginning of the program and returns the elapsed time in milliseconds. Timing is done by comparing the current output of millis() with previous values stored for individual purposes. Once the difference exceeds the acceptable limit, which is different for each state, the state runs. This allows other states, such as object avoidance, to be triggered without leaving the current functionality.

Ninety Turn

The ninety turn function is used whenever a 90° turn is required, which is at every corner and in finding the initial corner. Initially, through the use of the MPU gyroscope, this function would take the robot's initial angle and turn until the initial angle plus ninety degrees was reached. This proved inaccurate due to lag when updating the read angle and inconsistencies in readings at different battery charge levels. The MPU-based function was replaced by timing a 90° turn and providing a counter-clockwise rotation for that length of time. The final ninety turn code is attached in Appendix III.
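The time-based turn described above can be sketched as follows. The clock is injected so the pattern can run off the robot; on the Arduino, now_ms would be millis() and the loop body would command a counter-clockwise rotation. The turn duration here is an assumed placeholder, not the value tuned on the robot.

```cpp
#include <functional>

// Hypothetical duration of a 90-degree turn, tuned experimentally on the robot.
const unsigned long TURN_MS_90 = 1200;

// Rotate counter-clockwise for a fixed duration using the millis() pattern:
// record a start time and loop until the elapsed time exceeds the limit.
// Returns the number of control-loop iterations spent turning.
int ninety_turn(std::function<unsigned long()> now_ms) {
    unsigned long start = now_ms();
    int iterations = 0;
    while (now_ms() - start < TURN_MS_90) {
        // rotate_ccw();  // on the robot: left wheels backward, right forward
        ++iterations;
    }
    // stop_motors();  // on the robot: halt once the timed turn is complete
    return iterations;
}
```

The same compare-against-millis() structure underlies all the timing in the state machine, which is what lets higher-priority states such as object avoidance be checked on every pass.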

4.2.2 Scanning

Scanning is a function used to determine whether the robot is facing a wall, an obstacle or free space; the code is attached in Appendix IV. All three of the front facing sensors are used, each returning a reading which can be plotted as a point. It can be assumed that the three points will always form a triangle, and the area of this triangle can be determined. Theoretically, if the area of the triangle formed by three points is 0, then the points are collinear. In the application of the SLAM robot, if the three points are close to collinear, the robot determines that it is detecting a wall and the function returns 0. If the robot sees an obstacle, indicated by a large triangle area, the function returns 1. Otherwise, if the robot sees nothing in front of it, the function returns 2. This method of detecting a wall is quite robust, as it works even when the robot approaches the wall from a significant angle, as shown in Figure 19.

Figure 19 - Scanning Function

Get Sensor Data

At early stages of the code, each of the states would read each sensor individually and output the readings. This took up many lines and caused unnecessary repetition. To remedy the problem, the functionality was split into two functions. The first, get_sensor_data(), read values from the sensors and output them as floats. During testing it was found that the sensor readings fluctuated, so functionality was added to increase the accuracy of the readings: the final function incorporated a filter which summed successive values and averaged them. To keep the function fast, the filter length was kept low. This is displayed in Appendix V.

Display Sensor Readings

The second sensor function, added to help readability of the code, was for testing purposes. display_sensor_readings() was used to identify the outputs given from get_sensor_data().
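The collinearity test behind the scanning function can be sketched as below. The return codes follow the scheme in the report (0 = wall, 1 = obstacle, 2 = clear); the threshold values are illustrative assumptions, not the tuned on-robot figures, and the Appendix IV code may differ in detail.

```cpp
#include <cmath>

struct Point { double x, y; };  // sensor hit point in the robot frame, cm

// Classify what the three front sensors see.
// 0 = wall (points nearly collinear), 1 = obstacle (large triangle area),
// 2 = clear (all readings at or beyond maximum range).
int scan_classify(Point a, Point b, Point c, double max_range_cm) {
    if (a.y >= max_range_cm && b.y >= max_range_cm && c.y >= max_range_cm)
        return 2;  // nothing ahead within range
    // Twice the (unsigned) triangle area via the 2D cross product;
    // zero means the three points are collinear.
    double area2 = std::fabs((b.x - a.x) * (c.y - a.y) -
                             (c.x - a.x) * (b.y - a.y));
    const double kCollinearThreshold = 20.0;  // assumed tolerance, cm^2
    return (area2 < kCollinearThreshold) ? 0 : 1;
}
```

Because the test is on collinearity rather than equal distances, a flat wall approached at an angle still yields three nearly collinear points, which is what makes the method robust.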
As extra functions were added, display_sensor_readings() was also used to output certain values to the serial port, such as the difference between two sensors.

Align to Wall

After testing the robot's movement, problems with accurate path following due to drift and turning overshoot made it unable to correctly cover the given area. To correct this, the function align_to_wall(), which is attached in Appendix VI, reads the side sensors and adjusts the robot to realign itself parallel to the wall. When the robot is near the centre of the area this can be problematic for the medium range sensor, as only the long range data remains valid. Thus, if the readings show that the robot is far away from the wall, the function makes choices based only on the long range sensor.

4.3 Finite State Machine

Figure 20 - Basic Finite State Machine

To realise the robot's movements, a finite state machine was created that covers each part of the movement, with a basic model shown in Figure 20. It begins at initialisation, finds the initial corner to use, runs through the inward spiral while avoiding obstacles, and ends in a stopped state. Every time a state loops, or another state is called, the sensor outputs are read. If an object is detected, the previous state is noted and object detection is executed. This makes obstacle detection the highest priority state. The final finite state machine is shown at the end of the section in Figure 28, and the code for the state switching and the object detected interrupt is shown in Appendix VII.

Initialisation

The initialisation sequence of the robot is primarily for setup. This is when the robot enables its motors and tests the sensors and MPU to ensure valid values are being returned. It then passes to INITIAL_WALL_FIND.

Initial Wall Find

The INITIAL_WALL_FIND state is part of the initial corner finding, with the whole sequence shown in Figure 21. It begins the search for the initial corner, from which spiralling can take place. The robot moves forward from wherever it is placed, checking whether a wall is in front of it. Once a wall is detected, it passes to MOVING_TO_WALL. If it is headed into an object, the finite state machine avoids the obstacle and continues.

Figure 21 - Initialisation Sequence
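The switching pattern described above, with obstacle detection preempting every other state, can be sketched as follows. State names follow the report; the transition bodies are stubs and do not reproduce the Appendix VII code.

```cpp
// Minimal sketch of the state machine's dispatch loop: sensors are read
// before each pass, and a detected object preempts the active state while
// remembering where to resume afterwards.
enum State {
    INITIALISATION, INITIAL_WALL_FIND, MOVING_TO_WALL, ALIGNING_TO_WALL,
    MOVING_TO_CORNER, RUNNING, OBJECT_DETECTED, STOPPED
};

struct Robot {
    State state = INITIALISATION;
    State resume_state = INITIALISATION;  // state to return to after avoidance
};

void step(Robot& r, bool object_ahead, bool wall_ahead) {
    // Obstacle detection is the highest-priority transition.
    if (object_ahead && r.state != OBJECT_DETECTED && r.state != STOPPED) {
        r.resume_state = r.state;
        r.state = OBJECT_DETECTED;
        return;
    }
    switch (r.state) {
        case INITIALISATION:    r.state = INITIAL_WALL_FIND; break;
        case INITIAL_WALL_FIND: if (wall_ahead) r.state = MOVING_TO_WALL; break;
        case OBJECT_DETECTED:   if (!object_ahead) r.state = r.resume_state; break;
        default: break;  // remaining transitions omitted in this sketch
    }
}
```

Storing resume_state is what lets the avoidance sequence hand control back to whatever the robot was doing when the object appeared.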

4.3.3 Moving to Wall

The MOVING_TO_WALL state allows the robot to approach the wall slowly. The robot moves forwards while all the forward facing sensors give readings of more than 30 cm. Under this distance, the robot's movement decreases to 70% of full speed. Once any of the front sensor readings falls below a calibrated distance, in this case 18 cm, the robot stops and transitions to ALIGNING_TO_WALL.

4.3.4 Aligning to Wall

In the ALIGNING_TO_WALL state the robot orients itself so that it is facing the wall. This is done by comparing the two forward-facing IR sensors and determining which way the robot needs to rotate in order to face the wall. Once the difference between the two sensors is below 3 cm, the robot is determined to be facing the wall. The robot then rotates counter-clockwise by 90° so that it is oriented parallel to the wall. Once this turn is complete, the state changes to MOVING_TO_CORNER.

Moving to Corner

The MOVING_TO_CORNER state moves the robot along the wall until it reaches the first corner, which begins the mapping sequence using the corner as a definitive datum. In this state, the robot remains moving parallel to the length of the wall until it reaches the corner, where it turns counter-clockwise by 90° and stops. If it begins to drift away from or towards the wall, it realigns itself and strafes back to the predetermined position using the align to wall function discussed previously. Once the 90° turn is completed, the state is switched to RUNNING.

Running

The RUNNING state keeps the robot moving in a spiral motion. The robot's movement is checked every 500 ms, and it realigns itself parallel to the wall every 750 ms. The time between consecutive align to wall calls represents a trade-off between positional accuracy and speed, as the functionality available to move the robot is limited to one degree of freedom at a time.
At 750 ms intervals there was a suitable compromise where the robot could move fast enough while maintaining a sufficiently uniform distance from the wall. If the robot sees a wall ahead, it increments the corner count. Once the corner count reaches three, the next leg will have to stop earlier, so 15 cm is added to the front distance the robot stops at. Once the corner count reaches four, the spiral loop is complete, and the distance kept from the walls on the side is increased appropriately.
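The corner bookkeeping described above can be sketched as follows. The field names and the initial distances are illustrative assumptions; only the 15 cm increment and the three/four corner thresholds come from the report.

```cpp
// Sketch of the RUNNING state's corner counting: after the third corner the
// front stopping distance grows by 15 cm so the next leg ends one lane early,
// and after the fourth the side offset steps inward for the next loop.
struct SpiralState {
    int corner_count = 0;
    double front_stop_cm = 18.0;   // assumed initial stopping distance
    double side_offset_cm = 15.0;  // assumed distance held from the wall
};

void on_corner(SpiralState& s, double lap_step_cm) {
    ++s.corner_count;
    if (s.corner_count == 3) {
        s.front_stop_cm += 15.0;          // next leg must stop earlier
    } else if (s.corner_count == 4) {
        s.side_offset_cm += lap_step_cm;  // move one lane inward
        s.corner_count = 0;               // a full loop is complete
    }
}
```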

4.3.8 Object Detected

Figure 22 - Object Detected

To enter the object detected state, one of the forward sensors has to detect an object in front of the robot, as shown in Figure 22. Using the scanning function, the robot determines that the object is not a wall and that it needs to be avoided.

Figure 23 - Robot Strafing Left in Object Detected

To ensure it is, in fact, an object that has been detected, the robot continues to scan as it moves towards the anomaly. If it detects the object continuously, it approaches to a certain pre-set distance and begins strafing left, as shown in Figure 23. The time taken to strafe past the object is measured and stored in a variable called strafe_ticks for the return strafe after the object has been passed.
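The strafe_ticks bookkeeping can be sketched as below. Times are injected for testing; on the robot they would come from millis(). The struct and function names are illustrative, not the report's own code.

```cpp
// Accumulate the time spent strafing left past obstacles so that the
// READJUSTING state can later strafe right for the same total duration.
struct StrafeTracker {
    unsigned long strafe_ticks = 0;  // total ms spent strafing left
    unsigned long start_ms = 0;
    bool strafing = false;
};

void begin_strafe(StrafeTracker& t, unsigned long now_ms) {
    t.start_ms = now_ms;
    t.strafing = true;
}

void end_strafe(StrafeTracker& t, unsigned long now_ms) {
    if (t.strafing) {
        t.strafe_ticks += now_ms - t.start_ms;  // accumulates across obstacles
        t.strafing = false;
    }
}
```

Because the ticks accumulate rather than reset, passing two obstacles in one leg produces a single return strafe of the combined duration, as described in the re-adjusting state.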

4.3.9 Object Detected Forwards

Figure 24 - Object Detected Forwards

Figure 24 shows the robot in the OBJECT_DETECTED_FORWARDS state. The robot can no longer detect the object and is clear to move forwards. It moves forward while scanning ahead, allowing it to avoid any further object in its path. It identifies when it is passing the previously detected object by a sharp decrease in the side sensor's value. When this occurs, it changes state to OBJECT_PASSING.

Object Passing

Figure 25 - Object Passing

Object passing, displayed in Figure 25, is a short state in which the rear side sensor detects the object as the robot passes it. When the robot has passed the object, the sensor no longer outputs a short value and the state changes to READJUSTING.

Re-adjusting

Figure 26 - Object Re-adjusting

After the robot has passed the object, it is off the intended path. To get back onto the route, READJUSTING strafes back for the same duration the robot strafed in the object detected state, as shown in Figure 26. This means that if the robot passed two objects, strafing left twice, it will strafe right the correct total amount, getting back onto the path it was originally on. After this process is complete, the robot returns to the state it was in before the object was detected and continues on its path, as in Figure 27.

Figure 27 - Object Continuing on Path

Stopped

The STOPPED state is used to completely stop the robot. It is entered when the robot detects that its battery is low or when a fatal error occurs. It is also called when the robot has completed its intended task.

Figure 28 - Final Finite State Machine

4.4 Mapping

When deciding on possible implementations of mapping, a great difficulty was that the MPU provided unreliable readings from the accelerometer, so odometry would have to rely either on the motors or on the sensors, both of which were previously shown to be less than ideal. It was decided that the map would be a predetermined size of 50 by 50 bytes, with each grid square representing 10 cm by 10 cm. This gives a large map, able to handle a maximum area of 5 m by 5 m.
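The fixed-size grid described above can be sketched directly from those numbers: 50 x 50 one-byte cells at 10 cm resolution, covering at most 5 m x 5 m. Coordinates are assumed to be centimetres from the datum corner; the struct name and the out-of-range handling are illustrative.

```cpp
#include <cstdint>
#include <cstring>

const int GRID_SIZE = 50;     // 50 x 50 cells, one byte each
const double CELL_CM = 10.0;  // 10 cm x 10 cm per cell -> 5 m x 5 m maximum

struct OccupancyGrid {
    uint8_t cells[GRID_SIZE][GRID_SIZE];

    OccupancyGrid() { std::memset(cells, 0, sizeof(cells)); }

    // Mark the cell containing (x_cm, y_cm) as occupied (wall or obstacle).
    // Returns false and ignores the point if it lies outside the map.
    bool mark_occupied(double x_cm, double y_cm) {
        int gx = static_cast<int>(x_cm / CELL_CM);
        int gy = static_cast<int>(y_cm / CELL_CM);
        if (gx < 0 || gx >= GRID_SIZE || gy < 0 || gy >= GRID_SIZE)
            return false;
        cells[gx][gy] = 1;
        return true;
    }
};
```

Dividing the centimetre coordinates by 10 to find the cell index is the same "divided by 10 to put in the correct square" step the report describes for plotting the pose.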

26 A difficulty arose when trying to implement simple sensor odometry relying on the front facing sensors. This idea was to use the original sensor reading at the start of each length, relying mostly on the sonar. By subtracting the current sensor readings, this would give the current position. The side sensors would give the other coordinate. The problem arising was if an obstacle was in the current length the front coordinate would be off, and hence the map would be plotted wrong. The second method is relying on the wheels for odometry. The velocity for going forward and strafing was measured, as well as the variations when lower speeds were used for difficult manoeuvres. At the beginning of each length and manoeuvre, a timer began, which when multiplied by the velocity output the distance travelled. This was complemented with sensor readings for side measurements. This method worked, however, the velocities varied greatly depending on battery level. The robot was also found to not always be travelling parallel to the wall as intended, which threw off the measurements. Figure 29 - Mapping Implementation The final mapping code was implemented in C# and was a basic scan matching technique with an occupancy grid. It takes the sensor readings at each point and compares them with the previous ones to see how far it has travelled. It also maps where it sees obstacles. In Figure 29 the green square represents the initial position of the robot with the black lines representing the forward and sideways sensor readings. The sensors are used to find the angles in X and Y directions, and they are projected to set a grid square as occupied (as in wall or obstacle is present). The orange line represents the movement to the t+1 position, with the new pose shown as the 21

orange rectangle. This pose also sets the obstacle grid squares. By comparing the sensor readings of the current and previous poses, the movement is determined and the pose is charted on the grid: the pre-move x and y coordinates are added to the movement values and then divided by 10 to place the pose in the correct square. A difficulty was discovered when interfacing the Bluetooth module with C#; despite troubleshooting, this was not fixed in time for the assessment. The code did, however, work with example values in the debugger.

5. Testing and Results

The robot did not perform as intended. During the demonstration runs the initial alignment sequence was not completed accurately, resulting in a large error in the direction the robot was facing, as shown in Figure 30. Additionally, the scanning algorithm reported obstacles where in fact a wall was present. This was likely due to variations in the IR sensor outputs with respect to voltage levels and other environmental factors; the algorithm for discerning between walls and obstacles had been extensively tested and shown to work as intended. Despite this, there were instances in the demonstration where the robot did avoid obstacles and detect walls correctly for a reasonable amount of time. While testing, a video was recorded demonstrating the robot's capability to spiral inwards and detect walls. This can be found at The only changes made after that stage were replacing MPU-based turning with time-based turning and integrating obstacle avoidance.

Figure 30 - On Day Results

6. Discussion, Limitations and Improvements

6.1 IR Sensors

The infrared sensors had multiple problems throughout the project. The range of the sensors was about half the range specified on the datasheet. Additionally, some of the sensors showed large variation in their output as the robot's battery drained, so over a day of testing the calibration set at the start of the day would become increasingly meaningless. Not knowing this at the start of the project caused significant delays.

6.2 Number of Sensors

If the project had allowed more sensors, better position control and more freedom of movement would have been possible. The limited number of sensors resulted in a large blind spot in which an object or wall could not be detected: anything behind or to the left of the robot could not be seen.

6.3 Motors

The motors, although similar, did not perform identically at each speed level. The back-left motor ran slower than the rest, which was accounted for by driving that motor faster than the others. This approximation was useful; however, it did not fully account for the error. Ultimately the motors worked well enough over the small distances in this project, but over longer distances it would be advantageous to implement a control system and state machine to continuously adjust for the different motor characteristics.

6.4 Limited Movement Directions

The use of simple movement functions, which restricted the robot to one direction or rotation at a time, was helpful during the initial stages of development. As the project progressed and more accurate movement was desired, it became apparent that the full functionality was not being used. A good example of this was the align-to-wall function: a more sophisticated implementation would control each wheel individually to move the robot back into the correct alignment with the wall.
Instead, the robot would stop, strafe and then rotate to align to the wall, which takes significantly more time than a more sophisticated approach. The simplification did, however, allow for better odometry: because the robot only ever moved in exactly one direction at a time, the distance moved was directly proportional to the time taken. This relationship proved useful in the obstacle avoidance states and helped offset the changes in motor speed with respect to battery voltage.

6.5 Servo Motor Usage

The project allowed the use of a servo motor for mounting a sensor of the group's choosing, free to rotate as programmed. This option was not taken, as it was felt that all the required sides were already covered. In hindsight, however, it would have been beneficial to utilise the servo motor: it would have provided angle measurement capability and allowed one sensor to cover multiple directions, mitigating the limitations of the sensor placement. It could also have checked corners for obstructions, as it was found that, due to the sensor placement, if the robot approached a wall at too steep an angle the sensors could not identify the obstacle.

6.6 MPU for Turning

The MPU was originally used for completing 90 degree turns, but was abandoned because its slow response caused constant overshoot in the turns. It was replaced with a timed implementation of the turn. Although initial tests were positive, the varying motor speeds at different battery levels quickly made this troublesome. Unfortunately, due to time constraints, this was not fixed before the demonstration, and the presentation suffered greatly. Ideally, the MPU should have been used for turning, with input from the sensors and some form of closed-loop control to achieve an accurate 90 degree turn.

6.7 PID Control

A PID controller would theoretically have resulted in no error when turning to face and align to walls. However, the sensor data and the calibration used were not accurate enough for this to be the case. A bang-bang (hysteresis) controller was chosen over a PID controller because it would be similarly accurate and significantly quicker.

6.8 Fuzzy Logic

In writing code such as the align-to-wall function, envelopes were developed within which sensor readings were deemed acceptable. This would have been much better implemented through fuzzy logic.
This would have been done by defining input sets for the distance and angle relative to the wall. The output sets would include strafing away from or towards the wall, and the steering angle needed to correct the movement. This would have allowed a much smoother alignment, avoiding the considerable overshoot that occurred.

6.9 Active SLAM

This solution used entirely passive SLAM: mapping was not considered in the path planning algorithms or the overall code functionality. Had the mapping capabilities been completed earlier, the map could have been fed back into the path planner to create a better algorithm. This would not have demanded significantly more processing power, but would have provided a superior path in terms of both area coverage and time required.

6.10 Finite State Machine

The implemented finite state machine was simplified significantly. The obstacle avoidance states were implemented poorly, not allowing the obstacle avoidance sequence to be broken out of back to the running state unless the obstacle had been completely avoided. In the demonstration this proved to be a large issue, especially in the rare case where a wall was mistakenly classified as an obstacle. The number and function of the individual states were not mapped out from the beginning of the project; a more complete planning process would likely have resolved many of the state-switching problems that were encountered.

7. Conclusion

The autonomous SLAM robot project provided a great learning opportunity, allowing for the integration of hardware, electronics and software. The robot used a finite state machine for operation, with an inward spiral path, strafing for obstacle avoidance and an occupancy grid for mapping. Despite not functioning as intended in the final presentation, the robot had considerable potential, and with further development and more time it would have been successful in its operation.

Appendices

Appendix I    Sonar Mount Drawing
Appendix II   MPU Mount Drawing
Appendix III  Ninety Turn Code
Appendix IV   Scanning Code
Appendix V    Get Sensor Data Code
Appendix VI   Align to Wall Code
Appendix VII  Finite State Machine

Appendix I Sonar Mount Drawing

Appendix II MPU Mount Drawing

Appendix III Ninety Turn Code

Appendix IV Scanning Code

Appendix V Get Sensor Data Code

Appendix VI Align to Wall Code

Appendix VII Finite State Machine


Localization and Mapping Using NI Robotics Kit Localization and Mapping Using NI Robotics Kit Anson Dorsey (ajd53), Jeremy Fein (jdf226), Eric Gunther (ecg35) Abstract Keywords: SLAM, localization, mapping Our project attempts to perform simultaneous

More information

Fire Bird V Insect - Nex Robotics

Fire Bird V Insect - Nex Robotics Fire Bird V Insect is a small six legged robot. It has three pair of legs driven by one servo each. Robot can navigate itself using Sharp IR range sensors. It can be controlled wirelessly using ZigBee

More information

Calibration of a rotating multi-beam Lidar

Calibration of a rotating multi-beam Lidar The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract

More information

Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET

Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET After reading through the Introduction, Purpose and Principles sections of the lab manual (and skimming through the procedures), answer the following

More information

LME Software Block Quick Reference 1. Common Palette

LME Software Block Quick Reference 1. Common Palette LME Software Block Quick Reference Common Palette Move Block Use this block to set your robot to go forwards or backwards in a straight line or to turn by following a curve. Define how far your robot will

More information

1.0 The System Architecture and Design Features

1.0 The System Architecture and Design Features 1.0 The System Architecture and Design Features Figure 1. System Architecture The overall guiding design philosophy behind the Data Capture and Logging System Architecture is to have a clean design that

More information

RMCWin. WalkThrough. This document is intended for walking through RMCWin with customers over the telephone/internet.

RMCWin. WalkThrough. This document is intended for walking through RMCWin with customers over the telephone/internet. RMCWin WalkThrough This document is intended for walking through RMCWin with customers over the telephone/internet. Figure 1. Typical RMC100 and RMCWin installation. PC running RMCWin Setup and Diagnostics

More information

logic table of contents: squarebot logic subsystem 7.1 parts & assembly concepts to understand 7 subsystems interfaces 7 logic subsystem inventory 7

logic table of contents: squarebot logic subsystem 7.1 parts & assembly concepts to understand 7 subsystems interfaces 7 logic subsystem inventory 7 logic table of contents: squarebot logic subsystem 7.1 parts & assembly concepts to understand 7 subsystems interfaces 7 logic subsystem inventory 7 7 1 The Vex Micro Controller coordinates the flow of

More information

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced

More information

BCC 3D Extruded Image Shatter Filter

BCC 3D Extruded Image Shatter Filter BCC 3D Extruded Image Shatter Filter 3D Extruded Image Shatter shatters the image in 3D space and disperses the image fragments. Unlike the 3D Image Shatter filter, this filter allows you to create threedimensional

More information

Maths PoS: Year 7 HT1. Students will colour code as they work through the scheme of work. Students will learn about Number and Shape

Maths PoS: Year 7 HT1. Students will colour code as they work through the scheme of work. Students will learn about Number and Shape Maths PoS: Year 7 HT1 Students will learn about Number and Shape Number: Use positive and negative numbers in context and position them on a number line. Recall quickly multiplication facts up to 10 10

More information

Autonomous Ground Vehicle (AGV) Project

Autonomous Ground Vehicle (AGV) Project utonomous Ground Vehicle (GV) Project Demetrus Rorie Computer Science Department Texas &M University College Station, TX 77843 dmrorie@mail.ecsu.edu BSTRCT The goal of this project is to construct an autonomous

More information

Linear and Rotary Infrared Scan System for Measuring Circumference

Linear and Rotary Infrared Scan System for Measuring Circumference Linear and Rotary Infrared Scan System for Measuring Circumference M. F. Miskon*, A. M. Deraman, A.Y.B. Ahmad, M. A. Ibrahim, A. Ahmad, Z. Sani and M.H. Jamaluddin Faculty of Electrical Engineering, UTeM

More information

Grade 8: Content and Reporting Targets

Grade 8: Content and Reporting Targets Grade 8: Content and Reporting Targets Selecting Tools and Computational Strategies, Connecting, Representing, Communicating Term 1 Content Targets Term 2 Content Targets Term 3 Content Targets Number

More information

MS4SSA Robotics Module:

MS4SSA Robotics Module: Robotics Module: Programming and Sensors Kim Hollan Why Program a Robot? Building a robot teaches many valuable skills; however, the learning doesn t stop there Programming also teaches valuable life skills

More information

½ Caution! Introduction. Blind.switch 5701/1.0

½ Caution! Introduction. Blind.switch 5701/1.0 Blind.switch 5701/1.0 Introduction General information This software application enables you to program blind/switch actuators using manual mode (referred to below as actuator), control blind and roller

More information

Formations in flow fields

Formations in flow fields Formations in flow fields Rick van Meer June 11, 2015 1 Introduction Pathfinding and formations (i.e. an arrangement of agents) are both a part of artificial intelligence and can be used in games. However,

More information

Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3

Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3 Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3 Computer Science 4766/6778 Department of Computer Science Memorial University of Newfoundland January 28, 2014 COMP 4766/6778 (MUN) Kinematics of

More information

MATHEMATICS ASSESSMENT RECORD - YEAR 1

MATHEMATICS ASSESSMENT RECORD - YEAR 1 MATHEMATICS ASSESSMENT RECORD - YEAR 1 Count to and across 100, forwards and backwards, beginning with 0 or 1, or from any given number Count, read and write numbers to 100 in numerals; count in multiples

More information

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms L17. OCCUPANCY MAPS NA568 Mobile Robotics: Methods & Algorithms Today s Topic Why Occupancy Maps? Bayes Binary Filters Log-odds Occupancy Maps Inverse sensor model Learning inverse sensor model ML map

More information

Advanced Motion Solutions Using Simple Superposition Technique

Advanced Motion Solutions Using Simple Superposition Technique Advanced Motion Solutions Using Simple Superposition Technique J. Randolph Andrews Douloi Automation 740 Camden Avenue Suite B Campbell, CA 95008-4102 (408) 374-6322 Abstract A U T O M A T I O N Paper

More information

UItiMotion. Paul J. Gray, Ph.D. Manager Path Planning Front-End Design R&D

UItiMotion. Paul J. Gray, Ph.D. Manager Path Planning Front-End Design R&D UItiMotion Paul J. Gray, Ph.D. Manager Path Planning Front-End Design R&D What is UltiMotion? An entirely new software-based motion control system Wholly owned by Hurco Awarded 4 patents Superior to Hurco

More information

OBSTACLE AVOIDANCE ROBOT

OBSTACLE AVOIDANCE ROBOT e-issn 2455 1392 Volume 3 Issue 4, April 2017 pp. 85 89 Scientific Journal Impact Factor : 3.468 http://www.ijcter.com OBSTACLE AVOIDANCE ROBOT Sanjay Jaiswal 1, Saurabh Kumar Singh 2, Rahul Kumar 3 1,2,3

More information

National Curriculum 2014: Progression in Mathematics

National Curriculum 2014: Progression in Mathematics Number and Place Value Year 1 Year 2 Year 3 count to and across 100, forwards and backwards, beginning with 0 or 1, or from any given number count, read and write numbers to 100 in numerals, count in different

More information

NAVIGATION SYSTEM OF AN OUTDOOR SERVICE ROBOT WITH HYBRID LOCOMOTION SYSTEM

NAVIGATION SYSTEM OF AN OUTDOOR SERVICE ROBOT WITH HYBRID LOCOMOTION SYSTEM NAVIGATION SYSTEM OF AN OUTDOOR SERVICE ROBOT WITH HYBRID LOCOMOTION SYSTEM Jorma Selkäinaho, Aarne Halme and Janne Paanajärvi Automation Technology Laboratory, Helsinki University of Technology, Espoo,

More information

Getting Started Guide

Getting Started Guide Getting Started Guide 1860 38th St. Boulder, CO 80301 www.modrobotics.com 1. Make Your First Robot The Dimbot Uses a clear Flashlight Action block, black Distance Sense block, and a blueish-gray Battery

More information

ROBOTICS AND AUTONOMOUS SYSTEMS

ROBOTICS AND AUTONOMOUS SYSTEMS ROBOTICS AND AUTONOMOUS SYSTEMS Simon Parsons Department of Computer Science University of Liverpool LECTURE 6 PERCEPTION/ODOMETRY comp329-2013-parsons-lect06 2/43 Today We ll talk about perception and

More information

BBR Progress Report 006: Autonomous 2-D Mapping of a Building Floor

BBR Progress Report 006: Autonomous 2-D Mapping of a Building Floor BBR Progress Report 006: Autonomous 2-D Mapping of a Building Floor Andy Sayler & Constantin Berzan November 30, 2010 Abstract In the past two weeks, we implemented and tested landmark extraction based

More information