Fragile Reconnaissance Object Processing System

F.R.O.P.S.
Fragile Reconnaissance Object Processing System

Authors: Stephan Bosch, Michiel Hendriks, Rob Leemkuil (Group 3)
Ontwerpproject
Tutors: Nirvana Meratnia, Maurice van Keulen, Henk Ernst Blok

Abstract

This document describes the design of a system capable of scouting and mapping an environment using a LEGO robot. This is achieved by building a LEGO robot that can drive around freely and take spatial measurements of the environment using a distance sensor mounted on top of it in a radar-like setup. The robot has problems achieving the movement precision required for the demanded map detail. Furthermore, the movement strategy is not as efficient and thorough as required, because there was simply no more time to evolve it further. Successive projects should improve the part of the robot's construction responsible for movement precision. The movement strategy should be easy to improve to meet the requirements.

Foreword

As part of the computer science postgraduate program at the University of Twente, the course "ontwerpproject" is given. The name of the course literally means "design project", and that captures the essence of the course. At the beginning of the course, project groups of four or more students are formed. During the second trimester, the project groups work on a project assignment of their own choice. The project assignments are provided by the various computer science departments.

After the formation of our project group, the search for a project assignment began. The assignment of a LEGO robot explorer from the Database department immediately got our attention, but we were not the only group interested in this assignment. After consultation with the course tutors (the "employers" of the project), the decision was made to assign it to both project groups and have them work in parallel on the same assignment. A competition element was thereby added to the course.

The name F.R.O.P.S. stands for Fragile Reconnaissance Object Processing System. Originally the F in this name stood for "fast", because the intention was to make it a fast system. Yet, the final design was far from fast, so an alternative had to be found. In a playful mood, the F was chosen to mean "Fragile", because the LEGO construction is not very rigid. Originally the working title was ROLMOPS, but we unfortunately forgot to note what it meant.

During the project, it was decided to split the assignment into two different parts, due to personal problems of one of the original project members. He was responsible for an important part of the system under development. At present, the intention is that he will continue the project for his part in the next trimester, and we wish him good luck with the continuation of this interesting project.

Enschede, March

Project Group 3
Stephan Bosch
Michiel Hendriks
Rob Leemkuil

Contents

1. Introduction
2. Planning
3. Analysis
   3.1 Problem analysis
      3.1.1 Problem Description
      3.1.2 Requirements
      3.1.3 Functional Description
   3.2 LEGO
4. System design
   4.1 Overview
   4.2 Components
      4.2.1 Robot
      4.2.2 Control Unit
      4.2.3 GIS
   4.3 Communication
      4.3.1 RCX communication
      The internal to external control unit protocol
      The GIS to external control unit protocol
5. Robot design
   Design alternatives
      Robot position measurement
      Robot movement
      Spatial data acquisition
      Proximity detection
      Available sensors
   Final design
      The vehicle itself
      Wheel unit
      RCX control
      Transmissions
      The radar
      The bumper
      Summary
   The final robot
6. RCX Software
   The state diagram
   Framework
      Framework construction
      The Panic state
      Environment
   RCX vehicle control
      Initializing the environment
      Robot positioning
      Motor routines
      Motor and sensor initialization
      Are You There?
      Bump Sensors
      Recovery
      Shutter
   RCX sensor control
      6.5.1 Initializing the environment
      Scanning the environment
   Summary
7. External Control Unit
   RCX Communication
      The SendMessage command
      The datalog
      Power saving
   GIS Communication
   Motion Planning
      Follow the wall
      Sweeping
      The Combination
   Conclusion
8. Testing
   Robot
      The synchrodrive
      The Radar
      The bumper
      Conclusion
   RCX Software
      Framework
      Bump Recovery
      Are You There?
      Conclusion
   Control Unit
      RCX Communication
      Motion planning
      Conclusion
   Visualization
      Visual spatial data representation
      Control unit front-end
      Scanned environments
      Conclusion
9. Conclusion
10. Evaluation
Epilogue
Bibliography
A. Planning
   A.1 Week planning
   A.2 Project planning
   A.3 Tasks
B. RCX State Diagram
C. The GIS to external control unit protocol
D. Sensor Data Sheet

1. Introduction

"Building with LEGO, thus back to your childhood? Of course not, LEGO Mindstorms is a toy for scientists. Many have preceded you..."

This quote from the description of the project assignment gives a good indication of the possibilities of LEGO Mindstorms. Like the quote mentions, LEGO Mindstorms makes the construction of an advanced piece of technology like a robot possible for anyone. Many people before us have demonstrated this, for example the TEFAL (The Errorcorrecting Flag Assembly Line) project.

The project described in this document belongs to the ontwerpproject course given at the computer science department of the University of Twente. The ultimate goal of this project is to construct a system capable of exploring and mapping a dynamic environment. This system includes a robot constructed with LEGO Mindstorms and a Geographic Information System (GIS). The robot is used to explore the environment, and the environment data collected during exploration is stored in the GIS. Using the data in the GIS, the system constructs a map of the environment. The ultimate system must be able to deal with dynamic objects in the environment, for example a sleeping cat.

This document describes an initial project on the path to that ultimate goal. Therefore, the requirements for this project are much less restrictive. The dynamic properties of the environment are not yet taken into account. The fact that LEGO Mindstorms is not perfect with respect to communication and not complete with respect to the available hardware makes this project challenging. For instance, a custom sensor had to be built to acquire the necessary spatial data.

Unfortunately, during the project one of the project members stopped participating in the project. His assignment was to implement the GIS database system. Therefore this subsystem was never implemented. Only an improvised visualization was built to give the system the ability to show the desired map.
The approach to the construction of this system follows a model explained in chapter 2, which is about the project planning. The described model is a good indication of the activities performed during the project and of the contents of this report describing these activities. Following the model, an analysis of the problem description is made in chapter 3; the requirements of the system are stated there as well. After the analysis of the project assignment, a global system description is given in chapter 4, in which the different system components are identified and described. After this global system design, each of the identified components is designed and explained in more detail; the design process of each component is described in chapters 5, 6 and 7. After construction, the designed components are tested for their performance and compliance with the requirements. The various test methods, the test techniques and the results are described in chapter 8. The problems detected during the test phase are also described there, together with the solutions that are incorporated in the design.

Finally, at the end of the project, an overall conclusion on the developed system is given in chapter 9, and the project's process is evaluated in chapter 10.

2. Planning

We have chosen to approach the project in two phases (a phased approach). In the first phase a prototype is developed, which evolves into a final product in the second phase. Each phase of the project follows the linear sequential model [PRESSMAN]. The project thus consists of two iterations of the linear sequential model. Inside each phase we can distinguish the activities dictated by the linear sequential model, namely analysis, design, coding and testing.

For the project, some changes are required to the linear sequential model. In the first place, the coding activity needs to be extended with a building activity, due to the fact that the project assignment includes the construction of a robot from LEGO (Mindstorms). Another extension made to the linear sequential model is the addition of a documentation activity. In principle, documenting the project activities is implicit in the model of any professional engineering project, but for this project it is of great importance on account of recording the project activities and results for future projects in this area. The documentation activity takes place during the whole iteration or phase (and project) and is the final activity of an iteration or phase. Every other activity interacts with the documentation activity during the project.

Figure 1: Linear Sequential Model (Modified) — the activities research, analysis, design, building, coding, testing and documentation.

The extended linear sequential model used in the project is visualized in the figure above (see Figure 1).

3. Analysis

In this chapter the problem is analyzed. First the problem is stated and requirements are deduced. These requirements are explained further in the functional description, where the system functions are identified. Finally, in 3.2 the available LEGO kit is specified.

3.1 Problem analysis

In this section the problem is defined, requirements are stated and a functional description is given. Note that, although it is not implemented, the GIS is still mentioned in the analysis.

3.1.1 Problem Description

The ultimate goal of this project is to design and build a robot and a GIS (Geographic Information System) capable of mapping a dynamic environment. Dynamic means that objects can move, appear or disappear within the environment. These dynamic objects should be recognized and stored as such in the GIS database. The environment of the robot will be the floor of the room the robot is released in.

This project is the initial step towards the solution of this problem. For this project, the problem is limited to building a system which is capable of mapping an environment using a robot. Dynamic properties of the environment are not yet taken into account. The focus lies on building and testing the necessary infrastructure to perform this task. For this initial project the requirements are much less restrictive.

The robot must be implemented using LEGO Mindstorms components. Thus the robot design is constrained by the characteristics of LEGO. Yet, it is possible to extend LEGO with custom components like extra sensors.

3.1.2 Requirements

In this section, the requirements for the system are stated. First, a description of the assumptions made about the environment is given. This description is followed by the general requirements of the total system. Finally, requirements are stated for each system component/functionality. These requirements and components are explained in more detail in the functional description in the next section (see 3.1.3).

Environment
- The environment is regarded as a 2-dimensional plane. Hence, the system will not have to recognize three-dimensional properties of the environment, such as the height of objects.
- The environment is considered completely unknown: the system should not need any prior knowledge of the environment.

General

- The entire (reachable) room surface of the environment must be mapped by the system.

- The mapping of the environment by the system must be performed as quickly as possible.
- The map produced by the system must clearly show features of the environment of 5 centimeters in size or more. Anything smaller is allowed to be missed.

Robot

- This robot prototype must be built using LEGO Mindstorms. It is allowed to extend LEGO with extra sensors, for example.
- The robot must be able to move in any direction from its starting position.
- The robot must give the control software the ability to guarantee that movements are performed precisely.
- The control software must be able to position the robot with sufficient precision: the actual location of the robot should not differ too much from the position the control system has tracked. "Too much" means that the resulting map would no longer satisfy the general requirement that the map must clearly show features of the environment of 5 centimeters in size or more.
- By means of sensors, the robot must be able to perform spatial measurements. The acquired data must be accurate enough to build a two-dimensional map of the environment. Again, the general requirement that the map must show sufficient detail must be met.
- The robot must be able to do these measurements efficiently. Hence, the robot must be able to measure at a reasonable distance from the robot. It is not efficient if the robot must physically move to every possible location to map the environment. A reasonable distance is defined to be at least as far as the diameter of the robot.
- The robot must be able to detect and avoid objects/obstacles in its path, even if these were not recognized by the spatial measurement sensor(s).

Control Unit

- The control unit must be able to control the robot. It must steer the robot through the environment precisely using a well-defined strategy. This strategy must satisfy the general requirements.
- If a part of the control software exists externally to the robot control software, the two parts must be able to communicate through a reliable channel. If the communication channel breaks down, the control software must take appropriate action and notify the user.
- The controller must prevent the robot from getting stuck at all costs. Getting stuck can mean that the robot has bumped into something and cannot move anymore, or simply that it has lost its connection with the external control unit. Hence, it must be able to deal with detected obstacles and it must take appropriate action if the robot loses its connection with the external control software.
- The control unit must perform conversions on the raw spatial measurements. The backend GIS database should not be fed with raw measurements from the robot's sensor(s). Typically, the only things the GIS should get are map points or vectors.

GIS

- The GIS must represent the gathered information in a database system in such a way that it can be used to build a map for visualization.
- The GIS must be able to recognize dynamic (moving, appearing or disappearing) objects in the environment. These objects should be flagged accordingly in the database.

Visualization

- The visualization software must present the user with a correct image of the scanned environment. Any dynamic objects should be recognizable in this map and should not appear to be static in this representation of the environment.
- The visualization software must give the user the ability to view the map during the reconnaissance of the robot. This way the user can see what the system is doing and how things are progressing.

3.1.3 Functional Description

A problem can generally be solved by assigning a separate function to each part of the problem. In this section a description of each function needed to solve the problem is given. Analogous to these functions, the designed system is divided into components. The related requirements of section 3.1.2 are explained further in this section.

In the problem description (section 3.1.1) it is stated that a robot should be used for reconnaissance. Obviously, the first part of this problem is to design this robot. Secondly, a robot without control software will not do anything useful, so the design of this control software is a part of the problem as well. Furthermore, the problem description states the presence of a GIS, which stores the spatial data and provides the possibility to construct a map. And finally, the construction of that map is handled by the visualization software. This division into subsystems is depicted in Figure 2.

Figure 2: System diagram

Robot

The first part of this system is the robot itself (which means the mechanical part only). As stated in the requirements of section 3.1.2, it must be built using LEGO Mindstorms. This is a very versatile programmable LEGO construction kit that, for example, supports

controlling motors and reading sensors. This will be explained more thoroughly in the next section.

The robot must be able to drive around freely in its environment in order to scout it. A robot fixed in a single position would only be able to scan its direct surroundings, and if it could move but were limited in its movements, it could not reach the entire environment. This way, the problem of mapping the entire environment would not be solved. The robot is guided through the environment by the control software.

The robot's movements must be performed with high precision in order for the control software to track these movements accurately. This precision can be acquired either by guaranteeing that the desired movement is performed in the desired way, or by measuring the actual movement. Furthermore, it is very important that the robot moves in a consistent way. With these guarantees the control software is able to track the movements of the robot to its current position accurately. If the robot's movements are indeed performed with high precision, the tracked position should approximate the actual position of the robot. If this is not the case and the difference between the actual and tracked position is too large, the produced map could become inaccurate and highly distorted: the system might, for instance, detect an identical object at different locations while the object has not moved.

During reconnaissance, the robot must be capable of performing spatial measurements of the environment. The collected data must be such that the system can construct a map using this data. The robot must be able to look a reasonable distance ahead for reconnaissance. It is not efficient if the robot has to physically move to and touch every location in the environment in order to map it entirely.

The robot must have some facility that provides the opportunity to track its movements.
It must be possible to relate the gathered spatial data to the robot's current position.

The requirements demand that the design of the robot must guarantee that the robot cannot get stuck for frivolous reasons. Therefore it must always have the ability to detect that it has (nearly) bumped into an object, even if this object was not detected by the measurements done for reconnaissance. Hence, the robot must be equipped with some sort of sensor capable of detecting this.

Control unit

Secondly, a system component must be designed that controls the robot. It is possible to make the robot fully autonomous by putting this control unit entirely on the robot itself, but it can also be (partially) situated elsewhere. If the control unit is divided into a robot part and an external part, both parts must be able to communicate using a well-defined protocol.

The control unit must direct the robot through the environment in an efficient way: it is required to map the entire (reachable) environment as quickly as possible using a well-defined strategy, as stated in the general requirements.

To make it possible to build a map of the environment, the control software must keep track of the robot's current position. Without this information, any gathered

sensor data is useless. This requirement is met using the facilities provided by the robot described in the previous section.

Furthermore, the control unit must interpret the reconnaissance data gathered by the robot. The underlying database system should not be fed with the raw sensor data from the robot, so the control unit has to perform a conversion to spatial data (e.g. points or vectors rather than raw angles and distances).

Finally, the requirements demand that the control unit must react appropriately to bump events which it did not predict from the measurements taken. It must prevent the robot from getting stuck at all costs. Getting stuck can also mean that the robot has lost its connection with the control unit, if this is partially situated elsewhere. It is also very important that the robot does not affect the environment in a destructive or otherwise altering way: it should not tip over objects or push away moveable objects (making them falsely dynamic).

The GIS

The GIS must process and store the spatial data acquired from the environment in such a way that a map of that environment can be constructed. The GIS must be able to communicate with the control unit using a well-specified protocol. Using this protocol the GIS can acquire the spatial data from the robot control unit. If necessary, the robot control unit must be able to read data from the GIS if this is required for the movement strategy.

Visualization

The final goal of this project is to build a map of the scouted environment. The system to be built must provide the user with some facility to view this map. This facility must compile the GIS data into a correct representation of the scanned environment. To speed up results, it must be possible to view the compiled map during the robot's reconnaissance, so the user can actually see what it is doing. This task is performed by the visualization software.
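The two control-unit duties described above (tracking the robot's pose from its movements, and converting raw angle/distance measurements into map points for the GIS) can be sketched as follows. This is our own illustration with assumed names and units, not the project's actual control-unit code:

```python
import math

class PoseTracker:
    """Dead-reckoning tracker: turns incremental movement readings into a
    tracked pose, and raw radar readings into absolute map points.

    Illustrative sketch only: names, units and sensor scaling are our
    own assumptions, not the project's implementation.
    """

    def __init__(self):
        self.x = 0.0        # centimeters
        self.y = 0.0
        self.heading = 0.0  # radians, 0 = along the x-axis

    def drive(self, delta_distance, new_heading):
        """Advance the pose by one small movement step."""
        self.heading = new_heading
        # Assume the robot moved in a straight line along its heading
        # during this (small) step.
        self.x += delta_distance * math.cos(self.heading)
        self.y += delta_distance * math.sin(self.heading)

    def reading_to_map_point(self, scan_angle, distance):
        """Convert a raw radar reading (angle relative to the robot,
        measured distance) into an absolute map coordinate."""
        absolute = self.heading + scan_angle
        return (self.x + distance * math.cos(absolute),
                self.y + distance * math.sin(absolute))

tracker = PoseTracker()
tracker.drive(10.0, 0.0)                         # 10 cm straight ahead
point = tracker.reading_to_map_point(0.0, 50.0)  # object dead ahead
```

Only the resulting (x, y) points, never the underlying angles and distances, would be handed to the GIS. Accumulated error in such dead reckoning is exactly why the precision requirements above matter.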
3.2 LEGO

LEGO Mindstorms is a special LEGO construction line built around a programmable controller. It is mainly intended for educational purposes. One of the construction kits in the Mindstorms line is called the Robot Invention System (RIS). This kit contains a programmable controller (called the RCX), a pair of motors, a couple of sensors and various other normal LEGO parts. The RCX has the following features:

- The RCX unit can control three separate motors. It can control each motor separately to turn clockwise or counterclockwise at seven different power levels. These power levels determine the speed at which the motor's axle turns.
- Three separate sensors can be connected to the RCX. The sensors can draw power from the RCX if necessary; the LEGO light sensor, for instance, needs a power supply. The RCX can only read analog data (voltages) from the sensors. This value is converted internally to an 8-bit integer. Thus, unfortunately, digital sensors are only connectable using a complex digital-to-analog interface. The measured sensor values are available to the RCX programmer in the form of raw

values, percentages or sensor-specific value formats (binary for the touch sensor, for instance).

- The RCX has a display that can show sensor values or program variables. This is especially useful for debugging.
- The RCX runs its own small operating system. This OS is commonly called the firmware. This firmware can be updated or replaced if necessary. Standard LEGO firmware implementations as well as various custom firmware implementations are available. Custom firmwares usually provide additional features and support other programming languages. The default LEGO firmware can be programmed with the LEGO schematic programming tool or the NQC programming language. The custom firmware leJOS, for instance, uses a limited form of Java.
- The RCX contains 32KB of RAM, and a 16KB ROM that contains the initial firmware to boot the RCX. The 32KB RAM is used for the extended firmware (16KB) and the uploaded programs.
- The RCX has an infrared transceiver that uses the Consumer Infrared Frequency to communicate with other RCX units or with the LEGO infrared tower that can be connected to the computer. The LEGO infrared tower is available in two versions. The Robot Invention System versions 1 and 1.5 use an infrared tower with a serial connection (RS232) to the computer, and version 2.0 uses an infrared tower with USB. For the actual communication this does not matter, since both systems use serial communication internally.

The RIS comes with one RCX, two motors, two touch sensors, one light sensor and one IR tower. For this project, four RIS 1.5 kits and some additional touch and light sensors are available. Note that, because the RIS 1.5 is used, the connection from a tower to a computer is a serial (RS232) connection.
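As a small illustration of the value formats mentioned above, the conversion from a raw analog reading to the other representations might look as follows. This is a sketch of our own, assuming the 8-bit raw range described in the text; the actual conversions happen inside the RCX firmware and may scale differently per sensor type:

```python
def raw_to_percent(raw, max_raw=255):
    """Convert a raw analog sensor reading to a 0-100 percentage.

    max_raw follows the 8-bit range described in the text; the exact
    scaling in the RCX firmware may differ per sensor type.
    """
    return round(100 * raw / max_raw)

def raw_to_touch(raw, threshold=128):
    """Interpret a raw reading as a binary touch value.

    The threshold is an arbitrary illustrative choice; a pressed touch
    sensor closes a circuit and pulls the measured voltage down.
    """
    return raw < threshold

pressed = raw_to_touch(10)     # low raw reading: sensor pressed
level = raw_to_percent(128)    # roughly half of the raw range
```

The point of the sketch is simply that all three formats (raw, percentage, sensor-specific) derive from the same underlying analog reading.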

4. System design

The design is divided into several parts, analogous to the parts described in the analysis. In this chapter, the design of the system in general is described to provide an overview. The system's components and the links between these various parts of the system are explained. The communication channels between the individual parts are stated and the related protocols are designed. In subsequent chapters the design of the individual system parts is described in greater detail. These chapters refer intensively to the contents of this chapter.

First an overview of the components of the system is given, describing each component globally. After this overview, each component is described in more detail and the communication links between the individual components are explained as well. In the final paragraph the protocols for these communication links are designed.

4.1 Overview

In Figure 3 a schematic representation of the whole system is shown. Globally, the system consists of:

- The robot. This is the mobile part of the system that actually does the scouting of the environment. It is the only part of the system that interacts with the environment that has to be mapped.
- The (external) control unit. This is the software that controls the robot and processes the spatial data acquired by the robot. This spatial data is fed to the GIS component described hereafter.
- The GIS. This is a database system that stores the acquired spatial data for map construction. This part of the system will also perform filtering on the data in order to get a more accurate view of the environment.
- The visualization software. The required map of the environment is presented by this program in real time, as required in section 3.1.2. This program interacts with the user.

Figure 3: Overview of the whole system

The communication links between the various system components are also shown in Figure 3:

- The control unit gives the robot commands to drive around and acquire data. The robot gives the control unit the ability to download the acquired data. Furthermore, the robot reports error conditions to the control unit, for instance when it bumps into an object.
- The acquired data from the robot is processed by the control unit and fed to the GIS. For efficiency, the GIS will direct the control unit to steer the robot to locations in the environment that have not yet been seen by the system.
- The visualization software requests the data it needs from the GIS to build a map.

4.2 Components

In this paragraph each system component is described in more detail, including its relation to the other parts of the system. For each component design, it is evaluated how it intends to meet the requirements.

4.2.1 Robot

The composition of the robot is shown in Figure 4. Some of the more detailed features of the control system are described in the next section.

Figure 4: Detailed overview of the robot and the control unit

The robot consists of the following components:

- A vehicle part. This part of the robot gives it the ability to drive around freely in the environment using two motors, for forward/backward movement and steering respectively. To keep track of its movements, the (internal) control unit can read from two rotary sensors, again one for movement and one for steering. This is done using odometry; this position measurement technique is explained in chapter 5. Thus, this part of the robot provides the solution to the requirements that the robot should be able to move freely through the environment and that the control software must have the ability to position it precisely in the environment.
- A spatial measurement device. This robot component performs the spatial measurements required for reconnaissance. The control software will pass the acquired data to the GIS, which constructs a map with this data. This part of the robot thus addresses the spatial measurement requirement.
- A proximity detection device. This device is used by the system to detect whether the robot has unexpectedly (nearly) bumped into an object. This is necessary to meet the requirement that the robot must never get stuck. The control unit is responsible for taking appropriate action to correct the situation.

4.2.2 Control Unit

The control unit consists of an internal part and an external part. This is depicted in Figure 4. The internal control unit is situated on the robot itself and the external part is an external computer system. As depicted in Figure 4, the internal part of the robot consists of two RCX control units, which were described in paragraph 3.2. As explained in the analysis of that paragraph, an RCX control unit can control three individual motors and can connect to three sensors. This is not enough to control the designed robot, as will become clearer in the robot design of chapter 5. Therefore two individual RCX units are used. The robot could be controlled autonomously by the internal part, represented by the two RCX controllers.
However, because the LEGO Mindstorms RCX units cannot store very complex programs (see paragraph 3.2), the biggest part of the control software is situated on the external computer, earlier referred to as the external control unit. All the movement algorithms are handled by the external control unit. For this motion planning, it keeps records of the information gathered thus far. Moreover, the control unit makes sure the robot gathers enough information about the environment for the GIS to construct a map. Also, all raw sensor data is converted by the external controller into a format that the GIS can use.

Thus, the internal control software on the robot itself is only the interface between its motors and sensors and the external controller. In essence, the control software on the robot itself does not do anything on its own, except when it detects that the robot has unexpectedly bumped into something or has lost its connection with the external control unit. Then, it takes appropriate action to attempt to solve the problem. This is needed to meet the requirement that the control software ensures that the robot can never get stuck. To solve the problem, the internal controller drives the robot back to the last position before the execution of the last command.

Furthermore, it reports the failure to the external part, so appropriate actions can be taken. This is explained in further detail in chapter 6.

The vehicle part and the proximity detection device of the robot described in the previous section are controlled by the first LEGO RCX control unit, referred to as the vehicle RCX. This RCX executes movement commands for the robot and handles proximity events. The second RCX controls the reconnaissance sensor; it is referred to as the sensor RCX. On command it will perform a scan of the nearby environment of the robot, and it will present the collected data to the external control unit to be downloaded.

The RCX controllers are linked to the external computer through the LEGO infrared link described earlier in paragraph 3.2. This type of link is very much like a television remote control, so the signal has line-of-sight characteristics: if an object is in between transmitter and receiver, (potentially) nothing is received. Reflections can still cause a signal to arrive at the receiver, though. The infrared link does not go all the way to the external computer. In between is the infrared tower, which translates the serial computer signal into RCX-comprehensible infrared signals and the other way around.

The fact that two RCX controllers are used has consequences for the communication between the internal and external control unit. Because LEGO RCX controllers use an infrared broadcast channel, the two RCX units can corrupt each other's messages. A protocol is designed in paragraph 4.3 to solve this problem. The RCX controllers of the internal control unit do not communicate directly with each other, but only with the external controller. As established before, the external control unit gives the commands and the internal controllers execute them like slaves.
Therefore, there is no need for the two RCX controllers to communicate with each other.

GIS

In this section the GIS is described and its relation to the other objects is explained in more detail. The GIS does not need to be on the same system as the control unit. In fact, it can be beneficial to spread these system components over two computer systems for scalability reasons. Also, an extension to the design presented in this report could be to have multiple robots scanning an environment. In that case, multiple control units would be needed, but still only one central GIS database. Each of these control units could be situated in a separate location. To provide for this future extension, the GIS and the control unit are designed as two completely separate units, which communicate via a network connection. The GIS gives the command to start creating a map of the area. The GIS sends its commands to the control unit, which converts them into the right commands for the robot to gather the information. Unfortunately, due to the project issues explained in the introduction, the GIS is not implemented for this project. The GIS and the corresponding visualization software

are replaced by an improvised visualization program that directly displays the resulting map without intervention of the GIS. Possibly the GIS will be implemented by the person responsible in a successive project.

4.3 Communication

This paragraph describes the designed communication protocols for the communication between the system components described earlier. But first the available means of communication are evaluated.

RCX communication

As mentioned before, the internal control unit of the robot comprises two RCX controllers. These are integrated in the design of the robot explained in chapter 5. Section 3.2 explained that LEGO RCX controllers communicate with an external computer using infrared (IR) signals. For this communication LEGO provides an infrared tower that can be connected to the computer. LEGO uses the Consumer Infrared Frequency (CIR), a frequency band used for consumer products such as television remote controls, and a custom protocol to handle the communication between the RCX controller and the infrared tower. Unfortunately it is not possible to use a different infrared protocol, like IrDA: IrDA uses another frequency than CIR, and the IR communication in the RCX is fixed and cannot be changed. As described in section 3.2, the LEGO infrared tower available to this project is the serial (RS232) version. For the actual communication it does not matter which version of the tower is available, since both versions in fact use serial communication.

Multicast Communication

There are various APIs available to communicate with the RCX controller from a computer. LEGO provides standard components that can be used from the Windows operating system, the so-called GhostAPI. Some other APIs wrap the GhostAPI in a different programming language. There are also APIs that handle their own communication, like RCXJava, TclRCX, librcx (C) and LEGO::RCX (Perl). Most of these APIs provide standard functions to call most of the remote executable op-codes of the RCX.
However, all these APIs have one very important limitation: they assume the communication takes place with a single RCX controller. For this project more than one RCX controller is needed; hence an interface capable of multicast communication is needed. In addition, it is impossible to use most of the op-codes of the RCX controller. Commands that are sent by the infrared tower are received by all RCX controllers capable of communicating with that tower. When multiple RCX controllers simultaneously send data to the tower, data corruption will occur. This data corruption cannot be fixed using software. This means that all op-codes for which

the RCX controller will return data cannot be used. These op-codes include the ones controlling motors and sensors. There is one op-code that can be used to broadcast a single byte of data, called SendMessage. This op-code can be used to create a protocol on top of the LEGO infrared protocol. This protocol can be designed to support multicast communication and thereby communication with multiple RCX controllers. The SendMessage op-code has two limitations:

- Only one byte can be sent over the communication channel in a single operation;
- The message cannot have the value 0 (zero).

The reason why the message cannot have the value zero is that the software running on the RCX does not get notified when a message has been received: the internal value is just silently updated to the new value. In order to know whether a message was received, the software running on the RCX has to set the internal value to zero and check at regular intervals whether the value has changed. This reserved value does not necessarily have to be zero; the choice is arbitrary, so any value from 0 to 255 can be used for this purpose. But one value in the range from 0 to 255 is always lost. Another possibility is the use of an alternative firmware instead of the LEGO firmware. But all alternative firmwares still rely on the LEGO infrared tower, so they are forced to use the LEGO IR protocol. However, they are not forced to use the default remote op-codes of the RCX. Using a different firmware also means that the RCX must be programmed in a different way: the default LEGO byte code does not work with the alternative firmwares. The first option, using the LEGO firmware and the SendMessage op-code to build our own protocol, is chosen. The only reason why choosing another firmware could possibly be useful is that an alternative firmware can avoid the default LEGO op-codes.
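The receive-by-polling scheme described above (reserve the value 0 to mean "no message", then watch the register for changes) can be sketched as follows. The `Mailbox` interface is a hypothetical stand-in for the firmware's internal message variable, not part of any LEGO API; the polling interval is an illustrative assumption.

```java
// Sketch of polling for incoming one-byte messages, assuming a hypothetical
// Mailbox that mirrors the RCX firmware's internal message variable.
interface Mailbox {
    int read();   // current message value; 0 means "nothing received"
    void clear(); // reset the register to 0
}

class MessagePoller {
    // Block until a nonzero message value appears, then return it.
    static int waitForMessage(Mailbox box, long pollMillis) {
        box.clear(); // 0 is the reserved "no message" value
        while (true) {
            int value = box.read();
            if (value != 0) {   // a nonzero value means a message arrived
                box.clear();    // make room for the next message
                return value;
            }
            try {
                Thread.sleep(pollMillis); // check again at regular intervals
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Note that a message arriving while the register still holds an unread value is silently overwritten, which is exactly the lost-update risk the report's protocol design has to work around.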
Hence, it does not really matter what firmware is chosen, since the LEGO infrared protocol always has to be used. Fortunately, the limitations enforced by the LEGO firmware do not prevent the use of the multiple RCX controllers needed by this project's design.

Communicating using SendMessage

A way to implement a communication system using the SendMessage feature is to make all the commands, which are sent to the RCX controllers, unique. This way a message value is only accepted by one of the RCX controllers. This limits the number of commands to a total of 255. This would only be a good solution when only one-way communication is needed. Furthermore, the total number of commands is very limited and the commands cannot have additional parameters. The reason why parameters are not allowed is that a command parameter could have the same value as a command. This one-way communication is not an option for the design of the robot. The number of commands needed is limited, but two-way communication is a necessity. For example, to command the robot's reconnaissance sensor to perform spatial measurements on the environment, there has to be some kind of return value or data.

Moreover, it can be necessary to have the possibility to create commands with additional parameters. Before the design of the actual protocol this is not known, but it is never practical to limit the possibilities beforehand.

The datalog

As will be described in the design choices of a later section, a radar-like setup will be used for the spatial data acquisition. And because the sensor RCX is mounted on that radar, it will rotate along with it. Therefore, the RCX might lose its infrared link. This happens when the infrared transceiver of the RCX is no longer facing the infrared tower. That means the measurements have to be stored in the RCX and transmitted to the external computer when the scanning is complete. There are two ways to store the measurements:

- It is possible to create an array that is large enough to provide space for all the measurement data. To support this, a command must be added to the designed protocol to read these values. The size of this array can only be set at compile time.
- The other alternative is to use the datalog of the RCX. The datalog is a section of memory where the RCX can store arbitrary data. The size of the datalog can be set from within the RCX program. However, the datalog is write-only for the RCX itself; it can only store values, but it cannot read them. By contrast, the computer can read the complete datalog.

The complete datalog can easily be retrieved using the correct LEGO op-codes. The retrieval of all the measurements stored in an array, on the other hand, would require the use of many SendMessage commands, since each SendMessage command can only retrieve a single measurement value. This would take a considerable amount of time and be error prone.

Datalog problems

However, downloading a datalog from an RCX is likely to go wrong when another RCX replies to the same request with its own datalog. An RCX always sends its datalog on request when it is running. There is no way to stop this behavior using software.
To solve this issue, two alternatives are available:

1. disabling the datalog feature in the firmware of the other RCX unit;
2. disabling the communication for one RCX while downloading the datalog.

The first option requires hacking the LEGO firmware and disabling the datalog op-codes. As an alternative, another open-source firmware could be used. Altering the LEGO firmware would take too much time, and because much RCX code had already been written by the time this issue was discovered, using an alternative firmware is out of the question. The second option provides the solution. There is no way to disable the infrared communication from within the RCX software, except by turning off the RCX. Turning off the RCX is not an option since it can only be turned on by hand. So, we chose a method to block the infrared transceiver of the RCX (see section 3.2) for a while.

The solution is to install a shutter in front of the RCX transceiver that can be closed on request from the external controller. The shutter is in fact a small slide door that slides in front of the RCX transceiver and obstructs its communication capabilities. This will be explained further in a later section.

Conclusion

A two-way communication protocol using the RCX SendMessage command and the standard LEGO firmware is chosen. Moreover, it is decided to use the datalog feature, because it provides more freedom in the number of measurements and it makes the implementation much easier. However, this choice causes problems when multiple RCX units are used. This is solved by creating a controllable obstruction in front of the transceiver of the vehicle RCX, called the shutter.

The internal to external control unit protocol

The fact that we are dealing with multiple RCX controllers (see section 4.2.2) requires that the first part of the protocol contains some sort of identification. This way, the right RCX controller will know it is being addressed. This identification can be as simple as sending a start message, to signal that identification has started, followed by an identification number: the RCX ID. The RCX ID is unique for each RCX controller. When the identification is complete, commands can be sent to the RCX controller. These commands can be different for each RCX controller. Finally a finalization message has to be sent to all the RCX controllers. This returns the RCX controllers to their initial state. In summary, the protocol basically consists of three parts:

1. identification (start)
2. command cycle
3. finalization (stop)

When the identification fails for an RCX controller, it should ignore everything in the command cycle. This excludes all the RCX controllers from the communication, except the one being addressed. This way, it is ensured that the communication takes place with one specific RCX controller.
In broad outline the protocol would look something like shown in Table 1:

Computer   | RCX 1                                              | RCX 2
-----------|----------------------------------------------------|-------------------------
Start byte | Accepts                                            | Accepts
RCX ID = 1 | Sends acknowledge to computer                      | Ignores
Command    | Executes command                                   | Ignores
End byte   | Stops accepting commands; listens for a start byte | Listens for a start byte

Table 1: Broad outline of the protocol for the robot and control unit communication.
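As an illustration, the RCX-side behavior outlined in Table 1 can be modeled as a small state machine. This is only a sketch: the byte values for the start and stop messages are taken from the protocol tables in this chapter, while the acknowledge reply and the actual command execution are abstracted away.

```java
// Sketch of an RCX's view of the command-cycle protocol from Table 1.
class CommandCycle {
    static final int CC_START = 0xFF; // start of a command cycle
    static final int CC_STOP  = 0xFE; // end of a command cycle

    enum State { IDLE, AWAIT_ID, SELECTED, IGNORING }

    final int myId;              // the unique RCX ID of this controller
    State state = State.IDLE;

    CommandCycle(int myId) { this.myId = myId; }

    // Feed one received message byte; returns true if this RCX
    // should execute the message as a command.
    boolean onMessage(int msg) {
        switch (state) {
            case IDLE:
                if (msg == CC_START) state = State.AWAIT_ID; // cycle begins
                return false;
            case AWAIT_ID:
                // Selected only if the broadcast ID matches our own.
                state = (msg == myId) ? State.SELECTED : State.IGNORING;
                return false;
            case SELECTED:
                if (msg == CC_STOP) { state = State.IDLE; return false; }
                return true; // execute this command
            case IGNORING:
                if (msg == CC_STOP) state = State.IDLE; // wait out the cycle
                return false;
        }
        return false;
    }
}
```

Feeding the Table 1 sequence (start byte, ID 1, a command, end byte) to two such machines makes only the machine with ID 1 execute the command, while both return to the idle state afterwards.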

Commands

From the previously established broad outline of the protocol for the robot and control unit communication (see Table 1), a set of general protocol commands applicable to both RCX controllers can be deduced. The general control commands that can be identified are listed in Table 2.

Command     | Byte code | Description
------------|-----------|------------------------------------------------------------
CC_START    | 0xFF      | Start the command cycle (PC → RCX).
CC_STOP     | 0xFE      | Stop the command cycle (PC → RCX).
CC_ACK      | 0xFD      | Used to: acknowledge the receipt of a CC_START and the RCX ID (RCX → PC); reply on the execution of a command (RCX → PC); acknowledge after receiving a CC_AYT (PC → RCX).
CC_IDENTIFY | 0xFC      | Identify an RCX (PC → RCX). This is a poll from the control unit to check if an RCX is present.
CC_RESET    | 0xFB      | Force CC_STOP on all RCX controllers (PC → RCX).
CC_AYT      | 0xFA      | Are You There (RCX → PC). This is a poll from an RCX controller to check if there is still a communication channel with the control unit. Sending a CC_AYT should happen outside a command cycle.
CC_NACK     | 0xF9      | Signal that a command has not been executed (properly) (RCX → PC).

Table 2: General protocol commands.

The CC_AYT command is designed to be used by the vehicle RCX controller to check whether the last command, in particular a command responsible for movement of the robot, disrupted the communication between the RCX controller and the control unit. The remaining commands needed in the protocol are RCX controller specific. Recapitulating section 4.2.2, this design uses two RCX controllers, namely the vehicle RCX and the sensor RCX. The vehicle RCX controller is responsible for controlling the vehicle motors, the rotary sensors (for the vehicle motors) and the bump sensors. The sensor RCX controller is responsible for controlling the rotation motor, the sensor and the rotary sensor of the radar. The vehicle motors of the robot connected to the vehicle RCX can be divided into two groups: movement and rotation (steering).
The basic commands that can be identified for the movement of the robot are: move forward and move backward. The external control unit presented earlier must be able to tune the degree of this movement if it wants to successfully steer the robot through its environment. For electric motors it is convenient to measure the degree of movement in units of steps; therefore it would be convenient if the designed protocol supported this. Additionally, the RCX motors have a power level feature which can influence the speed of movement. To be on the safe side, the protocol should support tuning the power level of the vehicle motors that are responsible for the movement. Using this information, the protocol commands for the movement motors of the vehicle RCX controller can be deduced. These commands are listed in Table 3. The commands MC_M_STEP and MC_R_STEP need an argument that can have a value higher than 255. This cannot be sent in a single message, since a message is only one byte in length. It has to be sent in two bytes (a short). Because

a message cannot be sent with the value 0, the short has to be encoded such that neither of the two bytes can ever have the value 0. To accomplish this, the data is transformed such that it only uses 7 bits of each individual byte. The remaining first bit is always set. With a total of 14 bits, a value in the range of 0 to 16383 can be used. This is more than enough to encode the number of steps. To encode a 14-bit value into two messages, the following algorithm is used:

Msg1 = (byte) ((Short & 0x7F) | 0x80);
Msg2 = (byte) (((Short >> 7) & 0x7F) | 0x80);

To decode the two messages back into a short, the following formula is used:

Short = (Msg1 & 0x7F) | ((Msg2 & 0x7F) << 7);

Command       | Byte code | Description
--------------|-----------|------------------------------------------------------------
MC_M_FORWARD  | 0x11      | Move forward one step (PC → RCX).
MC_M_BACKWARD | 0x12      | Move backward one step (PC → RCX).
MC_M_STEP     | 0x13      | Set the degree of movement in units of steps. The next two messages contain the step size (a short) (PC → RCX).
MC_M_POWER    | 0x14      | The next message contains the power level for the movement motors (PC → RCX).

Table 3: Protocol commands for the movement of the vehicle motors.

The basic commands to identify for the rotation of the vehicle are: rotate left and rotate right. The vehicle motors responsible for the rotation also need support in the protocol for tuning the degree of rotation and the power level, just like the movement motors. Using this information, the protocol commands for the rotation motors of the vehicle RCX controller can be deduced. These are shown in Table 4.

Command    | Byte code | Description
-----------|-----------|------------------------------------------------------------
MC_R_LEFT  | 0x01      | Rotate left one step (PC → RCX).
MC_R_RIGHT | 0x02      | Rotate right one step (PC → RCX).
MC_R_STEP  | 0x03      | The next two messages contain the step size (a short) (PC → RCX).
MC_R_POWER | 0x04      | The next message contains the power level for the rotation motors (PC → RCX).

Table 4: Protocol commands for the rotation of the vehicle motors.

The commands for the bump sensors are self-explanatory (see Table 5).
Command     | Byte code | Description
------------|-----------|------------------------------------------------------------
MC_S_BLOCK1 | 0x21      | Request the status of touch sensor 1 (PC → RCX).
MC_LOCKIR   | 0x41      | Lock the IR receiver of the RCX controller to prevent receiving IR messages (PC → RCX). After a small time period the lock should be removed automatically. This closes the shutter described earlier.

Table 5: Protocol commands for the bump sensors.
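Based on the encoding scheme described above (7 data bits per message, top bit always set so that neither message byte can be 0), an encode/decode pair can be sketched as:

```java
// Sketch of the two-message step-size encoding: a 14-bit value split
// into two bytes of 7 data bits each, with the top bit of each byte set
// so that neither byte can equal 0, the reserved "no message" value.
class StepEncoding {
    static int[] encode(int steps) {
        if (steps < 0 || steps > 0x3FFF)
            throw new IllegalArgumentException("steps must fit in 14 bits");
        int msg1 = (steps & 0x7F) | 0x80;        // low 7 bits, top bit set
        int msg2 = ((steps >> 7) & 0x7F) | 0x80; // high 7 bits, top bit set
        return new int[] { msg1, msg2 };
    }

    // Strip the marker bits and reassemble the 14-bit value.
    static int decode(int msg1, int msg2) {
        return (msg1 & 0x7F) | ((msg2 & 0x7F) << 7);
    }
}
```

For example, a step count of 1000 encodes to the messages 232 and 135, both nonzero, and decodes back to 1000.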

Because the datalog described earlier is used to transfer the spatial data, and this datalog is downloaded without using the command cycle, the robot must be capable of scanning the environment autonomously. Therefore, the protocol for the sensor RCX controller only needs extra support for a command to start a scan of the environment and for measurement status queries from the external control unit. These status queries are necessary to give the external control unit the possibility to find out when to retrieve the spatial data from the datalog. To perform a scan, the internal controller fully controls the rotation motor and rotary sensor, without interference from the external controller. Therefore these components are outside the scope of this protocol. Further, for debugging or other purposes, a protocol command requesting the current value of the sensor could be useful. Including this command, the protocol commands for the sensor RCX controller are listed in Table 6:

Command  | Byte code | Description
---------|-----------|------------------------------------------------------------
SC_SCAN  | 0x01      | Start a (complete) scan of the robot's environment (PC → RCX). This clears the current datalog and fills it with the sensor values during the scanning process.
SC_READY | 0x02      | Query to check whether the scan has been completed (PC → RCX).
SC_ACK   | 0x03      | The scan is ready (RCX → PC).
SC_NACK  | 0x04      | The scan is not ready (RCX → PC).
SC_READ  | 0x05      | Request to return the current sensor value (PC → RCX).

Table 6: Protocol commands for the sensor RCX controller.

Command Cycle

Previously, the protocol for the communication between the internal and external control unit was described globally. A more detailed description of the protocol is given in this section; in particular, the command cycle is described. The communication between the RCX controllers and the control unit consists of a finite number of command cycles.
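As an aside, the sensor-RCX commands of Table 6 suggest the following external-controller sequence: start the scan, poll with SC_READY until SC_ACK arrives, then download the datalog. The `RcxLink` interface below is a hypothetical stand-in for the infrared-tower link, and the polling interval is an assumption; neither is prescribed by the report.

```java
// Sketch of the PC-side scan sequence, using the Table 6 commands and a
// hypothetical RcxLink abstraction over the infrared tower.
interface RcxLink {
    void send(int msg);            // broadcast one message byte
    int receive();                 // wait for the next reply message
    int[] downloadDatalog();       // retrieve all logged measurements
}

class ScanController {
    static final int SC_SCAN  = 0x01;
    static final int SC_READY = 0x02;
    static final int SC_ACK   = 0x03;

    static int[] performScan(RcxLink link, long pollMillis) {
        link.send(SC_SCAN);                 // clears the datalog, starts the scan
        while (true) {
            try {
                Thread.sleep(pollMillis);   // give the autonomous scan some time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            link.send(SC_READY);            // ask whether the scan has completed
            if (link.receive() == SC_ACK)   // SC_NACK means: not done, keep polling
                break;
        }
        return link.downloadDatalog();      // only now is the datalog complete
    }
}
```

Polling after the scan, rather than during it, matches the report's observation that the rotating sensor RCX may temporarily lose its infrared link while scanning.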
A command cycle is basically a group of commands executed sequentially on one specific RCX controller. A command cycle is started with a CC_START command sent by the external control unit to the RCX controller. The RCX controller then waits for the RCX ID sent by the external control unit. If the sent RCX ID equals the RCX ID assigned to the RCX controller, the RCX controller knows it is selected for the execution of commands. Otherwise, the RCX controller becomes idle until it receives the CC_STOP command. This command marks the end of a command cycle. If an RCX controller is selected, it sends a confirmation to the control unit in the form of a CC_ACK command. The RCX controller is then ready to receive and execute commands. Where applicable, it will return the optional reply values. An instance of the command cycle is visualized in the diagram below (see Figure 5).

Figure 5: Visualization of a command cycle.

Each command cycle with an RCX can be terminated by the external control unit at any moment by sending the CC_RESET command to the RCX controllers. An RCX controller that is busy processing a command is an exception to this rule: after finishing the command, the RCX will handle the CC_RESET and then terminate the current command cycle. The control unit can take this into account by waiting to send the CC_RESET command until it has received the acknowledgement of the executed command in the form of a CC_ACK.

The GIS to external control unit protocol

In our design the control unit and the GIS are completely separated, which means that they can run on different machines. But they have to be connected to each other; this can be done with a network connection. An easy way to implement this is to use a TCP/IP socket with a basic protocol. The GIS gives the command to start scanning the area, which means the control unit has to listen for commands given by the GIS. The GIS will send requests to which the control unit will respond. The control unit will never initiate the communication. The protocol used has to be simple but must be able to contain everything we need. Java provides classes to send complete Java objects over a socket, the object streams. Using object streams allows us to transfer data without transformation. The protocol consists of a command and, depending on the command, a number of arguments. The command is a String object. The arguments are other Java objects; which objects are used depends on the command executed. Each command has a different response. There is one general response by the control unit: the error response. It is sent when a request could not be fulfilled. Otherwise a response is sent with the same value for the command as the request had. Requests are sent by the GIS; responses are sent by the control unit.
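The request format described above (a command String followed by command-dependent argument objects, written with Java's object streams) can be sketched as follows. A byte-array stream stands in here for the TCP socket, and the command name "SCAN_AREA" is only an illustrative assumption, not a name defined by the report.

```java
import java.io.*;

// Sketch of serializing a GIS request (command String + argument objects)
// with Java object streams; a ByteArray stream replaces the TCP socket.
class GisProtocolSketch {
    static byte[] writeRequest(String command, Serializable... args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(command); // the command itself
            out.writeObject(args);    // its arguments, if any
        }
        return buf.toByteArray();
    }

    static Object[] readRequest(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            String command = (String) in.readObject();
            Serializable[] args = (Serializable[]) in.readObject();
            return new Object[] { command, args };
        }
    }
}
```

In the real design the same `ObjectOutputStream`/`ObjectInputStream` pair would be attached to the socket's output and input streams, so requests and responses travel as whole Java objects without any manual encoding.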
The overview of the defined protocol can be found in appendix C.

5. Robot design

In this chapter, the robot's mechanical design is explained. For this purpose, first all the various considered design alternatives are evaluated in paragraph 5.1. These evaluations focus on whether the alternatives can be implemented using LEGO without too much effort. After the discussion of the alternatives, the final design of the robot is explained in paragraph 5.2. The choices made during the design process are discussed and potential problems are stated. At the end of this chapter a short summary is given, together with a picture of the final robot design.

5.1 Design alternatives

As described before, the goal of this project is to build a system that controls a robot to scout a certain area. The system must be able to construct a map from the data the robot provides. This paragraph discusses the various design alternatives that could provide a solution to the problem of building a robot that meets the requirements. The following requirements are discussed in this chapter:

- As stated in the requirements, the system must have the ability to position the robot with reasonable precision. This can be solved by measuring the actual position or by tracking the robot's movements. To meet this requirement, various position measurement and tracking techniques using various sensors are available. These are described in section 5.1.1.
- The requirements state that the robot must have the ability to move around freely: it must have the means to move from its starting position to any location in the area, if possible. The various types of steering mechanisms are evaluated in relation to the position tracking techniques that might be used. These considerations are described in a later section.
- The robot must have the ability to collect spatial data about the environment with which the system can build the desired map. Possible techniques to perform these measurements are evaluated in a later section.
- The robot must always avoid objects, even if the spatial measurements did not detect them. Therefore some sort of proximity detection system must be designed. The various alternatives for how this can be achieved are described in a later section.

Where sensors are mentioned in these sections, they are not described in detail; only the quantity the sensor measures is described. The various sensors available for these measurements are reviewed in section 5.1.5. It is required that the robot is built using LEGO Mindstorms. So, for each alternative the possibilities with LEGO are evaluated as well. If an alternative is too hard to integrate with LEGO, it will simply not be useful for this project.

5.1.1 Robot position measurement

There are roughly two distinct groups of mobile robot position measurement techniques: relative and absolute position measurement. The relative measurement techniques, also called dead reckoning, use the robot's previous position and data about the robot's movements to calculate the robot's current position. In fact, the robot's movements are tracked to keep track of its position. In contrast, the absolute techniques potentially need no information about the robot's previous position. With those techniques the robot can always determine its current location using an external reference. In this section, the most common relative and absolute position measurement techniques are explained. Many of these techniques can be combined into a definitive solution. These techniques are described in more detail in [BORENST]. Possible sensors that can be used for these techniques can be found in [BOULETTE], [GASPERI] and [PHILO].

Odometry

Odometry is a relative position measurement technique and as such it uses the robot's previous position and data about its movements to calculate its current location. For odometry, the information collected about the robot's movements is typically the covered distance and the direction of the movement. This information is directly related to the revolutions the wheels and the motor axles make, so this must somehow be measured; this will be discussed later on. Using simple difference equations, the current location can be calculated. However, if the data acquired about the robot's movements contains errors, these errors accumulate. This happens because this technique uses the previous position as a reference, and any error in this position carries over into the result. So, with each position measurement the measured position becomes less accurate. Especially if the measurement of the direction of the movement is inaccurate, the results can be very poor. There are several ways to minimize these errors:
- The data about the robot's movements must be collected with the highest precision possible. To make this possible the vehicle must move in a consistent way.
- The robot must be calibrated to eliminate any systematic errors in the position measurement. Systematic errors in the measurements are typically caused by imperfections in the construction of the robot, like wheel diameter variations. Systematic errors do not occur by accident and they are likely to cause the same error in all measurements. So, by calibrating the measurements, the results of systematic errors can be nullified.
- The effect of non-systematic errors, like bumps or slipping wheels, should be minimized. It is not easy to detect these errors, because they occur in a random way. Yet there is a method available to do this.

In this solution the vehicle comprises two or more individual interconnected sub-vehicles that each try to do odometry to determine their position. In this setup, differences between the position measurements of the different sub-vehicles are an indication of non-systematic errors. Using this data, a better approximation of the current position can be made. Needless to say, this solution is quite cumbersome. In summary, the major advantage of odometry is that it is fast and inexpensive. It uses no external references, so the vehicle is completely autonomous. It is reliable over short intervals, but unfortunately it is quite unreliable after long periods of time without any external reference. So, usually odometry is combined with techniques that use some sort of reference to correct the errors. Odometry then becomes a supplement to bridge the gaps between the position measurements of the other technique. Most techniques using external references are not very fast, or it is not practical to do these measurements frequently. Odometry works very fast, so it can be used to make the actual position available at all times. The biggest problem faced when using LEGO to build the robot will be the movement precision. To introduce as few errors in the odometric measurement as possible, the robot must move in a consistent way, without a lot of wheel slippage for instance. If no steering commands are given to the vehicle, it should drive in a straight line without veering off in some other direction. LEGO is not designed for this precision and such errors are likely to occur: most LEGO parts are made of plastic, which gives beams the ability to bend and axles the ability to twist. But if the design of the vehicle is sound, the number of errors can be limited. This will be discussed in a later section. Furthermore, the odometry technique needs some means of measuring (or knowing) the number of revolutions an axle or wheel makes.
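As an illustration of the difference equations mentioned in this section, a minimal dead-reckoning update for a differential-drive vehicle might look like this. The wheel-base parameter and the midpoint-heading approximation are assumptions for the sketch, not values or methods taken from the report.

```java
// Sketch of odometric dead reckoning for a differential-drive vehicle.
class Odometry {
    double x, y, heading; // current pose estimate (heading in radians)

    // dLeft/dRight: wheel travel since the last update, e.g. derived
    // from rotary sensor counts; wheelBase: distance between the wheels.
    void update(double dLeft, double dRight, double wheelBase) {
        double d = (dLeft + dRight) / 2.0;            // travel of the vehicle center
        double dTheta = (dRight - dLeft) / wheelBase; // change in heading
        // Advance along the average heading of this small movement.
        // Note how any error made here carries over into every later
        // estimate — the accumulation problem described in the text.
        x += d * Math.cos(heading + dTheta / 2.0);
        y += d * Math.sin(heading + dTheta / 2.0);
        heading += dTheta;
    }
}
```

Calling `update` once per sensor reading keeps a running pose estimate; the smaller the interval between updates, the better the straight-segment approximation holds.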
Measuring these revolutions is usually done using step motors or rotary sensors. Step motors are motors that can rotate in steps of a known fixed angle. They are particularly useful for robotics. Yet, step motors are not very straightforward to control and as such are not suitable for use with LEGO. In the past LEGO offered a standard rotary sensor, which made it possible to distinguish sixteen rotary steps on an axle. With a suitable transmission, any number of steps in a single rotation could be recognized (provided that the rotation speed stayed within certain limits). Unfortunately, LEGO has ceased to produce these sensors, so a custom one has to be designed. This is explained in the sensor section (section 5.1.5) of this chapter.

Inertial Navigation

Inertial navigation is, much like odometry, a relative position measurement technique. Only this technique does not use measurements of revolutions of wheels or motor axles to determine the movements; instead it measures the speed or acceleration of the vehicle using special sensors (like accelerometers or gyroscopes). These values are integrated (twice for acceleration and once for speed values) over time to obtain the distance traveled. Inertial navigation has similar problems to odometry. With odometry, the errors are accumulated with every measurement. And with inertial navigation, the errors are

accumulated during integration. These errors can be handled in a similar way as for odometry. So, inertial navigation is much like odometry, only more complex, and the sensors used are likely to be much more expensive (gyroscopes are not something you buy in the electronics shop next door). Furthermore, the same rules for reliability as for odometry apply. Technically, acceleration and speed sensors can be used quite well with the LEGO RCX. The sensor reading can be interfaced to the RCX by converting the measured value to an analog voltage which the RCX can use. Unfortunately there are no clear examples of how to build a working acceleration or speed sensor for LEGO. They can be purchased, though.

Magnetic compasses

The most influential measurement errors of the dead reckoning techniques described above are errors in the direction measurement. Especially if the distance traveled in that direction is long, the difference between the real and the measured position will be quite big, even with small measurement errors. This problem can be solved by using some sort of course/direction sensor. The solution that has been used for direction measurement for centuries is the magnetic compass. For dead reckoning, the differential direction (the direction change) is good enough, so there is no need to know where north actually is. These days there is a wide variety of sensors available that use the influence of the Earth's magnetic field to determine the orientation of the sensor. Each type uses a different physical effect related to the Earth's magnetic field, and as such the precision and the cost vary, but they can all be used for the same task. Unfortunately, the sensor will not only be influenced by the horizontal component of the Earth's magnetic field (the one that points north), but also by the vertical component.
This unwanted measurement is eliminated if the sensor is held in a level attitude, but if the vehicle moves over uneven terrain, this will not always be the case. To solve this, one could mount the sensor in a special way, so that gravity always keeps it in a level attitude. Furthermore, the effectiveness of magnetic compasses can be compromised in buildings with a lot of electrical wiring. This can seriously distort the earth's magnetic field, and as such the measured direction changes might not be accurate enough to enhance the dead reckoning approach. So, generally speaking, magnetic compass sensors would provide a useful and potentially inexpensive addition to the dead reckoning technique, but they might prove to be less useful when used indoors. Quite a few people have built magnetic compass sensors for LEGO. Most circuit schematics are complex and need some sort of programmed CPU to work. Furthermore, most LEGO compass sensor solutions require two sensor inputs of the RCX, which can be a big problem.
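To make the differential-direction idea concrete, here is a minimal sketch (the function name and the degree convention are our own, not part of the design) of how two compass readings could be turned into a direction change, taking care of the wrap-around of the compass scale:

```python
# Hedged sketch: for dead reckoning only the change in heading matters,
# so the absolute direction of north is unnecessary. The wrap-around of
# the compass scale must be handled when differencing two readings.
def heading_change_deg(previous_deg, current_deg):
    delta = (current_deg - previous_deg) % 360.0
    # Map the result into (-180, 180]: a small turn across north should
    # come out as a small change, not a change of almost 360 degrees.
    return delta - 360.0 if delta > 180.0 else delta
```

For example, going from 350 to 10 degrees is a change of +20 degrees, not -340.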

Active beacons

There are also position measurement techniques that use beacons at fixed and known locations to determine the position of the vehicle. These methods are commonly used not only for robots, but also for military and civilian purposes like ship or airplane navigation. Probably the most commonly known technique is the Global Positioning System (GPS), which uses satellites as beacons. There are various position measurement techniques that use active beacons. The most common are explained here.

Trilateration

With trilateration, the distance to the active beacons is measured. With those distances, the location of the vehicle can easily be determined. The distance to the beacons is usually determined by measuring the time-of-flight of a signal sent from a beacon to the vehicle receiver. This can also be done the other way around, with the transmitter on the vehicle and receivers on the beacons. GPS is an example of trilateration. Trilateration systems like GPS let the beacons send the value of their internal clock at regular intervals. The clocks of the individual beacons are synchronized with all the other beacons. So, the receiver can calculate the differences in time of flight, and with this information and the locations of the beacons it can calculate its position.

Triangulation

In contrast to trilateration, triangulation uses the angle at which the beacon signal is received to calculate the position of the vehicle. Usually the receiver is mounted rotating on top of the vehicle and is constructed in such a way that it can only receive signals that originate in a straight line in front of it. So if during the rotation the receiver picks up a beacon signal, the angle to that beacon is known. At least three beacons must be visible for triangulation to work. The problem of reliably finding the location of the vehicle using these angles is not a trivial one. There are quite a few algorithms available, each with its own weaknesses.
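As an illustration of one simple variant, the sketch below (our own construction, not one of the published algorithms) intersects the bearing lines toward two beacons, under the simplifying assumption that the vehicle's absolute heading is known, so the measured angles are absolute bearings:

```python
import math

# Position from absolute bearings to two known beacons: each bearing
# defines a line through a beacon, and the vehicle sits where the two
# lines intersect. The vehicle lies "behind" each beacon along the
# bearing direction, so the lines are parameterized backwards.
def position_from_bearings(b1, theta1, b2, theta2):
    c1, s1 = math.cos(theta1), math.sin(theta1)
    c2, s2 = math.cos(theta2), math.sin(theta2)
    det = s1 * c2 - c1 * s2
    if abs(det) < 1e-12:
        return None  # bearings (anti)parallel: no unique intersection
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    t1 = (dx * s2 - dy * c2) / det
    return (b1[0] - t1 * c1, b1[1] - t1 * s1)
```

Real triangulation must also recover the unknown vehicle orientation, which is why at least three beacons and more elaborate algorithms are needed.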
Results can be improved by combining these methods, but the calculation can become quite time-consuming. This technique can be implemented with LEGO by using multiple infra-red towers as beacons and mounting one RCX rotating on top of the vehicle. Every time contact is established with one of the beacons, its angle with the vehicle is noted. This information can then be used for the calculation (or estimation) of the current position. Unfortunately, infra-red reflections from walls and other obstacles can severely compromise this solution.

Landmark navigation

An alternative to using beacons is the use of landmarks. Landmarks are distinct features of, or objects in, the landscape that the robot can easily recognize with its sensory input. These landmarks can be predefined, or the robot can choose landmarks on the fly during the reconnaissance. These landmarks must be carefully chosen: they must be easy to recognize and somehow be distinguishable from the environment. Once landmarks are stored in the robot, the task of finding its position is reduced to reliably recognizing the landmarks and calculating the position using this data.

Usually, this technique is used in conjunction with odometry. The robot is set loose at a certain position and from there the robot tries to find useful landmarks and stores them for future reference. If the odometry is precise enough within the range where the landmarks are to be found, the exact location of the landmarks can be approximated quite well. This is very important, since the robot will rely on them in the future. A nasty problem with landmarks is that they might move or disappear. In a real environment, objects can change place and orientation. The requirements state that this will be possible and that the robot must handle it well. But, initially, the robot has no way of telling which objects can or will move, so the chosen landmarks cannot always be trusted. There are two types of landmarks that can be used for this technique: natural and artificial landmarks. Natural landmarks are objects or features that are already in the environment and have not been designed specifically for robot navigation. In contrast, artificial landmarks are objects or features that are placed in the environment specifically for robot navigation. Natural landmarks are usually much harder to recognize, because they were simply not put there for this purpose. Artificial landmarks are designed to be distinguishable from other objects. Note that landmark navigation is not always implemented using some sort of robot vision: artificial landmarks can also be designed for specific sensors. One could imagine a robot with a rotating laser on top of it that reads barcodes placed on the walls. It is clear that artificial landmarks require a lot less processing time and less complex algorithms to recognize. Furthermore, artificial landmarks can bear additional information: the shape or some other feature (like the barcode) can give the robot other information.
It can even be used to tell the robot the exact location of the landmark without using odometry (although this would not be a very easy implementation). In comparison to the active beacons method, this technique will require much more processing time. But this technique is potentially much more flexible and, using natural landmarks, it does not need any modifications to the environment the robot is released in. This can be very important if the robot must have the ability to scout an area without any artificial external reference. Another disadvantage of landmark navigation in comparison to active beacons is that it usually is a lot less accurate. The robot must be very near the recognized object to determine its own location accurately. Since natural landmark navigation does not need any special extra hardware besides the spatial measurement sensor, a robot can instantly be used for this technique, if it is at least capable of doing odometry and spatial measurements. Yet, the software controlling the robot and implementing this technique will become very complex. Artificial landmark navigation will not be useful for this project, since the requirements clearly state that the robot must scout an unknown environment. So, no prior modifications to the environment are allowed.
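The core of recognizing a stored landmark again can be sketched as a nearest-match search. The function name and the tolerance below are our own hypothetical choices, not part of the actual design:

```python
import math

# Match an observed landmark position (estimated via odometry) against a
# list of stored landmark positions. The nearest stored landmark within a
# tolerance is accepted; otherwise the observation is treated as unknown.
MATCH_TOLERANCE = 10.0  # assumed acceptance radius, e.g. in centimeters

def match_landmark(observed, stored):
    best, best_d = None, MATCH_TOLERANCE
    for idx, (lx, ly) in enumerate(stored):
        d = math.hypot(observed[0] - lx, observed[1] - ly)
        if d < best_d:
            best, best_d = idx, d
    return best  # index of the matched landmark, or None
```

The tolerance reflects the odometry error: the larger the accumulated error, the larger the radius must be, and the greater the risk of matching the wrong landmark.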

Map based positioning

A technique that is very similar to landmark navigation is map based positioning. With this technique, the position of the robot is determined by matching sensory data to a map stored in the robot's memory. This map can either be available before the reconnaissance, or it can be constructed on the fly during the reconnaissance. For this project, of course, only the second alternative is of interest; it is required that the environment is unknown. Like the landmark navigation technique, map based positioning does not require any special hardware, but it does require a very complex controlling algorithm.

Conclusion

Odometry is a fast and inexpensive position measurement technique. Most other position measurement techniques described here are too complex to be implemented with LEGO for this project. In a successive project, landmark or map based positioning could be implemented. But for this project, only odometry is chosen to be used for position measurement. So, the robot will have to perform and measure its movements precisely in order to measure its position with any precision at all. Because the robot uses no external reference for the position measurement, it will eventually lose track of its actual position. The map will get distorted this way. To postpone this event as far into the future as possible, the odometry errors will have to be minimized. To accomplish this, a LEGO vehicle has to be designed that moves in a consistent way and that can measure its movements with high precision. The alternatives to achieve this goal are explained in the next section.

Robot movement

There are quite a number of different ways to give a vehicle like a robot the ability to move around on the floor freely. But wheels (or tracks) are by far the most straightforward and simple way, especially when precise positioning (odometry) is an issue. So, this design will use wheels, and other options are not discussed any further.
The problem of moving a wheeled robot around can be quite an easy one if movement precision is not an issue. But, as described in the previous section, the odometry technique is used for position measurement. Therefore the movement precision will be very important. To accomplish this precision, the robot must have some means to measure its movements. A vehicle's movement can be described with two different parameters: the direction and the covered distance. So, for keeping track of its current position, the system must know both of these movement parameters, either by guaranteeing that the movement commands issued by the system are performed in an exact way or by measuring the actual movement. Each of these movement parameters is measurable separately. The following sections describe how the mechanical construction of the robot can support these movement measurements and how an acceptable movement precision can be accomplished.
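The essence of the odometry calculation with these two parameters can be sketched as follows (all names and the wheel size are illustrative assumptions, not values from this design):

```python
import math

# Dead-reckoning position update: wheel revolutions (from a rotary
# sensor) give the covered distance, which is projected along the
# current movement direction. Errors in either parameter accumulate
# with every update, as described above.
WHEEL_DIAMETER_CM = 5.0  # assumed wheel diameter

def update_position(x, y, revolutions, direction_rad):
    distance = revolutions * math.pi * WHEEL_DIAMETER_CM
    return (x + distance * math.cos(direction_rad),
            y + distance * math.sin(direction_rad))
```

This also shows why a direction error is so damaging: it is multiplied by the covered distance in every update.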

Covered distance

On a wheeled vehicle, the number of revolutions one of its wheels makes is a measure of the distance covered by the vehicle, if the diameter of that wheel is known. So, finding the covered distance of the vehicle's movement would seem to be a very simple problem to solve. Yet, LEGO Mindstorms does not provide this much control over the motors. Therefore, the robot must measure its actual movements. To accomplish this, the number of revolutions on the wheel axles must be measured somehow. This problem will be discussed later, when the rotary sensors are described.

Direction control (steering)

All vehicles that can move around freely have some sort of steering mechanism to change direction. There are multiple common designs for such a steering mechanism. Two general types of steering mechanisms can be distinguished. First of all, and most commonly, a direction change on a wheeled vehicle can be accomplished by changing the direction of the wheels. Another way of changing the course of a mobile vehicle is by driving its wheels separately at different speeds. These alternatives are evaluated in the following sections. First, a common steering mechanism most people will understand is described: the one found on most cars. After that, less common steering mechanisms are explained.

Car design

A common example of a steering mechanism that uses wheel direction changes to steer is the steering mechanism found on a car. It has two wheels fixed on an axle at the rear and two movable wheels at the front. The wheels at the front move (semi-)parallel. If the front wheels are in alignment with the rear wheels, the vehicle moves forward in a straight line. If these wheels are turned left the vehicle will turn left and if they are turned right the vehicle will turn right.
But the actual direction the vehicle will follow after steering depends on the angle the wheels are turned and the distance traveled during the time the wheels were turned (assuming this angle is fixed). If the wheels are not turned back to the aligned position, the vehicle will drive in circles. So, the wheels must be turned back in time to make the vehicle change to the desired direction. This type of vehicle cannot change its movement direction on the spot; it always has a minimum turn circle. The actual turn circle depends on the turn angle of the steered wheels. There are many variations on this car example, with a different number of wheels or more steered wheels for example. But, if not all wheels steer simultaneously in the same direction at the same rate, the vehicle will have a turn circle. Most of these vehicles are not easy to make computer-controlled with high precision using odometry: to steer the vehicle precisely into the desired direction, not only the rotary angle of some steering axle is important, but also the time span in which the steering is performed. Hence, because there are two parameters that can influence the final direction, there are two parameters that can cause errors in this direction.
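The turn circle mentioned above can be approximated with the common "bicycle model" simplification. This is a textbook formula, not something from the report itself, and the parameter names are ours:

```python
import math

# Simplified (bicycle-model) turn radius of a car-style vehicle: the
# radius follows from the wheelbase and the steering angle of the front
# wheels. A zero steering angle means a straight line (infinite radius).
def car_turn_radius(wheelbase, steer_angle_rad):
    if abs(steer_angle_rad) < 1e-12:
        return math.inf
    return wheelbase / math.tan(abs(steer_angle_rad))
```

A smaller steering angle thus means a larger turn circle, which is why both the angle and the time it is held determine the final direction.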

The synchrodrive

An obvious solution to this problem would be to steer all wheels in the same direction simultaneously at the same rate. Such a drive mechanism is commonly called a synchrodrive. This way, the vehicle can change its movement direction without actually moving away from its current location, so such a vehicle does not have a turn circle. Furthermore, the orientation of the vehicle itself does not change during movement. This can be an advantage if some sort of line-of-sight communication method is used. Note that these advantages are only valid for an ideal vehicle in an ideal environment. Typically, due to wheel slippage and vehicle construction flaws, it is quite possible that the orientation or the position of the vehicle itself changes during steering. The fact that, in theory, the orientation and the position of this vehicle do not change during steering makes doing odometry much simpler. The orientation of the vehicle does not need to be recorded during movement and no complex models need to be built for the steering operation. It is just plain and simple moving forward and backward and changing the direction of movement. Although a synchrodrive can change direction during movement, this is not a desirable thing to do when using odometry: the vehicle would then have a turn circle and that makes the calculations a lot more complicated. Actually, such a vehicle does not have a front or rear: independent of the orientation of the vehicle, it can move in any direction. But, one can distinguish front and rear on each wheel. Each wheel unit can be seen as a separate vehicle (with one wheel), but in order to move the entire vehicle in the desired direction all wheel units must move synchronously. Unfortunately, this type of construction also has some very nasty caveats. To make this steering mechanism work, the wheel axles must change direction synchronously.
So, using a different motor for each steered wheel is not really a good option, because there would be no way to guarantee that all motors would run at the same speed; a single motor should perform the steering. This means that a transmission must be designed to connect this single motor to all the steered wheels. Also, this single motor will have to be relatively strong to cope with the number of wheels to steer and the complex and potentially heavy transmission. Especially in LEGO, a complex transmission design will introduce a lot of extra slack, which can compromise the movement precision.

Figure 6 Schematic representation of a synchrodrive wheel unit

Furthermore, the position of the wheels on their axles is of great importance for letting the synchrodrive work properly. If a wheel is mounted precisely on the rotation point of its axle, it must not be driven during the steering process, because this would mean that the rotation point of the axle would move and hence the whole vehicle would move during steering. This way it would change direction in a circle, while it should stay on the same spot. But, if a wheel is mounted on the axle at a certain distance from the rotation point of that axle, it is supposed to be driven, but only at a certain speed and direction. Figure 6 shows the situation with one wheel on the rotating axle. This speed is such that the movement of the wheel does not interfere with the axle rotation: if it would drive too slowly it would need to slip on the floor to let the axle rotate at the desired speed, and if it would drive too fast it would move the entire vehicle in a circular movement. So, the driven wheel should make the same angle with the rotation point as the wheel's axle while driving in its circle (the wheel path in Figure 6), even if they were not attached to each other. It is very important that all wheels turn at exactly the same speed during a forward or backward movement of the vehicle. If one wheel slips on the floor or somehow turns slower than the rest, the whole vehicle would veer off in that wheel's general direction. So the propulsion transmission should also be driven by a single motor, and to prevent wheel slippage on individual wheels the weight of the vehicle should be distributed as equally as possible. It can be desirable to mount more than one wheel on a single steered axle, because then the weight of the vehicle would be divided more equally over the axle. For

two wheels this would mean that they would be mounted on each side of the axle rotation point (at the same distance). Yet, during steering this would mean that the wheels would want to turn in opposite directions. This is of course not possible on the same axle. To solve this, one could think of using a differential. A differential is a transmission that, for example, allows the rear wheels of a car to turn at different speeds. This is important when the car makes a turn, because then one rear wheel needs to travel a different distance than the other. Without a differential, one of the rear wheels would need to slip over the road in order to make the turn possible. The problem with this differential solution is exactly the same as with the single wheel at the rotation point of the steered axle: it is not supposed to be driven during steering. A differential allows both wheels to turn at the same speed in opposite directions if it has no drive. If it is somehow driven, the vehicle will move during steering. In conclusion, the synchrodrive potentially has a high level of precision. Furthermore, it is easy to control and to position in the environment. Yet, it is not easy to construct properly. Small errors in the design can have horrific side-effects. Also the fact that LEGO gear transmissions introduce much slack will make the construction of this steering mechanism difficult.

Separately driven wheels

Another way to change the direction of a wheeled vehicle is to drive each wheel separately. Consider a two-wheeled vehicle, with the two wheels in parallel, and assume it does not tip over: if both wheels are driven at the same speed the vehicle moves in a straight line, but if one of those wheels is driven slower the vehicle will turn in the direction of that wheel. If both wheels are always driven in the same direction, the vehicle always has a turn circle.
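The relation between the two wheel speeds and the resulting turn can be sketched with the standard differential-drive formula (again a textbook simplification with our own names, not part of the report):

```python
# Turn radius of the midpoint between the two wheels, given the left and
# right wheel speeds and the distance between the wheels. Equal speeds
# give a straight line; opposite speeds give a turn on the spot.
def diff_drive_turn_radius(v_left, v_right, wheel_separation):
    if v_left == v_right:
        return float("inf")
    return (wheel_separation / 2.0) * (v_right + v_left) / (v_right - v_left)
```

With one wheel stopped, the midpoint turns on a circle of half the wheel separation, while the moving wheel itself traces a circle whose radius equals the full separation.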
Even if one wheel entirely stops and the other keeps driving, the vehicle will drive a circle with a radius equal to the distance between the two wheels. But if during steering the wheels are turned in opposite directions at the same speed, the vehicle will change its heading on the spot. So generally, driving in a straight line boils down to making both wheels drive at the same speed in the same direction, and steering boils down to making both wheels drive at the same speed in opposite directions. Although this seems a very easy thing to accomplish, it is not really that simple. For such a design, one could think of driving each wheel with a different motor. The problem is that it is never guaranteed that both wheels will actually drive at the same speed. Motors are never entirely equal and their power supply tends to differ slightly as well. This way the robot would turn in some other direction while driving in a straight line and it would not stay on the same spot during steering. One way to solve this problem would obviously be to drive both wheels with the same motor. But then a potentially complex transmission has to be designed that can switch one of the wheels to the opposite direction. This is not very feasible to build in LEGO. Currently, this alternative is only considered with two wheels. Such a vehicle will of course tip over and drag itself over the floor, which is not desirable. Adding a third wheel,

which is not in parallel with the other two, would solve the problem, but this wheel will interfere with the steering process. Two parallel wheels can drive a circle. Three wheels can only drive a circle if the third wheel is at the correct angle to the vehicle, but if this angle is fixed, the vehicle cannot drive in a straight line anymore. So, this wheel must either be actively steered or be mounted loosely, so it can turn and follow the vehicle's movements. If it is mounted loosely to follow the vehicle's movements, it will likely influence the movement precision of the vehicle. One could also think of using tracks (like a tank) to make the vehicle stable. The problem with this solution is that tracks have multiple contact points with the floor instead of the single one a normal wheel has (in fact, a track can be seen as multiple wheels in line). If such a vehicle with tracks turns on the spot, the outer ends of the tracks will have to slip over the floor in order to make the turn possible at all. This cannot positively influence the vehicle's movement precision.

Conclusion

The vehicle itself is chosen to be a synchrodrive, because of the potentially higher movement precision and the relative ease of the odometry calculations for this type of vehicle. The alternative of driving each wheel separately needs a complex transmission to make it reliable and is not stable in its raw form with two wheels. Furthermore, the LEGO RCX unit uses an infrared communication channel with the controlling computer. Because this is in essence a form of line-of-sight communication, it is desirable to keep the vehicle in a steady orientation with the RCX pointing to the base tower. A synchrodrive is the only type of steering mechanism that satisfies these conditions.

Spatial data acquisition

The requirements state that the vehicle must be able to do spatial measurements on the environment in an efficient way without influencing the environment in any way.
So somehow the robot must have the ability to look around without actually touching anything. This looking around is not only necessary to build a map, but also to navigate through the environment.

Radar

A common way of touchlessly detecting objects in the surrounding environment is by using a radar-like setup. This consists of some sort of sensor device that can measure the distance between itself and an object in a straight line in front of it. This sensor is rotated on top of the vehicle and at fixed angles it measures the distance. Using the knowledge of these angles and the measured distances, a radial map-like representation of the environment can be built.

Figure 7 The radar scan principle.

A picture of this principle is shown in Figure 7. Situated in the centre of the circle is the distance sensor. The circle represents the maximum distance the sensor can measure. The dotted lines emerging from the sensor represent the distances measured by the sensor in a single scan. The sensor always rotates clockwise in this picture. Two objects are within the sensor's range. The radial map that can be constructed using the known angles and the measured distances is represented by the thick black dotted lines. With this number of measurements the map is not completely equal to the actual environment: the corner of the top right object is missing, for example. It is clear that if more measurements are taken at more angles, the scan becomes more accurate, so smaller features of the environment are visible on the map. In Figure 7, the measurements to the end of the sensor's range are represented in the map as if the sensor were surrounded by a circular object. Yet it is best to discard these measurements, because in fact these distances can reach into infinity; in those directions no object can be seen. This is especially important for the robot navigation, because the robot would otherwise think it is surrounded and cannot go anywhere.
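The construction of the radial map from one scan can be sketched as follows; the coordinate convention and the maximum range value are our own assumptions:

```python
import math

# Convert one radar scan of (angle, distance) readings, taken at a known
# robot position, into map coordinates. Readings at the sensor's maximum
# range are discarded: they mean "no object seen in this direction",
# not "object at maximum range".
MAX_RANGE = 150.0  # assumed sensor limit

def scan_to_points(robot_x, robot_y, readings):
    points = []
    for angle_rad, dist in readings:
        if dist >= MAX_RANGE:
            continue
        points.append((robot_x + dist * math.cos(angle_rad),
                       robot_y + dist * math.sin(angle_rad)))
    return points
```

Because each point is computed relative to the robot's position, any odometry error directly shifts the whole scan in the map.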

For an accurate map it is very important that the angles at which the measurements are taken are known exactly. So the sensor must be mounted and controlled such that it moves in discrete and fixed steps. If multiple scans are made at a sufficient number of distinct locations in the environment, an accurate map of the environment can be constructed. For this reconstruction, of course, the positions of the robot during the scans should be known. In conclusion, the radar technique provides a simple way to collect spatial data of the environment. The only problem that still has to be solved is the rotary mechanism that forces the sensor to rotate in discrete and fixed steps. One could solve this by using step motors to drive the sensor rotation. Yet, as described earlier, LEGO cannot be used to control these motors. Thus, the only method available to solve the problem is using a rotary sensor. For stability, one could add the requirement that the sensor's rotary position should not be able to be influenced by external factors. For instance, this means that it should not be possible to turn the radar by hand when the robot is running. This extra requirement is particularly useful to prevent robot vibrations from inappropriately rotating the sensor.

Other alternatives

The requirements describe that the robot must not influence the environment and must perform the reconnaissance in an efficient way. Hence, an alternative that involves moving to every location to scout the environment is not feasible. Furthermore, such a technique would miss small environment features. For instance, if the robot cannot get into a tight corner, it has no way to tell what this corner looks like, so the system will not notice it. Other alternatives mostly involve some sort of image processing system that uses a camera to get a view of the environment. These solutions are far beyond the scope of this project.
Therefore, the radar technique will be used.

Proximity detection

The requirements state that the robot should never drive into objects in the environment, even if the spatial measurement sensor did not detect them (see section 3.1.2). Therefore, a sensor needs to be designed that detects when an object is (almost) touched. The technique used in standard LEGO designs for this problem is the LEGO touch sensor. The touch sensor has a small bulb-shaped yellow button. The button of the touch sensor is pressed by a lever, because the touch sensor's button is too small to be used directly. This lever is usually attached to a long beam to increase the touch surface. In many LEGO robot designs, the beam is equipped with LEGO hoses for flexibility and to increase the touch surface. These LEGO designs are built like bumpers. The advantage of this solution is that it is simple and straightforward. Yet, this solution is mechanical and touches the approached object. This way it can influence its environment, especially if the touched object is light and somewhat mobile. This

would violate the requirement that the robot should not influence its environment in a destructive or otherwise altering way (see section 3.1.2). Furthermore, these bumpers only detect object proximity at a certain height. One could think of building a cage around the entire robot to solve this problem. But that might not be a feasible solution, because it could interfere with the radar. Another solution would be to use some sort of sensor that does not actually need to touch a surface to detect proximity. There are many of these sensors available and there are interface designs available to connect them to the RCX. Some examples of such sensors are radar, light, or (ultra)sound proximity sensors. The advantage of these sensors in relation to the earlier described bumpers is that they never actually touch anything. Therefore, the robot will not influence its environment if the sensors work properly. Because the vehicle to be built uses a synchrodrive, all sides of the robot must be sensitive to the proximity of objects in the environment. This is needed because a synchrodrive does not really have a front or rear: the vehicle can move in any direction, entirely independent of its orientation. Hence, it can approach objects from any direction. Therefore, all sides of the robot will need to be equipped with sensors. Building multiple interfaces for the described proximity sensors is much extra work for little extra performance. And, due to the limited number of sensor ports, the designed interface must connect all of these sensors to the same sensor port of the RCX. In contrast, the LEGO touch sensors are in essence just switches, and they can simply be connected in parallel to the same sensor port of the RCX, without the use of any kind of interface. Thus, the bumper solution is much easier to build. In summary, the standard LEGO solutions for proximity detection are much easier to implement than the proximity sensor solutions.
Yet, the LEGO bumper will not always meet the requirements. For this project, the requirement that the robot should not influence the environment is not that important; the proximity sensor will only be used as a last resort. Therefore, a variation on a LEGO bumper is built for this robot that surrounds the entire robot.

Available sensors

For the position measurement technique chosen in section 5.1.1 and for the spatial data acquisition technique and the bumper chosen in the sections above, a variety of sensors is needed:
- The odometry technique requires at least two rotary sensors.
- For the radar, a rotary sensor and a distance sensor are required.
- For the bumper, a set of touch sensors is needed.
The available sensors for these techniques are described in this section. For each sensor, an evaluation is made of how easily it can be interfaced with the LEGO RCX. This is an important consideration, since it is a factor that determines the usefulness of a sensor.

For the bumper, a standard LEGO touch sensor is used. These are made to interface with the RCX and thus no discussion is needed about whether they are useful or not. Only the rotary sensor and distance sensor alternatives are evaluated in this section.

Rotary sensors

LEGO used to sell standard rotary sensors. These sensors could measure 16 steps in a single revolution. Unfortunately, these are no longer available. Thus, another rotary sensor has to be designed. On various web pages, custom LEGO rotary sensor designs are described. Many of those use non-LEGO components. For instance, some alternatives use variable resistors (potentiometers) to measure the rotary angle. Others use light beams or so-called encoders. Encoders are devices that convert a rotary movement into electronic impulses. All these alternatives involve external electrical parts that need some sort of interface to communicate with the RCX unit. The complexity of these interfaces varies, but for this project three of them are needed. This would add too much overhead to the project. Fortunately, rotary sensors can also be constructed entirely from LEGO parts. This can be achieved either by using LEGO light sensors or by using LEGO touch sensors. The alternative using LEGO light sensors uses the principle of a rotating object interrupting a light beam. The rotating object is mounted on the axle on which the rotary movement is measured. Ideally, this rotating object is a disk with holes divided equally over the surface at a fixed radius. This way the light beam is interrupted at regular intervals. Yet, LEGO does not really have a part that satisfies these conditions, so a less ideal solution will have to be found. Alternatively, a LEGO touch sensor can be used. For this solution, an object is also mounted on the axle. While rotating on the axle, it should press and release the button of the touch sensor at regular intervals.
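Counting rotary steps from such a touch sensor comes down to edge detection on the pressed/released signal. A minimal sketch, where the structure and the steps-per-revolution value are our own assumptions:

```python
# Count rotary steps from a touch sensor that is pressed and released
# once per step; each completed press->release cycle is one step.
PRESSES_PER_REVOLUTION = 4  # assumed: the axle presses the button 4x/turn

def count_steps(samples):
    steps, prev = 0, False
    for pressed in samples:
        if prev and not pressed:  # falling edge: one step completed
            steps += 1
        prev = pressed
    return steps

def revolutions(samples):
    return count_steps(samples) / PRESSES_PER_REVOLUTION
```

On the real RCX the samples would come from polling the sensor port; the polling rate must be high enough that no press/release cycle is missed at the maximum axle speed.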
Note that this might introduce much extra friction into the system, depending on the implementation. This can be a severe problem if the driving motors are heavily loaded already. Yet, the touch sensor can add extra stability to the rotary mechanism involved, because it can force the axle into certain fixed positions.

This project will use rotary sensors that rely entirely on LEGO parts. Which of the two alternatives described above is used depends entirely on the situation. If low friction is important, the light sensor alternative is used, because the axle can rotate freely in this construction. If the stability of the rotary mechanism is more important, the alternative using the LEGO touch sensor is used, because this design can fix the axle in certain positions. In section 5.2 these considerations are explained further and used for the design of the vehicle and the radar.

Distance sensors

For touchless distance measurement, few sensor alternatives are available. All these sensors have in common that they send out some sort of signal in a more or less straight line in front of them. The reflection from the nearest object is received and, using time-of-flight, signal intensity or signal angle information, the

distance is calculated. This signal commonly has one of two forms: it is either a sound wave or a light beam.

Ultrasonic

In the sound group, the ultrasonic distance sensor is the most commonly built. Ultrasonic sensors are not sensitive to surface reflectivity or color. Yet, they tend to detect objects that are not in a straight line in front of the sensor. These supplemental sensitive areas are sometimes called side lobes, because of the form these areas have when they are drawn [INSERT PICTURE]. The side lobes are more influential in short range measurements; for distant objects they are no real problem.

Ultrasonic sensors use the time-of-flight of the transmitted signal to calculate the distance using the speed of sound. Unlike the speed of light, the speed of sound is not constant and varies slightly with the environment temperature. For normal applications this should not be a problem though. An ultrasonic sensor can usually measure in a range between 5 cm and a few meters. Although this sounds pretty impressive, the RCX unit can only measure with limited precision (see section 3.2). This means that the measurable range is divided into a relatively small number of units, so the advantage of a larger measurement range limits the possible measurement precision. It is always possible to use only a relatively small fraction of the measurable range though.

Ultrasonic sensors are affordable and can be interfaced with LEGO, but they usually cannot be purchased as a fully assembled product; otherwise they are quite expensive. They are usually presented as an electrical schematic that has to be built using an ultrasonic transmitter and a receiver. Unfortunately, most schematics also use a small microcontroller, a computer processor that has to be programmed. This adds extra programming overhead to the project.

Infrared

There are good infrared distance sensors available.
They emit an infrared light beam and calculate the measured distance from its reflection; note that a time-of-flight approach, as used for the ultrasonic sensors, is impractical here because of the speed of light (!), so these sensors typically determine the distance from the angle of the reflected beam instead. Sharp sells fully assembled infrared distance sensors that can measure in varying ranges. Some known measurement ranges are 3 to 40, 10 to 80 and 20 to 150 centimeters. These sensors can produce either a serial data signal or a distance-related analog voltage. Using the latter type of sensor, a very simple interface can be built: it only needs to adjust the voltage range of the sensor to the measurable voltage range of the RCX unit's sensor ports. There are many interface designs for this sensor available, each with a slightly different approach. For instance, some interfaces draw their power supply from a sensor input, others use a motor output. The infrared sensors measure nicely in a straight line and do not have big side lobes (refer to the datasheet in appendix D), so they are less influenced by objects next to the sensor's measurement line.
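Whichever interface is used, the control software must still map the raw RCX sensor reading to a distance, since the Sharp sensor's analog output is not linear in distance. The sketch below shows one hedged way to do this, using an experimentally measured calibration table and linear interpolation; the raw values and distances in the table are invented for illustration only.

```python
# Hypothetical sketch: converting a raw RCX sensor reading to a distance.
# The calibration points below are invented; a real table would be filled
# by placing objects at known distances and recording the raw readings.
CALIBRATION = [  # (raw reading, distance in cm); raw falls as distance grows
    (900, 10), (700, 20), (550, 30), (430, 45), (350, 60), (300, 80),
]

def raw_to_distance(raw):
    """Linearly interpolate a distance from a raw sensor value, clamping
    readings outside the calibrated range to the nearest table endpoint."""
    if raw >= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    if raw <= CALIBRATION[-1][0]:
        return CALIBRATION[-1][1]
    for (r1, d1), (r2, d2) in zip(CALIBRATION, CALIBRATION[1:]):
        if r2 <= raw <= r1:
            t = (r1 - raw) / (r1 - r2)
            return d1 + t * (d2 - d1)
```

A coarse table like this also sidesteps the surface-dependency problem somewhat, since the calibration is done with the same kinds of surfaces the robot will later encounter.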

Sharp's sensors are specially designed not to be dependent on surface color or reflectivity. Unfortunately, they did not entirely succeed in this goal, so these sensors will always suffer from inaccuracies due to varying surface properties. Furthermore, it is unknown what happens if the sensors have to measure the distance to a translucent glass window or a mirror.

Laser

A very expensive solution would be to buy a laser distance sensor. Most laser sensors use the angle of the received signal to calculate the distance. These sensors are very large and consume enormous amounts of power. Hence, they are not useful for this project.

Conclusion

The Sharp infrared distance sensor is used for this project, because of its overall ease of operation. The only real advantages the ultrasonic alternatives have over the infrared one are that their measurements are not influenced by surface properties and that they can measure much larger ranges. For this project very large distance ranges are not very useful, and moreover the RCX unit cannot measure such large distances with sufficient precision. The problem the Sharp sensors have with surface properties is not very important for this project: since this project is only the initial step towards the final goal described in section 3.1.1, it is already a big achievement if any spatial data is measured with sufficient precision.

5.2 Final design

Now that the alternatives for the robot design have been evaluated, a design for the robot can be specified that satisfies the requirements. As described in the previous chapter, LEGO does not provide all the necessary means to fulfill this task. An example, explained earlier, is the rotary sensors, which are no longer in production and can no longer be obtained. So solutions have to be found for these problems.

The vehicle can be divided into several parts:

! The robot must be able to drive around freely. This task is performed by the robot vehicle part.
This is discussed further in section .
! The robot must be able to take measurements of the environment from which the system can construct a map. This is done using a rotating distance sensor on top of the vehicle. This is explained further in section .
! The robot must be able to detect objects the sensor on top might have missed. This is solved by surrounding the robot with a bumper that senses touch from every direction at a fixed height. This is described in section .

In the last section (5.4) the final design product is shown, which comprises all of the components described earlier.

The vehicle itself

In section  it was concluded that the synchrodrive is the best steering mechanism for this project, because of the potentially higher movement precision and the ease with which odometry can be performed.
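The odometry advantage can be illustrated with a small sketch: because a synchrodrive translates without rotating its body, dead reckoning reduces to accumulating straight-line moves at the current steering heading. This is an illustrative sketch, not part of the actual control software.

```python
import math

def update_position(x, y, heading_deg, distance):
    """One dead-reckoning step for a synchrodrive: all wheels share a single
    heading and the body does not rotate, so a move is a pure translation."""
    theta = math.radians(heading_deg)
    return x + distance * math.cos(theta), y + distance * math.sin(theta)

# Example: drive 10 units at heading 0, then 5 units at heading 90.
x, y = update_position(0.0, 0.0, 0.0, 10.0)
x, y = update_position(x, y, 90.0, 5.0)
```

The robot ends up near (10, 5). Compare this with a differential drive, where every turn also changes the body orientation on which all later moves depend, so heading errors compound much faster.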

As stated in that section, this type of vehicle is not easily constructed properly, so a lot of care has to be taken with the LEGO design. In particular, when LEGO gears are used, more slack is introduced with every extra gear in the transmission, so it is generally desirable to limit the number of gears to the absolute minimum. This section describes how this is attempted.

The designed synchrodrive drives on four separately steered axles, with one wheel each. It is driven by one motor for steering and one motor for propulsion. The rotary movements of these two motors are transferred to the wheels and the steering mechanism using two transmissions. These transmissions are described in section . Each steered axle is part of a vehicle component referred to as a wheel unit. A wheel unit supports the steered axle and gives it the possibility to steer smoothly in relation to the vehicle. The workings of a wheel unit are described in the following section.

Wheel unit

The lower part of the wheel unit, which supports the steered axle, can be steered by the steering transmission in any direction in relation to the vehicle. Because the steered axle is mounted in this lower part of the wheel unit, it will be steered in this direction as well. The upper part of the wheel unit is fixed in the vehicle and holds the lower part in place.

Wheel transmission

Inside the wheel unit a drive transmission is mounted. This transmission is part of the total drive transmission of the synchrodrive and takes care of propagating the motor's propulsion through the wheel unit to the actual wheel.

Figure 8 A stripped-down wheel unit

In Figure 8 a stripped-down version of a single wheel unit is shown. To the left is the wheel, with a rubber tire, mounted on the steered wheel axle. What this picture does not show is that this axle is suspended through two holes in a turntable, which makes it possible to change the heading of the wheel; more is explained about this below. The drive axle (which points upwards in this picture) goes exactly through the centre of this turntable.

While the wheel unit's lower part is rotated to steer in some direction, the gear mounted on the wheel's axle travels around the gear on the drive axle (see Figure 8). Because the vehicle is not supposed to be driven forward during steering, the drive axle is blocked. But because the gear on the wheel axle is moved around this fixed gear, it is still driven. This would be a huge problem if the wheel were mounted straight under the drive axle, which in turn is situated exactly in the rotation point of the wheel unit. As stated in section , this would result in the vehicle moving in circles during steering. Therefore the wheel is mounted on the axle at a certain distance from the axle's rotation point.

This position on the axle is very important, since it determines the radius of the circular path the wheel drives during steering. Only if the number of revolutions of the wheel in a single steering rotation is such that the wheel travels a distance equal to the length of that circular path will the synchrodrive steer smoothly. This number of revolutions depends on the transmission from the drive axle to the steered wheel axle. In Figure 8 the transmission from the gear on the drive axle to the crown gear on the wheel axle is 16 to 24. So if the drive axle is blocked and the wheel axle is rotated once around the drive axle, the wheel will rotate two-thirds of a revolution. In conclusion, the length of the wheel's path should be equal to two-thirds of the circumference of the wheel.
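This placement rule can be checked numerically. The sketch below uses the 16 and 24 tooth counts from the text; the wheel diameter is an assumed example value.

```python
import math

# Gear ratio inside the wheel unit (tooth counts from the text).
drive_teeth, wheel_teeth = 16, 24
wheel_revs_per_steer_rev = drive_teeth / wheel_teeth    # 2/3 of a revolution

wheel_diameter = 3.0                                    # cm; assumed example value
wheel_circumference = math.pi * wheel_diameter

# During one full steering rotation the wheel must roll along a circular
# path whose length equals 2/3 of its own circumference:
path_length = wheel_revs_per_steer_rev * wheel_circumference
offset_radius = path_length / (2 * math.pi)             # distance from the rotation point
```

With these numbers the wheel must sit at exactly one third of its own diameter from the rotation point, which is why a small wheel ends up so close to the drive axle's gear.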
So, to make the synchrodrive work correctly, this wheel must be positioned on the axle such that it travels exactly this path during a full steering rotation. This can be hard to accomplish precisely. Furthermore, this means that a small wheel will have to be situated close to the steered axle's rotation point, where it can possibly interfere with the drive axle's gear. In Figure 8 the wheel actually touches this gear, but this should not cause too much trouble, since two touching smooth plastic surfaces will not introduce that much friction. To avoid this, one could choose a larger wheel. Yet, all larger LEGO wheels are also much wider. Thus, if such a wheel is used, the contact surface of the wheel with the floor becomes bigger, possibly introducing more friction, especially during steering. And if the synchrodrive is not perfect, wheel slippage will occur, which is a much worse problem for a bigger wheel.

The turntable

As indicated previously, a LEGO turntable is used to make both steering and driving possible in the synchrodrive. This turntable is shown in Figure 9. It consists of two parts, which are depicted separately in this picture for clarity. In reality, these components are fitted onto each other. When combined, the top part can rotate in relation to the bottom part with low friction.

The necessity for this LEGO part follows from the fact that a wheel unit must be able to be steered while the wheel below must be able to be driven. This means that the drive transmission from motor to wheel should not be influenced by the steering. The turntable allows a drive axle to be fitted through its center. While the turntable is steered, the drive axle remains in a fixed position. And because the transmission from drive axle to wheel axle allows the wheel axle to rotate freely around the drive axle, the turntable can be steered freely as well.

Figure 9 A disassembled LEGO turntable

The top part of this turntable is in fact a very large gear, so the turntable can be driven by a motor. But this way the top of the turntable will turn. For something like a crane this is just fine, but for a synchrodrive, where the lower part should turn, this is not correct.

Figure 10 The final wheel unit design

Final design

The solution to this problem is to build the entire vehicle upside down, and that is exactly what is done for this project. The resulting wheel unit design is shown in Figure 10. Again, the drive axle is pointing upwards.

RCX control

This section describes the LEGO hardware that needs to be designed to support the RCX units. As described before in section 3.2, a LEGO RCX has 3 motor channels and 3 sensor channels available. For the synchrodrive, two separate motors are needed, for the propulsion and the steering respectively. In order to measure its movements, the robot needs two rotary sensors, one for the propulsion and one for the steering. In section  it was explained that LEGO no longer produces the original LEGO rotary sensors, so an alternative has to be found. First, this custom design is explained. Furthermore, as shown in section 4.3.1, there are problems with the datalog download when two RCX units are used. Therefore the shutter is to be designed. This shutter is explained briefly at the end of this section.

Rotary sensors

There are quite a few alternative rotary sensor designs using non-LEGO parts available, but for this robot our own rotary sensor design is used. It uses only LEGO parts, so it is not hard to construct. In Figure 11 a model of the rotary sensor is shown. It consists of two original LEGO light sensors with, in between them, an axle carrying a light-beam-interrupting piece of LEGO. Each light sensor has a small red LED (light emitting diode) that emits a red light beam and a small phototransistor that detects the incoming light. The idea of this rotary sensor (and many others) is that the rotating interrupter interrupts the crossing light beams at fixed angles in the rotation of its axle. This way it can be detected how many revolutions the axle makes. This is the light sensor alternative as described in section . This alternative is chosen because low friction is indeed very important in this situation.
The designed synchrodrive uses complex transmissions that already introduce much friction by themselves. It is not very practical for our purposes that LEGO has designed the light sensor such that the emitter and receiver of the red light beam are situated in the same blue LEGO brick. Usually these sensors are used to detect surface colors or contrasts, so they normally rely on a reflective surface or an external light source. Therefore, a mirror could be used in the design of this rotary sensor to reflect the light beam.

Figure 11 The improvised fully LEGO rotary sensor

Yet, with a little experimentation it was discovered that two light sensors on the same RCX sensor channel work perfectly. This way, both of them emit a light beam and both of them send back the measured light intensity, so the need for a mirror-like surface is eliminated. The RCX reads a combined light intensity from both sensors. With a little calibration the program can now detect whether the light beams in the rotary sensor are interrupted or not. This setup is shown in Figure 11.

Because the LEGO RCX can only detect signal changes at a certain maximum rate, the maximum number of revolutions of the axle per unit of time is limited as well. This can usually be solved by adjusting this speed with an appropriate transmission.

Note that it is important that, when the robot starts driving around, the interrupter of the rotary sensor is at some known position. This ensures that the initially measured rotations of the sensor are correct. Otherwise the interrupter could be on the edge of breaking the light beam, and then a very small rotation could already count as the first step. This way the vehicle would make odometry mistakes before it even starts. That would not be very important if the number of interruptions in a single revolution were high, because then the error would be small. But this design has only two interruptions in a single revolution, so the error caused by bad calibration could be as high as 180 degrees on the axle.

Yet, this might be mitigated by the fact that the transmission makes the wheels drive or steer much more slowly than the axle of the rotary sensor turns. As such, a single turn of the rotary sensor does not represent a big change in the movement of the robot, so this calibration might not be that important. But because of the speed limitations of the rotary sensor, the maximum speed of the vehicle is limited as well, and it could become quite slow. A compromise will have to be found that suits this robot best.

The shutter

The shutter is designed according to the description in section . It is shown in Figure 12.

Figure 12 The shutter

The shutter is simply a sliding door that is controlled by a motor connected to the third channel of the RCX (this connection is not shown in the picture). If the motor is turned on long enough, the shutter is opened or closed, depending on the movement direction. The actual time is not very critical, since the shutter is fully constrained and cannot move too far. The axle movement of the motor is transferred using a transmission from a sixteen-tooth gear on the axle of the motor to a gear beam mounted on the shutter door. This can be seen clearly in Figure 12. This action is performed on request of the external control unit. This is further explained in section  of the RCX software design chapter.

Transmissions

The wheel units described in the previous section are connected to the two motors with two separate transmissions. These two transmissions are described in this section.
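As an aside, the time-based shutter control described above is simple enough to sketch. The motor functions here are hypothetical stand-ins for whatever RCX API is actually used, and the run time is an assumed value; only a lower bound matters, since the fully constrained shutter cannot overshoot.

```python
import time

SHUTTER_RUN_TIME = 1.5  # seconds; assumed value, not critical as long as it is long enough

def move_shutter(motor_on, motor_off, direction):
    """Open (direction=+1) or close (direction=-1) the shutter by simply
    running the motor long enough; the mechanism's end stops do the rest."""
    motor_on(direction)
    time.sleep(SHUTTER_RUN_TIME)
    motor_off()
```

Because the end positions are mechanically constrained, no sensor feedback is needed for this motion, which keeps a sensor channel free.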

The steering transmission

The steering transmission is regarded, for this discussion, as consisting of two separate parts. The first part of this transmission connects the four separate wheel units to each other and makes sure that they move synchronously. This part of the steering transmission will be referred to here as the synchronous part, because it enforces the synchronous steering of the wheel units. The second part of the steering transmission takes care of propagating the drive from the steering motor to the synchronous part. Furthermore, it connects the motor to the rotary sensor that gives the controller the ability to track the robot's movements. It is referred to here as the embedded part of the steering transmission, because it is embedded inside the robot.

The synchronous part

In Figure 13 the bottom of the synchrodrive is shown. This gives a view of the synchronous part of the steering transmission. The four wheel units are placed in a square. They are interconnected with three forty-tooth gears, so each wheel unit has to turn synchronously with the others. In the remaining position an axle is fitted through the bottom of the vehicle. This axle is the interconnecting axle between the embedded part and the synchronous part of the steering transmission; it is described again when the embedded part is discussed below.

Note that these forty-tooth gears will introduce slack into this system, so the wheel units will not turn entirely synchronously. Furthermore, the path from the two wheel units that are farther away from the interconnecting axle to that axle contains more gears, so those wheel units will have more slack to worry about.

Figure 13 A bottom view of the synchrodrive

In reality this means that a rotary movement coming from the interconnecting axle is not propagated to a rotary movement of all wheel units right away. First, this movement has to compensate for the slack between the gears. Only if the axle is turned far enough will the wheel units follow the rotation. This is a very important consideration for the control software. Assuming it always steers the wheel units in the same direction, the first steering operation will most likely be inaccurate, because the first part of this steering is absorbed by the slack between the gears. So the wheel units will actually not turn as far as expected. The control software will have to correct this error, because otherwise the robot would move in a false direction unnoticed. This can be done by calibration of the first steering operation: if the wheel units are turned a little farther the first time the wheels are steered, to compensate for the initial slack, the first steering operation will be correct as well. This extra angle can be determined experimentally during testing.

Yet, if the controller steers the wheel units in both directions, the slack problem will occur with every direction change, and the control software has to correct it each time. Hence, it might be more feasible to make the wheel units turn in only one direction, because then only the first steering operation has to be corrected. Some steering operations will take more time this way though.
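The calibration described above could be implemented along these lines. This is a hypothetical sketch: the slack allowance, which the text says must be found experimentally, is an invented placeholder value, and the controller structure is illustrative.

```python
SLACK_STEPS = 4  # assumed placeholder; to be determined experimentally during testing

class SteeringController:
    """Tracks the last steering direction and adds a slack allowance
    whenever the direction changes (including the very first operation)."""

    def __init__(self):
        self.last_direction = 0  # -1, +1, or 0 for "unknown"

    def steps_to_command(self, requested_steps, direction):
        """Return the motor steps to issue so the wheel units actually
        turn by `requested_steps` despite the gear backlash."""
        extra = SLACK_STEPS if direction != self.last_direction else 0
        self.last_direction = direction
        return requested_steps + extra
```

If the wheel units only ever turn in one direction, the extra allowance is paid once; with both directions it is paid on every reversal, which is exactly the trade-off discussed above.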

Chains

A solution which can correct some of these slack problems is to use a LEGO chain. If all wheel units are connected by surrounding them with a strong chain, they are forced to rotate synchronously. The slack of this chain setup should be minimal, especially if the chain is tightened well. The chain can replace the three gears connecting the wheel units and can be connected directly to the interconnecting axle. It can also be used as a supplement to the geared solution, although this might cause too much friction.

We initially chose the option without the LEGO chain, simply because the chains were not available at that time. Near the end of the project, they finally became available and were incorporated in the design. This consideration is explained more thoroughly during the testing in section . In this final chain design we chose the supplemental alternative, in which the chain only supplements the function of the gears. This is done because the chains are much weaker than expected and tend to rupture if they are put under too much stress.

Transmission

The transmission from the interconnecting axle to the first two wheel units shown in Figure 13 consists of a twenty-four-tooth gear and two small eight-tooth gears to cover the gap. A LEGO turntable's top part has fifty-six teeth, so the transmission is from twenty-four to fifty-six. Therefore the interconnecting axle turns 2 1/3 times as fast as the turntables. This transmission is identical for the chain solution if the same twenty-four-tooth gear is used on the interconnecting axle.

The embedded part

In Figure 14 the embedded part of the steering transmission is shown. It is embedded inside the robot along with the drive transmission. Neither the robot nor the drive transmission is shown; the axles seem to float in this picture, but in fact they are supported through holes in beams. In the centre of the picture the motor is shown.
It is connected to the interconnecting axle, to the right, using a number of gear transmissions. Furthermore, it is connected to the rotary sensor for the steering. The first gear transmission encountered when tracing the axles from the motor is a transmission from a 24-tooth crown gear to a 24-tooth gear; this introduces no speed change. Then a transmission from an eight-tooth gear to a 24-tooth gear goes in the direction of the synchronous part of the system, and a transmission from the same eight-tooth gear to a sixteen-tooth gear goes to the rotary sensor. So, the motor rotates twice as fast as the rotary sensor and three times as fast as the axle going in the direction of the synchronous part.

Figure 14 The embedded part of the steering transmission

Following the axle in the direction of the synchronous part, another transmission is encountered. This transmission consists of a forty-tooth gear and a worm screw. The forty-tooth gear is mounted on the interconnecting axle. The worm screw counts as a single-tooth gear, so the motor turns 3 * 40 = 120 times as fast as the interconnecting axle. The transmission from the interconnecting axle to the wheel units was 2 1/3, so the motor will have to make 120 * 2 1/3 = 280 revolutions to make the wheel units turn a full circle. This way the robot can steer with 280 steps in a full circle. Thus, the robot can steer with a precision of 360/280 ≈ 1.29 degrees. This is more than enough to meet the requirement that the robot must be able to move freely. Testing will have to show how this behaves in reality: it is possible that, due to slack and other imperfections in the construction, the steps are not always equal. This would severely compromise the robot's movement consistency and precision.

The drive transmission

The drive transmission is embedded inside the robot, so it cannot be seen from the outside. In Figure 15 the drive transmission is shown without the surrounding vehicle. The four twenty-four-tooth gears are mounted on the drive axles of the wheel units, which are, like the rest of the robot, not shown in this picture. These four gears are driven by worm screws mounted pairwise on two long axles.

The motor to the right drives these axles. The transmission from this motor to these axles is one-to-one, so the long axles turn at the same speed as the motor. The LEGO worm screws mounted on these axles count as single-tooth gears, so the transmission from the long axles to the drive axles of the wheel units introduces a deceleration of twenty-four in relation to the motor.

The rotary sensor (see section 5.2.3) for the drive transmission is connected to one of the long axles. This connection is a transmission that makes the interrupter of the rotary sensor turn three times slower than the motor. This way the rotary sensor turns eight times faster than the drive axles of the wheel units. Thus, when a wheel unit's drive axle makes a single revolution, the sensor will detect sixteen interruptions, because the sensor registers two interruptions per revolution.

Figure 15 The drive transmission

As explained in section 5.2.2, a small part of the drive transmission resides inside the wheel units. This part of the transmission introduces a further deceleration of 1.5. Hence, if a wheel of the vehicle turns once, the rotary sensor will detect twenty-four interruptions. The circumference of such a wheel is the distance the vehicle travels within twenty-four interruptions (steps). For the position measurement this distance must be known. It can be found by measuring a wheel's circumference, or by letting the vehicle move a known number of steps and measuring the distance it travels.
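The resulting odometry arithmetic is straightforward to sketch. The twenty-four steps per wheel revolution follow from the transmission analysis above; the wheel circumference is an assumed example value that would in reality be measured during testing.

```python
STEPS_PER_WHEEL_REV = 24   # interruptions per wheel revolution (from the text)
WHEEL_CIRCUMFERENCE = 9.6  # cm; assumed example value, to be measured in testing

def steps_to_distance(interruptions):
    """Convert counted light-beam interruptions into travelled distance."""
    return interruptions / STEPS_PER_WHEEL_REV * WHEEL_CIRCUMFERENCE
```

With these example numbers each interruption corresponds to 0.4 cm of travel, which also indicates the best distance resolution this odometry can achieve.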

This is to be done during testing. Although this transmission is not nearly as crucial for the movement precision as the steering transmission, it is of the utmost importance that the distance traveled in a single step is always identical. Yet, this can be compromised by slack and other LEGO imperfections. Testing will prove whether this transmission works the way it should.

The radar

The radar is situated on top of the vehicle and gives the robot the ability to acquire spatial data from the environment. The radar comprises two components. First, a radar needs some sort of distance sensor to measure the distance to the nearest object straight in front of it. Second, a radar needs a mechanism that rotates this sensor in discrete steps with fixed and known angles. This rotary mechanism is described first in section . Thereafter, in section , a short description is given of the improvised turntable mechanism that gives the radar a smooth turn. Then the design of the sensor is described briefly in section . And finally the placement of the RCX controller is explained in section .

The rotary mechanism

In order to produce sufficiently accurate images of the environment, the rotary mechanism must provide a sufficient number of steps in a single revolution. Furthermore, it must be stable, so the sensor cannot move while measuring. In Figure 16 a stripped-down version of the rotary mechanism of the LEGO radar is shown. The large grey brick to the left is the motor of this mechanism. It is connected to an axle with a worm screw in the middle. In the centre of the mechanism is a forty-tooth gear. This gear is fixed to the robot through the axle pointing downwards, so it cannot move. So, if the motor drives the worm screw, the entire mechanism rotates around this forty-tooth gear. To enforce the fixed steps and stability, a lever is mounted next to the central gear. This lever just barely touches the teeth of the gear.
When the whole mechanism turns, each time a tooth of the gear passes the edge of the lever, the lever is pushed outwards a little. In Figure 16 the edge of the lever can barely be seen; it is the shorter third layer of LEGO plate in the lever. When the lever is pushed outwards, a touch sensor is activated. This gives the controlling RCX the possibility to detect that a tooth has just passed the lever. Because the touch sensor has a (very small) spring inside, the lever leaps back once the gear's tooth has passed the lever's edge. The next tooth will push it outwards again. Note that the motor can only turn the whole mechanism in one direction: if the mechanism turns clockwise (the gear turns counterclockwise with respect to the lever), the lever gets blocked on one of the gear's teeth. Yet there is no reason why the sensor should be able to turn in both directions, so this poses no problem.

The stability of this mechanism is provided by the worm screw and the lever. The worm screw ensures (theoretically) that the mechanism cannot be turned by external influences, but only by the motor. The lever locks it in fixed positions. The reason for this choice for stability was explained earlier in section . For the radar, the extra friction introduced by the touch sensor alternative is not that crucial, since the

transmission of the rotary mechanism is not nearly as complex as the transmissions in the synchrodrive. The RCX control software only needs to turn on the motor (in the correct direction) and wait until the sensor detects that it is touched. Then it should stop the motor and take a distance measurement. After that it should turn the motor on again until the next tooth passes the lever.

Figure 16 The rotary mechanism of the radar

Because this gear has only forty teeth, forty steps can be distinguished. Depending on the desired precision, this can be a bit too few. The number of steps can be doubled to eighty with a small adaptation of the RCX control algorithm. In the algorithm described so far, a measurement is only taken when the touch sensor is pushed. Yet it is useful to also take a measurement when the touch sensor is released. This way a step is recorded both when the lever is on top of a tooth and when its edge is between two teeth. The only problem with this concept is that it does not produce uniform angles: the angle from the bottom to the top of a tooth is larger than the angle from the top to the bottom, because the lever is mounted to the side of the central gear. If this is taken into account in the conversion software, this should be no problem at all. It can be tedious to find the actual angles, but a good estimate will suffice for a reasonable precision.

Turntable

During the design of this radar the original LEGO turntables were not available, so an alternative turntable had to be designed. Unlike the turntables for the synchrodrive, the turntable for the radar fortunately does not need to support a drive shaft. It is only required to support the weight of the rotary mechanism, the sensor and, most importantly, the quite heavy RCX, while the radar must retain the ability to rotate. So, the turntable must provide a smooth surface on which the radar can rotate.
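The bookkeeping for the eighty non-uniform half-steps described above could look as follows. One tooth spans 360/40 = 9 degrees; the 5/4-degree split between the pushed and released half-steps used here is an assumed estimate of the kind the text says will suffice, not a measured value.

```python
PUSH_DEG = 5.0     # assumed: angle from the bottom of a tooth to its top
RELEASE_DEG = 4.0  # assumed: angle from the top of a tooth to the next bottom

def step_angle(step_index):
    """Absolute radar angle after `step_index` half-steps, alternating
    between the 'pushed' and 'released' increments (one tooth = 9 degrees)."""
    full_teeth, half = divmod(step_index, 2)
    return full_teeth * (PUSH_DEG + RELEASE_DEG) + half * PUSH_DEG
```

The conversion software can then pair each distance measurement with `step_angle(i)` instead of assuming a uniform 4.5-degree grid.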

A view of the bottom of the radar and the designed turntable is shown in Figure 17 below. In essence it is a LEGO motorcycle wheel centre (the white round thing in the middle) that functions as a kind of slide rail. This wheel centre is fixed on top of the robot. On top of this wheel centre the radar unit slides on four yellow LEGO plates. The friction caused by these four contact points is minimal, because the contact surfaces are smooth and small.

Figure 17 A bottom view of the radar unit's improvised turntable

As can be seen in Figure 17, an axle is locked in the centre of this wheel. This axle is the one pointing down in Figure 16. The radar rotary mechanism uses this axle to push itself around in its rotary movement. As explained before, a gear is fitted on this axle to make this possible.

Distance sensor

For the radar the infrared distance sensor from Sharp is used, as stated earlier. The most important reason this sensor was chosen is that it can be connected to the LEGO RCX without too much effort. As described in the earlier discussion of sensor design alternatives, the only thing needed to interface this sensor to the RCX is a small interface circuit. This interface takes care of the sensor's power supply and it converts the sensor's voltage levels to voltage levels that are readable by the RCX. Other sensors need more complex interfaces, often using a small microprocessor that needs to be programmed. In general, two types of interfaces are available on the internet. These types differ in the method they use to supply the sensor with power. The sensor consumes about 33 mA during measurement and that is too much for a sensor input, which can deliver about 14 mA. A simple solution is to draw power from one of the motor outputs; these can supply more than enough power. Another solution commonly found on the internet is using a capacitor to store energy from the sensor input. When a measurement is done, this energy is released. This way only a sensor port is necessary for the sensor to operate.

Yet, due to differences between RCX units the designed capacity of that capacitor is not always correct, and this might cause problems with the sensor's power supply. To prevent this, the other alternative is used for this project. The fact that an extra motor channel is used poses no problem, because the radar RCX needs only one of the three motor channels for the rotary mechanism.

Figure 18 Interface for reading a Sharp distance sensor with the LEGO RCX

The schematic of the sensor interface circuit used is depicted in Figure 18. This interface was designed by Edwin Dertien and it is explained on his website [DERTIEN]. The upper part provides a stable power supply for the sensor. This is done using a 5 Volt voltage regulator. The LED indicates whether the power supply is correctly connected to the motor channel. If it is not, the diode (D1) will block and the sensor will get no power supply. This diode is inserted to prevent users from destroying the interface by misconnecting it. It is also important that the motors are turned on at full power. The lower part of the interface schematic comprises an operational amplifier and a couple of resistors. The array of diodes to the right is a standard LEGO circuit design that allows the LEGO connector cables to be connected both ways. The lower part of the circuit adjusts the voltage levels of the Sharp sensor to the voltage levels of the RCX sensor input. This part of the circuit uses the sensor power supply coming from the RCX. Furthermore, the adjustment of the voltage levels can be controlled by tuning the setting of the potentiometers (R5 and R6). This is necessary to give the RCX the ability to use the full possible measurement range of

the sensor. This generally boils down to fiddling with the potentiometers until it is just right. The Sharp sensor depicted in this schematic is the GP2D120. This sensor has a measurement range of 4 to 30 centimeters. This is not far enough for this project, so this sensor is replaced with the GP2D12 version. This one has a measurement range of 10 to 80 centimeters, while the overall voltage levels it uses are identical to those of the GP2D120 version. The datasheet can be found in appendix D. Unfortunately, the circuit design by Mr. Dertien contains a small bug: the ground voltage levels of the upper and lower parts of the circuit are not identical. If in the upper part the voltage produced by the sensor is relative to 0 Volt, that same voltage is relative to about 0.6 Volt in the lower part. Thus, a part of the sensor's measurement range is discarded this way. The solution is simple: connect the ground lines of the upper and lower parts of the circuit and the interface works well. All the RCX control software needs to do is turn on the motor channel to which the sensor interface's power supply is connected. If all is well, the LED on the sensor's interface will light up. To read a measurement, the RCX control software must set the connected sensor channel to the light sensor type. This way the adjustment part of the interface gets its power supply as well. Then, the RCX control software can read the sensor values.

RCX Controller

The RCX controlling the radar is situated on the radar itself, so it rotates along with the radar. This has the advantage that the electrical cables from the two sensors and the motor will not interfere with the rotary movement. If this second RCX were mounted on the vehicle itself, the cables would have to be routed through the rotary mechanism, which would prevent this mechanism from making continuous turns: it would have to move back and forth. Furthermore, the cables could get tangled in gears or other moving parts of the rotary mechanism.
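As an aside to the sensor interface described above: the raw values read this way still have to be converted to distances by the external software. Since the GP2D12's output falls off roughly inversely with distance, a simple reciprocal model suffices. A C sketch, with purely illustrative calibration constants that would in practice be determined by measuring known distances through the actual interface:

```c
/* Approximate distance (cm) from a raw RCX light-sensor reading taken
 * through the Sharp GP2D12 interface.  Model: distance = K / (raw - C).
 * CAL_K and CAL_C are hypothetical calibration constants. */
#define CAL_K 2640.0
#define CAL_C 4.0

double raw_to_cm(int raw)
{
    if (raw <= CAL_C)
        return -1.0;   /* implausible reading: out of range */
    return CAL_K / ((double)raw - CAL_C);
}
```

With these made-up constants, a raw value of 268 maps to 10 cm and a raw value of 37 to 80 cm, spanning the GP2D12's specified range.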
This choice also has some disadvantages to consider. Unlike the RCX on the vehicle, the radar RCX will not always point in the general direction of the base infrared tower. So, (in theory) it will not always be able to communicate with the external controller, because the infrared signal requires a line-of-sight link. Therefore, measurement values potentially cannot be sent instantly to the external controller. The measurements will have to be stored inside the RCX until the connection is established again. Fortunately, the LEGO RCX has a very nice feature that solves this problem. This will be explained in the control chapter.

The bumper

The task of the bumper is to detect if the robot has unexpectedly driven into an object. This is not supposed to happen, but the bumper is added as a last resort to meet the requirement that the robot must always avoid obstacles, even those not recognized using the radar. As seen in the previous section, the distance sensor used is not reliable at all times: translucent or highly reflective surfaces tend to be missed by the sensor. This way the robot could, for example, drive into a mirror without even noticing. Furthermore, the robot's radar is mounted at a certain height. Less tall

objects are not seen this way. If for example a concrete brick obstructs the robot's path, it will not be seen and the robot will drive into it and actually continue driving. This way it could push the object away or overload its own motors. To detect such objects the robot is equipped with a bumper that detects when the robot touches an object. Because a synchrodrive is used, the bumper should surround the entire vehicle: during movement every side of the robot can become the front, because the orientation of this vehicle does not change while the movement direction does. Bumps must therefore be detected in every direction, so the bumper sensor should surround the whole vehicle. The bumper designed for this robot to solve this problem is shown in Figure 19. Only the bumper itself is shown; the robot is situated in the centre. The four sides of the robot each have their own touch sensor. Each touch sensor is pressed by a lever which stretches along the whole side of the robot. As shown, the bumper levers are equipped with a large number of flexible LEGO hoses. The green hoses in the corners are installed to make bump detection possible in the corners of the bumper. If one of these hoses is driven into an object, it will be pushed towards the robot. The ends of this hose will push the outer ends of the two connected bumper levers against the robot as well. This way one or both of the sensors detect the touch, so the corners of the bumper are also sensitive. The other hoses are present to give the bumper a more square shape and to prevent the robot from influencing the environment: if the robot bumps into something, these hoses can bend a little to absorb the force of the impact. Ideally, a touch sensor is pushed before a hose is pushed to its limit of flexibility. Note that light objects can still be pushed away before the robot notices their presence.
Figure 20 shows how a touch is transferred from a lever to the actual LEGO touch sensor. It is a slightly modified version of a standard LEGO robot bumper design. This part of the bumper is connected to the robot with the black pins on the left. The robot is missing in this picture, but it would be present to the left. The button of the touch sensor is the small yellow bulb that touches the lever. If the lower arm of the lever is pushed towards the robot, to the left, the button is pushed and the robot detects the touch. The original LEGO design of this bumper has an elastic belt on the upper lever arm to make it leap back, so the button does not remain pushed. This is not necessary for the design presented here, because the green corner hoses already enforce this kind of behavior.

Figure 19 The bumper

Figure 20 Touch sensor's lever

Note that this bumper design is not able to detect all objects the robot might encounter. This bumper only works if the object is present at a certain height; in fact this is much like the radar sensor. The problem is that, when an object is situated at a higher level than the bumper, but at a lower level than the sensor, it will not be detected. As described earlier, the only way to solve this problem mechanically would be to surround the robot with some sort of cage that somehow detects touches at all levels. Yet, this could interfere with the radar. For this project the designed bumper will suffice. The bumper is only needed as a fallback method if the radar sensor fails to detect an object and hence it does not serve much purpose most of the time.

5.3 Summary

In this paragraph a summary is given of the design and the most important related design choices. Finally, the complete, fully assembled design of the robot is presented. The robot consists of three main parts:
- The core of the robot is the vehicle itself. This part of the robot gives it the ability to move around freely through the environment and gives the control unit the ability to position the robot precisely in the environment. The vehicle is positioned in the environment using odometry. Odometry is a position measurement technique that uses no external references, but relies on the measured wheel movements of the robot. Because no external reference is used, the measured position will eventually become inaccurate due to the accumulation of errors. Other position measurement techniques are too complex or expensive for this project, although landmark navigation could provide a useful extension in the future. The vehicle's steering mechanism is a synchrodrive: all the vehicle's wheels are steered simultaneously at the same rate. A synchrodrive was chosen because:
  - A synchrodrive has a potentially higher movement precision. If built correctly, it can steer and move very accurately.
Other alternatives have steering precision problems, or they are hard to make reliable.
  - For a synchrodrive the implementation of odometry is very simple and straightforward, because the vehicle stays in the same location while steering. For most other alternatives, odometry is complex to implement, mostly because these do not stay in the same spot during steering.
  - The orientation of a synchrodrive vehicle does not change. This is very useful, because this way the infrared transceiver of the RCX motor controller mounted on top of the robot will always point in the same direction. If the initial orientation of the vehicle is correct, the RCX will always have a link with the base station.

The vehicle's movements are determined for odometry by reading two rotary sensors. LEGO does not sell the standard rotary sensors anymore, so a custom sensor was designed. These custom rotary sensors are built entirely with standard LEGO parts, because other alternatives would add too much overhead to the project. The custom sensors are built using LEGO light sensors, since this

introduces much less friction in the already heavily loaded transmissions of the vehicle. The vehicle consists of:
  - Two motors. These motors drive the steering and the propulsion of the wheels respectively.
  - Four wheel units. These are the parts of the vehicle that support the wheels. Each wheel unit has one wheel that can be steered in the desired direction by rotating the lower part of the wheel unit. Furthermore, its wheel can be driven using the drive axle.
  - Two separate transmissions. These transmissions transfer the drive from the two motors to the wheels for steering and driving respectively. They also connect the motors to their rotary sensors. Furthermore, the steering transmission enforces the synchronous movement of the wheels.
  - An RCX controller on top of it to control the motors using the rotary sensor data. Furthermore, this RCX controller communicates with the external control part to receive commands.
- For the required spatial measurements the radar was designed. It is situated on top of the robot. The technique chosen to measure the spatial features of the room is a radar-like setup: a rotating distance measurement sensor that measures the distance to the nearest object in a straight line in front of it at fixed angles. Thus, the radar consists of:
  - A distance sensor. This is an infrared distance sensor from Sharp. This sensor is easy to interface to the RCX control unit and it is very cheap compared to the other alternatives. Furthermore, the other alternatives are complex to build and would add too much overhead to the project.
  - A rotary mechanism that gives the controlling RCX the ability to rotate the sensor in fixed steps. Again, a custom rotary sensor was designed for this component, but this time it is not the light sensor design: it uses a touch sensor and a lever that holds the radar in fixed positions.
  - An RCX control unit.
This RCX unit controls the radar and communicates with the external control unit. It is situated on the radar itself to prevent cabling problems. Yet, this possibly prevents the RCX from always communicating with the base station, because it rotates along with the radar.
- The bumper is needed to detect collisions with objects that the radar somehow missed. Other alternatives like proximity sensors are hard to implement and would add too much overhead to the project. The bumper is based on a standard LEGO design and is built using touch sensors. Each touch sensor is pressed by a lever with beams and hoses that stretch around the whole vehicle. In the corners, special green hoses make sure the bumper is also sensitive there. The bumper must surround the entire vehicle, because a synchrodrive vehicle can move in any direction independent of the vehicle's orientation, so it can collide with objects in any direction.

5.4 The final robot

When all the designed components of the robot are put together, the robot looks as depicted in Figure 21. The various components, except for the distance sensor, can be distinguished easily; the distance sensor is located behind the radar, which is equipped with wings 3. The bumper surrounds the vehicle. The wiring of the sensors and motors is missing in this picture, so on the real robot many cables are present. The transceivers of the robot face forwards. As shown, the vehicle RCX transceiver can be obstructed by the shutter. In this picture only the external components of the two transmissions can be seen: the motor transmission gears of the drive transmission and the rotary sensors. Furthermore, it can be seen that the robot is supported by the wheel units.

Figure 21 The final robot

3 Contrary to the belief of many, these wings do not serve any useful purpose. They do not, for instance, improve the infrared reception of the RCX, which would of course be ridiculous.

6. RCX Software

Following the discussion of the robot hardware design, the software design of the robot is discussed in this chapter. The software design of the robot involves the programming of the two RCX controllers described in section 4.2.2, namely the vehicle RCX controller and the sensor RCX controller. Simply stated, the purpose of the software running on the RCX controllers is to communicate with the external control unit and to execute the commands given by it. Thereby, these RCX controllers provide the interface to the robot for the control unit. As explained earlier, the software running on the RCX is referred to as the internal part of the control unit. The software running on the RCX controllers can be seen as the low-level part of the entire control unit. Recapping the requirements of the system and the system components, we can summarize that the programming of the RCX controllers must facilitate:
- robot movement control;
- detection and handling of collisions with obstacles/objects;
- performing spatial measurements on the environment;
- performing communication with the control unit (see 4.3.1).
To facilitate these requirements, a two-phased approach was used to develop the software for the two RCX controllers using the programming language NQC. In the first phase a programming framework was developed containing the basic functionality for both RCX controllers. The functionality of this framework is described using a state diagram, which is explained in the first paragraph (6.1). The design and construction of the framework is discussed in paragraph 6.2. This is followed by the description of the programming environment of the RCX controllers in section 6.3. Finally, the second design phase of the RCX software is discussed for the vehicle RCX controller and the sensor RCX controller. The second phase comprises the addition of the RCX-specific code to the framework built in phase one.
6.1 The state diagram

The basic behavior of the RCX controllers is defined by the protocol for the communication between the RCX controllers and the control unit. Both RCX controllers respond to a basic set of commands; hence, the basic behavior of both RCX controllers is identical. The same protocol is used for both controllers, with the exception of some specialized control commands. Therefore, the basic behavior of the RCX controllers can be deduced from the protocol specification and visualized in an RCX state diagram. The RCX state diagram can be found in Appendix A. The initial state of the RCX controller state diagram is Start. Conforming to the communication protocol specified in section 4.3.2, a command cycle is started in the Start state when the RCX controller receives a CC_START command from the control

unit. This is represented in the state diagram by the transition from the Start state to the Authorization state. The Authorization state represents the first part of the protocol, namely the authentication. In this state, the RCX controller waits for the reception of the RCX ID to identify whether it is selected for the execution of commands. If the RCX ID matches the ID of the RCX controller, it is selected by the control unit and will make the transition to the Active state. Otherwise, it will make the transition to the Idle state. In the Idle state the RCX controller waits until it receives a CC_STOP or a CC_RESET command. The CC_STOP command marks the end of the command cycle and the CC_RESET command causes the RCX to return to the Start state. The Active state represents the actual command cycle: the second part of the protocol. In the Active state, the RCX controller waits for RCX-specific commands from the control unit. When the RCX receives a command, it makes the transition to the Process state to execute the command. After processing the command, the RCX returns to the Active state to receive the next command. If the RCX receives a CC_STOP or CC_RESET command in the Active state, the current command cycle is ended and the RCX controller returns to the Start state. The extension of the Active state represented by the AreYouThere state will be explained later. The support for the CC_IDENTIFY command is added in the Start and Active states.

6.2 Framework

Using the RCX state diagram, we can construct a programming framework that can be used for both the vehicle and the sensor RCX controllers. In this section, we describe the construction of this framework. For the complete source code of the framework, see the project disc.

Framework construction

With the help of some simple techniques, this framework can be constructed by translating the RCX state diagram of the previous paragraph to NQC code.
The first step is creating an NQC method for each state in the state diagram. It is convenient to give the method a name corresponding to the state. The next step is translating the state transitions to NQC code. The technique is to introduce a state variable that stores the current state. This can be done by declaring the state variable as an integer and giving each state/method (hereafter: state-method) an internal index. It is convenient to define the internal indexes of the states in the NQC code of the framework as macros, for reasons of changeability and readability. The internal indexes can be defined as macros with the use of the #define statement. The naming convention used is ST_<method>, with the method name in capitals. For the same reasons the protocol commands are defined as macros in the framework code. For

the protocol commands, the same names are used as defined in the protocol description given earlier. Also, a macro for the RCX ID is defined in the form of RCX_ID. The transition to another state then consists of changing the state variable to the internal index corresponding with the new state. For this, an overall command cycle algorithm, for example in the main task, is needed that checks the state variable and calls the correct state-method. For reasons of extendibility, the command cycle algorithm is placed in a separate task named command_cycle, which is started from the main task. The command cycle algorithm contains an infinite loop where in each iteration the state variable is checked against the internal indexes. Using this index, the corresponding state-method is called. After execution, the called state-method returns to the command cycle algorithm. With the exception of the transition logic, the state diagram is directly translated to the framework. In the state diagram it can be seen that each transition is triggered by a control message sent by the control unit. Before a state-method can begin processing the transition logic, a message needs to be received from the control unit. Because of its frequent usage, a special method is created for this purpose in the framework. This method is called Receive() and waits until it receives a message, which it then stores in the message buffer. The message buffer is named message in the framework. After receiving a message from the control unit, the state-method can process the transition logic with the help of the message and execute the actions dictated by the logic. The translation of the transition logic to NQC code is straightforward and basically uses the same techniques as the command cycle algorithm. For the specific code of the transition logic in each state-method, see the complete framework source code. There is one exception, namely the transition from the Active state to the Process state.
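The dispatch structure described so far can be sketched in ordinary C as an off-robot simulation, which also makes the transition logic easy to test. The command codes below are illustrative stand-ins for the real protocol values, and messages are passed in as arguments rather than read with Receive():

```c
/* Internal state indexes, mirroring the ST_<state> macros described in
 * the text.  The command codes are illustrative stand-ins. */
enum { ST_START, ST_AUTHORIZATION, ST_ACTIVE, ST_IDLE, ST_PROCESS, ST_PANIC };
enum { CC_START = 1, CC_STOP, CC_RESET, RCX_ID = 42 };

int state = ST_START;

/* One iteration of the command-cycle loop: dispatch on the current
 * state and handle one incoming message. */
void step(int message)
{
    switch (state) {
    case ST_START:
        if (message == CC_START) state = ST_AUTHORIZATION;
        break;
    case ST_AUTHORIZATION:
        /* Selected only if the received ID matches our own. */
        state = (message == RCX_ID) ? ST_ACTIVE : ST_IDLE;
        break;
    case ST_IDLE:
        if (message == CC_STOP || message == CC_RESET) state = ST_START;
        break;
    case ST_ACTIVE:
        if (message == CC_STOP || message == CC_RESET)
            state = ST_START;
        else
            state = ST_PROCESS;  /* any other message is a command */
        break;
    case ST_PROCESS:
        /* Command execution would happen here; afterwards the RCX
         * returns to the Active state.  (In the real framework Process
         * runs without waiting for a new message.) */
        state = ST_ACTIVE;
        break;
    }
}
```

Feeding it CC_START, the matching RCX ID and then an arbitrary command walks the machine through Start, Authorization, Active and Process exactly as in the state diagram.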
In the Active state, the RCX controller can receive a command which must be executed. If this happens, the command is stored in the message buffer and the Process state-method can immediately execute the command. There is no need to wait for a message from the control unit, because it was already received in the Active state.

The Panic state

As mentioned earlier, the main task starts the separate task command_cycle. In order for the command cycle algorithm to work properly, the state variable needs to be initialized in the main task with the start state (ST_START). Due to the endless loop of the command cycle algorithm, the command cycle task (in principle) runs indefinitely. In the further development of the robot software this could become a problem, for instance when specific requirements of the system (suddenly) do not apply and the software is unable to perform its task. This could happen when, for example, a communication breakdown between an RCX controller and the control unit occurs. Therefore, a virtual end state named Panic (ST_PANIC) is introduced to the framework. In reality this state does not exist in the

behavior of the RCX controllers; it represents a state of system breakdown. In the Panic state the system shuts down all tasks using the StopAllTasks() command.

6.3 Environment

Before the discussion of the RCX software design, a view of the (programming) environment of the RCX controllers must be established. More specifically, the environment of the RCX controllers is represented by the other components of the system, like sensors and motors. In this paragraph the various components are identified and assigned to the communication channels of the appropriate RCX controller. These assignments are determined by the physical LEGO cable connections, but they are described here. The interaction of the two RCX controllers with their environment is shown graphically in Figure 22. As stated in section 4.2.2, the software running on the RCX controllers must facilitate communication with the external control unit. Thereby the external control unit is automatically identified as part of the environment of both RCX controllers. The components of the robot we can identify as part of the environment of the vehicle RCX are:
- Movement motor: The motor responsible for turning the wheels of the robot.
- Rotation motor: The motor responsible for rotating the wheels of the robot.
- Shutter motor: The motor responsible for closing or opening the shutter of the infrared port (described earlier).
- Movement sensor: The sensor capable of detecting the degree of movement of the robot.
- Rotation sensor: The sensor capable of detecting the degree of rotation of the robot.
- Bump sensors: The sensors capable of detecting a collision with obstacles/objects in the path of the robot.
The components of the robot that can be identified as part of the environment of the sensor RCX are:
- Sensor power supply (motor channel): The power output to the distance measurement sensor.
- Rotation motor: The motor responsible for rotating the platform with the sensor.
- Sensor: The distance measurement sensor.
- Rotation sensor: The sensor capable of detecting the degree of rotation of the platform with the sensor.
As mentioned in the protocol descriptions of paragraph 4.3, the communication between the RCX controllers and the external control unit is facilitated by means of infrared signals and the SendMessage command of the LEGO firmware. Consequently, the communication channel to the control unit exists by default for the RCX controllers: this part of the environment does not need to be assigned to any specific NQC reference. Yet, in order to build a correct program, the other assignments must be specified to be connected to one of the three motor output channels (OUT_A, OUT_B and OUT_C)

or one of the three sensor channels (SENSOR_1, SENSOR_2 and SENSOR_3) on the RCX. The components were assigned to the following RCX controller channels, see Table 7 and Table 8.

Sub-system           External RCX Channel   Internal NQC Channel   NQC Reference
Movement motor       1                      0                      OUT_A
Rotation motor       2                      1                      OUT_B
Shutter motor        3                      2                      OUT_C
Movement sensor      1                      0                      SENSOR_1
Rotation sensor      2                      1                      SENSOR_2
Bump sensors         3                      2                      SENSOR_3

Table 7 Channel assignments to sub-system components for the motor RCX controller

Sub-system           External RCX Channel   Internal NQC Channel   NQC Reference
Sensor Power Supply  1                      0                      OUT_A
Rotation motor       2                      1                      OUT_B
Sensor               1                      0                      SENSOR_1
Rotation sensor      2                      1                      SENSOR_2

Table 8 Channel assignments to robot sub-systems for the sensor RCX controller

The complete (programming) environment of the RCX controllers is now identified and specified. The environment is visualized in Figure 22 below, which shows the motor RCX and the sensor RCX with their connected components on either side of the control unit.

Figure 22 The environment of the RCX modules
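In the NQC sources, assignments like these are typically captured once as symbolic names, so that a rewiring only requires changing one line. A small C-style sketch of such a mapping for the vehicle RCX (the macro names are illustrative, not taken from the project source):

```c
/* Channel assignments for the vehicle (motor) RCX, following Table 7.
 * Internal NQC channel numbers 0..2 map to OUT_A..OUT_C and
 * SENSOR_1..SENSOR_3.  The macro names themselves are illustrative. */
#define MOVEMENT_MOTOR  0   /* OUT_A:    drives the wheels             */
#define ROTATION_MOTOR  1   /* OUT_B:    steers the wheels             */
#define SHUTTER_MOTOR   2   /* OUT_C:    opens/closes the IR shutter   */
#define MOVEMENT_SENSOR 0   /* SENSOR_1: measures driven distance      */
#define ROTATION_SENSOR 1   /* SENSOR_2: measures steering rotation    */
#define BUMP_SENSORS    2   /* SENSOR_3: the four bumper touch sensors */
```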

6.4 RCX vehicle control

In this section the software design of the vehicle RCX controller is discussed. This RCX controller controls the vehicle part of the robot; refer to the robot design section for more information about the mechanical construction of the vehicle. The software was developed on top of the framework discussed in the previous sections. The functionality added to the framework for the motor RCX is described by explaining the techniques used and the design choices made. First, a few remarks are made on the initialization of the environment in the software. This is followed by the discussion of an important aspect of the vehicle RCX software, namely motor control, and the implementation of the methods developed for it. Finally, the collision handling is explained. This paragraph refers to many NQC commands and functions; read the NQC programmer's guide for more information [NQCPG]. Furthermore, definitions of the variables explained here can be found in the source code on the project disc.

Initializing the environment

In order to use the various sensors in the software, we need to initialize and define them properly. The rotation sensors described earlier must be set up as light sensors and the bump sensors must be set up as touch sensors. The RCX uses slightly different methods to communicate with the different types of sensors, so they must be initialized in separate ways. A light sensor, for instance, requires a power supply and a touch sensor does not. So, if the light sensor were configured as a touch sensor, it would not work. The communication protocol defined earlier expresses the actions taken by the motors in units of steps. The external controller can change the default number of steps taken for a movement command. This default value is stored in a variable for future reference.
For the movement motor the integer variable called mm_step is used and for the rotation motor the variable mr_step is used. The abbreviation mm stands for motor movement and mr for motor rotation. These step variables are initialized with 10 steps. The protocol description also mentions the support for adjusting the power level of the motors. Again, to store these values for future reference, two analogous integer variables are created for the power levels. The power level variables are initialized with the maximum level: seven.

Robot positioning

The movements of the robot are electronically driven by motors; therefore motor control is an important aspect of the RCX software. The motor RCX controller is responsible for controlling the movement and rotation of the robot. The most basic NQC commands for controlling the output channels connected to the motors are On(), OnFwd(), OnRev() and Off(). These commands turn a motor on, on

forwards, on backwards and off respectively. Using the Wait() command the programmer can instruct the robot to wait for a certain amount of time. Using these and other (not mentioned) commands, a motor control algorithm can be developed. The most basic method of driving the robot (see Code 1) is to turn a motor on for a specific amount of time, without actually measuring its movement. An example of this algorithm in NQC is shown below:

OnFwd(OUT_A); // Turn the motor on.
Wait(100);    // Wait for 1 second.
Off(OUT_A);   // Turn the motor off.

Code 1 Basic motor control method

Unfortunately, turning motors on for a specific amount of time is not sufficient for the movement precision required in section. The speed at which a motor turns its axle depends entirely on the battery level; thus, the robot would travel or steer less far as the batteries run low. As described in the robot design (paragraph 5.2), custom-made rotary sensors were designed to solve this problem. These rotary sensors give the control software the ability to read how far the motor's axle has turned. As described in section, the rotary sensors of the vehicle consist of a set of LEGO light sensors facing each other. A rotating piece of LEGO interrupts the light beam between these sensors at regular intervals in the rotation. The algorithm described earlier can be adapted by replacing the wait command with a wait algorithm that monitors the actions of the motor using the rotation sensors. When the motor has performed the desired rotation, the algorithm stops waiting. The new algorithm is shown in the source code example below:

int i = 0;
On(OUT_A);
while(i < 10)
{
    // Move until the interrupter frees the light beam.
    until(SENSOR_2 > 80);
    // Move until the interrupter breaks the light beam.
    until(SENSOR_2 <= 80);
    i++;
}
Off(OUT_A);

Code 2 Advanced motor control algorithm using a rotary sensor

First, the algorithm waits until the interrupter stops breaking the light beam.
It was determined experimentally that the sensor gives a raw value greater than eighty if the light beam is not crossed; otherwise the value is much smaller than eighty. After that, the algorithm waits until the light beam is crossed again. In this example the motor rotates 10 steps, because this sequence of releasing and crossing the light beam is repeated ten times.
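The flank-counting logic of Code 2 can also be simulated off-robot. The sketch below is an illustrative Python model, not part of the project software: it counts steps from a fabricated trace of raw sensor values, using the experimentally determined threshold of eighty.

```python
def count_steps(readings, threshold=80):
    """Count rotary-sensor steps in a trace of raw readings.

    A step is one full free-then-broken cycle of the light beam:
    the value first rises above the threshold (beam free), then
    drops to the threshold or below (beam broken).
    """
    steps = 0
    beam_free = False
    for value in readings:
        if not beam_free and value > threshold:
            beam_free = True      # interrupter has freed the beam
        elif beam_free and value <= threshold:
            beam_free = False     # interrupter breaks the beam again
            steps += 1            # one full cycle -> one step
    return steps

# A fabricated trace containing three free/broken cycles.
trace = [50, 90, 95, 60, 92, 55, 88, 91, 40]
print(count_steps(trace))  # -> 3
```

The model mirrors the two until() statements of Code 2: one state waits for the value to rise above the threshold, the other for it to drop back.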

6.4.3 Motor routines

The developed motor control algorithm is used in the set of routines providing the motor control functionality in the RCX software. We chose to divide this functionality into two groups of routines, for reasons explained later. The first group contains the routines responsible for the motor setup. For the movement motor these are the self-explanatory routines mm_forward() and mm_backward(), which set the robot's movement direction to forward and backward respectively. The routines for the rotation motor are mr_left() and mr_right(), for left and right steering respectively. Thus, these routines tell the system how it should move before any movement is actually performed. After the motor setup, the actual movement can be performed using the second group of routines. These are responsible for turning the motors on, using the algorithms developed in the previous section. For the movement motor this routine is defined as mm_on() and for the rotation motor as mr_on(). By splitting the functionality into two groups, the functionality of mm_on() and mr_on() only has to be defined once; otherwise two routines for each motor would have been necessary. It would be possible to move the functionality of the motor setup routines to the Process state described in paragraphs 6.1 and 6.2 and call the routines from the second group directly, but it is desirable to keep the motor control code separated from the command cycle code. All the motor routines have a parameter step, which indicates the number of steps the motor must turn. Usually this will be the number of steps given by the previously defined values of mm_step and mr_step, for the movement motor and the rotation motor respectively.

6.4.4 Motor and sensor initialization

In this section the motor and sensor initialization issues are explained. Two issues have to be handled in the motor initialization part of the RCX software:
- In section is explained that the designed transmissions are subject to slack problems. To compensate, the motors must initially be turned a while longer. The number of extra initial steps is determined experimentally and depends on which of the two transmissions in the vehicle is involved.
- The rotary sensors have to be initialized as well, according to section. This is needed to increase the reliability and the precision of the sensors. To initialize a sensor, the motor corresponding to that sensor is first turned on. Then, the RCX software waits until the interrupter of that sensor breaks the light beam and, after an experimentally determined number of milliseconds, turns the motor off again. This leaves the rotary sensor's interrupter in a well-known position. If the motor were turned off immediately, the interrupter would stop right on the edge of breaking or releasing the light beam. In that case, if the sensor bounces

back a little, the light beam is freed again; this is exactly the problem that is to be prevented. In the motor RCX code, these algorithms are facilitated by the initialize() routine, which is called from the main task.

6.4.5 Are You There?

The robot can lose its communication with the tower of the external controller. This can happen for various reasons; a common one is that the robot moves outside the range of the tower or behind an obstacle that obstructs the communication. The robot must be able to detect this, otherwise it would get stuck somewhere, violating the requirement described in section that the robot should never get stuck. In the form of the CC_AYT command, the communication protocol supports checking the status of the communication channel between the RCX controller and the control unit. If the RCX detects that the communication has broken down, it can take appropriate action: it will drive the robot back to the previous location, if the communication breakdown was caused by a movement command. If the RCX controller does not receive commands from the control unit, it simply becomes idle in the current state of the command cycle. Recapitulating, the status of the communication channel is only of concern to the RCX controller after it has executed a movement command: if the communication is broken for some other reason, the robot cannot repair it by driving back anyway. Therefore the motor RCX software only uses the AYT feature of the protocol after the execution of a movement command. This is visualized by the AYT extension of the Active state in the RCX state diagram described in paragraph 6.1 and displayed in appendix A. The transition to the AreYouThere state is made by sending the CC_AYT command. In this state, the RCX controller waits for the CC_ACK command from the control unit, which is the acknowledgement to the "Are you there?" question of the RCX.
The RCX controller returns to the Active state if CC_ACK is received from the control unit. If this does not happen within the time-out period, the RCX controller makes the transition to the virtual Panic state described earlier in paragraph 6.1. To register the fact that a movement command was executed, a variable named communication is used. The default value of this variable is zero, meaning the communication does not have to be tested; otherwise it has the value one. If the communication test fails, the RCX controller still has the option of recovery (see section 6.4.7). In case a recovery is not possible, the software can only panic and shut down all tasks. The actual task of sending CC_AYT and waiting for an acknowledging reply from the control unit is performed by a routine named SendAYT(). It uses the communication variable to inform the Active state method of the status of the communication channel.
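The AYT decision logic described above can be sketched as follows. This is an illustrative Python model, not the RCX implementation: the command names CC_AYT and CC_ACK come from the protocol, while the send and receive callbacks (and the retry count standing in for the time-out) are hypothetical.

```python
# Illustrative model of the Are-You-There check. The transport
# callbacks are hypothetical stand-ins for the infrared link.
CC_AYT, CC_ACK = "CC_AYT", "CC_ACK"

def check_communication(send, receive, retries=3):
    """Send CC_AYT and wait for CC_ACK.

    Returns "active" if an acknowledgement arrives, "panic" if the
    time-out (modelled here as a retry count) expires.
    """
    for _ in range(retries):
        send(CC_AYT)
        if receive() == CC_ACK:
            return "active"   # back to the Active state
    return "panic"            # transition to the virtual Panic state

# Usage: a control unit that answers on the second attempt.
replies = iter([None, CC_ACK])
print(check_communication(lambda m: None, lambda: next(replies)))  # -> active
```

In the real software the check only runs when the communication variable is set, i.e. after a movement command.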

6.4.6 Bump Sensors

The requirements in section state that the robot must be able to detect and avoid objects/obstacles at all times, even if these objects were missed by the radar. The actual detection of these missed obstacles is the responsibility of the bump sensors; the control software must respond appropriately in this event. The software needs to monitor the state of the bump sensors at all times, because a collision can occur at any moment (during a movement). To facilitate this, a new task named bump_sensors is defined for the bump sensors; it is started from the main task after the initialization of the environment. The bump_sensors task can be implemented using two different methods. The first method uses the standard loop statements of NQC, like the while- and until-statement, to create an infinite loop in which the status of the bump sensors is checked on each iteration. This technique is called polling and it puts a heavy load on the processor of the RCX. The second method uses the configurable events of version 2.0 of the LEGO RCX firmware. An event is handled much like an interrupt: the software is notified when the event occurs. For information on the precise use of events in the NQC programming language, refer to the NQC programming guide [NQCPG]. From the beginning the software support for the bump sensors was available, but unfortunately the hardware was not. The software for the first prototype of the robot used the first method to monitor the bump sensors, but this was never tested and was discontinued for the next prototype. Initially, the use of events appeared to be too complex for a task as simple as monitoring a sensor and seemed to create unwanted complexity in the software. In fact, the opposite proved to be true and the software for the latest prototype uses events to monitor the bump sensors.
In case the bump sensors are triggered, the bump_sensors task needs to stop the command cycle task, because no new commands can be executed until the current situation is resolved. This can be done by performing some sort of recovery (see the next section: Recovery). Afterwards the command cycle task can be restarted in the same state using the unchanged state variable.

6.4.7 Recovery

As mentioned in the previous two sections, some sort of recovery is needed in situations where the robot has collided with an object or has lost the communication link with the external control unit. In case of a collision, the bump sensor software sends a CC_NACK, which reports that the robot has failed to execute the movement. The communication link is probably still active, so the CC_NACK is received by the external control unit, which is then in a position to control the robot and determine the next action it must take. In the situation where the robot has lost communication with the control unit, it is certain that the communication channel was still active at the robot's previous position.

Therefore the robot has a recovery mechanism capable of moving the robot back to its previous position, in the hope of restoring the communication. This entails that the robot must be able to roll back the previous movement command, or the current movement command in case of a collision. It would be possible to keep track of the command history and roll back multiple commands; this would include all movement and rotation commands, to keep the command history consistent. In our opinion this is not very useful: if the communication is not restored after the first recovery step, there is a high probability that it is also broken at all the other previous steps. For that reason we chose to only keep track of the last executed movement command. The only commands capable of causing a collision or breaking the communication with the control unit are movement commands, so the only commands to track are MC_M_FORWARD and MC_M_BACKWARD. To keep track of these movement commands, the given command is stored in a variable named history_state, using the corresponding macros. The value of history_state is zero if there is no history available. Because only movement commands are tracked, the history needs to be cleared when a rotation command is executed; otherwise the recovery would be inconsistent due to the changed course of the robot. To perform the actual rollback, a routine Recovery() is defined. This routine checks the history_state variable: if it equals MC_M_FORWARD or MC_M_BACKWARD, the opposite movement command is executed, thereby performing the rollback. For example, if the history is a forward movement command, a backward movement command is performed. For all other values of the history_state variable no action is performed.
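The rollback decision can be sketched in a few lines. This is an illustrative Python model, not the NQC code: the command names come from the protocol, while the numeric values and the move() callback standing in for the motor routines are hypothetical.

```python
# Illustrative sketch of the Recovery() rollback decision.
# The numeric command values and move() callback are hypothetical.
MC_M_FORWARD, MC_M_BACKWARD = 1, 2

def recovery(history_state, move):
    """Execute the opposite of the last tracked movement command."""
    if history_state == MC_M_FORWARD:
        move(MC_M_BACKWARD)   # roll a forward movement back
    elif history_state == MC_M_BACKWARD:
        move(MC_M_FORWARD)    # roll a backward movement back
    # history_state == 0: no history available, nothing to roll back

executed = []
recovery(MC_M_FORWARD, executed.append)
recovery(0, executed.append)
print(executed)  # -> [2]
```

Clearing the history on a rotation command then amounts to resetting history_state to zero before the next movement.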
To make the recovery complete and accurate, one more problem has to be handled: if the robot collides with an obstacle during the execution of a movement command, the recovery needs to know how far the execution of the command had progressed. Otherwise, the recovery would execute the opposite command over the full distance and the robot would move back too far after a successful recovery, ending up in a position unknown to the external control unit. To handle this problem, the software keeps track of the number of executed movement steps in a variable named history_steps. When a recovery is needed, the correct motor routine can be called with the history_steps variable as parameter.

6.4.8 Shutter

As described in section 4.3.1, a communication obstruction, referred to as the shutter, is used to make the download of the datalog possible when multiple RCX units are used. The control of the shutter is handled by a set of three routines, namely shutter_open(), shutter_close() and the overall routine shutter(). The implementation is straightforward: the main routine shutter() closes the shutter using shutter_close() and opens it again after a pre-defined period of time (SHUTTER_TIME) using the shutter_open() routine.

The closing and opening of the shutter is realized by turning the shutter motor on for 500 ms using the OnFor() command with a motor power level of 2. This time was established by experimentation.

6.5 RCX sensor control

In this section the design of the software for the sensor RCX controller is described. Many of the concepts and implementation techniques used in the motor RCX software also apply to the sensor RCX controller; for a discussion of these subjects, refer to paragraph 6.4. In this paragraph the initialization of the environment is discussed, followed by a description of the scanning task of the sensor RCX software.

6.5.1 Initializing the environment

The environment of the sensor RCX controller predominantly consists of sensors. Like the sensors of the motor RCX controller, these sensors have to be initialized to be of any use. As explained in section, the distance sensor needs to be configured as a light sensor with the sensor's mode set to raw. The rotary sensor for the rotation motor needs to be configured as a touch sensor with the sensor mode set to percent. These sensor modes were chosen for convenience. To provide power to the distance measurement sensor, the power output to the sensor always needs to be on; thus, at initialization the output of motor A (see paragraph 6.3) is turned on at full power.

6.5.2 Scanning the environment

The description of the protocol in section makes clear that scanning the environment is an autonomous task of the robot. For this reason the scanning of the environment is performed in a separate task, called scan from now on. The task of scan is, in a nutshell, to read the value of the sensor after each rotation step of the radar and to store the values in the datalog (see section 4.3.1). To perform this task, first a datalog must be created with enough entries to store the sensor data. If a datalog already exists, the old datalog is cleared and a new one is created.
It is convenient to create a datalog with a size equal to the number of rotation steps. Due to the properties of the robot hardware described in section 5.2.5, the rotation circle of the platform consists of 80 steps; this constant is stored in the macro MS_STEPS. To perform the scan, an algorithm is needed with a total of eighty loop iterations, where in each iteration the value of the sensor is read and added to the datalog, after which the rotation of the radar is advanced by one step. But, due to the nature of the radar's rotary mechanism described in section 5.2.5, a step is made on both the up-flank and the down-flank of the sensor's signal, instead of only on the up-flank or the down-flank. Therefore an algorithm is needed in which each loop iteration consists of two sensor readings and two rotation steps. The number of iterations made by this algorithm equals half of MS_STEPS, because in each iteration two scan steps are taken and two measurements are done.
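The loop structure described above can be sketched as follows. This is an illustrative Python model, not the NQC task: MS_STEPS comes from the text, while read_sensor() and rotate_step() are hypothetical stand-ins for the distance sensor and the radar's rotation motor.

```python
# Illustrative model of the scan loop. read_sensor() and
# rotate_step() are hypothetical hardware stand-ins.
MS_STEPS = 80  # rotation steps in a full circle (section 5.2.5)

def scan(read_sensor, rotate_step):
    """Return a datalog of MS_STEPS distance readings.

    Each iteration performs two readings and two rotation steps,
    because the rotary mechanism steps on both the up-flank and
    the down-flank of the sensor signal.
    """
    datalog = []
    for _ in range(MS_STEPS // 2):
        datalog.append(read_sensor())
        rotate_step()                 # step on the up-flank
        datalog.append(read_sensor())
        rotate_step()                 # step on the down-flank
    return datalog

readings = iter(range(1000))
log = scan(lambda: next(readings), lambda: None)
print(len(log))  # -> 80
```

The per-step averaging discussed next would replace each read_sensor() call with the mean of several readings.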

To increase the precision of the measurements made by the sensor, it is possible to take the average of a number of sensor readings for each rotation step. As described in the tests of paragraph, the sensor measurements tend to bounce a little. The disadvantage of this solution is a slower scanning algorithm. We chose to take the average of a set of five measurements for more precision; this way a reasonable scanning speed is still achieved. After each scan, an AYT signal is sent to the external control unit to check the communication. This also indicates to the external control unit that the scan is complete, and it replies with an acknowledgement. If the acknowledgement is not received, the scan task shuts down all tasks of the sensor RCX software. Conforming to the communication protocol described in section 4.3.2, the command cycle must be able to respond to questions from the control unit about the status of the scanning process (the SC_READY protocol command). However, NQC does not provide the capability to determine whether the scan task is active or inactive, so a variable containing the status of the scan task is declared. The integer variable created for this purpose is named scanning; it has the value zero if the scan task is inactive and one if it is active. The same applies to the SC_VALUE protocol command: during the scanning, each value read from the sensor is stored in a global integer variable named value.

6.6 Summary

For the construction of the basic framework needed for the two RCX control units, a state diagram is used. The states are translated to methods in the framework and the transition logic of the state diagram is represented by the implementation of these methods. One of these methods implements the RCX controller specific command cycle. After establishing which external objects are connected to which channels on each RCX controller, the command cycle part of the RCX software is implemented.
For the vehicle RCX, this implementation controls the movement of the vehicle's motors. The motors' movements are regulated by reading the related rotary sensors, that is, by detecting the interruptions of the light beam of such a rotary sensor. Simply turning the motors on for certain amounts of time is not reliable enough for the desired precision, since the battery level would influence the actual rotary movement of the motors. The rotary sensors of the vehicle RCX are initialized to make sure they start in a well-defined state. Furthermore, the motors are initially turned a few steps forward to compensate for the slack in the LEGO construction. A special protocol command is used to check the communication after a vehicle movement. This is necessary because the robot might move out of range. The RCX autonomously takes appropriate action to resolve the situation; this action is called recovery.

Recovery is also necessary if the robot has bumped into some object. This is detected by the vehicle RCX control software, which is notified through an RCX 2.0 feature called an event, which works much like a processor interrupt. The alternative method is polling, which is more complex to implement and causes extra load on the processor of the RCX. The shutter of the vehicle is used to disrupt the communication of the vehicle RCX. This is done on command of the external control unit and is used to prevent the vehicle RCX from corrupting the download of the datalog from the sensor RCX. The sensor RCX performs scans of the environment. Such a scan is an atomic operation from the perspective of the external control unit: no interference from the external control unit is needed. After the scan is completed, the external control unit is notified of this completion by an acknowledgement.

7. External Control Unit

In this chapter the external control unit is designed. As mentioned earlier in section, the external control unit consists of three major components:

- The RCX Communication. This part of the system communicates with the two RCX units (described in and programmed in chapter 6). It consists of the following subsystems:
  - Movement control. This subsystem controls the vehicle RCX.
  - Sensor control. This subsystem controls the sensor RCX.
- The GIS Communication. This component handles the communication with the GIS database system (described in section 4.2.3).
- Motion Planning. This subsystem implements the movement strategy required in section.

These three components have to be linked to each other in the system. The RCX Communication and Motion Planning components are closely connected to each other; the GIS Communication component is more like an outside observer that redirects communication to the GIS when needed. The following dataflow diagram gives a rough overview of the external control unit linked to the RCX controllers on the robot.

Figure 23 - Control Unit Dataflow Diagram

In the following paragraph, the RCX Communication component is explained; thereafter, in paragraph 7.2, the GIS Communication subsystem is designed. Finally, in paragraph 7.3, the motion planning algorithm is specified to support the explanation of the Motion Planning component.

7.1 RCX Communication

The RCX Communication subsystem of the external controller implements the low level LEGO protocol to communicate with the RCXs via the infrared tower, using the method chosen in section. This subsystem should also contain the functionality to retrieve the datalog (described in section 4.3.1) from an RCX, because the distance measurements are stored in this datalog. As explained at the beginning of this chapter, the RCX Communication component has two subcomponents: Movement Control and Sensor Control. These implement the RCX dependent commands, like "move forward" or "create distance scan".

7.1.1 The SendMessage command

The SendMessage command (op-code) has an argument of one byte; a single SendMessage call will be 9 bytes long when encoded into the LEGO protocol [PROUDFOOT]. The package that is sent has the following composition (footnote 4):

Bytes   Meaning                 Value
1-3     Header                  0x55 0xFF 0x00
4-5     SendMessage op-code     0xF7, complement
6-7     SendMessage value       value, complement
8-9     Checksum                checksum, complement

At the end of the package the checksum of the body is added. The checksum is calculated as the sum of all the bytes in the package body (footnote 5). When this message is sent to the LEGO infrared tower, the tower itself will echo this exact same package back to the computer. This does not mean the package was successfully sent to the RCX; it just means the infrared tower is still working. When an RCX sends a message back to the infrared tower, it is constructed in the same way. The echo of the infrared tower will always be first in the input buffer of the computer, so before a reply of the RCX can be read, the echoed packet must be read first.
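The framing and checksum rules above can be sketched as follows. This is an illustrative Python model, not the project's implementation: it assumes the standard three-byte LEGO header 0x55 0xFF 0x00, the SendMessage op-code 0xF7, a checksum that is the byte-sum of the body, and the scheme in which every body byte is followed by its complement.

```python
# Illustrative sketch of SendMessage framing under the stated
# assumptions (header, complement scheme, byte-sum checksum).
def build_send_message(value):
    """Encode a SendMessage call (op-code 0xF7) into a 9-byte packet."""
    body = [0xF7, value & 0xFF]
    checksum = sum(body) & 0xFF          # sum of the body bytes
    packet = [0x55, 0xFF, 0x00]          # fixed header
    for byte in body + [checksum]:
        packet.append(byte)
        packet.append(byte ^ 0xFF)       # complement after every byte
    return bytes(packet)

packet = build_send_message(0x01)
print(len(packet), packet.hex())  # -> 9 55ff00f70801fef807
```

Reading a reply would first consume an identical echo of this packet from the input buffer before parsing the RCX's answer.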
The communication via infrared is unreliable: the data sent or received can become corrupted or get lost completely. Corrupted data is simply discarded by the RCX. In the protocol designed in section 4.3.2, there should always be a reply from the RCX to an issued command. Thus, a timeout needs to be added to the system: when this time runs out, the command must be resent until a reply is received. Additionally, there should be a limit to the number of retries, because the robot could be lost completely, e.g. due to an empty battery or being out of sight.

Footnote 4: In fact, the body of the LEGO package will be doubled in size: after every byte its complement is added. But, for the sake of clarity, we will assume all the data is in the normal format from here on.
Footnote 5: The complements described in footnote 4 are not included.

Not all commands in the protocol for the RCX and the external control unit (section 4.3.2) behave in the same way. For most commands, the RCX will only send the mandatory acknowledge response, right after the command has been processed. In contrast, the MC_M_FORWARD, MC_M_BACKWARD and SC_SCAN commands will also send an AreYouThere request after the RCX has completed the requested command; the control unit must respond to this request with an acknowledgement. Thus, there are two types of commands. The acknowledgement is sent after the processing of the requested command, because a problem could occur during the execution of the command, in which case a negative acknowledgement is sent. In other words, it is unknown whether a command has been received until the command has been executed by the robot. But, because the robot ignores everything while executing a command, it is safe to resend the request command until we receive a response.

7.1.2 The datalog

As described earlier in section, the retrieval of the datalog relies entirely on the RCX op-codes that can retrieve the datalog. The package sent to the infrared tower has the same format as the package for the SendMessage op-code; of course, the op-code and its arguments are different. The op-code to request the datalog is the hexadecimal value 0xA4. This op-code has two arguments which both occupy 2 bytes (in big endian byte order): the first argument is the start offset and the second argument is the number of elements to retrieve (n). The RCX will respond with a package containing the op-code with hexadecimal value 0x62 and n*3 bytes of data (where n is the number of elements that were requested).
Thus, each element uses 3 bytes: the first byte indicates the source of the datalog entry (sensor 1 in our case, but it could also be a timer) and the last 2 bytes contain the actual datalog value (in big endian byte order). The last byte of the package contains the checksum. The first entry of the datalog is always the total size of the datalog (including that entry); this entry has the type identified by the hexadecimal value 0xFF. In section it was explained that the RCX will always respond to these op-codes, even if its datalog is empty. Because this project uses two RCX controllers, this would always result in data corruption of the first part of the retrieval. Only one of the used RCX controllers needs the datalog feature; the datalog feature of the other RCX should be disabled or crippled when the datalog is retrieved. The datalog contains much more data than the normal communication; thus the communication session takes longer and the chance that the data gets corrupted is much higher. When the datalog retrieval has failed, the whole datalog retrieval cycle has to be performed again.
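The reply layout described above can be sketched with a small parser. This is an illustrative Python model under the text's simplified view (complement bytes ignored); the example reply bytes are fabricated.

```python
# Illustrative parser for a datalog reply: op-code 0x62, then
# n*3 bytes of entries, then a checksum byte (complements ignored).
def parse_datalog_reply(body):
    """Return (source, value) pairs from a 0x62 datalog reply body."""
    assert body[0] == 0x62, "not a datalog reply"
    entries = []
    data = body[1:-1]                         # strip op-code and checksum
    for i in range(0, len(data), 3):
        source = data[i]                      # 0xFF marks the size entry
        value = (data[i + 1] << 8) | data[i + 2]  # big endian value
        entries.append((source, value))
    return entries

# Fabricated reply: a size entry (3 entries in total) followed by
# two sensor-1 readings; the trailing byte stands in for the checksum.
reply = bytes([0x62, 0xFF, 0x00, 0x03, 0x01, 0x00, 0x2A, 0x01, 0x01, 0x00, 0x00])
print(parse_datalog_reply(reply))  # -> [(255, 3), (1, 42), (1, 256)]
```

The first pair, with source 0xFF, carries the total datalog size as described in the text.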

7.1.3 Power saving

The LEGO infrared tower has a power saving feature that closes the communication when no data has been sent or received within a couple of seconds. This means that while no commands are sent to either RCX, a bogus command should be broadcast, just to make sure the power saving does not take effect. There is a special op-code for this, called the Alive op-code, which should be sent every 5 seconds. The reply of the RCX to this op-code does not matter and should be flushed from the input buffer of the external control unit. If no data is sent at least every 5 seconds, the communication stops working without any notification: data can still be written to the infrared tower, but nothing will be sent or received by the tower.

7.2 GIS Communication

The GIS Communication subsystem is a simple TCP/IP server that accepts the commands sent by the GIS as defined by the protocol in section. This protocol comprises nothing more than setting some variables in the external control unit and starting/stopping the motion planning process. Every distance scan made during the motion planning must be forwarded to the GIS. The GIS Communication subsystem needs to run in a separate thread, otherwise the communication can stall while the rest of the system is performing a task. The GIS Communication server should always be reachable for the GIS; otherwise it would be impossible to abort the current scan, for instance.

7.3 Motion Planning

In this paragraph the movement strategy required in section is explained for the designed robot. That paragraph also states that the environment has straight corners and the walls have flat surfaces. Recapitulating, the robot has two generic requirements:

- It must take spatial measurements of its current surrounding area.
- It must move through the unknown territory.

These two functions have to be combined such that the robot can create a map of the unknown territory.
A set of spatial measurements of the environment is from now on referred to as a scan. For the designed robot this is a list of distance measurements performed by the distance sensor on the robot (see section 5.2.5). The sensor rotates during the scan, resulting in a set of distance measurements of the area around the robot. In the motion planning algorithm described here, a scan is performed after the robot has moved a predefined distance, or when an important movement decision (e.g. whether to turn, to move away from or to move closer to an object) has just been made.

Since the territory is (required to be) unknown, the initial location and rotation of the robot are not set. So the start position is defined to be at the (x,y) coordinates (0,0) and the robot is assumed to be facing forward. This initial location is referred to as the origin. Facing forward means that the wheels are turned such that a forward command makes the robot move in the same direction as the distance sensor is facing (along the x-axis). From this point on the robot can start moving through the territory. The robot should never move farther than the distance sensor can possibly measure, and it should keep a safe distance between itself and nearby objects; this distance is called the inner border. To keep matters simple, the robot only makes turns of 90 degrees and only in one direction (we chose to turn clockwise). If the robot also turned the other way, the slack problems introduced in section would cause trouble: the slack would have to be compensated each time the direction of the rotation changed. This is not beneficial for the movement precision and therefore we chose to make the robot turn its wheels in one direction only. During the motion planning it is important to know what part of the area is already known; hence, a history of the analyzed area has to be kept. Depending on the motion planning method used, this information might be used to decide which way to turn. Several strategies exist that solve the problem of moving through an unknown area while mapping it. Two common, and easy to implement, methods are follow the wall and sweeping, or a combination of the two.

7.3.1 Follow the wall

Following a wall basically boils down to picking a wall and following it without losing contact. At the beginning a scan is performed to find the nearby walls. A wall should be within a given distance from the robot to be detected as a wall; this distance is called the outer border.
The outer border should be greater than the inner border described earlier and smaller than the maximum distance the sensor can measure. When no wall is detected, the robot will move forward and scan again (Figure 24, point 1), until a wall is found.

1) When a wall is found in front of the robot (Figure 24, points 2 and 5), the robot will move forward until the inner border is reached. After another scan, the robot will decide which way to turn. If there is no wall to the right of the robot, it will turn right. Otherwise, if there is no wall to the left, the robot will turn left. If there are walls on both sides, the robot will turn around completely. After the robot moves forward one more time, it will go back to the beginning of the routine.

2) When a wall is found on either side, the robot will move closer to that wall until it reaches the inner border. After this the robot will keep moving forward. If the robot gets too close (meaning the distance to the wall is less than the inner border) or too far away from the wall, it will correct itself. This way, the distance to the wall is at least the inner border. When the followed wall is lost (no wall detected within the outer border limit), the robot will assume it passed the corner of the wall (Figure 24, point 3). The robot will turn to the direction the wall used to be; so if the wall

was still there, the robot would face the wall. Then, the robot moves forward one more time and returns to the beginning of the routine (Figure 24, point 4). The flowchart that schematically depicts this algorithm is shown in Figure 25. The process begins in the start state.

Figure 24 Follow the wall

Figure 25 Follow the wall flowchart
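The decision logic of this routine can be sketched as follows. This is an illustrative Python sketch, not the project's NQC code; the border values and the scan representation are assumptions made for the example.

```python
# Illustrative sketch of the wall-following decision routine described
# above. The constants and the scan dictionary are assumptions for this
# example; the actual system ran NQC code on the RCX.

INNER_BORDER = 20   # cm: minimum safe distance to a wall (assumed value)
OUTER_BORDER = 50   # cm: a wall farther away than this is "not detected"

def follow_wall_step(scan):
    """Decide the next action from one scan.

    `scan` maps 'front', 'left', 'right' to measured distances in cm.
    Returns one of the actions named in the flowchart.
    """
    front, left, right = scan['front'], scan['left'], scan['right']

    # No wall within the outer border: keep moving forward (point 1).
    if min(front, left, right) > OUTER_BORDER:
        return 'forward'

    # Wall in front (points 2 and 5): approach it to the inner border,
    # then turn away from whichever side is blocked.
    if front <= OUTER_BORDER:
        if front > INNER_BORDER:
            return 'forward'            # close the gap first
        if right > OUTER_BORDER:
            return 'turn_right'
        if left > OUTER_BORDER:
            return 'turn_left'
        return 'turn_around'            # walls on both sides

    # Wall on one side only: keep it at roughly the inner border.
    if min(left, right) < INNER_BORDER:
        return 'correct_away'           # too close, steer away
    return 'forward'                    # follow the wall
```

For instance, a scan reporting open space on all sides yields `'forward'`, while a wall directly ahead with open space to the right yields `'turn_right'`.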

This method has two major drawbacks:
- The center of the room might never be analyzed.
- The robot may stray far away from the origin (for example when it enters a long hallway). This is a problem, because it could take a very long time to create a map of a room.

Sweeping

The sweeping technique is even more basic than following the wall. During sweeping, a map of the room is created row by row. The robot keeps moving forward and makes a scan after every movement (Figure 26, point 1). If the robot detects a wall in front of it, it will move to the next row, turn around and start at the beginning of the routine again (Figure 26, points 2 and 3). Initially it does not matter whether the robot moves to the next row above or below the origin, as long as it is the same direction every time. If it is not possible to move to the next row, the robot should move back to the origin row (the first row that was analyzed) and start analyzing the rows in the other direction.

Figure 26 Sweeping
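The row-by-row visiting order that sweeping produces can be sketched as a simple boustrophedon path over a grid. This is an illustration under the assumption of a discrete grid of equally sized cells; the real robot works with continuous movements and scans.

```python
# Minimal sketch of the sweeping order over a rectangular grid of rows:
# the robot covers a row, shifts one row in a fixed direction, and turns
# around. Grid size and cell coordinates are assumptions for this example.

def sweep_path(rows, cols):
    """Return the (row, col) cells in the order a sweeping robot would
    visit them, starting at the origin row 0, always shifting to the
    next row in the same direction and reversing heading on each row
    change."""
    path = []
    for r in range(rows):
        # Even rows are driven left-to-right, odd rows right-to-left.
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path
```

For a 2-by-3 grid this yields the snake-like order (0,0), (0,1), (0,2), (1,2), (1,1), (1,0).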

Figure 27 Sweeping flowchart

The flowchart of the algorithm described here is presented in Figure 27. The process starts in the start state. The sweeping method has problems with junctions in the room; the robot does not really know whether it passed a junction or not. The result is that the other side of the junction is not analyzed. So, at the end of the sweeping the robot has to go back to these junctions and scan the other (unknown) side. Just like the follow the wall technique, sweeping also has problems with long hallways where the robot could stray away.

The Combination

When the robot follows the walls, the system will sooner have an idea of the outline of the room, but there will be much unknown territory in the middle of the room. With sweeping this is the other way around: it takes longer to get an outline, but at least it is known what the center of the scanned area looks like. Both methods have the problem that the robot can stray away from the origin, with the result that it takes a very long time before the system has a map of the room. A solution to this problem is to define a maximum absolute distance from the origin. When the robot reaches this maximum distance, it should try not to move any further away from the origin. For example, with sweeping the robot will stop moving

forward and it will go to the next row and turn around. It can do this until the next row is also too far away. Using the maximum distance makes sure the robot does not wander off. It is a logical choice to combine the wall following and sweeping techniques using the maximum distance threshold. This will yield an overall good performance when creating a map. In this combination, the robot starts with the sweeping technique. When a new wall is detected, the robot switches to the follow the wall technique. Every time the follow the wall method would return to the beginning of its own algorithm, it instead returns to the beginning of the combined algorithm. This way, it switches to the sweeping technique again. So, basically, when the robot is not following a wall, it is making a sweep of the close environment. When the robot is following a wall and it reaches the maximum distance threshold, it will stop moving forward and turn to the side in the direction of the origin. This way, if the robot moves forward, the distance to the origin will become smaller (Figure 28, point 1).

Figure 28 Combined

This combined method will not create a map of the whole area either; there is a chance that the center of the map is still unknown. The missing parts can be checked after the original route has been completed, using the same method. Also, the rest of the environment, outside the maximum distance, remains unknown. This would violate the requirement stated in section, but this can be solved by analyzing these unknown areas in additional runs. At the end, the maps of the individual runs can then be combined to create a map of the whole area.

Figure 29 Smaller maximum distance

The flowchart of the combined algorithm described here is shown in Figure 30.

Figure 30 Combined flowchart
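The mode switching of the combined strategy, including the maximum distance threshold, can be sketched as follows. This is a hypothetical Python illustration; the constants, names and scan summary are assumptions, and the authoritative description is the flowchart in Figure 30.

```python
# Sketch of the combined strategy's mode switching and the maximum
# distance threshold. All names, constants and the scan summary are
# assumptions for illustration; the real algorithm is given by the
# flowchart in Figure 30.

MAX_DISTANCE = 200  # cm: maximum allowed distance from the origin (assumed)
NEAR_WALL    = 50   # cm: a wall within this range triggers wall following

def combined_step(mode, pos, wall_distance):
    """Return (new_mode, action) for one decision point.

    mode: 'sweep' or 'follow'; pos: (x, y) relative to the origin;
    wall_distance: nearest wall distance from the latest scan, in cm.
    """
    x, y = pos
    # Too far from the origin: turn back towards it, regardless of mode.
    if (x * x + y * y) ** 0.5 > MAX_DISTANCE:
        return 'sweep', 'turn_towards_origin'
    # A nearby wall switches the robot to wall following.
    if wall_distance <= NEAR_WALL:
        return 'follow', 'follow_wall'
    # Otherwise sweep the open area row by row.
    return 'sweep', 'forward'
```

Starting at the origin with no wall nearby, the robot simply sweeps forward; detecting a wall within the threshold switches it to wall-following mode, and crossing the maximum distance forces it back towards the origin.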

7.4 Conclusion

The external control unit comprises three subsystems: the RCX Communication, the GIS Communication and the Motion Planning. For the implementation of the RCX communication using the SendMessage and datalog communication techniques, the most important thing to keep in mind is to check for data corruption and miscommunication, since the infrared communication channel used is unreliable. Using both retries and timeouts, a satisfactory result can be achieved. There are several methods for motion planning. Two common methods are follow the wall and sweeping. The follow the wall algorithm follows the walls and will potentially never scan the entire room, since it sticks to the walls. The sweeping technique sweeps through the room in adjacent rows. Using the sweeping technique, it can take a vast amount of time before the system knows the contour of the room, and junctions are a problem. With both techniques, the robot can stray far away from the initial position, called the origin. This situation will for instance occur when the robot enters a large hallway. A logical solution is to combine both techniques into a solution that takes the best of both worlds. To support this combination, both algorithms are altered such that the robot will not stray away from the origin. This way, a much better performance can be achieved. A more advanced motion planning technique would add too much overhead to this project.

8. Testing

8.1 Robot

To verify the mechanical design of the robot specified in chapter 5, the robot has been tested. For these mechanical tests, the three basic functions of this robot were checked. First of all, the synchrodrive was tested for its stability and precision. The radar was tested by mounting it on a small LEGO construction instead of on the robot. Finally, the bumper design was tested by actually mounting it on the vehicle part of the robot and letting it drive around. This paragraph describes these tests of the mechanical design of the robot. The following sections explain the tests and the results for the basic functionalities of the robot.

The synchrodrive

To test the synchrodrive for its precision, the robot was placed on a whiteboard and equipped with a whiteboard marker. This whiteboard marker was mounted firmly on the side of the robot with its tip touching the whiteboard. For this test, a small program was written that steers the robot such that it continuously drives a line (back and forth), a triangle, a square, a hexagon or even a circle. This way the robot was transformed into a mobile plotter. This way it could also be determined with high precision how far the robot drives forward in a single step of the rotary sensor. The test succeeds if the robot stays on virtually the same path. There are several things that can go wrong. If the robot steers inaccurately, it will drive in the wrong direction and divert further from its path with each steering operation. If the robot drives forward or backward inaccurately, it will not drive far enough or it will drive too far. Furthermore, the robot should stay in the same position during the steering operation. If it does not, it will draw tiny arcs or circles in the corners of the picture.

Figure 31 Robot plots
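The shape-driving part of such a test program can be sketched as follows. This is Python for illustration only; the actual program was written in NQC, and the command tuples are an assumed representation.

```python
# Sketch of the shape-driving test program described above: drive one
# lap of a regular polygon by alternating forward moves with turns over
# the exterior angle. The command representation is an assumption.

def polygon_commands(sides, side_length_cm):
    """Return the command list for one lap of a regular polygon:
    forward one side, then turn by the exterior angle (360/sides)."""
    turn = 360.0 / sides   # exterior angle in degrees
    commands = []
    for _ in range(sides):
        commands.append(('forward', side_length_cm))
        commands.append(('turn', turn))
    return commands
```

A square is then four forward moves interleaved with four 90-degree turns; a hexagon uses 60-degree turns, and increasing the number of sides approximates the circle mentioned above.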

In these tests it was established that the robot drives cm in a single rotary step. With this information the control unit's odometry algorithm was configured (see sections and for more information on odometry and why this distance is important).

Initial testing

During testing, the robot was not very accurate and somewhat inconsistent. The first tests were done by drawing squares. Initially, the robot would draw a square of which the beginning and end would not entirely meet. By tuning the construction a little, this was improved significantly. But if it was left alone, it would draw a picture like Figure 31 B. By fiddling a little with the position of the wheels it would produce either a picture like Figure 31 A, or a combination of both. Note that for alternative B the robot would eventually drive off the whiteboard (and off the table). The initial faults made by the robot were sometimes multiple centimeters and, especially for alternative B, the errors got ever worse. The other drawings, like circles and hexagons, showed very similar results. Even something as simple as driving a straight line back and forth did not work very well. The cause of malfunction A seems to be the fact that the steered angles of the squares were not always exactly ninety degrees; the corners of the squares were sometimes a few degrees off. The cause of alternative B seems to be that the distances driven were not always entirely correct. The striking thing with both malfunctions is that they seem to be very consistent in occurrence. The malfunction in B causes the vehicle to divert in a single fixed direction, while malfunction A draws a nice Spirograph picture. The direction of the deviation of alternative B changed if the initial orientation of the robot was altered. So, it seems that the problem is related to the robot and not to the environment influencing the robot's movements (the table could have been sloping, for instance).
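How a small, consistent corner error compounds into the Spirograph-like drawings of malfunction A can be illustrated with a short dead-reckoning calculation. This is purely illustrative; the side length and error values below are invented.

```python
# Illustration (not taken from the report's measurements) of how a
# small, consistent steering error compounds: drawing a "square" whose
# corners are slightly more than 90 degrees leaves the pen away from
# the starting point, and the error grows with every lap.

import math

def plot_square_laps(side_cm, corner_error_deg, laps):
    """Dead-reckon the pen position after driving `laps` squares whose
    corners are (90 + corner_error_deg) degrees. Returns the final
    distance from the start point, in cm."""
    x = y = heading = 0.0
    for _ in range(4 * laps):
        x += side_cm * math.cos(heading)
        y += side_cm * math.sin(heading)
        heading += math.radians(90 + corner_error_deg)
    return math.hypot(x, y)
```

With a perfect 90-degree corner the square closes exactly; with corners only two degrees off, the end point already misses the start by several centimeters after a single 40 cm lap.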
During testing it was noticed that the vehicle did not entirely stay in the same position during steering, although this was not very visible in the corners of the squares: the pen moved only fractions of millimeters. Although this was never tested, a suspicion arose that the angle faults might be caused by the vehicle changing its orientation a little during steering. Note that this could also have been caused by the pen mounted on the side of the robot. Especially alternative B is very suspicious, since the malfunction seems to be related to one side of the robot. A pen drawing on the whiteboard will always cause a little friction; hence it could significantly influence the robot's movement and give the test inaccurate results. To test the influence of the pen, the pen was removed and the robot was set loose on the whiteboard again. Every time the robot was steering in a corner, a small dot was drawn at the location of the robot. This position was chosen to be the tip of a LEGO axle pointing down to the whiteboard; every time the robot was steering, a dot was drawn exactly below this axle. The results were identical to the results of the previous tests: the robot was still inaccurate. A big problem with this vehicle design is the fact that it is very heavy. The turntables of the wheel units have a hard time turning when they have to carry the weight

of the vehicle. If the robot is lifted from the floor, the wheels steer with no problem at all. This cannot be solved by more or bigger wheels, simply because the problem lies in the contact surface between the top and the bottom part of the turntable. It is possible that the wheels' drive during steering, as explained in the robot design of section 5.2.2, is inaccurate, making those wheels slip over the floor. But that would suggest that the robot would make horrible pictures with circles in the corners. Although these earlier described arcs or circles in the drawing were not visible, it was evident that the robot did move a little during steering. This could be seen if the tip of the pen was watched closely. Furthermore, the vehicle has more trouble steering when it is not on a smooth surface like the whiteboard; the robot has great difficulty steering on carpet. If the wheels slip, the steering would indeed be severely compromised on carpet. So, this would support the theory that the wheels do slip a little. Another, possibly more plausible, theory is that the slack in the LEGO transmissions causes the inaccuracies. During testing it was noticeable that the turntables turn in a very stepwise manner; they do not turn smoothly. The earlier described excessive weight of the vehicle puts a very high load on the turntables. Due to the slack in the transmission, the motor first has to turn its axle a while to cross a certain threshold before the turntable actually turns. Once this threshold is crossed, the turntable suddenly leaps forward and the motor has to cross the threshold again to make the turntable turn further. This causes the stepwise turntable behavior. The described leaps could cause the turntable to turn just a little too far. Or, if the leap did not take place before the robot starts moving again, the turntable will not have moved far enough.
This would explain the inaccuracies.

Chains

Near the deadline of this project the LEGO chains described in section were finally available, and it was thought that these would provide a (partial) solution to the problem. However, these chains proved to be very weak: they would break if only a little stress was put on them. Therefore the gears were not replaced by the chains; instead, the chains were added as a supplement. This obviously added some more friction to the synchronous part of the steering transmission described in section. The resulting accuracy was much better, but still not very convincing: the vehicle would still deviate from its path by multiple centimeters if it was left driving alone for a while.

Conclusion

In conclusion, the vehicle's synchrodrive steering mechanism is not as precise as expected in the design. Apparently still too many gears were used (causing too much slack) and the vehicle is too heavy. Furthermore, the chains did not provide the solution that was expected. In future versions, this robot's weight needs to be reduced or the weight must be divided more equally (over possibly more turntables). Furthermore, the transmissions need to be revised to limit the slack problems.

For this project the vehicle should be helped a little by putting it in a friendly environment, with a hard and smooth floor, to reduce the occurrence of precision problems. This way, it should still be possible to get moderate results.

The Radar

The radar's rotary mechanism was tested separately from the radar sensor. The distance sensor's tests are explained first, and after that the rotary mechanism is evaluated. Finally, the complete radar is used to scan a square room. At the end of this paragraph the complete radar is evaluated.

The distance sensor

The sensor was calibrated before testing. This calibration involves tuning the potentiometers such that the measurement range is maximal. During testing and calibration, the radar was mounted on a platform separate from the robot. During testing, the characteristic of the sensor was determined. For testing and calibration, the sensor was placed next to a ruler. A white-surfaced box was placed before the sensor. The distance from the sensor to the box was varied from 10 to 80 centimeters, because this is the official range of the sensor. If the potentiometers on the sensor's interface board are not tuned well, this range will not be achieved. So, before the actual testing the sensor was tuned such that the full range was measurable. Refer to the radar design section for more information on the distance sensor and how to tune it. As described in section, the sensor uses infrared light and this might cause problems with translucent or mirror-like surfaces. During this project there was insufficient time to put this to the test; hence, the behavior of the sensor under these conditions is unknown.

Characteristic

A special program was written for calibration and testing. This program reads the sensor value and displays it on the display of the RCX controller. A later version takes the average of multiple measurements. This is done because the measurements tend to bounce a little.
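The averaging done by the later version of the program can be sketched like this. It is an illustrative Python sketch; the sample count is an assumption, and the real code ran in NQC on the RCX.

```python
# Sketch of the measurement averaging used to stabilise the bouncing
# raw sensor values. The sample count of 5 is an assumption; the RCX
# version worked with integer arithmetic, as modelled here.

def averaged_reading(read_raw, samples=5):
    """Take `samples` raw readings via the callable `read_raw` and
    return their integer average, as an RCX display would show it."""
    total = sum(read_raw() for _ in range(samples))
    return total // samples
```

Given five raw readings bouncing around a true value, the average suppresses most of the jitter at the cost of a slower effective measurement rate.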
The display actually shows a raw sensor value instead of the actual distance, and the link between those values is nonlinear. Therefore, it was necessary to determine the characteristic of the sensor. The results are shown in Table 9. Note that these results depend highly on the calibration: if the sensor is recalibrated, these measurements have to be repeated.

Table 9: The distance versus the raw sensor values.

The results were plotted in a graph, displayed in Figure 32. It is clear that the correspondence between the distance and the raw sensor value is nonlinear. Therefore a conversion function was defined in the control software; actually, a mathematical approximation was derived for this characteristic. Another option would be to write a function that makes linear approximations between the measured values. This might be an even better solution, because it is quite hard to derive a mathematical expression for this characteristic. As can be seen in Figure 32, the characteristic seems to be exponential in form. This was confirmed when the mathematical approximation was defined.

Figure 32 Sensor characteristic

Note that according to this characteristic the sensor is more accurate at short distances. So, it might even be necessary to discard measurements above 75 cm, because the precision gets very low. Furthermore, the sensor is not very stable in this range: the measured values tend to bounce severely.

Sensor stability

During testing, it was noticeable that the sensor did not continuously measure the same value. The sensor value mostly kept bouncing within a range of 8 values around the actual value. This especially occurred at great distances. Therefore, it was necessary to take measurement averages to stabilize the measurements. Furthermore, there seems to be a small error in the interface design. Probably, the wrong voltage regulator was chosen to build this interface. This regulator enforces a supply voltage for the sensor of about 5 Volts, but it needs a minimum of 7 Volts to

keep this voltage upright. If the batteries of the RCX run low, this boundary is crossed, causing very strange measurement errors: beyond 80 cm the sensor does not measure 80 cm but a distance much closer to the sensor, like 50 cm. This renders a large range of the sensor useless when the batteries are low. That can be solved by connecting an external battery to the sensor interface, so it is no longer dependent on the power supply from the RCX, or simply by replacing the voltage regulator with a version that does accept a power supply of less than 7 Volts. In summary, the sensor works correctly and produces measurement values stable enough for scanning an environment. This is put further to the test in section.

The rotary mechanism

To test the rotary mechanism of the radar, a small program was written that continuously rotates the radar. With each detected step (tooth on the central gear), a beep is produced. This way it could be heard when the touch sensor missed a step. Missing a step is of course not allowed. Initially, the lever of the rotary mechanism (see 5.2.5) was connected to the central fixed axle with a small rubber belt, so it would always leap back and would not stick on the touch sensor. Unfortunately, in many cases this caused the lever not to touch the touch sensor at all anymore. But if this rubber belt is left out, the lever leaps back with the small spring inside the touch sensor (as described in the design). The sensor is fairly stable; it cannot be influenced much by external factors: moving it by hand is very difficult. Yet, if the 80-step algorithm described earlier in section is used and the lever of the mechanism is on top of a tooth, it can sometimes be turned a little. The fact that the RCX unit is mounted on the radar itself and rotated along with the rest of the radar seems to be no problem at all. As long as this RCX is pointing in the general direction of the base tower when a scan is complete, all is well.
Even if it is not pointing towards the base tower, the RCX is mostly still able to communicate, presumably due to reflections off the walls. After the testing, it was concluded that this rotary mechanism works well enough. A disadvantage is the speed limitation: if the motor drives the radar around too fast, it will start missing steps. In the next section the rotary mechanism is tested along with the sensor to produce the desired scans of the environment.

Scanning

To let the radar actually scan something, a small program was written that makes a single scan of the environment by rotating the radar and measuring the distance at each step. The measured values were made available to the external computer by means of the datalog explained earlier in the design in section. For these tests, the radar was placed in a square room and scans were made. An external program produced pictures of these scans. One such scan is shown in Figure 33. The lines emerging from the center are the measured distances from the core of the radar to the walls. The ends of these lines are connected to each other; this gives the radar's interpretation of the shape of the room. The square dotted line indicates what this shape should have looked like.

Striking is the fact that the sensor measures very large erroneous distances in the corners. These are displayed as huge spikes in the corners of the displayed room. It looks like the infrared signal is lost in those corners, making the sensor think it is measuring infinity. This will eventually not be a very big problem: it should be quite easy to filter these errors out in a later design.

Figure 33 A typical radar scan of a square room

This square room was built using boxes of a more or less consistent color. To test the radar's problems with reflectivity and color differences, boxes with pictures were used. This time a triangle-shaped room was scanned. As an extra test, a small gap was left in one of the walls for the radar to peek out. The result is shown in Figure 34. Again, the lines emerging from the center describe the measured distances and the dotted line describes the contour of the actual room. The gap in one of the walls can be seen nicely. Only this time the scan is a bit ugly: none of the walls is really displayed as a smooth surface. This indicates that the sensor is quite sensitive to color and/or reflectivity variations. And again, the corners are represented by large spikes in the picture. But in general the form of the room can be recognized clearly. Especially if multiple scans are made close enough to each other, the two-dimensional shape of the room should be recognized accurately enough.
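Filtering the corner spikes and converting a scan into two-dimensional wall points, as foreseen for a later design, could look like the sketch below. It is Python for illustration; the step count follows the 80-step rotary mechanism mentioned earlier and the validity threshold follows the 75 cm limit from the sensor characteristic, but both are assumptions here, as is the scan format.

```python
# Sketch of turning one radar scan (a list of distances taken at fixed
# angular steps) into 2D wall points, discarding the "infinity" spikes
# seen in the corners. Threshold and scan format are assumptions.

import math

MAX_VALID = 75.0    # cm: measurements beyond this are treated as spikes

def scan_to_points(distances, robot_x=0.0, robot_y=0.0):
    """Convert a scan to (x, y) wall points around the robot position,
    filtering out-of-range spikes. The i-th measurement is assumed to
    lie at angle 2*pi*i/len(distances) from the robot's heading."""
    points = []
    for i, d in enumerate(distances):
        if d > MAX_VALID:          # corner spike / lost infrared signal
            continue
        angle = 2.0 * math.pi * i / len(distances)
        points.append((robot_x + d * math.cos(angle),
                       robot_y + d * math.sin(angle)))
    return points
```

A scan with one huge corner spike then yields one point fewer than it has measurements, with the remaining points placed at the measured distances around the robot.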

Figure 34 The scan of a triangular room with colorful walls

Conclusion

The chosen distance sensor can reliably measure distances. The interface design and implementation have a few glitches, but these are easy to solve or eliminate from the results. Furthermore, the sensor was never made to scan glass or mirror-like surfaces, so for such environments the results might not meet the requirements. The rotary mechanism is fairly stable and rotates the sensor in a consistent way, at fixed steps, in a full circle. Unfortunately the precision (the number of steps in a full circle) of this design cannot be increased any further in a reliable way, because of the lever-on-gear mechanism used in the design (see section 5.2.5). If this proves to be necessary in the future, another design has to be found. Although the radar sometimes produces less accurate scans, it can scan the environment with sufficient precision: the main features of the environment are visible in the scans and most distortions can be filtered out in the GIS of successive projects.

The bumper

The bumper was tested by releasing the robot, equipped with the bumper, inside a fairly large square room. Again, a small program was written that orders the robot to drive in a straight line until an obstacle is encountered. Then it drives the robot back a little, makes it steer 45 degrees to the right and drives again. This way, the robot will drive through the room and follow the walls. Because the robot was released in a square room and does not change its orientation in relation to the room during movement, every time the same

four points of the bumper would be touched by the walls. Hence, two separate test sessions were held. The first session was dedicated to testing the side levers with the purple hoses of the robot, and the second session was dedicated to testing the green corner hoses (see section 5.2.6). The initial orientation of the robot determines which are tested. During design, it was very clear that the side levers would work, since they are based on a proven LEGO design. Yet, the corner hoses were designed specifically for this project and were thus never tested before. Miraculously the test succeeded instantly, and the robot followed the walls nicely in both tests. These tests were performed over a time span of about an hour. Thus, the bumper works well and is sensitive all around. If it is used correctly by the controller, it will prevent the robot from driving into objects that it did not see with the radar, as required by the requirements of section.

Conclusion

As explained in section of this paragraph, the vehicle part of the robot is not as accurate as required (see section 3.1.2). No solutions were found that could solve these problems within this project. Successive projects will have to find different solutions for the transmissions and the weight distribution on the vehicle, since these seem to be very important factors in the occurring problems. Although the vehicle is not as accurate as required, it still has the potential to produce useful data for map construction. Yet, the robot will have to be helped a little by adjusting the type of floor. Also, it should not be used to scan rooms for long periods of time, since the produced maps would become very inaccurate. Fortunately, the designs of the radar and bumper are successful within the boundaries set in the requirements of section in the analysis of chapter 3. The radar produces accurate enough scans of the environment, although the sensor of the radar has small inherent, design and implementation problems.
And the bumper gives the control software the ability to detect objects missed by the radar, as required. In conclusion, the produced mechanical design is sufficient to solve the problem of this project, if the robot is helped a little with the properties of the environment.

8.2 RCX Software

In the following section we will highlight some important points of the testing phase for the internal control unit represented by the RCX software (see section 4.2.2), beginning with the test method used for the framework of the RCX controller software, followed by the discussion of a problem detected during the testing of the bump sensors.

Framework

As the foundation of the software for the RCX controllers, the framework needed to be thoroughly tested. In particular, the state diagram translated to NQC code was tested. By using the NQC command SetUserDisplay() provided by version 2.0 of the LEGO RCX firmware, it was possible to display the state variable or any other

variable on the display of the RCX controller. This display was described earlier in section 3.2. In conformance with the protocol, series of commands were sent to the RCX controller in various tests. This was done using the NQC command line tool, since the external control unit software was still under development. Each action taken by the framework software could be tested step by step. Thereby the entire framework was completely tested.

Bump Recovery

As seen in section 8.1.3, the bump sensors seemed to work without any problems. However, a problem was detected with the recovery algorithm described in section. After a collision of the robot with an object in the environment, there appeared to be a minor position change between the original position of the robot before the execution of the movement command and the position after a recovery from this movement command. Clearly, there should be no position change between the position of the robot before and after a recovery. Due to this problem, the robot does not comply with the requirement that the control software must be able to position the robot with enough precision to produce a sufficiently detailed map: the actual position is a few centimeters ahead of the position tracked by the system, so the precision is lost after recovery. The reason for this problem is the resistance the robot gets from the object. This resistance is present the whole time the bump sensors touch the object, and it increases while the robot is still moving into the object, until the moment the bump sensor software is triggered and halts the robot. The force the robot gets from the resistance of the object creates slack in the precision of the robot movements, resulting in a position change after a recovery. Possibly the wheels even slip on the floor, making the robot think it has driven much further.
So, maybe the bumper is not as perfect as thought before, or the slack or wheel slippage problems of the vehicle (see section 8.1.1) show up again. To attempt to eliminate the problem in the software, a test was created to determine the average position change. The robot was positioned on a whiteboard twenty centimeters in front of a large object, in this case a heavy rectangular cardboard box. The driving direction of the robot was set straight towards the box. The situation is schematically visualized in the figure below (Figure 35).

Figure 35 The test situation for the bumper recovery algorithm

A simple test program was created to let the robot drive into the obstacle, thereby triggering the bump sensor software to execute a recovery. When the robot finished the recovery, the end position was marked on the board. This test was carried out five times, after which the average position change could be found by calculating the average of the distances between the start position and the five end points. The result was a negative position change of approximately 0.7 centimeters. This means that the execution of the recovery needs an adjustment: additional movement steps must be added so that the end position of the robot after a recovery approximates the start position. As established in section 8.1.1, the robot has a displacement of about cm in each movement step; therefore a correction of 3 or 4 steps is needed. By experiment it was established that a correction of 4 steps gives the best result. The correction is implemented in the motor RCX controller software by declaring a macro with the name RECOVERY_CORRECTION with the value of four. The recovery then simply calls the motor routines with a number of steps equal to the history_step variable plus the RECOVERY_CORRECTION. Note that this solution does not entirely solve the problem: the determined correction value will not always be the same, due to different floor properties for instance. Furthermore, the correction can only be applied in discrete units, and the actual fault cannot be represented precisely this way. However, as described in section 5.1.4, the bumper is needed only as a last resort; thus, it will normally not occur that the robot drives into an object. This possible inaccuracy will therefore not be considered a problem.

Are You There?

During testing, it unfortunately happened quite often that the robot lost communication with the external control unit.
As it appeared later, the batteries of the tower were failing. However, it was never confirmed with certainty that these batteries were causing all this trouble. These problems are also reported in the control unit testing paragraph of this chapter (paragraph 8.3).

This did present an intensive test opportunity for the AreYouThere algorithm, though. Because the robot always assumes the movement is the cause of the connection failure (see 6.4.5), it moves back to the position it had before the command. This worked just fine, but of course it did not resolve the connection failures. Thus, the robot ended in the Panic state described in paragraph 6.1. This behavior was also tested explicitly, with exactly the same results.

Conclusion

Apart from the problems with the bumper described in section 8.2.2, no real problems were found during the testing of the internal control unit represented by the RCX software. The bumper problems were resolved by calibration. Although this might still cause inaccuracies, the major part of the problem is solved and the recovery works well enough. The AreYouThere algorithm of the internal control unit was tested quite often and it worked well in combination with the recovery.

8.3 Control Unit

The most important part of the Control Unit is the communication with the RCX. This was implemented and tested first, before any of the other parts of the control unit were implemented. In addition to the RCX communication, the motion planning has also been tested more or less thoroughly.

RCX Communication

In this section the communication with the RCX is tested.

The SendMessage command

Command reception

The first thing tested was whether the encoding of a command in the LEGO protocol was successfully implemented. For this test, the RCX was programmed to display the value of the received message on the LCD display (see section 3.2). The message was also sent back to the IR tower. A program was started on the computer that sends random messages to the RCX and reads the returned message. Both the sent message and the received message were displayed on the screen. This test was successful, because the sent and the received message were the same.
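The frame encoding tested here can be sketched as follows. This sketch assumes the standard RCX serial framing (a 0x55 0xFF 0x00 header, each byte followed by its complement, and a trailing checksum pair); the report does not spell out the encoding, so treat this as an illustration rather than the project's exact code. The byte/complement pairing is also what lets corrupted input be recognized:

```python
def encode_rcx_frame(payload):
    """Build an RCX IR frame: header, then each byte paired with its
    complement, then a checksum byte (sum of payload) with its complement."""
    frame = [0x55, 0xFF, 0x00]
    for b in payload:
        frame += [b, b ^ 0xFF]
    checksum = sum(payload) & 0xFF
    frame += [checksum, checksum ^ 0xFF]
    return bytes(frame)

def frame_is_intact(frame):
    """Check the byte/complement pairing; garbled or interleaved input
    (e.g. two RCX units replying at once) fails this check."""
    body = frame[3:]
    if frame[:3] != b"\x55\xff\x00" or len(body) % 2:
        return False
    return all(body[i] ^ body[i + 1] == 0xFF for i in range(0, len(body), 2))

# The Alive op-code (0x10) encodes to 55 ff 00 10 ef 10 ef.
```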
However, there was also a lot of garbage data amongst the computer input. The garbage data often contained recurring parts, but its reception was not consistent; sometimes no garbage data was received for a long time.

Communication performance

The next thing tested was the performance of the communication. For this test, the RCX was programmed to produce a beep every time a new message was received. Then, a program was started that sent out messages to the RCX. After that, the RCX controller was placed at various locations to test whether the communication was still intact. It was tested whether there was communication when the RCX was facing the infrared tower, when a small obstacle was in between the RCX and the infrared tower, and when a large obstacle was in between them. The result of this test was that the communication was successful whenever there was either a direct or an indirect route from the infrared tower to the RCX controller. An indirect route is taken to be a path from the infrared tower via one or more walls or objects to the RCX controller. Only when the RCX was completely surrounded by obstructions (boxes) did the communication finally fail. In conclusion, the communication from the infrared tower to the RCX is very good. However, during these tests the batteries of the infrared tower were still good. Later these batteries started to fail and the communication got much worse. Unfortunately, this problem was discovered very late in the project.

Backwards communication performance

An additional test was performed to test the communication from the RCX controller back to the external control unit. This test, however, failed more often. The indirect route often failed to send back data to the PC, but it was still better than we had assumed beforehand.

Multicasting

In the next test, the behavior of the external controller was tested when it was communicating with more than one RCX controller at the same time. Both RCX units were programmed with the same program, which would just reply with the received message. A program was started on the computer that would send a random message, read the input and display both on the screen.
The results of this test were as foreseen in section 4.3.1: both RCX units successfully received the message, but when they both sent their reply at the same time, the data became corrupted. This happens because the infrared signals from both RCX units interfere with each other. Therefore, bidirectional communication with two RCX units at the same time is not possible.

Protocol

Now that the communication channel required for the communication between the RCX controller and the external control unit has been tested, the protocol described in section can be tested. This test was split into two parts.

Single RCX unit

The first part was to test the protocol with only one RCX, to check whether the protocol was correctly implemented on both the RCX controller and the external control unit. The RCX was programmed with the vehicle RCX code and a motor was hooked up to the motor connector for the movement. The external control unit program running on the PC would then perform the following:

1. Fetch the RCX ID
2. Open the command cycle
3. Set the movement step
4. Move the motor forward
5. Move the motor backward
6. Set the movement step to twice the value of step 3
7. Move the motor forward
8. Stop the command cycle
9. Wait for a few seconds and return to point 1

This test was successful, since the second time (step 6) the motor turned twice as long.

The Alive op-code

The previous test was repeated later with the step sizes twice as high. However, this failed: when the step size was set to 400 and the motor was done rotating, the communication was broken. There was no data on the input. After some experimentation it was found that the connection kept breaking whenever there was no communication for 6 or 7 seconds. The connection worked perfectly again when the serial port to the IR tower was closed and opened again. But opening and closing the serial port each time adds too much overhead. A different solution is to make sure the communication is never silent for more than 6 or 7 seconds. There is a standard LEGO Mindstorms op-code that has no effect on the program running in the RCX: the Alive op-code. The RCX will simply respond to this op-code without changing anything in the execution of its program. A timer was added to send an Alive op-code every 5 seconds. Thereafter the test was performed again, and this time it was successful.

Multicasting protocol

The next step was to test the protocol with two RCX control units. Both RCX units were programmed with the vehicle RCX code, but for the second RCX the RCX ID (see section 4.3.2) was changed. That means there were two RCX units that performed the same actions but listened to different IDs. The program on the computer was changed to perform the following:

1. Open the command cycle for RCX #1
2. Set the movement step
3. Move the motor forward
4. Stop the command cycle
5. Open the command cycle for RCX #2
6. Set the movement step
7. Move the motor forward
8. Stop the command cycle
9.
Return to step 1

This test was also successful: both motors moved forward one after the other. The test was performed again with the movement step set to 400, to test whether the communication would break down. This test was also successful: both motors moved forward four times as long.
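The five-second Alive timer described above can be sketched like this (a minimal sketch of the control unit's keep-alive logic; the class and callback names are assumed, not taken from the project code):

```python
ALIVE_INTERVAL = 5.0  # seconds; the tower dropped the link after 6-7 s of silence

class KeepAlive:
    """Send the Alive op-code whenever the IR link has been silent too long."""

    def __init__(self, send_alive, now=0.0):
        self.send_alive = send_alive   # callback that transmits the Alive op-code
        self.last_activity = now

    def note_traffic(self, now):
        """Call on every ordinary command, so Alive is only sent when needed."""
        self.last_activity = now

    def tick(self, now):
        """Call periodically; transmits Alive after ALIVE_INTERVAL of silence."""
        if now - self.last_activity >= ALIVE_INTERVAL:
            self.send_alive()
            self.last_activity = now
```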

The datalog

Now that the most important part of the communication has been tested, the second part can be tested: retrieving a datalog from an RCX controller (see section 4.3.1).

Initial tests

An RCX was programmed to create a datalog with a certain well-defined number of random entries. Then, the program on the external computer was executed to download the datalog from the RCX controller. The random entries were then extracted from the data in the datalog. The datalog download code was not completely correct, but after fixing the programming mistakes the test was successful: the download had the same number of entries as the number previously programmed into the RCX. After this test, another test was done with an empty datalog. The result was as expected, but not what was hoped for. Calling the datalog download op-codes on an empty datalog returns the first entry of the datalog, which represents the total size of the datalog. This behavior causes problems when two active RCX units are used: both RCX units send their datalog if requested, even if one of them is empty. These messages will interfere as described before. An RCX will always respond to a datalog request as long as it is turned on.

Multicast datalog failure

Because of the problems expected with the datalog when there is more than one active RCX (see section 4.3.1), a test was done to confirm the suspicion that data corruption occurs with two active RCX units. The same setup as in the previous test was used, except that another RCX unit was turned on. This second RCX only needed the LEGO firmware installed. The same program on the computer was executed and the result was as expected: both RCX units responded to the datalog download op-codes and this resulted in data corruption.

The shutter

To fix this problem, a shutter is designed in section . To test this shutter, the other RCX was programmed with the new command to close the shutter.
The first RCX was still programmed to create a datalog. The program on the computer was modified such that it first opens a command cycle on the second RCX and then requests it to close the shutter. Thereafter the datalog could be downloaded from the first RCX. This test was successful: when the shutter was in place, the datalog could be downloaded from the first RCX without corruption. However, it was noticed that the second RCX was still receiving and sending data while the shutter was in place. Luckily, the data it sent was never received by the computer.

Motion planning

To test the motion planning, the complete robot was set loose inside a small area where it could drive around. The area used was a square room with a flat and hard floor. In fact, the whiteboard of section was used as the floor of this room. It was laid flat on the ground to help the robot a little (see section

for the problems solved this way). For this test to succeed, the robot should find a wall and keep driving around the room without actually bumping into a wall. The robot was placed at various initial locations to test the start of the motion planning. There were a few problems in the motion planning code. A small error occurred when the distance between the robot and the wall was slightly more than the inner border (see chapter 7): the robot tried to move 0 steps forward and repeated that over and over again. Another threshold was added to the inner border, defining the minimum distance from the inner border before the robot should move closer to the wall. Another small error was found in the algorithm that moves the robot away from the wall. Because of a calculation error, the robot tried to move away a negative number of steps. Our protocol doesn't handle negative steps, so instead the number was wrapped into an unsigned integer. This yielded an enormous number of steps, so the robot would move a very long distance away from the place where it should have stopped. This was fixed by using the absolute value. We couldn't perform much more testing than a few steps, because for a still unknown reason the communication broke at random points and random locations. This often happened just after a scan was made, while the robot was waiting for a reply to its AreYouThere request; the computer never received this request, and adding extra retry cycles to the RCX unit's code did not solve it. When the program on the RCX and the motion planning software on the external computer were restarted, everything worked fine until it broke again. Sometimes the robot performed 10 scans before the communication failed, and sometimes fewer. The number of scans that were made successfully did not seem to be related to the battery power of the RCX. However, it did seem to be related to the batteries of the tower that were failing.
This was never tested thoroughly, so it cannot be confirmed with certainty. Because of this mysterious behavior, the motion planning could not be improved much. Therefore the design is limited to a basic form of motion planning, which only implements the follow-the-wall method as discussed in chapter .

Conclusion

As expected in the design of the protocol in paragraph 4.3, communication with an RCX is only possible with one RCX responding at a time. The tests described in this paragraph confirmed that, when multiple RCX units are responding, unrecoverable data corruption will occur. The reception of the RCX controller is very good, much better than its transmission. A direct line of sight is not required for successful communication from the external control unit to the internal control unit on the RCX units. For the transmission of data from the RCX to the external computer, line of sight is for some reason more important. In order to keep the communication line open, and prevent the infrared tower from breaking the connection for power-saving purposes, an RCX command (op-code) has to be sent at least once every 5 seconds.
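As an aside, the unsigned wrap-around bug found in the move-away routine of the motion planning tests can be illustrated as follows. The 16-bit width is an assumption made here for illustration; the report only states that a negative step count wrapped into an unsigned integer:

```python
def protocol_steps_buggy(steps):
    """Original behaviour: the protocol has no negative step counts, so a
    negative result of the distance calculation wrapped around.
    (16-bit width assumed for illustration.)"""
    return steps % 0x10000  # -3 becomes 65533: an enormous move

def protocol_steps_fixed(steps):
    """The fix applied in the project: send the absolute value instead."""
    return abs(steps)
```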

To download the datalog, an obstruction in front of the infrared transmitter of the RCX unit that is not supposed to participate in the communication is sufficient to stop the data corruption described in section that would otherwise occur. The follow-the-wall method as implemented in the basic motion planning routine works as it should. Unfortunately, the communication sometimes fails completely at random times. The source of the communication failure remains unknown; the batteries of the tower are suspected, but this could never be proven with certainty.

8.4 Visualization

In this paragraph a description is given of the visualization techniques used during the testing phase. First, the program that was used to visualize the retrieved spatial data is described. This is followed by a discussion of the system's front-end.

Visual spatial data representation

To visually support the tests of the radar of section , a small program named Generate-Datalog-Picture was written in the programming language Perl. During the tests, the external control unit collected the scanned data and stored it in a datalog file. Using this file, the Perl program was able to generate an image of the scans. Two examples are presented in Figure 33 and Figure 34 of paragraph . The actual design and workings of the program are outside the scope of this report; for this we refer you to the project disc with the complete source code of the program.

Control unit front-end

As mentioned in the introduction, the project assignment changed during the project. Therefore, the GIS was no longer part of the system under development. As a result, no graphical front-end or visualization would be available for the system at the end of the project, because the visualization was planned to be part of the GIS development task. This would become a problem when the total system needed to be tested.
For that reason, a so-called quick-and-dirty front-end was constructed for the control unit. It consists of a simple graphical interface that gives the user the possibility to control the system. Through graphical components in the interface, the user is able to send the protocol commands specified in section to the RCX controllers. Also, the visualization techniques used in the test program described in the previous section were integrated into the front-end, making it possible to view the result of a radar scan directly after completion.
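The kind of conversion such a scan visualization performs can be sketched as follows; this only illustrates turning radar readings into plottable points and is not the actual code of the Perl program or the front-end (the function name and the degree-based angle format are assumptions):

```python
import math

def scan_to_points(robot_x, robot_y, readings):
    """Convert radar readings, given as (angle in degrees, distance) pairs
    measured from the robot's position, into Cartesian points for drawing."""
    return [(robot_x + dist * math.cos(math.radians(angle)),
             robot_y + dist * math.sin(math.radians(angle)))
            for angle, dist in readings]
```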

As with the data visualization program, the design of the control unit front-end is outside the scope of this report. The complete source code is available on the project disc.

Scanned environments

Since the communication using the infrared towers got worse near the end of the project, ever more communication problems occurred. As described in section , this caused severe problems and no really usable scans were produced. However, in tests done before the construction of the front-end, a map of a square environment was produced using the earlier described Perl program. This scan is shown in Figure 36.

Figure 36 Complete scan

Conclusion

Although no real visualization subsystem was designed for this project, some visual results were still produced. This way, the system could be tested with acceptable results.


Simulation of the pass through the labyrinth as a method of the algorithm development thinking Simulation of the pass through the labyrinth as a method of the algorithm development thinking LIBOR MITROVIC, STEPAN HUBALOVSKY Department of Informatics University of Hradec Kralove Rokitanskeho 62,

More information

Computer-based systems will be increasingly embedded in many of

Computer-based systems will be increasingly embedded in many of Programming Ubiquitous and Mobile Computing Applications with TOTA Middleware Marco Mamei, Franco Zambonelli, and Letizia Leonardi Universita di Modena e Reggio Emilia Tuples on the Air (TOTA) facilitates

More information

Use of the application program

Use of the application program Use of the application program Product family: Product type: Manufacturer: Name: Order no.: Name: Order no.: Name: Order no.: Name: Order no.: Name: Order no.: Name: Order no.: Name: Order no.: Name: Order

More information

Technical Disclosure Commons

Technical Disclosure Commons Technical Disclosure Commons Defensive Publications Series October 06, 2017 Computer vision ring Nicholas Jonas Barron Webster Follow this and additional works at: http://www.tdcommons.org/dpubs_series

More information

Business Processes for Managing Engineering Documents & Related Data

Business Processes for Managing Engineering Documents & Related Data Business Processes for Managing Engineering Documents & Related Data The essence of good information management in engineering is Prevention of Mistakes Clarity, Accuracy and Efficiency in Searching and

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

DISTRIBUTED NETWORK COMMUNICATION FOR AN OLFACTORY ROBOT ABSTRACT

DISTRIBUTED NETWORK COMMUNICATION FOR AN OLFACTORY ROBOT ABSTRACT DISTRIBUTED NETWORK COMMUNICATION FOR AN OLFACTORY ROBOT NSF Summer Undergraduate Fellowship in Sensor Technologies Jiong Shen (EECS) - University of California, Berkeley Advisor: Professor Dan Lee ABSTRACT

More information

Problem Solving through Programming In C Prof. Anupam Basu Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur

Problem Solving through Programming In C Prof. Anupam Basu Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur Problem Solving through Programming In C Prof. Anupam Basu Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur Lecture - 04 Introduction to Programming Language Concepts

More information

Unified Computers & Programming Installation Instructions for Windows

Unified Computers & Programming Installation Instructions for Windows Unified Computers & Programming Installation Instructions for Windows IMPORTANT INSTALLATION NOTE: The USB IR Tower has been known to cause Windows computers to crash if plugged in for prolonged periods.

More information

Pack Manager Program System Design Document

Pack Manager Program System Design Document PACK MANAGER PROGRAM SYSTEM DESIGN DOCUMENT 1 Pack Manager Program System Design Document Latest Revision: 26 March 2014 Prepared by: Naing Htet Abstract This document describes the design of the software

More information

Instructions For Constructing A Braitenberg Vehicle 2 Robot From LEGO Mindstorms Components

Instructions For Constructing A Braitenberg Vehicle 2 Robot From LEGO Mindstorms Components Instructions For Constructing A Braitenberg Vehicle 2 Robot From LEGO Mindstorms Components Michael R.W. Dawson, Biological Computation Project, University of Alberta, Edmonton, Alberta September, 2003

More information

InfoTag KE28xx Communications for 186 CPU Firmware Version 4

InfoTag KE28xx Communications for 186 CPU Firmware Version 4 InfoTag KE28xx Communications for 186 CPU Firmware Version 4 *KE28xx models include: KE2800, KE2852, KE2853, KE2856 This document applies to printer firmware versions 4.x only. Note that changes made to

More information

Sensor Modalities. Sensor modality: Different modalities:

Sensor Modalities. Sensor modality: Different modalities: Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature

More information

Cursor Design Considerations For the Pointer-based Television

Cursor Design Considerations For the Pointer-based Television Hillcrest Labs Design Note Cursor Design Considerations For the Pointer-based Television Designing the cursor for a pointing-based television must consider factors that differ from the implementation of

More information

CS 1567 Intermediate Programming and System Design Using a Mobile Robot Aibo Lab3 Localization and Path Planning

CS 1567 Intermediate Programming and System Design Using a Mobile Robot Aibo Lab3 Localization and Path Planning CS 1567 Intermediate Programming and System Design Using a Mobile Robot Aibo Lab3 Localization and Path Planning In this lab we will create an artificial landscape in the Aibo pen. The landscape has two

More information

Modern Robotics Inc. Sensor Documentation

Modern Robotics Inc. Sensor Documentation Sensor Documentation Version 1.0.1 September 9, 2016 Contents 1. Document Control... 3 2. Introduction... 4 3. Three-Wire Analog & Digital Sensors... 5 3.1. Program Control Button (45-2002)... 6 3.2. Optical

More information

A CALCULATOR BASED ANTENNA ANALYZER

A CALCULATOR BASED ANTENNA ANALYZER A CALCULATOR BASED ANTENNA ANALYZER by Don Stephens ABSTRACT Automated antenna testing has become economical with the MI Technologies Series 2080 Antenna Analyzer. Since its introduction last year, new

More information

BASELINE GENERAL PRACTICE SECURITY CHECKLIST Guide

BASELINE GENERAL PRACTICE SECURITY CHECKLIST Guide BASELINE GENERAL PRACTICE SECURITY CHECKLIST Guide Last Updated 8 March 2016 Contents Introduction... 2 1 Key point of contact... 2 2 Third Part IT Specialists... 2 3 Acceptable use of Information...

More information

Unit 1, Lesson 1: Moving in the Plane

Unit 1, Lesson 1: Moving in the Plane Unit 1, Lesson 1: Moving in the Plane Let s describe ways figures can move in the plane. 1.1: Which One Doesn t Belong: Diagrams Which one doesn t belong? 1.2: Triangle Square Dance m.openup.org/1/8-1-1-2

More information

Alongside this is AVB, an IEEE standards based technology that could stand on its own or underpin many of the existing networked audio protocols.

Alongside this is AVB, an IEEE standards based technology that could stand on its own or underpin many of the existing networked audio protocols. AES67 and AES70 The complete industry solution for audio and control Over the past three decades the audio industry has taken a number of steps to move into the digital age. Some argue that the digital

More information

ROBOLAB Reference Guide

ROBOLAB Reference Guide ROBOLAB Reference Guide Version 1.2 2 Preface: Getting Help with ROBOLAB ROBOLAB is a growing application for which users can receive support in many different ways. Embedded in ROBOLAB are context help

More information

Reform: A Domain Specific Language

Reform: A Domain Specific Language Reform: A Domain Specific Language Dustin Graves October 5, 2007 Overview Scripting language Monitors and manages data streams Network, File, RS-232, etc Reformats and redirects data Contains keywords

More information

Everything You Always Wanted To Know About Programming Behaviors But Were Afraid To Ask

Everything You Always Wanted To Know About Programming Behaviors But Were Afraid To Ask Everything You Always Wanted To Know About Programming Behaviors But Were Afraid To Ask By Kevin Harrelson Machine Intelligence Lab University of Florida Spring, 1995 Overview Programming multiple behaviors

More information

Project from Real-Time Systems Lego Mindstorms EV3

Project from Real-Time Systems Lego Mindstorms EV3 Project from Real-Time Systems March 13, 2017 Lego Mindstorms manufactured by LEGO, http://mindstorms.lego.com extension of LEGO Technic line history: RCX, 1998 NXT, 2006; NXT 2.0, 2009 EV3, 2013 why LEGO?

More information

What Is a Program? Pre-Quiz

What Is a Program? Pre-Quiz What Is a Program? What Is a Program? Pre-Quiz 1. What is a program? 2. What is an algorithm? Give an example. 2 What Is a Program? Pre-Quiz Answers 1. What is a program? A program is a sequence of instructions

More information

An Interactive Technique for Robot Control by Using Image Processing Method

An Interactive Technique for Robot Control by Using Image Processing Method An Interactive Technique for Robot Control by Using Image Processing Method Mr. Raskar D. S 1., Prof. Mrs. Belagali P. P 2 1, E&TC Dept. Dr. JJMCOE., Jaysingpur. Maharashtra., India. 2 Associate Prof.

More information

Tic-Tac-LEGO: An Investigation into Coordinated Robotic Control

Tic-Tac-LEGO: An Investigation into Coordinated Robotic Control Tic-Tac-LEGO: An Investigation into Coordinated Robotic Control Ruben Vuittonet and Jeff Gray Department of Computer and Information Sciences University of Alabama at Birmingham Birmingham, AL 35294 USA

More information

Chapter 18 Assembly Modeling with the LEGO MINDSTORMS NXT Set Autodesk Inventor

Chapter 18 Assembly Modeling with the LEGO MINDSTORMS NXT Set Autodesk Inventor Tools for Design Using AutoCAD and Autodesk Inventor 18-1 Chapter 18 Assembly Modeling with the LEGO MINDSTORMS NXT Set Autodesk Inventor Creating an Assembly Using Parts from the LEGO MINDSTORMS NXT Set

More information

Lab 4: Interrupts and Realtime

Lab 4: Interrupts and Realtime Lab 4: Interrupts and Realtime Overview At this point, we have learned the basics of how to write kernel driver module, and we wrote a driver kernel module for the LCD+shift register. Writing kernel driver

More information

OS and Computer Architecture. Chapter 3: Operating-System Structures. Common System Components. Process Management

OS and Computer Architecture. Chapter 3: Operating-System Structures. Common System Components. Process Management Last class: OS and Architecture OS and Computer Architecture OS Service Protection Interrupts System Calls IO Scheduling Synchronization Virtual Memory Hardware Support Kernel/User Mode Protected Instructions

More information

20 reasons why the Silex PTE adds value to your collaboration environment

20 reasons why the Silex PTE adds value to your collaboration environment 20 reasons why the Silex PTE adds value to your collaboration environment The Panoramic Telepresence Experience (PTE) from UC innovator SilexPro is a unique product concept with multiple benefits in terms

More information

CPU ARCHITECTURE. QUESTION 1 Explain how the width of the data bus and system clock speed affect the performance of a computer system.

CPU ARCHITECTURE. QUESTION 1 Explain how the width of the data bus and system clock speed affect the performance of a computer system. CPU ARCHITECTURE QUESTION 1 Explain how the width of the data bus and system clock speed affect the performance of a computer system. ANSWER 1 Data Bus Width the width of the data bus determines the number

More information

EEL 4924: Senior Design. 27 January Project Design Report: Voice Controlled RC Device

EEL 4924: Senior Design. 27 January Project Design Report: Voice Controlled RC Device EEL 4924: Senior Design 27 January 2009 Project Design Report: Voice Controlled RC Device Team VR: Name: Name: Kyle Stevenson Email: chrisdo@ufl.edu Email: relakyle@ufl.edu Phone: 8135271966 Phone: 8132051287

More information

AS AUTOMAATIO- JA SYSTEEMITEKNIIKAN PROJEKTITYÖT CEILBOT FINAL REPORT

AS AUTOMAATIO- JA SYSTEEMITEKNIIKAN PROJEKTITYÖT CEILBOT FINAL REPORT AS-0.3200 AUTOMAATIO- JA SYSTEEMITEKNIIKAN PROJEKTITYÖT CEILBOT FINAL REPORT Jaakko Hirvelä GENERAL The goal of the Ceilbot-project is to design a fully autonomous service robot moving in a roof instead

More information

Snowflake Numbers. A look at the Collatz Conjecture in a recreational manner. By Sir Charles W. Shults III

Snowflake Numbers. A look at the Collatz Conjecture in a recreational manner. By Sir Charles W. Shults III Snowflake Numbers A look at the Collatz Conjecture in a recreational manner By Sir Charles W. Shults III For many people, mathematics is something dry and of little interest. I ran across the mention of

More information

How Secured2 Uses Beyond Encryption Security to Protect Your Data

How Secured2 Uses Beyond Encryption Security to Protect Your Data Secured2 Beyond Encryption How Secured2 Uses Beyond Encryption Security to Protect Your Data Secured2 Beyond Encryption Whitepaper Document Date: 06.21.2017 Document Classification: Website Location: Document

More information

6.001 Notes: Section 6.1

6.001 Notes: Section 6.1 6.001 Notes: Section 6.1 Slide 6.1.1 When we first starting talking about Scheme expressions, you may recall we said that (almost) every Scheme expression had three components, a syntax (legal ways of

More information

Case study of Wireless Technologies in Industrial Applications

Case study of Wireless Technologies in Industrial Applications International Journal of Scientific and Research Publications, Volume 7, Issue 1, January 2017 257 Case study of Wireless Technologies in Industrial Applications Rahul Hanumanth Rao Computer Information

More information

Technical Specification for Educational Robots

Technical Specification for Educational Robots Technical Specification for Educational Robots 1. Introduction The e-yantra project, sponsored by MHRD, aims to start a robotic revolution in the country through the deployment of low-cost educational

More information

Ch 22 Inspection Technologies

Ch 22 Inspection Technologies Ch 22 Inspection Technologies Sections: 1. Inspection Metrology 2. Contact vs. Noncontact Inspection Techniques 3. Conventional Measuring and Gaging Techniques 4. Coordinate Measuring Machines 5. Surface

More information

Module 3: Operating-System Structures. Common System Components

Module 3: Operating-System Structures. Common System Components Module 3: Operating-System Structures System Components Operating System Services System Calls System Programs System Structure Virtual Machines System Design and Implementation System Generation 3.1 Common

More information

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting

More information

Android Spybot. ECE Capstone Project

Android Spybot. ECE Capstone Project Android Spybot ECE Capstone Project Erik Bruckner - bajisci@eden.rutgers.edu Jason Kelch - jkelch@eden.rutgers.edu Sam Chang - schang2@eden.rutgers.edu 5/6/2014 1 Table of Contents Introduction...3 Objective...3

More information

LEGO Mindstorm EV3 Robots

LEGO Mindstorm EV3 Robots LEGO Mindstorm EV3 Robots Jian-Jia Chen Informatik 12 TU Dortmund Germany LEGO Mindstorm EV3 Robot - 2 - LEGO Mindstorm EV3 Components - 3 - LEGO Mindstorm EV3 Components motor 4 input ports (1, 2, 3,

More information

Part A: Monitoring the Touch Sensor and Ultrasonic Sensor

Part A: Monitoring the Touch Sensor and Ultrasonic Sensor LEGO MINDSTORMS NXT Lab 2 This lab introduces the touch sensor and ultrasonic sensor which are part of the Lego Mindstorms NXT kit. The ultrasonic sensor will be inspected to gain an understanding of its

More information

LEGO Plus. Session 1420

LEGO Plus. Session 1420 Session 1420 LEGO Plus Jerry M. Hatfield, Electrical Engineering John T. Tester, Mechanical Engineering College of Engineering and Natural Sciences Northern Arizona University Introduction The LEGO Mindstorms

More information

Chapter 8 Fault Tolerance

Chapter 8 Fault Tolerance DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN Chapter 8 Fault Tolerance 1 Fault Tolerance Basic Concepts Being fault tolerant is strongly related to

More information

GROUP 17 COLLEGE OF ENGINEERING & COMPUTER SCIENCE. Senior Design I Professor: Dr. Samuel Richie UNIVERSITY OF CENTRAL FLORIDA

GROUP 17 COLLEGE OF ENGINEERING & COMPUTER SCIENCE. Senior Design I Professor: Dr. Samuel Richie UNIVERSITY OF CENTRAL FLORIDA GROUP 17 COLLEGE OF ENGINEERING & COMPUTER SCIENCE Senior Design I Professor: Dr. Samuel Richie UNIVERSITY OF CENTRAL FLORIDA Anh Loan Nguyen John E. Van Sickle Jordan Acedera Christopher Spalding December

More information

FROM A RELATIONAL TO A MULTI-DIMENSIONAL DATA BASE

FROM A RELATIONAL TO A MULTI-DIMENSIONAL DATA BASE FROM A RELATIONAL TO A MULTI-DIMENSIONAL DATA BASE David C. Hay Essential Strategies, Inc In the buzzword sweepstakes of 1997, the clear winner has to be Data Warehouse. A host of technologies and techniques

More information

Blackfin Online Learning & Development

Blackfin Online Learning & Development Presentation Title: Multimedia Starter Kit Presenter Name: George Stephan Chapter 1: Introduction Sub-chapter 1a: Overview Chapter 2: Blackfin Starter Kits Sub-chapter 2a: What is a Starter Kit? Sub-chapter

More information

WHITE PAPER Cloud FastPath: A Highly Secure Data Transfer Solution

WHITE PAPER Cloud FastPath: A Highly Secure Data Transfer Solution WHITE PAPER Cloud FastPath: A Highly Secure Data Transfer Solution Tervela helps companies move large volumes of sensitive data safely and securely over network distances great and small. We have been

More information

Not For Sale. Glossary

Not For Sale. Glossary Glossary Actor A sprite and the role it plays as it interacts with another sprite on the stage. Animated GIF A graphic made up of two or more frames, each of which is displayed as an automated sequence

More information

MODELS OF DISTRIBUTED SYSTEMS

MODELS OF DISTRIBUTED SYSTEMS Distributed Systems Fö 2/3-1 Distributed Systems Fö 2/3-2 MODELS OF DISTRIBUTED SYSTEMS Basic Elements 1. Architectural Models 2. Interaction Models Resources in a distributed system are shared between

More information