Recycling Robotics Garbage collecting Robot Application - Project Work Definitions and Design


Recycling Robotics Garbage collecting Robot Application - Project Work Definitions and Design

Jari Saarinen, 10 Sep

Table of Contents
- Scope of the document
- Platform
  - Motion control and odometry
  - Laser Range Finder SICK LMS291
  - Camera
- Localization
  - Histogram method
  - Correlation
  - Movement in different coordinate systems
  - Dealing with an uncertain world
- Path Planning
- Mission Planning
- FSR2010 Specific
  - Environment
  - Task & Strategy
  - Some phases
- References

Scope of the document

This document gives a brief introduction to the areas required in the Field and Service Robotics course. It does not go into much detail; it offers only a few ideas and keywords to follow up on. Remember to keep an open mind and feel free to look for alternative solutions!

Platform

Motion control and odometry

J2B2 is a differentially driven robot.¹ This means that the robot is driven by controlling the velocities of its two wheels. The robot provides odometry (an estimated relative position), which is integrated from the wheel encoders.

Figure 1. J2B2 frames of reference and sensors

The point in the middle of the two wheels is the one that is controlled, and it is the point with respect to which the odometry is given. The robot is commanded through the API by giving a forward velocity v and an angular velocity w. The relation between the individual wheel speeds and (v, w) is:

v = (v_r + v_l) / 2
w = (v_r - v_l) / (2r)

where v_r is the velocity of the right wheel, v_l is the velocity of the left wheel, and r is the radius of the robot. The odometry is integrated from the wheel encoders. Given a time step dt and the distances dr and dl travelled by the right and left wheels, the position is updated as:

¹ Visit … to learn more about differential drive systems.

ds = (dr + dl) / 2
x_{k+1} = x_k + ds cos(a_k)
y_{k+1} = y_k + ds sin(a_k)
a_{k+1} = a_k + (dr - dl) / (2r)

where (x, y, a) represents the position and heading of the robot in the odometry frame. The odometry is computed relatively: it is not given with respect to anything but the pose the robot had when the odometry was last reset. Odometry is also prone to errors: each time the odometry is integrated, an error accumulates as well. The odometry is extremely sensitive to slipping (which occurs, for example, while driving into a wall).

A word about the mechanics and their effect on control

The robot has a toothed belt, which transfers the motion from the DC motor to the wheels. The belt will be renewed before the course starts. However, it is a wearing part, and every time you try to drive through a wall (indicated by a very strong sound of the belt skipping over the teeth of the pulley), you wear small bits off the belt. At some point the belt starts losing its grip, which changes the behavior of the robot significantly. This means it is time to change the belt again; we are not happy to do that, but we will, given that someone reports it. To avoid wearing the belt out, just don't try to drive through the walls!

Trajectory control

Given a trajectory to follow, the trajectory controller tries to keep the robot on that trajectory. Usually the control is separated into parts, where different criteria determine whether to use turning only or a combination of turning and forward velocity.

A very simple controller for trajectory control

The key to any control is to formulate the problem as a control problem (however trivial this may sound, it has not always been the case for project groups :) ). Probably the simplest controller drives the central point of the robot through a set of points, one by one. More precisely, there is a goal point g = (x1, y1), the robot is at pose p = (x, y, a), and the robot should move to the goal.
A very simple controller computes the angle to the target and uses it as the error signal for the controller:

dx = x1 - x
dy = y1 - y
e = atan2(dy, dx) - a

Most often the controller controls only the angular velocity, while the forward velocity is kept constant, giving the motion control command:

v = v_set
w = P1 * e

Basically, this leaves only one parameter, P1, to tune. However, this simple formula does not apply to all situations, nor does it result in the smoothest possible behaviour.
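A minimal sketch of this controller in Python; the speed values and the robot radius are illustrative, and the actual J2B2 API calls are not shown:

```python
import math

ROBOT_RADIUS = 0.2  # m, half of the wheel base (illustrative value)

def goto_control(pose, goal, v_set=0.2, p1=1.0):
    """P-controller: steer towards the goal point at constant forward speed."""
    x, y, a = pose
    x1, y1 = goal
    # Error signal: angle from the current heading to the goal point,
    # normalized to (-pi, pi] so the robot turns the shorter way.
    e = math.atan2(y1 - y, x1 - x) - a
    e = math.atan2(math.sin(e), math.cos(e))
    return v_set, p1 * e

def to_wheel_speeds(v, w, r=ROBOT_RADIUS):
    """Invert v = (vr + vl)/2, w = (vr - vl)/(2r) to individual wheel speeds."""
    vr = v + w * r
    vl = v - w * r
    return vr, vl
```

With the robot at the origin facing a goal straight ahead, goto_control returns (v_set, 0); to_wheel_speeds is useful for checking that neither wheel exceeds the velocity limit mentioned below.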

Also note that the resulting velocity of an individual wheel (see the formula above) cannot exceed the maximum velocity (which is set to 0.5 m/s for J2B2).

Some alternative approaches

J2B2 is differentially driven, and the relationship between the turn radius R, the forward velocity and the angular velocity is v = Rw. Given a set of points the robot needs to travel through, it is possible to fit a circle using two trajectory points and the robot's position. The resulting circle goes through the robot and provides the control signal for the angular velocity, with an additional P-controller controlling the heading of the robot to follow the tangent of the circle. Usually there are some additional criteria for the control. As an example, the FSR2009 winning team used four modes: driving a straight line, modified straight-line driving (while carrying a payload), circle driving and turning on the spot. Each mode was used according to the situation. For straight-line driving the team used a PP-controller, which gives the signal for the angular velocity according to:

w = P1 * (distance to the line) + P2 * (heading error to the line)

For curve driving there was a constant for the angular velocity (given by the equation above). Hakenberg [1] used another type of controller, which worked very well, at least at moderate speeds.

Laser Range Finder SICK LMS291

The robot is equipped with a laser range finder, which provides a relatively accurate range image. The output of the laser is 181 range-angle (r, a) pairs, one per degree, from right (-PI/2 with respect to the robot heading) to left (+PI/2). In general the accuracy of the range measurement is around 1 cm (see the note on bias below for exceptions). The transformation in the sensor frame of reference from polar coordinates to Cartesian ones is straightforward:

x_i = r_i cos(a_i)
y_i = r_i sin(a_i)

This equation gives the local points of the obstacles the laser sees, with respect to the laser.
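A sketch of the conversion: with laser_pose = (0, 0, 0) it reproduces the sensor-frame equations above, and with a nonzero pose it gives the points in the frame the pose is expressed in.

```python
import math

def scan_to_points(ranges, laser_pose=(0.0, 0.0, 0.0)):
    """Convert a 181-beam SICK scan (one range per degree, -90..+90 deg)
    into Cartesian points in the frame given by laser_pose = (x, y, heading)."""
    xl, yl, al = laser_pose
    points = []
    for i, r in enumerate(ranges):
        a = math.radians(i - 90)  # beam angle relative to the sensor heading
        points.append((r * math.cos(a + al) + xl,
                       r * math.sin(a + al) + yl))
    return points
```

For a scan of constant 1 m ranges taken at the origin, beam 90 (straight ahead) lands at (1, 0) and beam 0 (full right) at (0, -1).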
Given that the laser is at pose (x_l, y_l, a_l) in the world, the points with respect to the world frame of reference are given by:

x_i = r_i cos(a_i + a_l) + x_l
y_i = r_i sin(a_i + a_l) + y_l

Now, the laser frame and the odometry frame are not the same. If you want to localize with the laser and control the body, you must have the relation between the two frames of reference. The laser frame and the odometry (or control) frame are aligned, with a translation in the x-direction by an amount L (approx

8 cm). Given this, the relation between the two frames can simply be calculated as follows:

dx = L cos(a_o)
dy = L sin(a_o)
x_l = x_o + dx
y_l = y_o + dy
a_l = a_o

Bias

The laser range finder has a feature: at short distances it has a quite significant bias in the reported distances. While this may not be critical in most cases, e.g. in mapping (and in some cases in localization) it may be a problem. If you face the problem, the only solution is to calibrate the sensor by manually measuring actual distances against reported distances and fitting a function to correct the bias.

Camera²

The robot has a USB web cam attached to a pan-tilt unit. The camera provides a 640x480 (to be verified) color image. The image in general is of high quality, but the camera has a rolling shutter, which reduces the image quality while in motion.

3D projection from image to ground coordinates

To transform image coordinates to the ground plane in 3D world coordinates, you need to use a perspective projection:

[ k*x_3D ]       [ x_image ]
[ k*y_3D ]  = M  [ y_image ]
[ k      ]       [ 1       ]

x_3D and y_3D are coordinates on a predefined plane in the 3D world (the ground plane), and k is a scaling factor. Therefore you need to divide the resulting vector by k to get the actual x_3D and y_3D coordinates. You can also perform the whole procedure with the OpenCV function cvPerspectiveTransform (Google: OpenCV documentation, "perspective projection"). First, though, you need to determine the matrix M with a calibration procedure. You need four calibration points, for which you know the corresponding coordinates both in the image and in the world. M can be solved with Matlab, or by using the OpenCV function cvGetPerspectiveTransform. The matrix M depends on the camera's orientation relative to the ground. Therefore, you can only use this procedure with one predefined camera orientation. To make this work with a limited set of orientations, you can calculate a few M matrices for the corresponding orientations.
To make this work with any orientation, you need to build a mathematical model for M. This is a more difficult procedure, and outside the scope of this course. It is not necessarily needed at all, if you have

² OpenCV is an extensive image processing library.
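As a sketch of the calibration and projection steps, here done with plain NumPy rather than the OpenCV functions named above (cvGetPerspectiveTransform solves the same eight-unknown linear system; cvPerspectiveTransform applies M and divides by k):

```python
import numpy as np

def solve_perspective(img_pts, world_pts):
    """Solve the 3x3 matrix M mapping image points to ground-plane points
    from four point correspondences (no three collinear)."""
    A, b = [], []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix the overall scale: M[2,2] = 1

def project(M, x_img, y_img):
    """Apply M and divide by the scale factor k."""
    kx, ky, k = M @ np.array([x_img, y_img, 1.0])
    return kx / k, ky / k
```

The four calibration pairs here play the role of the points measured in both the image and the world; with real data the resulting M is only valid for the camera orientation used during calibration, as the text notes.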

designed the system so that you only work with one or just a few camera orientations when doing the coordinate transform. But if you decide to create the complete model, there are a number of options. The most correct way to calculate M is to create a model for the coordinate transforms by using, for example, the Denavit-Hartenberg parametrization or a similar model of the geometry of the system. Another way (though sometimes unreliable) is to model the behavior of the components of M by approximating them with polynomials that depend on the pan and tilt angles.

Localization

Localization is the process of determining one's position with respect to a known frame of reference. In many cases the reference is a map (as is the case in the FSR course). In general, localization is a computationally intensive process which is rather difficult. However, for the course several issues simplify the approach:
- The map and the environment are static (at least if the robot itself does not move objects or walls).
- The map is polygonal and rather small (roughly 3 by 4 meters).
- The map is known, at least roughly.
- The initial position is known.

In the general case (and note, for FSR2010, that these would not apply in the real world) few of the above assumptions would hold. Given these assumptions, however, the localization problem is known as a continuous map matching problem. This means that at any time instance the robot's position and orientation are approximately known, and you need to do the map matching only in a limited area. Furthermore, the positioning is expected to be done using the laser range finder (you can just as well use the camera if you wish). A good handbook on the topic is [2] (especially pp. …), which gives further references to follow. You can also find some introductory text and terminology in [4, pp. …, 50-62].

Histogram method

One method (a rather good one) for positioning in a polygonal environment is so-called histogram matching.
Given that the walls form 90-degree angles, the angle histogram correlation method [see e.g. [2], pp. …] can solve the global orientation of the robot from one measurement (or it finds two options; however, knowing roughly the current orientation, one can determine the correct one). The method can also be used for determining the x and y translations separately. For the heading, one can also use line representations directly: segment the scan into lines, determine which line in the scan belongs to which line in the map, and compute the difference in angles. You may also find some ideas in [3].

Correlation

Correlation-based methods (see e.g. [2], pp. 188) compute a correlation value for a set of poses by testing how well the current measurement fits the tested pose. In the simplest manner, one may create a search grid (e.g. +-10 cm in x and y, +-5 degrees in heading) around the expected pose, transform the current measurement to each tested pose, and compute the correlation value for it. There

are several options for the correlation function:
- In a grid map, one can compute the cell that the measurement hits. If the map cell is occupied, raise the correlation value.
- The correlation value may be the sum of distances to the nearest neighbor (which is a point in the map). This may be used with a line map or with a grid map.
- You may also use a combination of the angle histogram and correlation, which has some performance benefits.

For further reading see [2, pp. 188] and [4, pp. …; the title is laser dead-reckoning, but the methods can easily be used for map matching]. If you are interested you may also take a look at probabilistic localization [4, pp. …].

Movement in different coordinate systems

This is an often needed feature: you have measured movement in one coordinate system and you need to know where you are in another frame (e.g. you have a world frame of reference, which you update, and you have odometry, which you can use as an initial guess). To put it more mathematically: you have two position measurements p1 = (x1, y1, a1) and p2 = (x2, y2, a2) in one frame, and you want to know the position c2 = (x4, y4, a4) in the other frame, knowing that c1 = (x3, y3, a3) corresponded to position p1. First compute the differential movement in the P-frame:

dx_o = x2 - x1
dy_o = y2 - y1
dl = sqrt(dx_o^2 + dy_o^2)
da_o = atan2(dy_o, dx_o)
dx = dl cos(da_o - a1)
dy = dl sin(da_o - a1)
da = a2 - a1

The computed differential movement tells how much the robot has moved with respect to p1 (refer to the figure above, and apologies for the slightly different notation). The differential movement from c1 to c2 is the same, so c2 can be computed from:

l = sqrt(dx^2 + dy^2)
da_c = atan2(dy, dx)
x4 = x3 + l cos(a3 + da_c)
y4 = y3 + l sin(a3 + da_c)
a4 = a3 + da

Dealing with an uncertain world

In reality, the map you use for localization, as well as the measurements, contain errors. As a result, the position reported by the localization also has errors. It may well be that the continuous localization produces some +-5 cm of error, causing high-frequency jumping of the pose, which in turn causes difficulties e.g. for motion control. As a solution, it is possible to filter the result. The odometry provided by the robot is relatively accurate over short distances (given that you are not trying to drive through the wall again). Thus, combining odometry and localization results often yields a better solution than using just one of them. This process is called sensor fusion, and there is a quite standard solution for it called Kalman filtering. You can find a simple example, developed for J2B2, on the course web pages [5]. This topic is covered in more depth in the course AS Estimation and Sensor Fusion Methods.

Path Planning

Path planning solves the problem of finding a path from the current location to a goal location given a map. There are numerous path planning algorithms (if you are interested, take a look at LaValle's book [6]). Jarvis and Byrne have developed a simple method for grid-map-based planning, called distance transform path planning. References for it are hard to get, but McKerrow's book [7] introduces the method (a scanned version is available on the course web site).
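The distance transform planner just mentioned can be sketched as a breadth-first fill of the grid from the goal, followed by a steepest descent from the start; the obstacle expansion discussed below is omitted here for brevity:

```python
from collections import deque

FREE, OBST = 0, 1

def distance_transform(grid, goal):
    """Fill a distance-to-goal grid by breadth-first expansion from the goal.
    grid: 2D list of FREE/OBST cells; goal: (row, col). Obstacle cells stay None."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == FREE and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def shortest_path(dist, start):
    """Descend the distance grid from the start cell to the goal."""
    path = [start]
    r, c = start
    while dist[r][c] > 0:
        r, c = min(((nr, nc)
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < len(dist) and 0 <= nc < len(dist[0])
                    and dist[nr][nc] is not None),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path
```

This uses 4-connectivity for simplicity; the original method can equally be run with 8-connected cells or iterative raster sweeps instead of a queue.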

The algorithm basically fills a grid iteratively and produces a distance-to-goal grid, which can then be used to find the shortest path from start to goal.

[example distance-to-goal grid with start 's', goal 'g' and obstacle cells]

Note that it is worthwhile to expand the obstacles, so that the planned path does not go closer to the obstacles than the robot's diameter (and also that once the robot has a manipulator, it cannot be considered a point in the expanded obstacle space).

Mission Planning

The FSR task is a well-defined task, which is usually quite easily decomposed into logical states with logical state transition rules. This is a manual system design issue, which should be done carefully, so that the result can be implemented as a well-defined finite state machine. Two nice examples can be found on the course web pages ([8] and [9]).

FSR2010 Specific

Environment

The environment will be an approximately 4 m x 3 m polygonal environment. There will be three slots, represented by icons, which are the recycling bins. In the environment there will be five types of objects:
1. Obstacles, visible to the laser and the camera. You should not hit the obstacles.
2. Useful objects: removable objects which are not trash and might not be visible to the laser. You may move these, but you should not recycle them.
3. Three types of trash:
   1. Paper
   2. General waste
   3. Metal (or aluminum cans)

Task & Strategy

The task is to collect the trash from the confined area, put it into the correct trash bins, and do this with minimal operator load and high performance.

The first thing the team needs to do is to select a strategy for the mission: how much the operator will be loaded. The options in general are:

Direct teleoperation: The robot is just a tool for the operator, who continuously operates it and makes all the decisions for the robot.

Operator assisted teleoperation: Parts of the task are automated. Some examples:
- The operator points at an object, the robot drives to it and the operator handles the manipulation, after which the robot drives towards the trash bins autonomously and requests the operator to select the correct trash bin.
- The robot automatically searches for an object and then requests the operator to manipulate it, after which the robot drives towards the trash bins autonomously and requests the operator to select the correct trash bin.

Semi-autonomy: The task is configured by the operator, after which the robot performs the task autonomously (requesting help in case of unexpected events). The configuration is done by pointing at objects and also giving a classification for them. The robot may even assist in the configuration phase (e.g. drives to a location which provides views of unmarked obstacles).

In general, the operator load is reduced as the autonomy level is increased (except if your system requests help from the operator continuously). The selected strategy will greatly influence your design needs. Teleoperation does not require much; in fact, the course software packet is almost everything you would need (however, taking this direction will not result in a good position in the finals). The goal is to make money for the operator and for the company. This means that for one operator to be effective, he/she should (in principle) be able to control several robots simultaneously (this pays better both for the operator and for the company providing the service). Keeping this in mind, the only way to go is towards increased automation.
Some phases

Task Configuration

When the mission starts, the operator receives information through the robot's sensors (camera, laser, etc.). Using this information, the operator should be able to:
- define what and where the garbage is,
- classify the garbage,
- identify the recycling bins and assign the correct classes to the correct bins.

Any of the above items can also be done by the robot autonomously (if there is will and skill). Note also that the configuration phase cannot be static, as the robot will not see the whole area from one spot (how could the robot assist the operator, or will the configuration be redone every time the robot has moved a piece of trash (or a set of them)?)

Task Execution

After the robot has the needed parameters (e.g. pick item i1 from location p1 and put it into bin b1), the robot should execute the mission.

1. The robot needs to move from its current location to the target without hitting anything solid or valuable.
2. The robot (or the operator?) should approach the object and grasp it (and store it in some location?).
3. The robot should navigate from its current location to the trash bin.
4. The robot should approach the correct trash bin and place the object there.
5. Go back to 1 if more tasks are specified; request more tasks from the operator; or, if all tasks are completed, finish.

Task supervision / Sliding autonomy

In a perfect world the operator uses an intuitive user interface to specify the task for the robot, and then sits back and watches TV while the robot does the job. However, there is a slight chance that something goes wrong while the robot executes its task. The question is: how much attention should the operator pay to the task execution, in order to be able to take over control if something is about to go wrong? If the operator needs to pay full attention all the time, then we are talking about supervisory control, and the operator is hardly able to do anything else but control a single robot. Sliding autonomy refers to the robot having several levels of automation. It may be in autonomous mode for quite some time, but the trick is to be able to change the mode according to the situation. In our case this would probably mean something like requesting a confirmation if some detection fails, or, if the robot detects an error situation, requesting help from the operator (e.g. it loses a piece of trash it had picked up). You also need to consider how you request help. Given that the operator is watching TV and the robot requests help, the operator might not be aware of what has happened. Thus the system should provide functions for creating situation awareness for the operator (e.g. a playback feature).
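As the Mission Planning section notes, the execution loop above decomposes naturally into a finite state machine. A minimal sketch; the state and event names here are illustrative, not part of the course software:

```python
# Minimal mission FSM sketch. The states mirror the numbered execution
# steps above; in a real system the events would come from sensors,
# the localization, and the operator interface.
TRANSITIONS = {
    ("GOTO_OBJECT", "arrived"):   "GRASP",
    ("GRASP", "grasped"):         "GOTO_BIN",
    ("GRASP", "failed"):          "ASK_OPERATOR",  # sliding autonomy: ask for help
    ("GOTO_BIN", "arrived"):      "DROP",
    ("DROP", "dropped"):          "NEXT_TASK",
    ("NEXT_TASK", "task_left"):   "GOTO_OBJECT",
    ("NEXT_TASK", "all_done"):    "FINISHED",
    ("ASK_OPERATOR", "resolved"): "GOTO_OBJECT",
}

def step(state, event):
    """Advance the mission FSM; unknown events keep the current state."""
    return TRANSITIONS.get((state, event), state)
```

Designing the transition table on paper first, as the text recommends, makes it easy to see where operator confirmations and error recoveries belong.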

References

[1] Jan Hakenberg's site, which reports his work on FSR2007 in a very descriptive manner.
[2] "Where am I?" Sensors and Methods for Mobile Robot Positioning, J. Borenstein, H. R. Everett, and L. Feng; edited and compiled by J. Borenstein, March 1996. The book is available online.
[3] AS Paikannus- ja Navigointi (Positioning and Navigation) exercises: /viikkoharjoitukset
[4] A Sensor-Based Personal Navigation System and its Application for Incorporating Humans into a Human-Robot Team, Jari Saarinen.
[5] EKF example on the course web pages: ProjectWork/odoEkf.pdf
[6] Steven LaValle, Planning Algorithms, 2006.
[7] McKerrow, Phillip John, Introduction to Robotics, pp. …
[8] Example on the course web pages.
[9] Example on the course web pages.


Path Planning. Marcello Restelli. Dipartimento di Elettronica e Informazione Politecnico di Milano   tel: Marcello Restelli Dipartimento di Elettronica e Informazione Politecnico di Milano email: restelli@elet.polimi.it tel: 02 2399 3470 Path Planning Robotica for Computer Engineering students A.A. 2006/2007

More information

ME 597/747 Autonomous Mobile Robots. Mid Term Exam. Duration: 2 hour Total Marks: 100

ME 597/747 Autonomous Mobile Robots. Mid Term Exam. Duration: 2 hour Total Marks: 100 ME 597/747 Autonomous Mobile Robots Mid Term Exam Duration: 2 hour Total Marks: 100 Instructions: Read the exam carefully before starting. Equations are at the back, but they are NOT necessarily valid

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Simon Parsons Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Terry Payne Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

Rigid Body Transformations

Rigid Body Transformations F1/10 th Racing Rigid Body Transformations (Or How Different sensors see the same world) By, Paritosh Kelkar Mapping the surroundings Specify destination and generate path to goal The colored cells represent

More information

Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET

Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET After reading through the Introduction, Purpose and Principles sections of the lab manual (and skimming through the procedures), answer the following

More information

Kinematics of Wheeled Robots

Kinematics of Wheeled Robots CSE 390/MEAM 40 Kinematics of Wheeled Robots Professor Vijay Kumar Department of Mechanical Engineering and Applied Mechanics University of Pennsylvania September 16, 006 1 Introduction In this chapter,

More information

CS283: Robotics Fall 2016: Sensors

CS283: Robotics Fall 2016: Sensors CS283: Robotics Fall 2016: Sensors Sören Schwertfeger / 师泽仁 ShanghaiTech University Robotics ShanghaiTech University - SIST - 23.09.2016 2 REVIEW TRANSFORMS Robotics ShanghaiTech University - SIST - 23.09.2016

More information

Build and Test Plan: IGV Team

Build and Test Plan: IGV Team Build and Test Plan: IGV Team 2/6/2008 William Burke Donaldson Diego Gonzales David Mustain Ray Laser Range Finder Week 3 Jan 29 The laser range finder will be set-up in the lab and connected to the computer

More information

EE631 Cooperating Autonomous Mobile Robots

EE631 Cooperating Autonomous Mobile Robots EE631 Cooperating Autonomous Mobile Robots Lecture: Multi-Robot Motion Planning Prof. Yi Guo ECE Department Plan Introduction Premises and Problem Statement A Multi-Robot Motion Planning Algorithm Implementation

More information

R f da (where da denotes the differential of area dxdy (or dydx)

R f da (where da denotes the differential of area dxdy (or dydx) Math 28H Topics for the second exam (Technically, everything covered on the first exam, plus) Constrained Optimization: Lagrange Multipliers Most optimization problems that arise naturally are not unconstrained;

More information

Autonomous Mobile Robots, Chapter 6 Planning and Navigation Where am I going? How do I get there? Localization. Cognition. Real World Environment

Autonomous Mobile Robots, Chapter 6 Planning and Navigation Where am I going? How do I get there? Localization. Cognition. Real World Environment Planning and Navigation Where am I going? How do I get there?? Localization "Position" Global Map Cognition Environment Model Local Map Perception Real World Environment Path Motion Control Competencies

More information

EV3 Programming Workshop for FLL Coaches

EV3 Programming Workshop for FLL Coaches EV3 Programming Workshop for FLL Coaches Tony Ayad 2017 Outline This workshop is intended for FLL coaches who are interested in learning about Mindstorms EV3 programming language. Programming EV3 Controller

More information

MTRX4700: Experimental Robotics

MTRX4700: Experimental Robotics Stefan B. Williams April, 2013 MTR4700: Experimental Robotics Assignment 3 Note: This assignment contributes 10% towards your final mark. This assignment is due on Friday, May 10 th during Week 9 before

More information

Final Exam Study Guide

Final Exam Study Guide Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.

More information

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013 Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth

More information

Cinematica dei Robot Mobili su Ruote. Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo

Cinematica dei Robot Mobili su Ruote. Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo Cinematica dei Robot Mobili su Ruote Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo Riferimenti bibliografici Roland SIEGWART, Illah R. NOURBAKHSH Introduction to Autonomous Mobile

More information

Robot Motion Control Matteo Matteucci

Robot Motion Control Matteo Matteucci Robot Motion Control Open loop control A mobile robot is meant to move from one place to another Pre-compute a smooth trajectory based on motion segments (e.g., line and circle segments) from start to

More information

Manipulator trajectory planning

Manipulator trajectory planning Manipulator trajectory planning Václav Hlaváč Czech Technical University in Prague Faculty of Electrical Engineering Department of Cybernetics Czech Republic http://cmp.felk.cvut.cz/~hlavac Courtesy to

More information

Robot Mapping. A Short Introduction to the Bayes Filter and Related Models. Gian Diego Tipaldi, Wolfram Burgard

Robot Mapping. A Short Introduction to the Bayes Filter and Related Models. Gian Diego Tipaldi, Wolfram Burgard Robot Mapping A Short Introduction to the Bayes Filter and Related Models Gian Diego Tipaldi, Wolfram Burgard 1 State Estimation Estimate the state of a system given observations and controls Goal: 2 Recursive

More information

15. PARAMETRIZED CURVES AND GEOMETRY

15. PARAMETRIZED CURVES AND GEOMETRY 15. PARAMETRIZED CURVES AND GEOMETRY Parametric or parametrized curves are based on introducing a parameter which increases as we imagine travelling along the curve. Any graph can be recast as a parametrized

More information

HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder

HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder Masashi Awai, Takahito Shimizu and Toru Kaneko Department of Mechanical

More information

Introduction to Mobile Robotics

Introduction to Mobile Robotics Introduction to Mobile Robotics Olivier Aycard Associate Professor University of Grenoble Laboratoire d Informatique de Grenoble http://membres-liglab.imag.fr/aycard 1/29 Some examples of mobile robots

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

Team Description Paper Team AutonOHM

Team Description Paper Team AutonOHM Team Description Paper Team AutonOHM Jon Martin, Daniel Ammon, Helmut Engelhardt, Tobias Fink, Tobias Scholz, and Marco Masannek University of Applied Science Nueremberg Georg-Simon-Ohm, Kesslerplatz 12,

More information

Autonomous Vehicle Navigation Using Stereoscopic Imaging

Autonomous Vehicle Navigation Using Stereoscopic Imaging Autonomous Vehicle Navigation Using Stereoscopic Imaging Functional Description and Complete System Block Diagram By: Adam Beach Nick Wlaznik Advisors: Dr. Huggins Dr. Stewart December 14, 2006 I. Introduction

More information

Robotics (Kinematics) Winter 1393 Bonab University

Robotics (Kinematics) Winter 1393 Bonab University Robotics () Winter 1393 Bonab University : most basic study of how mechanical systems behave Introduction Need to understand the mechanical behavior for: Design Control Both: Manipulators, Mobile Robots

More information

Introduction to Mobile Robotics Probabilistic Motion Models

Introduction to Mobile Robotics Probabilistic Motion Models Introduction to Mobile Robotics Probabilistic Motion Models Wolfram Burgard, Michael Ruhnke, Bastian Steder 1 Robot Motion Robot motion is inherently uncertain. How can we model this uncertainty? Dynamic

More information

Introduction to robot algorithms CSE 410/510

Introduction to robot algorithms CSE 410/510 Introduction to robot algorithms CSE 410/510 Rob Platt robplatt@buffalo.edu Times: MWF, 10-10:50 Location: Clemens 322 Course web page: http://people.csail.mit.edu/rplatt/cse510.html Office Hours: 11-12

More information

Vehicle Localization. Hannah Rae Kerner 21 April 2015

Vehicle Localization. Hannah Rae Kerner 21 April 2015 Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular

More information

Navigation and Metric Path Planning

Navigation and Metric Path Planning Navigation and Metric Path Planning October 4, 2011 Minerva tour guide robot (CMU): Gave tours in Smithsonian s National Museum of History Example of Minerva s occupancy map used for navigation Objectives

More information

Mobile Robots: An Introduction.

Mobile Robots: An Introduction. Mobile Robots: An Introduction Amirkabir University of Technology Computer Engineering & Information Technology Department http://ce.aut.ac.ir/~shiry/lecture/robotics-2004/robotics04.html Introduction

More information

First of all, we need to know what it means for a parameterize curve to be differentiable. FACT:

First of all, we need to know what it means for a parameterize curve to be differentiable. FACT: CALCULUS WITH PARAMETERIZED CURVES In calculus I we learned how to differentiate and integrate functions. In the chapter covering the applications of the integral, we learned how to find the length of

More information

iracing Camera Tool Introduction Positioning the camera with Position Type

iracing Camera Tool Introduction Positioning the camera with Position Type iracing Camera Tool Introduction This is a brief introduction to the new camera tool built into the iracing simulator. You can enter the camera tool when in replay mode by hitting Ctrl-F12 at any time,

More information

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza ETH Master Course: 151-0854-00L Autonomous Mobile Robots Summary 2 Lecture Overview Mobile Robot Control Scheme knowledge, data base mission

More information

Chapter 3 Path Optimization

Chapter 3 Path Optimization Chapter 3 Path Optimization Background information on optimization is discussed in this chapter, along with the inequality constraints that are used for the problem. Additionally, the MATLAB program for

More information

Chairside Correlation in CEREC 3D software

Chairside Correlation in CEREC 3D software Chairside Correlation in CEREC 3D software Correlation has been a popular design mode in the CEREC software since CEREC 2. This mode allows the user to copy an existing (or waxed up, etc) occlusion and

More information

REINFORCEMENT LEARNING: MDP APPLIED TO AUTONOMOUS NAVIGATION

REINFORCEMENT LEARNING: MDP APPLIED TO AUTONOMOUS NAVIGATION REINFORCEMENT LEARNING: MDP APPLIED TO AUTONOMOUS NAVIGATION ABSTRACT Mark A. Mueller Georgia Institute of Technology, Computer Science, Atlanta, GA USA The problem of autonomous vehicle navigation between

More information

Basics of Computational Geometry

Basics of Computational Geometry Basics of Computational Geometry Nadeem Mohsin October 12, 2013 1 Contents This handout covers the basic concepts of computational geometry. Rather than exhaustively covering all the algorithms, it deals

More information

15-494/694: Cognitive Robotics

15-494/694: Cognitive Robotics 15-494/694: Cognitive Robotics Dave Touretzky Lecture 9: Path Planning with Rapidly-exploring Random Trees Navigating with the Pilot Image from http://www.futuristgerd.com/2015/09/10 Outline How is path

More information

The Art Gallery Problem: An Overview and Extension to Chromatic Coloring and Mobile Guards

The Art Gallery Problem: An Overview and Extension to Chromatic Coloring and Mobile Guards The Art Gallery Problem: An Overview and Extension to Chromatic Coloring and Mobile Guards Nicole Chesnokov May 16, 2018 Contents 1 Introduction 2 2 The Art Gallery Problem 3 2.1 Proof..................................

More information

Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview

Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Arun Das 05/09/2017 Arun Das Waterloo Autonomous Vehicles Lab Introduction What s in a name? Arun Das Waterloo Autonomous

More information

Particle Filter Localization

Particle Filter Localization E190Q Autonomous Mobile Robots Lab 4 Particle Filter Localization INTRODUCTION Determining a robots position in a global coordinate frame is one of the most important and difficult problems to overcome

More information

MATLAB Programming for Numerical Computation Dr. Niket Kaisare Department Of Chemical Engineering Indian Institute of Technology, Madras

MATLAB Programming for Numerical Computation Dr. Niket Kaisare Department Of Chemical Engineering Indian Institute of Technology, Madras MATLAB Programming for Numerical Computation Dr. Niket Kaisare Department Of Chemical Engineering Indian Institute of Technology, Madras Module No. #01 Lecture No. #1.1 Introduction to MATLAB programming

More information

NERC Gazebo simulation implementation

NERC Gazebo simulation implementation NERC 2015 - Gazebo simulation implementation Hannan Ejaz Keen, Adil Mumtaz Department of Electrical Engineering SBA School of Science & Engineering, LUMS, Pakistan {14060016, 14060037}@lums.edu.pk ABSTRACT

More information

Lecture 13 Visual Inertial Fusion

Lecture 13 Visual Inertial Fusion Lecture 13 Visual Inertial Fusion Davide Scaramuzza Course Evaluation Please fill the evaluation form you received by email! Provide feedback on Exercises: good and bad Course: good and bad How to improve

More information

ECE276A: Sensing & Estimation in Robotics Lecture 11: Simultaneous Localization and Mapping using a Particle Filter

ECE276A: Sensing & Estimation in Robotics Lecture 11: Simultaneous Localization and Mapping using a Particle Filter ECE276A: Sensing & Estimation in Robotics Lecture 11: Simultaneous Localization and Mapping using a Particle Filter Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu

More information

Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011

Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011 Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011 Introduction The goal of this survey paper is to examine the field of robotic mapping and the use of FPGAs in various implementations.

More information

CALCULUS II. Parametric Equations and Polar Coordinates. Paul Dawkins

CALCULUS II. Parametric Equations and Polar Coordinates. Paul Dawkins CALCULUS II Parametric Equations and Polar Coordinates Paul Dawkins Table of Contents Preface... ii Parametric Equations and Polar Coordinates... 3 Introduction... 3 Parametric Equations and Curves...

More information

Discover Robotics & Programming CURRICULUM SAMPLE

Discover Robotics & Programming CURRICULUM SAMPLE OOUTLINE 5 POINTS FOR EDP Yellow Level Overview Robotics incorporates mechanical engineering, electrical engineering and computer science - all of which deal with the design, construction, operation and

More information

Simultaneous Localization

Simultaneous Localization Simultaneous Localization and Mapping (SLAM) RSS Technical Lecture 16 April 9, 2012 Prof. Teller Text: Siegwart and Nourbakhsh S. 5.8 Navigation Overview Where am I? Where am I going? Localization Assumed

More information

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Welcome to the lectures on computer graphics. We have

More information

Fast Local Planner for Autonomous Helicopter

Fast Local Planner for Autonomous Helicopter Fast Local Planner for Autonomous Helicopter Alexander Washburn talexan@seas.upenn.edu Faculty advisor: Maxim Likhachev April 22, 2008 Abstract: One challenge of autonomous flight is creating a system

More information

Robot Motion Control and Planning

Robot Motion Control and Planning Robot Motion Control and Planning http://www.ceng.metu.edu.tr/~saranli/courses/ceng786 Lecture 2 Bug Algorithms Uluç Saranlı http://www.ceng.metu.edu.tr/~saranli CENG786 - Robot Motion Control and Planning

More information

Humanoid Robotics. Least Squares. Maren Bennewitz

Humanoid Robotics. Least Squares. Maren Bennewitz Humanoid Robotics Least Squares Maren Bennewitz Goal of This Lecture Introduction into least squares Use it yourself for odometry calibration, later in the lecture: camera and whole-body self-calibration

More information

Intelligent Robotics

Intelligent Robotics 64-424 Intelligent Robotics 64-424 Intelligent Robotics http://tams.informatik.uni-hamburg.de/ lectures/2013ws/vorlesung/ir Jianwei Zhang / Eugen Richter Faculty of Mathematics, Informatics and Natural

More information

Mobile Robot Kinematics

Mobile Robot Kinematics Mobile Robot Kinematics Dr. Kurtuluş Erinç Akdoğan kurtuluserinc@cankaya.edu.tr INTRODUCTION Kinematics is the most basic study of how mechanical systems behave required to design to control Manipulator

More information

ECE276B: Planning & Learning in Robotics Lecture 5: Configuration Space

ECE276B: Planning & Learning in Robotics Lecture 5: Configuration Space ECE276B: Planning & Learning in Robotics Lecture 5: Configuration Space Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Tianyu Wang: tiw161@eng.ucsd.edu Yongxi Lu: yol070@eng.ucsd.edu

More information

Centre for Autonomous Systems

Centre for Autonomous Systems Robot Henrik I Centre for Autonomous Systems Kungl Tekniska Högskolan hic@kth.se 27th April 2005 Outline 1 duction 2 Kinematic and Constraints 3 Mobile Robot 4 Mobile Robot 5 Beyond Basic 6 Kinematic 7

More information

Robert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method

Robert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method Intro to Template Matching and the Lucas-Kanade Method Appearance-Based Tracking current frame + previous location likelihood over object location current location appearance model (e.g. image template,

More information

Exercise 2-1. Programming, Using RoboCIM EXERCISE OBJECTIVE

Exercise 2-1. Programming, Using RoboCIM EXERCISE OBJECTIVE Exercise 2-1 Programming, Using RoboCIM EXERCISE OBJECTIVE In this exercise, you will learn new terms used in the robotics field. You will learn how to record points and use them to edit a robot program.

More information

Motion Control (wheeled robots)

Motion Control (wheeled robots) Motion Control (wheeled robots) Requirements for Motion Control Kinematic / dynamic model of the robot Model of the interaction between the wheel and the ground Definition of required motion -> speed control,

More information

5/27/12. Objectives. Plane Curves and Parametric Equations. Sketch the graph of a curve given by a set of parametric equations.

5/27/12. Objectives. Plane Curves and Parametric Equations. Sketch the graph of a curve given by a set of parametric equations. Objectives Sketch the graph of a curve given by a set of parametric equations. Eliminate the parameter in a set of parametric equations. Find a set of parametric equations to represent a curve. Understand

More information

Tracking a Mobile Robot Position Using Vision and Inertial Sensor

Tracking a Mobile Robot Position Using Vision and Inertial Sensor Tracking a Mobile Robot Position Using Vision and Inertial Sensor Francisco Coito, António Eleutério, Stanimir Valtchev, and Fernando Coito Faculdade de Ciências e Tecnologia Universidade Nova de Lisboa,

More information

Simultaneous Localization and Mapping (SLAM)

Simultaneous Localization and Mapping (SLAM) Simultaneous Localization and Mapping (SLAM) RSS Lecture 16 April 8, 2013 Prof. Teller Text: Siegwart and Nourbakhsh S. 5.8 SLAM Problem Statement Inputs: No external coordinate reference Time series of

More information