Autonomous Mobile Robots


Basics

Leg configuration, stability and gaits; locomotion with 1, 2, 4 or 6 legs
- At least six legs are required for static walking (a statically stable tripod of legs is always on the ground)
- Minimum of 2 DOF per leg for legged mobile robots (lift and swing forward)
- More DOF => better maneuverability / more mass / more energy required / more complex control
- The number of possible gaits depends on the number of legs k: N = (2k - 1)! possible events
  e.g. biped walker (k = 2): N = 3! = 6 => lift left / right / both and release left / right / both
- 1 leg: hopping, continuous stabilization required, e.g. Raibert hopper
- 2 legs (biped): anthropomorphic shape, continuous balance correction, e.g. Honda ASIMO
- 4 legs (quadruped): standing still is passively stable, walking remains challenging, e.g. Sony AIBO
- 6 legs (hexapod): static stability during walking, reduced control complexity

3.2.3 Wheel kinematic constraints of the 5 wheel types, pros and cons of the wheel types
- Three fundamental characteristics of a robot are set by its wheel configuration: maneuverability, controllability, stability
- Inertial frame {XI, YI} and robot frame {XR, YR}
- Mapping between the frames: ξ_R = R(θ)·ξ_I with ξ = [x y θ]^T and
  R(θ) = [ cosθ  sinθ  0 ; -sinθ  cosθ  0 ; 0  0  1 ]
- Assumptions: wheel plane always vertical, one single contact point, no sliding, pure rolling, no lateral slippage

Fixed standard wheel:
- No vertical axis for steering; the angle to the chassis is fixed, motion is limited to the wheel plane (back and forth)
- Parameters: α, β, l, r, φ
- Pro: good lateral stability at high speeds, many products available, simple, robust
- Con: may reduce the robot's maneuverability

Steered standard wheel:
- Additional DOF: the wheel may rotate around a vertical axis passing through the center of the wheel and the ground contact point
- Parameters: additionally β(t)
- Pro / Con: same as for the fixed standard wheel

Castor wheel:
- Able to steer around a vertical axis that does not pass through the ground contact point
- Any motion orthogonal to the wheel plane must be balanced by an equivalent and opposite amount of castor steering motion
- Parameters: additionally the offset d
- Pro: the castor wheel is omnidirectional
- Con: kinematics somewhat complex

Swedish wheel:
- No vertical axis of rotation; a DOF is added to the standard wheel by passive rollers
- The exact angle γ between the roller axes and the main axis can vary (Swedish 45° / Swedish 90°)
- Pro: truly omnidirectional
- Con: difficult, error-prone mechanism, load limitations

Spherical wheel:
- No direction constraints on motion; no principal axis of rotation, no appropriate rolling or sliding constraints
- Pro: truly omnidirectional
- Con: difficult, error-prone actuation mechanism, load limitations

3.2.5 Robot kinematic constraints; differential drive and omnidirectional drive examples
- Only fixed and steerable standard wheels have an impact on the robot chassis kinematics
- Rolling constraints: J1(βs)·R(θ)·ξ̇_I − J2·φ̇ = 0, with J1(βs) = [ J1f ; J1s(βs) ]
  All standard wheels must spin around their horizontal axis by an appropriate amount, based on their motion along the wheel plane, so that pure rolling occurs at the ground contact point.
- Sliding constraints: C1(βs)·R(θ)·ξ̇_I = 0, with C1(βs) = [ C1f ; C1s(βs) ]
  The components of motion orthogonal to the wheel plane must be zero for all standard wheels.
- Combined: [ J1(βs) ; C1(βs) ]·R(θ)·ξ̇_I = [ J2·φ̇ ; 0 ]

3.3 Robot mobility, steerability and maneuverability: definition, interpretation, geometric interpretation (ICR)
Mobility:
- Kinematic mobility of a robot chassis is its ability to move directly in the environment
- The basic constraint limiting mobility is the rule that every wheel must satisfy its sliding constraint
- It is a function of the number of constraints on the robot's motion, not of the number of wheels
- Degree of mobility (quantifies the degrees of controllable freedom based on changes to wheel velocity):
  δm = dim N[C1(βs)] = 3 − rank[C1(βs)]
  (no standard wheels: rank = 0; all directions constrained: rank = 3)
Steerability:
- Number of independently controllable steering parameters
- Degree of steerability: δs = rank[C1s(βs)]   (0 ≤ δs ≤ 2)
Maneuverability:
- Combination of the mobility available from the sliding constraints plus the additional freedom contributed by steering
- Two robots with the same δM are not necessarily equivalent (e.g. differential drive and tricycle)
- Degree of maneuverability: δM = δm + δs
  (δM = 2: the ICR is always constrained to lie on a line / δM = 3: the ICR can be set to any point in the plane)
ICR:
- Instantaneous center of rotation of the robot; found as the intersection of the zero-motion lines of all standard wheels
- If no ICR exists, no motion is possible (ICR at infinity => straight-line motion)
The five basic types of three-wheel configurations (fig. 3.14)
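As a quick numerical cross-check of these definitions, the sketch below builds the sliding-constraint matrix C1(βs) for a given wheel layout and derives δm, δs and δM from matrix ranks. The constraint rows are written out by hand for two simple layouts under the usual frame conventions; they are illustrative assumptions, not a general constraint generator.

    import numpy as np

    def degrees_of_motion(C1f, C1s):
        # C1f, C1s: sliding-constraint rows of the fixed / steerable standard wheels
        C1 = np.vstack([C1f, C1s])
        rank_C1 = np.linalg.matrix_rank(C1) if C1.size else 0
        rank_C1s = np.linalg.matrix_rank(C1s) if C1s.size else 0
        delta_m = 3 - rank_C1              # degree of mobility
        delta_s = rank_C1s                 # degree of steerability
        return int(delta_m), int(delta_s), int(delta_m + delta_s)   # last value: delta_M

    # Differential drive: two fixed wheels on a common axle. With the usual frame
    # choice both sliding constraints forbid sideways motion, i.e. both rows reduce
    # to [0, 1, 0] (assumption: beta = 0 or pi, so the l*sin(beta) term vanishes).
    C1f_diff = np.array([[0.0, 1.0, 0.0],
                         [0.0, 1.0, 0.0]])
    C1s_none = np.zeros((0, 3))
    print(degrees_of_motion(C1f_diff, C1s_none))          # -> (2, 0, 2)

    # Omnidirectional robot (only Swedish / castor / spherical wheels):
    # no standard-wheel sliding constraints at all.
    print(degrees_of_motion(np.zeros((0, 3)), C1s_none))  # -> (3, 0, 3)

A tricycle would contribute one row to C1s instead, giving δm = 1, δs = 1, δM = 2: the same δM as the differential drive, yet not an equivalent robot, as noted above.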

3.4.2 Holonomic and nonholonomic constraints; examples and interpretation
- Differential degrees of freedom (DDOF) = δm; governs the ability to achieve various paths
- Degrees of freedom (DOF): ability to achieve various poses
- DDOF ≤ δM ≤ DOF
- Holonomic kinematic constraint: can be expressed as an explicit function of the position variables only
- Holonomic robot: a robot with zero nonholonomic kinematic constraints — either only holonomic constraints (δM < 3) or no kinematic constraints at all (δM = 3) => DDOF = DOF!
  e.g. a locked bicycle, anything with only omnidirectional, Swedish or castor wheels
- Nonholonomic kinematic constraint: requires a differential relationship, such as the derivative of a position variable
- Nonholonomic robot: a robot with one or more nonholonomic kinematic constraints
  e.g. bicycle, Ackermann steering, two-steer

Path and trajectory considerations; the omnidrive and two-steered robot example
- An omnidirectional robot can trace any path through its workspace, but it has to use Swedish, castor or spherical wheels (complex and expensive!)
- Nonholonomic constraints (fixed wheels) improve stability drastically at high speeds (centripetal force)
- Omnidrive: δM = δm + δs = 3 + 0 = 3 => any path and any trajectory
- Two-steer: δM = δm + δs = 1 + 2 = 3; not holonomic(!), but any ICR can be selected => any path
  Time is needed to turn the wheels => not every trajectory is possible

Open-loop and closed-loop control, differential drive example
- Objective of a kinematic controller: follow a trajectory described by its position or velocity profile as a function of time
Open loop:
- Divide the trajectory (path) into motion segments of clearly defined shape (e.g. lines and circle segments)
- Precompute a smooth trajectory based on line and circle segments
- Disadvantages: not at all easy to precompute; no adaptation to dynamic changes in the environment; the resulting trajectory is usually not smooth (discontinuities in the robot's acceleration)
Closed loop:
- Use a real-state feedback controller
- Set intermediate positions (subgoals) lying on the requested path
- Real-time calculations necessary
Kinematic position control, differential drive example (script / book):
- Find a control matrix K such that the control of v(t) and ω(t) drives the error e to zero (goal pose = origin); the error is transformed into polar coordinates
- Polar coordinates: ρ = sqrt(Δx² + Δy²) ; α = −θ + atan2(Δy, Δx) ; β = −θ − α
- P-controller with control matrix K; control law: v = kρ·ρ ; ω = kα·α + kβ·β
- The system is described in polar coordinates in the inertial reference frame
- Case differentiation: α ∈ (−π/2, π/2] => drive forward; otherwise drive backward (v = −v)
- With this control law the robot does not change its direction of motion while approaching the goal
- The controller is exponentially stable for kρ > 0 ; kβ < 0 ; kα − kρ > 0 (all poles negative)
- (A small code sketch of this controller follows below.)

Sensors for odometric update: wheel encoders, heading sensors (compass, gyroscope)
- High quality, low cost, excellent resolution
Optical incremental encoders:
- Proprioceptive => measurements are in the robot reference frame
- Consist of: an illumination source, a grating that masks the light, a rotor disc with a fine optical grid, and a fixed optical detector
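A minimal sketch of the polar-coordinate P-controller above for a differential-drive robot driving to a goal pose at the origin. The gain values are only examples chosen to satisfy the stability condition, and the backward-driving case is handled in the simplest possible way.

    import math

    # Example gains satisfying k_rho > 0, k_beta < 0, k_alpha - k_rho > 0
    K_RHO, K_ALPHA, K_BETA = 3.0, 8.0, -1.5

    def polar_control(x, y, theta):
        """P-controller driving a differential-drive robot to the goal pose
        x = y = theta = 0. Returns the commands (v, omega)."""
        rho = math.hypot(x, y)                                 # distance to the goal
        alpha = -theta + math.atan2(-y, -x)                    # bearing of the goal in the robot frame
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))   # normalize to (-pi, pi]
        beta = -theta - alpha                                  # goal heading error

        v = K_RHO * rho
        omega = K_ALPHA * alpha + K_BETA * beta
        if abs(alpha) > math.pi / 2:                           # goal lies behind: drive backwards
            v = -v
        return v, omega

    print(polar_control(1.0, 1.0, 0.0))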

- The detector reads a sine signal => converted into a square-wave pulse
- Resolution is measured in cycles per revolution (CPR); ~2000 CPR is typical in mobile robotics
- Quadrature encoder: 2 illumination/detector pairs, shifted by 90° => 4x resolution plus direction information
- Accuracy: effectively ~100% => errors due to the motor shaft, gears etc. are larger than the encoder error

The next two sensor types are used to determine the robot's orientation and inclination.
Compass (exteroceptive): measures the direction of the earth's magnetic field
- Hall-effect compass: a constant current applied across the length of a semiconductor => voltage difference in the perpendicular direction, across the semiconductor's width, depending on the relative orientation of the semiconductor to the magnetic flux lines
  Usually two semiconductors => information about two axes
  Inexpensive, but poor resolution, nonlinear, bias errors; filtering required => lowers the bandwidth
- Flux-gate compass: two small coils wound on ferrite cores, fixed perpendicular to one another
  An alternating current is activated in both coils => the magnetic field causes phase shifts depending on its relative alignment with each coil => the direction of the magnetic field in two dimensions can be computed
  Improved accuracy and resolution, but larger and more expensive
- Both are easily disturbed by other magnetic objects

Gyroscope (proprioceptive): preserves its orientation in relation to a fixed reference frame
- Mechanical gyroscopes: rely on the inertial properties of a fast-spinning rotor
  Gyroscopic precession: the angular momentum counteracts forces on the wheel => the gyro is inertially stable when mounted in 3 gimbals
  Due to friction => angular drift
  Rate gyro: the gimbals are restrained by a torsional spring with additional viscous damping => measures angular speed instead of absolute orientation
- Optical gyroscopes: angular speed sensors that use two monochromatic light beams (or lasers) emitted from the same source, instead of moving mechanical parts
  One beam travels clockwise through a fiber, the other counterclockwise => the difference in path length due to rotation can be measured through the different traveling times (proportional to the angular velocity)
  Available with bandwidth / resolution far beyond what mobile robotics needs

Ground-based beacons: GPS
- 24 satellites orbiting the earth every 12 hours
- Every satellite has its own atomic clock and is regularly updated by a ground station
- Technical challenges: time synchronization between the satellites, real-time update of the exact satellite locations, precise measurement of the time of flight, interference with other signals
- The receiver is passive but exteroceptive
- Requires 4 satellites (3 for position, 1 for time synchronization) and line of sight(!) to them
- The location is determined through time-of-flight measurements
- Latency: ~200 ms => 5 Hz update rate; precision: ~15 m
- DGPS (differential GPS): uses a second, static receiver at a known, exact position => precision ~1 m
- Taking the phase of the carrier signal of each satellite transmission into account => precision ~1 cm

Range sensors: measurement principles, performance (laser, ultrasonic)
- Active ranging: low price, direct measurement of the distance from the robot to objects
- Time-of-flight ranging makes use of the propagation speed of sound or of an electromagnetic wave
- The traveled distance is the speed of wave propagation times the time of flight: d = c·t (round trip => object range = c·t/2)
- Speed of sound: 0.3 m/ms; speed of electromagnetic signals: 0.3 m/ns (!)
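The time-of-flight relation is easy to sanity-check numerically; the helper below assumes the measured time covers the round trip to the object and back, so the range is half the traveled distance. The example times are made up.

    SPEED_OF_SOUND = 343.0      # m/s  (~0.3 m/ms, as above)
    SPEED_OF_LIGHT = 3.0e8      # m/s  (~0.3 m/ns)

    def range_from_tof(t_round_trip_s, c):
        """Object range from a round-trip time-of-flight measurement: d = c*t/2."""
        return c * t_round_trip_s / 2.0

    print(range_from_tof(10e-3, SPEED_OF_SOUND))    # ultrasonic, 10 ms  -> ~1.7 m
    print(range_from_tof(100e-9, SPEED_OF_LIGHT))   # laser pulse, 100 ns -> 15 m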

Ultrasonic sensor:
- Transmits a packet of (ultrasonic) pressure waves and measures the time it takes for the wave to reflect and return to the receiver
- Uses a threshold value to accept an incoming sound wave as a valid echo
- The wave is usually generated by a piezo or electrostatic transducer, typically at 40–180 kHz
- Range: 12 cm – 5 m; resolution: ~2 cm
- The bad: sound propagates in a cone => no point measurement; errors due to target materials that absorb the sound waves; limited bandwidth (low cycle time); cross-sensitivity (only 1 sensor may be active at a time)

Laser rangefinder:
- Consists of a transmitter that illuminates a target with a collimated beam and a receiver capable of detecting the component of light that is essentially coaxial with the transmitted beam
- A mechanical mechanism with a mirror may be used to cover a 2D or even 3D space
- Measurement methods:
  - Use a pulsed laser and measure the elapsed time directly (as in ultrasonic sensors)
  - Measure the beat frequency between a frequency-modulated continuous wave and its received reflection
  - Measure the phase shift of the reflected light: the total path is D' = L + 2D, with 2D = (θ/2π)·λ and λ = c/f
    (D: distance to the target, θ: measured phase shift, λ: wavelength of the modulation)
- Surfaces with a roughness greater than the wavelength of the incident light reflect it diffusely
- Dark, distant objects do not produce as good range estimates as close, bright ones
- Angular resolution: 0.5°; depth resolution: 5 cm; range: 5 cm – 20 m
- The bad: optically transparent materials cannot be detected

Triangulation sensors (2D, 3D); resolution as a function of distance
- Use geometric properties manifest in their measuring strategy to establish distance readings to objects
Optical triangulation (1D):
- A collimated beam (IR LED, laser) is transmitted toward the target; the reflected light is collected by a lens and projected onto a position-sensitive device (PSD) or linear camera
- Distance: D = f·L / x
- Good resolution for close objects (the distance depends on 1/x!); range: up to 1–2 m
- The good: high bandwidth, no cross-sensitivity
- (A numeric illustration of the resolution-versus-distance behavior follows at the end of this section.)
Structured light (2D):
- The linear camera or PSD is replaced by a CCD or CMOS camera; distances to a large set of points, instead of a single point, can then be recovered
- The emitter projects a known pattern of structured light onto the environment
- Increasing the baseline b improves the resolution but makes the sensor bigger
- A larger detector length f can provide a larger field of view or an improved range resolution, but makes the sensor head bigger

Doppler effect
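A numeric illustration of the 1D optical triangulation relation D = f·L/x and of how a fixed uncertainty on the measured spot position x turns into a range error that grows rapidly with distance. Focal length, baseline and spot-position uncertainty are invented example values.

    # 1D optical triangulation: D = f * L / x
    f = 0.02       # lens focal length [m]          (example value)
    L = 0.10       # emitter-detector baseline [m]  (example value)
    dx = 10e-6     # uncertainty of the measured spot position x [m] (example value)

    def triangulation_range(x):
        return f * L / x

    for D in (0.2, 0.5, 1.0, 2.0):                      # true target distances [m]
        x = f * L / D                                   # spot position on the PSD / linear camera
        err = abs(triangulation_range(x - dx) - D)      # range error caused by dx
        print(f"D = {D:4.1f} m -> x = {x*1e3:6.3f} mm, range error ~ {err*100:.2f} cm")

The printed errors grow roughly with D², which is the quantitative version of "good resolution for close objects".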

4.1.8 Visual ranging sensors: depth from focus and floor plane extraction
- Vision-based sensors such as the CCD (charge-coupled device) and the CMOS (complementary metal oxide semiconductor) camera provide images that can be used for computer vision
- The general solution is to recover depth by looking at several images of the scene to gain more information, hopefully enough to at least partially recover depth

Depth from focus:
- Relies on the fact that image properties change not only as a function of the scene but also as a function of the camera parameters
- To determine the range to an object, the sensor simply moves the image plane (via focusing) until the sharpness of the object is maximized (using a sharpness criterion); the corresponding position of the image plane then directly reports the range
- When the image plane does not coincide with the focal plane, the image of a point in the scene is a circle on the image plane (the blur circle); when the blur radius is known, one can compute the image-plane offset e, from which the distance d can be recovered
- The bad: loses sensitivity as objects move farther away; it is a slow method

Floor plane extraction:
- Vision-based approach for identifying the traversable portions of the ground
- Works only when: obstacles differ in appearance from the ground; the ground is flat and its angle to the camera is known; there are no overhanging obstacles
- Algorithms use edge detection and color detection jointly, while making certain assumptions regarding the floor (e.g. maximum texture or an approximate color range)
- Adaptive floor plane extraction: the parameters can change over time (as a reference, the pixels at the bottom of the image, right in front of the robot, are taken => use histograms)
- Applications: lawn mowing, social indoor robots, automated electric wheelchairs
- Stereo vision and depth from focus require the objects to have texture in order to be detected accurately; floor plane extraction, on the other hand, targets open, wide spaces where there are no reflecting objects around

Stereo vision: concept, calibration, rectification, correspondence search, disparity map and triangulation
Concept:
- Reconstruct 3D objects from two images taken at different locations
- Ideal case: both cameras are identical and aligned on a horizontal axis
- b = baseline / f = focal length / (v − v') = disparity
- Distance is inversely proportional to disparity: z = b·f / (v − v')
- Disparity is proportional to b
Calibration:
- Estimate the relative pose (rotation / translation) between the cameras
- Estimate focal length, image center and radial / tangential distortion for each camera (intrinsic parameters)
Correspondence search:
- Matching between points in the two images that are projections of the same real 3D point
- Epipolar constraint: the correspondent of a point in one image must lie on a line in the other image, called the epipolar line
- Conjugate points can therefore be searched along the epipolar lines (reduces the computational cost to a 1D search!)
Rectification:
- Determines a transformation of each image plane so that pairs of conjugate epipolar lines become collinear and parallel to one of the image axes (usually the horizontal one)
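The idealized, rectified stereo relation z = b·f/(v − v') as a tiny helper; the baseline and focal length used here are placeholder calibration values.

    def stereo_depth(disparity_px, baseline_m=0.12, focal_px=700.0):
        """Depth from disparity for an ideal, rectified stereo pair: z = b*f / d.
        baseline_m and focal_px are example calibration values."""
        if disparity_px <= 0:
            return float("inf")            # zero disparity -> point at infinity
        return baseline_m * focal_px / disparity_px

    for d in (1, 5, 20, 60):
        print(f"disparity {d:3d} px -> depth {stereo_depth(d):6.2f} m")

The printout makes the inverse proportionality visible: small disparities (far objects) change depth a lot per pixel, which is why stereo depth accuracy degrades with range.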

Triangulation:
- 3D reconstruction via triangulation from the matched points
Disparity map:
- Find the conjugate points of all image pixels of the original images
- Compute the disparity d for each pair of conjugate points => d(x, y) = disparity map
- Usually visualized as a grey-scale image (closer = lighter / farther = darker)

Motion and optical flow: basic principle of optical flow
- Motion field: assigns a velocity vector to every point in an image
- Optical flow: motion of the brightness patterns in an image
- We assume that the optical flow corresponds to the motion field, although this is not always true in practice
- Algorithms attempt to find correlations between nearby frames in a video and generate a vector field showing where each pixel or region moved
- E(x, y, t) = irradiance at time t at image point (x, y); u(x, y), v(x, y) = optical flow vector at that point
- Find the point where the irradiance is the same at time t + δt: E(x + u·δt, y + v·δt, t + δt) = E(x, y, t)
- If the brightness varies smoothly, a Taylor expansion with δt → 0 gives the optical flow constraint equation:
  E_x·u + E_y·v + E_t = 0
- This single constraint does not give unique values for u and v; only the flow component in the direction of the image brightness gradient can be recovered (see the lecture notes for more details)

4.2.1 / 4.2.2 Representing uncertainty, Gaussian distribution, error propagation law, usage
- Estimate of the true value: E[X] = g(ρ1, ρ2, ..., ρn) from n measurements with values ρi
- μ = E[X] = ∫ x·f(x) dx ;  Var(X) = σ² = ∫ (x − μ)²·f(x) dx, with σ = standard deviation
Gaussian distribution:
- Also called the normal distribution
- Statistical representation for describing the performance characteristics of a sensor; the imperfect behavior is due to the fact that a sensor is a physical device and suffers from both systematic and random errors
- It is a probability density function f(x) identifying, for each possible value x of the random variable X, its corresponding probability density; f(x) takes only two parameters: σ (standard deviation) and μ (expected value)
- It is a conservative distribution: all errors are possible, although very large errors are highly improbable
- f(x) = 1 / (σ·sqrt(2π)) · exp( −(x − μ)² / (2σ²) )
Error propagation:
- The general solution can be generated using a first-order Taylor expansion of the functions fi
- The output covariance matrix C_Y is given by the error propagation law: C_Y = F_X · C_X · F_X^T
  (C_X: covariance matrix representing the input uncertainties / C_Y: covariance matrix representing the propagated uncertainties of the outputs / F_X: the Jacobian matrix)
- See the figure on p. 150 of the book
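A small sketch of the error propagation law C_Y = F_X·C_X·F_X^T, using a finite-difference Jacobian so it works for any differentiable function. The example function (converting a range/bearing measurement into a Cartesian point) and the input covariance are illustrative choices, not taken from the script.

    import numpy as np

    def numerical_jacobian(f, x, eps=1e-6):
        """Finite-difference Jacobian of f at x."""
        x = np.asarray(x, dtype=float)
        y0 = np.asarray(f(x), dtype=float)
        J = np.zeros((y0.size, x.size))
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            J[:, i] = (np.asarray(f(xp)) - y0) / eps
        return J

    def propagate(f, x, Cx):
        """Error propagation law C_Y = F_X C_X F_X^T (first-order approximation)."""
        F = numerical_jacobian(f, x)
        return F @ Cx @ F.T

    # Example: range/bearing measurement (rho, theta) -> Cartesian point (x, y)
    to_xy = lambda m: np.array([m[0] * np.cos(m[1]), m[0] * np.sin(m[1])])
    measurement = np.array([2.0, np.deg2rad(30.0)])
    Cx = np.diag([0.02**2, np.deg2rad(1.0)**2])   # 2 cm range std, 1 deg bearing std
    print(propagate(to_xy, measurement, Cx))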

Line extraction: split-and-merge, line regression, incremental, RANSAC, Hough transform, expectation-maximization
Feature extraction:
- Features are lines, planes, etc.
- Raw data requires a huge amount of storage
- Features are the basis for higher-level features (e.g. edges, doors, tables, other objects)
Line representation:
- Cartesian: y = a·x + b
- Polar: r = x·cos(α) + y·sin(α)

Split-and-merge:
- The most popular algorithm, originating from computer vision
- Recursive procedure of splitting and fitting
- A slightly different version, called iterative-end-point-fit, simply connects the end points for line fitting (a code sketch follows after the comparison below)
Line regression:
- Inspired by the Hough transform: the problem is transformed into a search problem in the line model space (r, α)
- Each sliding window is fitted by a segment (ri, αi); adjacent segments are then merged if their model parameters are close
Incremental:
- Simplicity; also known as line tracking
- To increase the speed, a few points can be added per step instead of only 1
RANSAC:
- Acronym for Random Sample Consensus
- A generic and robust algorithm for fitting models in the presence of data outliers
- Drawback: a nondeterministic method; results differ between runs
Hough transform:
- Most successfully applied to intensity images
- Computes an accumulator array over the line model space (r, α), then selects the most intense points
- Drawbacks: the grid size is difficult to choose; noise and uncertainty are not taken into account
Expectation-maximization:
- A probabilistic (nondeterministic) method based on an iterative optimization procedure
- Drawbacks: can be trapped in local minima; requires good initial values

Comparison:
- The first three, deterministic methods (split-and-merge, incremental, line regression) perform better; they make use of the sequencing property of the scan points
- The nondeterministic methods (RANSAC, EM) produce more false positives; they do not use the sequencing property and fit lines falsely across the map
- Overall: split-and-merge is the fastest and best suited for real-time applications
- Incremental is also a good candidate for SLAM thanks to its low false-positive rate
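A compact sketch of the split step of the iterative-end-point-fit variant mentioned above: the point sequence is recursively split at the point farthest from the line through the segment end points. The merge step is omitted and the distance threshold is an arbitrary example value.

    import math

    def point_line_distance(p, a, b):
        """Distance of point p from the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
        den = math.hypot(by - ay, bx - ax)
        return num / den if den > 0 else math.hypot(px - ax, py - ay)

    def split(points, threshold):
        """Iterative-end-point-fit split: returns line segments as index pairs."""
        if len(points) < 3:
            return [(0, len(points) - 1)]
        a, b = points[0], points[-1]
        dists = [point_line_distance(p, a, b) for p in points]
        i_max = max(range(len(points)), key=lambda i: dists[i])
        if dists[i_max] <= threshold:
            return [(0, len(points) - 1)]                  # fits well enough: one segment
        left = split(points[:i_max + 1], threshold)
        right = split(points[i_max:], threshold)
        # re-index the right-hand segments; the split point is shared by both sides
        return left + [(s + i_max, e + i_max) for (s, e) in right]

    # Two walls meeting in a corner at index 5
    scan = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0),
            (5, 1), (5, 2), (5, 3), (5, 4)]
    print(split(scan, threshold=0.1))    # -> [(0, 5), (5, 9)]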

4.3.2 Edge detection steps: image gradient, noise filtering, thresholding, edge thinning; the Canny edge detector
- Edges correspond to sharp changes of intensity
- The change is measured by the first derivative (in 1D): the biggest change is where the derivative has maximum magnitude, or equivalently where the second derivative is zero
Image gradient:
- ∇f = [ ∂f/∂x , ∂f/∂y ]
- Points in the direction of the most rapid change in intensity
- The edge strength is given by the gradient magnitude |∇f|
Noise filtering:
- Smooth before differentiating; since ∂/∂x (h ⋆ f) = (∂h/∂x) ⋆ f, smoothing and differentiation can be combined
- Look for peaks in (∂h/∂x) ⋆ f
Thresholding:
- Changing illumination => a constant threshold level for edge detection is not suitable
- Solution: dynamically adapt the threshold level
- The histogram of the gradient magnitudes of the processed image is calculated; with this histogram it is easy to consider only the n pixels with the highest gradient magnitude in the further processing steps
Edge thinning (nonmaxima suppression):
- The output of an edge detector is usually a b/w image where pixels with a gradient magnitude above a predefined threshold are black and all others are white
- Nonmaxima suppression generates contours described with only one pixel of thickness
Canny edge detector:
- Idea: detection is formulated as an optimization problem with three goals: maximize the signal-to-noise ratio; achieve the highest possible precision on the edge location; minimize the number of edge responses per edge
- Process: smooth the image by Gaussian convolution, then find maxima in the (rectified) derivative; in practice this is a single step, because (∂/∂x)(h ⋆ f) = (∂h/∂x) ⋆ f
- 1D detector: convolve the image I with G' (the derivative of the Gaussian) to obtain R; take the absolute value of R; find peaks in |R| above a threshold to eliminate spurious noise peaks (a small sketch of this 1D detector follows at the end of this section)
- Next step: construction of complete edges from the edge pixels => nonmaxima suppression
- Idea: use the edge direction information to reduce the thickness of all edges to a single pixel

Feature extraction based on camera images: scheme and tools (fig. 4.41)
General remarks about computer vision in mobile robotics:
- Challenge: reduce the information, i.e. remove the majority of irrelevant information
- Two key requirements for computer vision methods in mobile robotics: operation in real time (not offline) and robustness to real-world conditions (no controlled illumination, ...)
- Two classes of vision-based feature extraction methods:
  - Spatially localized features: found in subregions of the image, corresponding to specific locations in the real world
  - Whole-image features: functions of the entire image (or images), corresponding to large, visually connected areas in the real world
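A sketch of the 1D detector described above: convolve the signal with the derivative of a Gaussian (smoothing and differentiation in one step), then keep local peaks of the absolute response above a threshold. Kernel width and threshold are arbitrary example values.

    import numpy as np

    def gaussian_derivative_kernel(sigma=2.0, radius=6):
        x = np.arange(-radius, radius + 1, dtype=float)
        g = np.exp(-x**2 / (2 * sigma**2))
        return -x / sigma**2 * g            # d/dx of the (unnormalized) Gaussian

    def edges_1d(signal, sigma=2.0, threshold=0.2):
        """Indices of 1D edges: peaks of |G' * f| above a threshold."""
        r = np.convolve(signal, gaussian_derivative_kernel(sigma), mode="same")
        mag = np.abs(r)
        return [i for i in range(1, len(mag) - 1)
                if mag[i] > threshold and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]

    # Step edge at index 50 plus a little noise
    rng = np.random.default_rng(0)
    f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
    print(edges_1d(f))    # expect a single peak near index 50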

Scheme:
- Conditioning: suppress noise; background normalization by suppressing uninteresting systematic or patterned variations; done by gray-scale modification (e.g. thresholding) and (low-pass) filtering
- Labeling: determination of the spatial arrangement of the events, i.e. searching for a structure
- Grouping: identification of the events by collecting together the pixels participating in the same kind of event
- Extracting: compute a list of properties for each group
- Matching (see chapter 5)
Image preprocessing:
- Gaussian smoothing (low-pass filtering) removes high-frequency noise => makes the first and second derivatives of the intensity far more stable

Spatially localized features
Edge detection:
- Edges: locations where the brightness undergoes a sharp change
- Differentiate the image once or twice and look for places where the magnitude of the derivative is large
- Noise makes this unstable, so filtering / smoothing is required before edge detection
- Optimal edge detection: Canny (see above)
Gradient edge detectors:
- Simpler, discrete kernel operators => faster (real-time behavior), e.g.
  - Roberts: 2x2 kernels, two diagonal directions
  - Prewitt / Sobel: 3x3 kernels, separate matrices for the row and column directions
- Dynamic thresholding (see above)
Hough transform (straight edge extraction):
- All pixels that are part of a single straight line through the image must lie on a line defined by the same values of m and b, with the general line definition y = m·x + b
- The Hough transform uses this basic property, creating a mechanism by which each pixel can vote for various values of the (m, b) parameters; the lines with the most votes at the end are the straight edge features
Floor plane extraction (see above)

4.3 Point feature extraction: Harris versus SIFT features, basic concept and comparison (lecture notes)
Harris corner detector:
- A small window is shifted over the image; a corner is where a large change in intensity occurs in two directions
- Three possibilities can occur when shifting the window:
  - flat region: no change in any direction
  - edge: no change along the edge direction
  - corner: significant change in all directions
- M is a 2x2 matrix computed from the image derivatives:
  M = Σ_{x,y} w(x, y) · [ I_x²  I_x·I_y ; I_x·I_y  I_y² ]
- The intensity change for a shifting window is analyzed via the eigenvalues of M

- E(u, v) ≈ [u, v] · M · [u, v]^T
- Measure of corner response: R = det(M) − k·(trace(M))², with det(M) = λ1·λ2 and trace(M) = λ1 + λ2
  (k = empirical constant, typically 0.04–0.06)
- R depends only on the eigenvalues of M: R is large for a corner, negative with large magnitude for an edge, and small for a flat region
The algorithm:
- Find points with a large corner response R (R > threshold)
- Take the points of local maxima of R
Some properties:
- Rotation invariance (the ellipse rotates, but its shape, i.e. the eigenvalues, remains the same) => the corner response R is invariant to image rotation
- R is NOT invariant to image scale! (it drops drastically for scale changes > 1.5)
  => Solution: design a function on the region that is scale invariant
- Functions for scale-invariant detection:
  - Laplacian: L = σ²·(G_xx(x, y, σ) + G_yy(x, y, σ))
  - Difference of Gaussians: DoG = G(x, y, kσ) − G(x, y, σ)
  Both kernels are invariant to scale and rotation
- Harris-Laplacian: find the local maxima of the Harris corner detector in space and of the Laplacian in scale
- SIFT: find the local maxima of the DoG in space and scale

SIFT (Lowe):
- The Scale Invariant Feature Transform is an approach for detecting and extracting local feature descriptors that are reasonably invariant to changes in rotation, scaling, small changes in viewpoint, illumination and image noise
- The image content is transformed into local feature coordinates that are invariant to translation, rotation, scale and other imaging parameters
Detection stages:
- Scale-space extrema detection:
  - Convolution of the image with Gaussian filters at different scales, and generation of DoG images from the differences of adjacent blurred images
  - SIFT keypoints are identified as local maxima or minima of the DoG images across scales: each pixel in the DoG images is compared to its 8 neighbors at the same scale, plus the 9 corresponding neighbors at each of the neighboring scales; if the pixel is a local maximum or minimum, it is a keypoint
- Keypoint localization:
  - Interpolation of nearby data is used to accurately determine the keypoint position
  - Keypoints with low contrast are removed; responses along edges are eliminated
- Orientation assignment:
  - To determine the keypoint orientation, a gradient orientation histogram is computed in the neighborhood of the keypoint
  - Peaks in the histogram correspond to dominant orientations; if more than one peak is found, a separate feature is assigned to the same point location
  - All properties of the keypoint are measured relative to the keypoint orientation; this provides invariance to rotation
- Generation of keypoint descriptors:
  - A keypoint descriptor is created by first computing the gradient magnitude and orientation at each image sample point in a region around the keypoint location
  - These samples are then accumulated into orientation histograms summarizing subregions, with the length of each histogram entry corresponding to the sum of the gradient magnitudes near that direction within the region

  - The descriptor is formed from a vector containing the values of all the orientation histogram entries
  - This vector is then normalized to enhance invariance to changes in illumination
Advantages of SIFT features:
- Locality: features are local, so they are robust to occlusion and clutter (no prior segmentation needed)
- Distinctiveness: individual features can be matched against a large database of objects
- Quantity: many features can be generated even for small objects
- Efficiency: close to real-time performance

4.3 Visual SLAM with a single camera: constant velocity model, feature tracking and triangulation (lecture notes)
Structure from motion (SFM):
- Take several images of the object to reconstruct
- Features are extracted from all frames and matched among them; all images are processed simultaneously
- Both the camera motion and the 3D structure can be recovered, up to a scale factor, by optimally fusing the overall information
- Robust, but far from real time!
Visual SLAM:
- Ability to estimate the location of the camera after 10 minutes of motion with the same accuracy as was possible after 10 seconds
- Features must be stable, long-term landmarks, not transient (as in SFM)
- The covariance matrix is not diagonal => the uncertainty of any feature affects the position estimate of all other features and of the camera
- An Extended Kalman Filter is used for prediction and observation: it predicts where the camera will be in the next time step (motion model for a smoothly moving camera)
Constant velocity model:
- Unknown intentions, and therefore unknown accelerations, are taken into account by modeling the acceleration as a zero-mean Gaussian process
- By setting the covariance matrix of the noise n to small or large values, we define the smoothness or rapidity of the motion we expect; in practice, 4 m/s² and 6 rad/s² are used
- (A small prediction-step sketch follows at the end of this section.)
Feature tracking:
- By predicting the next camera pose, we can predict where each feature is likely to appear
- At each frame, the features seen at previous steps are searched for in the elliptic region where they are expected to lie according to the motion model (normalized sum of squared differences is used for matching)
- Large ellipses mean that the feature is difficult to predict; the feature inside will therefore provide more information for the camera motion estimation
- Once the features are matched, the entire state of the system is updated according to the EKF
- The number of features that should be constantly visible in the image varies (in practice between 6 and 10) according to the localization accuracy and the available computing power
- If a feature needs to be added, a newly detected feature is only added if it is not expected to disappear from the next image
Triangulation:
- Up to now, tracked features were treated as 2D templates in image space
- Long-term tracking can be improved by approximating the features as locally planar regions on 3D world surfaces
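A minimal sketch of the constant-velocity prediction step used in such a filter. To keep it readable it uses a 1D position/velocity state instead of the full camera state, and the unknown acceleration is injected as zero-mean Gaussian process noise, as described above. The time step and noise level are placeholders (the 4 m/s² value echoes the figure quoted in the notes).

    import numpy as np

    def predict_constant_velocity(x, P, dt, accel_std):
        """EKF-style prediction with a constant-velocity model.
        State x = [position, velocity]; unknown acceleration ~ N(0, accel_std^2)."""
        F = np.array([[1.0, dt],
                      [0.0, 1.0]])            # x' = x + v*dt, v' = v
        G = np.array([[0.5 * dt**2],
                      [dt]])                  # how an acceleration enters the state
        Q = G @ G.T * accel_std**2            # process noise from the random acceleration
        return F @ x, F @ P @ F.T + Q

    x = np.array([0.0, 0.5])                  # 0 m, 0.5 m/s
    P = np.diag([0.01, 0.04])
    for _ in range(3):
        x, P = predict_constant_velocity(x, P, dt=1/30, accel_std=4.0)
    print(x)    # the position drifts forward with the assumed velocity
    print(P)    # the uncertainty grows between measurements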

Basics

5.1 The general schematic for mobile robot localization; implementation (fig. 5.2)
The four building blocks of navigation:
- Perception: the robot must interpret its sensors to extract meaningful data
- Localization: the robot must determine its position in the environment
- Cognition: the robot must decide how to act to achieve its goals
- Motion control: the robot must modulate its motor outputs to achieve the desired trajectory
Five steps for map-based localization:
(1) Prediction based on the previous estimate and odometry
(2) Observation with on-board sensors
(3) Measurement prediction based on the prediction and the map
(4) Matching of observation and map
(5) Estimation => position update (posterior position)
Challenges of localization:
- Knowing the absolute position is not enough; localization on a human scale is relative to the environment
- Planning in the cognition step requires more than only the position as input; perception and motion play an important role
- Sensor noise (from the environment or from the sensor itself) reduces the useful information drastically
- Sensor aliasing (non-uniqueness of sensor readings, mapping of many readings to one state)
- Effector noise (the position update is based on proprioceptive sensors)

5.2.4 Odometric position estimation and error model for a differential drive robot
- The pose of the robot is represented by the vector p = [x y θ]^T
- For a differential-drive robot the position can be estimated, starting from a known position, by integrating the movement
- Basic equation for the odometric position update (differential drive):
  p' = [x' y' θ']^T = p + [ Δs·cos(θ + Δθ/2) ; Δs·sin(θ + Δθ/2) ; Δθ ]
  with Δs = (Δs_r + Δs_l) / 2 and Δθ = (Δs_r − Δs_l) / b
Error model:
- Goal: find the covariance matrix Σ_p'; the initial covariance matrix Σ_p is known at the starting point
- Assumptions: the errors of the individually driven wheels are independent; the variance of the errors is proportional to the absolute value of the traveled distances
- Resulting covariance matrix of the wheel motion:
  Σ_Δ = covar(Δs_r, Δs_l) = [ k_r·|Δs_r|  0 ; 0  k_l·|Δs_l| ]
- k_r, k_l: error constants representing the nondeterministic parameters of the motor drive and the wheel-floor interaction (to be determined and verified experimentally)
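The odometric update equation above as a direct transcription; Δs_r and Δs_l would come from the wheel encoders and b is the wheelbase (all numbers below are made-up example values).

    import math

    def odometry_update(x, y, theta, ds_r, ds_l, b):
        """Differential-drive odometry update:
        p' = p + [ds*cos(theta+dtheta/2), ds*sin(theta+dtheta/2), dtheta],
        with ds = (ds_r + ds_l)/2 and dtheta = (ds_r - ds_l)/b."""
        ds = (ds_r + ds_l) / 2.0
        dtheta = (ds_r - ds_l) / b
        x += ds * math.cos(theta + dtheta / 2.0)
        y += ds * math.sin(theta + dtheta / 2.0)
        theta += dtheta
        return x, y, theta

    # Drive a gentle left-hand arc in 100 small steps (right wheel travels farther)
    pose = (0.0, 0.0, 0.0)
    for _ in range(100):
        pose = odometry_update(*pose, ds_r=0.0110, ds_l=0.0095, b=0.30)
    print(pose)    # expect theta = 0.5 rad and a curved displacement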

- Motion errors are due to imprecise movement caused by wheel deformation, slippage, unequal floor contact, errors in the encoders, ...
- Assuming that p and Δ_rl = (Δs_r, Δs_l) are uncorrelated, and that the derivation of p' can be approximated by a first-order Taylor expansion, the error propagation law gives:
  Σ_p' = ∇_p f · Σ_p · ∇_p f^T + ∇_Δrl f · Σ_Δ · ∇_Δrl f^T
- The two Jacobians F_p = ∇_p f and F_Δrl = ∇_Δrl f can be calculated from the definition of p' above (a numerical sketch follows at the end of this section)
- Note that the uncertainty in y grows much faster than in the direction of movement; this results from the integration of the uncertainty about the robot's orientation
- The ellipses in the figures are the uncertainties in x and y; the orientation uncertainty is not plotted, although its effect can be observed indirectly
- When driving on a curve (curve radius r kept constant), the main axis of the ellipse does NOT remain perpendicular to the direction of movement

5.4 Belief representation (metric, grid, topological; single and multiple hypotheses)
- The fundamental issue that differentiates the various map-based localization systems is the representation
- The robot must have a representation (a model) of the environment, i.e. a map
- The robot must also have a representation of its belief regarding its position on that map
- Four examples of belief representation (continuous / discrete):
  - continuous map with a single hypothesis
  - continuous map with multiple hypotheses
  - discretized map with a probability distribution
  - discretized topological map with a probability distribution
Single hypothesis:
- The most direct possible postulation of the mobile robot position
- Given a unique belief, there is no position ambiguity; the robot can simply assume that its belief is correct and select its future actions based on its unique position
- The challenge is the position update needed to maintain a single hypothesis (often impossible)
- Four examples of single-hypothesis positions using different map representations: a real map with walls, doors and furniture; a line-based map; an occupancy-grid-based map; a topological map using line features (Z/S lines) and doors
Multiple hypotheses:
- The robot tracks a set of possible positions
- Key advantage: the robot can explicitly maintain uncertainty regarding its position
- The belief can even be updated with only partially available sensor data
- The robot can also measure its degree of position uncertainty => e.g. it may choose paths that minimize its future position uncertainty
- Disadvantages of multiple-hypothesis approaches: decision-making is harder, and it can be computationally very expensive
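A numerical sketch of the covariance propagation Σ_p' = F_p·Σ_p·F_p^T + F_Δrl·Σ_Δ·F_Δrl^T. The two Jacobians are derived by hand from the update equation of the previous section, so they should be double-checked against the script, and the error constants are arbitrary example values.

    import numpy as np

    def propagate_odometry_cov(Sigma_p, theta, ds_r, ds_l, b, k_r, k_l):
        ds, dth = (ds_r + ds_l) / 2.0, (ds_r - ds_l) / b
        c, s = np.cos(theta + dth / 2.0), np.sin(theta + dth / 2.0)

        F_p = np.array([[1.0, 0.0, -ds * s],          # d p' / d p
                        [0.0, 1.0,  ds * c],
                        [0.0, 0.0,  1.0]])
        F_d = np.array([[0.5 * c - ds / (2 * b) * s, 0.5 * c + ds / (2 * b) * s],
                        [0.5 * s + ds / (2 * b) * c, 0.5 * s - ds / (2 * b) * c],
                        [1.0 / b,                   -1.0 / b]])   # d p' / d (ds_r, ds_l)
        Sigma_d = np.diag([k_r * abs(ds_r), k_l * abs(ds_l)])     # wheel-error covariance
        return F_p @ Sigma_p @ F_p.T + F_d @ Sigma_d @ F_d.T

    Sigma = np.zeros((3, 3))            # start perfectly known
    for _ in range(50):                 # straight drive, 1 cm per wheel per step
        Sigma = propagate_odometry_cov(Sigma, 0.0, 0.01, 0.01, b=0.3, k_r=1e-3, k_l=1e-3)
    print(np.sqrt(np.diag(Sigma)))      # sigma_y grows faster than sigma_x: the heading
                                        # uncertainty feeds into the lateral direction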

5.5 Map representation: continuous representations, decomposition into cells
Three fundamental relationships must be understood when choosing a particular map representation:
- The precision of the map must appropriately match the precision with which the robot needs to achieve its goals
- The precision of the map and the type of features represented must match the precision and data types returned by the robot's sensors
- The complexity of the map representation has a direct impact on the computational complexity of reasoning about mapping, localization and navigation
Continuous representations:
- Used only in 2D; higher dimensionality can result in computational explosion
- The total storage needed is proportional to the density of objects in the environment => closed-world assumption
- Features are often represented as polygons or as infinite lines (best line fit of many laser readings)
- Extremely high accuracy is possible
- The map representation may be computationally costly => represent only important features
Decomposition strategies:
- Capture only the relevant, useful features and discard all the others
- Disadvantage: loss of fidelity between the map and the real world (qualitative: overall structure / quantitative: geometric precision)
- Advantage: the map representation can be minimized, and reasoning and planning can be much faster than on a detailed map
Exact cell decomposition:
- Decomposition by selecting boundaries between discrete cells based on geometric criticality
- Can be extremely compact; grows in size with the number of objects
- Underlying assumption: the particular position of the robot within each area of free space does not matter; what matters is the robot's ability to traverse from each area of free space to the adjacent areas
Fixed decomposition:
- The world is tessellated, transforming the continuous real environment into a discrete approximation for the map
- Very popular approach in mobile robotics
- Disadvantage: because of its inexact nature, narrow passages might get lost in the transformation
Adaptive cell decomposition:
- The rectangle bounding the free space is split into 4 rectangles
- If a rectangle is completely free or completely occupied, nothing more is done; if it is partly free / occupied, it is split again into 4 rectangles
- This is repeated until a predefined resolution is attained
Occupancy grid:
- The environment is represented by a discrete grid; each cell is either filled or empty
- Each cell has a counter that is increased when the cell is hit; if a cell is marked as an obstacle and a beam travels through it, the counter is reset
- Disadvantages: the size of the map grows with the size of the environment; not compatible with the closed-world assumption, because memory is used for every cell
Topological decomposition:
- A graph that specifies nodes and the connectivity between those nodes
- Nodes document an area based on any sensor discriminant such that the robot can recognize entering and exiting the node (e.g. a visual fingerprint)
- Advantage: the environment may contain important non-geometric features that have no ranging relevance but are useful for localization

Probability theory applied to robot localization: the Bayesian update formula
- p(A): prior probability of A (the probability of A being true, independent of any additional knowledge)
  e.g. p(r_t = l): prior probability that robot r is at position l at time t
- p(A | B): conditional probability of A given that we know B

  e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t
- Bayes formula: p(A | B) = p(B | A)·p(A) / p(B), or for localization p(l | i) = p(i | l)·p(l) / p(i)
  => compute the robot's new belief state as a function of its sensory inputs and its former belief state
- p(i | l): the key term of the equation; the probability of a sensor input at each robot position must be computed using some sensor model
- p(l): easy to recover; simply the probability p(r = l) associated with the belief state before the perceptual update
- p(i) does not depend on l => constant => often dropped
- To compute the new probability of position l in the new belief state, one must integrate over all possible ways in which the robot may have reached l, according to the potential positions expressed in the former belief state
- Update equation: p(l_t | o_t) = ∫ p(l_t | l'_{t-1}, o_t) · p(l'_{t-1}) dl'_{t-1}
  The total probability for a specific position l is built up from the individual contributions of every location l' in the former belief state, given the encoder measurement o
- This incorporates the Markov assumption: the output is a function of the robot's previous state only!

Probabilistic map-based localization: action and perception update; Markov versus Kalman filter
Action update:
- Application of an action model Act to the mobile robot's proprioceptive encoder measurements and its prior belief state, yielding a new belief state representing the robot's belief about its current position
- The action update contributes uncertainty to the robot's belief about its position, because of encoder error
Perception update:
- Application of a perception model See to the mobile robot's exteroceptive sensor inputs and the updated belief state, yielding a refined belief state for the robot's current position
- The perception update generally refines the belief state: sensor measurements tend to provide clues regarding the robot's possible position
Markov localization:
- The robot's belief state is usually represented as separate probability assignments for every possible robot pose in its map
- The action and perception updates must update the probability of every cell
- Allows localization starting from any unknown position => can recover from ambiguous situations
- Requires a discrete representation of the space (e.g. a geometric grid or a topological graph)
Kalman filter localization:
- Represents the robot's belief state by a single, well-defined Gaussian probability density function, and thus retains just a μ and σ parametrization of the robot's belief about its position with respect to the map
- Only the two parameters of the Gaussian distribution have to be updated
- Tracks the robot from an initially known position and is inherently precise and efficient; however, if the uncertainty becomes too large, the robot gets lost

Case study I: Markov localization using a topological map
- Markov localization is especially suited when the robot's environment is available in topological form; the topological description contains only information about the connectivity between hallways and rooms, it does not give any information about the geometry of the environment
- Dervish, the winner of the AAAI contest, used probabilistic Markov localization with a multiple-hypothesis belief state; the robot was equipped with sonars
- Because Dervish only had a topological representation of its environment, its perceptual system was designed to detect matching perceptual events: the detection and passage of connections between hallways and offices.
- The topological map consists of nodes that represent the rooms and hallways; for a topological representation the key issue is the assignment of nodes and of the connectivity between nodes
- Dervish associates with each topological node n a probability that the robot is at a physical position within the boundaries of n: p(n)
Dervish's certainty matrix:
- Each perceptual event consists of a percept pair (a feature on one side of the robot, or two features on both sides)
- Given a specific percept pair i from the certainty matrix, the likelihood of each possible position n is updated with: p(n | i) = p(i | n)·p(n)

- The value of p(n) is already available from the current belief state of Dervish, so it remains to calculate p(i | n)
- Dervish's feature extraction system only detects four total features, and a node contains (on a single side) one of five total features, so every possible combination of node type and extracted feature can be represented in the certainty matrix; for example, the probability that Dervish interprets an open hallway as an open door is 0.10
- When the robot detects an event, multiple perception update steps need to be performed to update the likelihood of every possible robot position given Dervish's former belief state: there is a chance that the robot has traveled past multiple topological nodes since its previous perceptual event (false-negative errors)
- Therefore the likelihood of position n given perceptual event i is calculated with:
  p(n_t | i_t) = ∫ p(n_t | n'_{t-i}, i_t) · p(n'_{t-i}) dn'_{t-i}
  where p(n'_{t-i}) denotes the likelihood of Dervish being at position n' as represented by its former belief state
- See the example in the script / book

Case study II: Markov localization using a grid map; particle filter
- The environment is represented by a grid, which allows a higher resolution than topological maps; here the robot uses a 2D geometric representation of free and occupied space
- The belief state representation of the robot Rhino, for example, consists of a 15 x 15 x 15 array, i.e. 15³ possible robot positions
- Dervish only used perceptual events and ignored the encoder inputs; Rhino makes use of an explicit action update phase and a perception update phase
- Action update: given encoder measurements o at time t, each position probability in the belief state is expressed as a sum over the previous possible positions and the motion model:
  P(l_t | o_t) = Σ_{l'} P(l_t | l'_{t-1}, o_t) · p(l'_{t-1})
- Perception update: follows the Bayes formula; given a range perception i, the probability of the robot being at each location l is updated with:
  p(l | i) = p(i | l)·p(l) / p(i)
- The calculation of p(i | l) is not trivial: because of Rhino's fine-grained metric representation, the number of possible sensor readings and environmental geometric contexts is extremely large; a sensor model must calculate the probability of a specific perceptual measurement using three key assumptions:
  (1) If an object is detected, the measurement error can be described by a distribution with its mean at the correct reading
  (2) There should always be a nonzero chance that a range sensor reads any measurement value, even if it disagrees sharply with the environmental geometry
  (3) There is a specific failure mode of ranging sensors in which the signal is absorbed or coherently reflected, causing the sensor's range measurement to be maximal; this causes a local peak in the probability density at the maximal reading of the range sensor
- The robot starts with a flat probability density over all possible positions, i.e. no bias; when the robot encounters first one door and then a second door, the probability density function becomes multimodal and finally unimodal and sharply defined
- The ability of a Markov localization system to localize the robot from an initially lost belief state is its key distinguishing feature.
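A toy version of the grid-based Markov localization loop described above, on a 1D corridor of cells: the action update shifts and blurs the belief with a simple motion model, the perception update multiplies by p(i | l) and renormalizes. The door map, motion model and sensor probabilities are all invented for the example.

    import numpy as np

    doors = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])    # 1 where the map contains a door
    belief = np.full(len(doors), 1.0 / len(doors))      # flat prior: the robot is lost

    def action_update(bel, p_move=0.8):
        """Robot commanded one cell to the right; with prob. 1 - p_move it stays put."""
        moved = np.roll(bel, 1)
        moved[0] = 0.0                                  # no wrap-around
        new = p_move * moved + (1 - p_move) * bel
        return new / new.sum()

    def perception_update(bel, saw_door, p_hit=0.7, p_false=0.1):
        """Multiply by p(i | l) from a very small sensor model, then normalize."""
        if saw_door:
            likelihood = np.where(doors == 1, p_hit, p_false)
        else:
            likelihood = np.where(doors == 1, 1 - p_hit, 1 - p_false)
        new = likelihood * bel
        return new / new.sum()

    for observation in (True, False, False, True):      # drive right while sensing doors
        belief = perception_update(action_update(belief), observation)
    print(np.round(belief, 3))   # probability mass concentrates on cells consistent with the map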
- Reducing the computational complexity: randomized sampling (particle filter, condensation algorithm, Monte Carlo algorithms)

General scheme of Kalman filter localization: static problem, fusion of the probability densities of two estimates
- The sensor fusion problem is key to robust localization: it incorporates all information, regardless of precision, to estimate the current value of the variable of interest (i.e. the position)
Static estimation:
- Suppose the robot has two sensors, both providing a measurement of the same variable but with different error characteristics; we wish to combine the information provided by the two sensors, recognizing that such sensor fusion, when done in a principled way, can only result in an information gain
- Assume two measurements have been taken; as a simplified way of characterizing the error associated with each estimate, we presume a (unimodal) Gaussian probability density curve and thereby associate one variance with each measurement
- In summary, this yields two robot position estimates:
  q̂1 = q1 with variance σ1²
  q̂2 = q2 with variance σ2²

- Apply the weighted least squares technique to get the best estimate q̂:
  S = Σ_{i=1..n} w_i·(q̂ − q_i)²   with w_i: weight of measurement i
- To find the minimum error we set the derivative of S to zero:
  ∂S/∂q̂ = Σ_i 2·w_i·(q̂ − q_i) = 0   =>   q̂ = Σ_i w_i·q_i / Σ_i w_i
- If we take the weights w_i = 1/σ_i², then q̂ in terms of the two measurements is:
  q̂ = (σ2²·q1 + σ1²·q2) / (σ1² + σ2²)   or equivalently   q̂ = q1 + σ1²/(σ1² + σ2²)·(q2 − q1)
- With 1/σ² = 1/σ1² + 1/σ2²  =>  σ² = σ1²·σ2² / (σ1² + σ2²), we see that the resulting variance is less than each of the variances of the individual measurements (a short code version of this fusion follows at the end of this section)
- Final form used in the Kalman filter implementation:
  x̂_{k+1} = x̂_k + K_{k+1}·(z_{k+1} − x̂_k)
  with the updated variance of the state  σ²_{k+1} = σ²_k − K_{k+1}·σ²_k,  where  K_{k+1} = σ²_k / (σ²_k + σ²_z)
Dynamic estimation:
- Suppose that the motion of the robot between times k and k+1 is described by a velocity u plus a noise term w
- The optimal estimate at time k+1 is then given by the last estimate at k and the estimate of the robot motion, including the estimated movement errors

Kalman filter applied to mobile robots
Five steps of map-based localization:
1. Robot position prediction:
- The robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k)
2. Observation:
- Obtain the observations Z(k+1) (measurements) from the robot's sensors at the new location at time k+1
- Observations can be raw data scans as well as features such as lines, doors or other landmarks
- The parameters of the targets are usually observed in the sensor frame {S}, so the observations have to be transformed into the world frame {W}, or the measurement predictions into {S}
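The weighted-least-squares fusion above in a few lines of code; note that the fused variance is always smaller than either input variance. The measurement values are examples.

    def fuse(q1, var1, q2, var2):
        """Optimally combine two estimates of the same quantity (static Kalman step):
        q_hat = (var2*q1 + var1*q2)/(var1+var2),  var = var1*var2/(var1+var2)."""
        q_hat = (var2 * q1 + var1 * q2) / (var1 + var2)
        var = var1 * var2 / (var1 + var2)
        return q_hat, var

    # Two range measurements of the same wall: 2.00 m +/- 0.05 and 2.10 m +/- 0.10 (std dev)
    print(fuse(2.00, 0.05**2, 2.10, 0.10**2))   # -> (2.02, 0.002): pulled toward the better sensor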

3. Measurement prediction:
- Use the predicted robot position p̂(k+1|k) and the map M(k) to generate multiple predicted feature observations ẑ_t
- Transform them into the sensor frame (the functions h_i are the coordinate transformations between {W} and {S})
4. Matching:
- Assignment of the observations z_j(k+1) (gained by the sensors) to the predicted targets ẑ_t
- Identify all single observations that match specific predicted features well enough to be used in the estimation process
- For each measurement prediction for which a corresponding observation is found, calculate the innovation
- The Mahalanobis distance is used to find pairs of predicted and observed features; if multiple matches occur, the best matching candidate is selected; observations that do not match any prediction are ignored (a small code sketch of this gating step follows at the end of this section)
5. Estimation:
- Applying the Kalman filter: compute the best estimate p̂(k+1|k+1) of the robot's position based on the position prediction and all the observations at time k+1
- By fusing the prediction of the robot position (magenta in the script figures) with the innovation gained from the measurements (green), we get the updated estimate of the robot position (red)

5.8 General scheme of map building, problems with map building, basic idea of the stochastic map
- Goal: starting from an arbitrary initial point, a mobile robot should be able to autonomously explore the environment with its on-board sensors, gain knowledge about it, interpret the scene, build an appropriate map and localize itself relative to this map => the SLAM (simultaneous localization and mapping) problem
- Problem: if a mobile robot updates its position based on an observation of an imprecisely known feature, the resulting position estimate becomes imprecise; similarly, the map is updated with features observed from an imprecisely known position; changes in the environment are also hard to handle
- The only path to a complete and optimal solution of this joint problem is to consider all the correlations between the position estimate and the feature location estimates => stochastic maps
- Two main problems: map maintenance (keeping track of changes in the environment) and the representation and reduction of uncertainty
Stochastic map technique:
- Each feature is represented by a covariance matrix Σ_t and an associated credibility factor c_t
- c_t lies between 0 and 1 and quantifies the belief in the existence of the feature in the environment
- The robot uses the Kalman filter procedure to estimate and update its position; an extended loop is added to the matching algorithm
- The matching step has 3 possible outcomes: matched prediction and observation, unexpected observation, unobserved prediction
- Unexpected observations lead to the creation of new features in the map; unobserved measurement predictions lead to the removal of features from the map
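A sketch of the matching (gating) step: for each predicted/observed feature pair, compute the innovation and its Mahalanobis distance, and accept the pair only if the distance lies below a gate value. The innovation covariance is taken as given here (in the full filter it comes from the measurement Jacobian and the robot/map uncertainty), and the chi-square gate value is an assumption.

    import numpy as np

    def mahalanobis2(innovation, S):
        """Squared Mahalanobis distance of an innovation with covariance S."""
        v = np.asarray(innovation, dtype=float)
        return float(v @ np.linalg.solve(S, v))

    def match(z_predicted, z_observed, S, gate=9.21):
        """Pair each observation with the best predicted feature inside the gate.
        gate=9.21 is the chi-square 99% bound for 2-D features (an assumption)."""
        pairs = []
        for j, z in enumerate(z_observed):
            d2 = [mahalanobis2(z - zp, S) for zp in z_predicted]
            i = int(np.argmin(d2))
            if d2[i] < gate:
                pairs.append((i, j, d2[i]))    # (predicted idx, observed idx, distance^2)
        return pairs

    z_pred = [np.array([1.0, 0.10]), np.array([3.0, -0.50])]   # e.g. line features (r, alpha)
    z_obs  = [np.array([1.05, 0.12]), np.array([9.0, 1.00])]   # the second one has no match
    S = np.diag([0.05**2, 0.02**2])
    print(match(z_pred, z_obs, S))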

6. Global Path Planning: graph search strategies - NF1, breadth-first search, depth-first search, greedy search and A*
Global path planning (representation of the environment):
- Road map (graph): identify a set of routes within the free space
- Cells: discriminate between free and occupied cells
- Potential field: impose a mathematical function over the space
Algorithms to search a graph for a path:
- Wavefront expansion NF1 (grassfire): a wave front propagates from the goal, marking each cell with its distance to the goal; once it has reached the start point, the path follows the cells with the lowest numbers
- Breadth-first search: examines all unexamined successors of a node (on the same level) until it finds the goal
- Depth-first search: examines the unexamined successors of a node by going down one branch as deep as possible
- Greedy search: examines the unexamined successors of a node using a priority queue (cost estimate of the cheapest path from the state at node n to the goal)
- A*: examines the unexamined successors of a node using a heuristic function (a special case of greedy search); e.g. nodes connecting start and goal almost linearly are examined first
  The heuristic cost function f(n) is the sum of the path cost g(n) to get from the start to node n and the straight-line distance h(n) from node n to the goal
  (a compact A* sketch follows at the end of this section)

Road-map planning: visibility graph, Voronoi diagram
- Road-map path planning: connect the free space with "roads" and find the shortest connection to the goal by using those roads
Visibility graph:
- Connect all corners, the start and the end point of the (polygonal) obstacles with straight lines wherever nothing is in between; find the shortest path along those lines
- Pro: shortest path, fast, most of it can be done offline
- Con: the number of nodes increases quickly in more complex environments; the path passes very close to the obstacles (=> grow the obstacles by more than the robot's radius)
Voronoi diagram:
- Stay as far away as possible from the obstacles by maximizing the sensor readings
- Can be used to conduct automatic mapping of an environment by finding and moving on unknown Voronoi edges, then constructing a consistent Voronoi map of the environment
- Pro: safety; easy to execute (simple control rules)
- Con: no optimal (shortest) solution; not executable with short-range sensors

Main methods for path planning: cell decomposition, potential field
Cell decomposition:
- Divide the area into simple, connected regions called cells
- Determine which open cells are adjacent and construct a connectivity graph
- Find the cells in which the initial and goal configurations lie, and search for a path in the connectivity graph joining the initial and goal cells
- Compute a plan within each cell (e.g. passing through the midpoints of the cell boundaries, or by wall-following, etc.)
Exact cell decomposition:
- The boundaries are placed as a function of the structure of the environment, such that the decomposition is lossless
- Disadvantage: the number of cells depends on the density and complexity of the objects in the environment
- Rarely used in mobile robotics
Approximate cell decomposition:
- The decomposition results in an approximation of the actual map
- Grid-based decomposition with fixed grid size: NF1 (grassfire) for path planning; easy, applicable to many algorithms, independent of the environment complexity; the cost is the required memory
- Adaptive cell decomposition (variable size): a rectangle is recursively decomposed into smaller rectangles where needed; adapts to the complexity of the environment and requires less memory
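A compact A* sketch on a 4-connected occupancy grid, with unit step costs for g(n) and the straight-line distance as the heuristic h(n); the grid itself is a made-up example.

    import heapq, math

    def astar(grid, start, goal):
        """A* on a 4-connected grid; grid[r][c] == 1 means an occupied cell.
        f(n) = g(n) + h(n), with h(n) = straight-line distance to the goal."""
        h = lambda p: math.dist(p, goal)
        g_cost = {start: 0.0}
        came_from = {start: None}
        open_set = [(h(start), start)]
        while open_set:
            _, node = heapq.heappop(open_set)
            if node == goal:                       # reconstruct the path
                path = [node]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            r, c = node
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if not (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])):
                    continue
                if grid[nb[0]][nb[1]] == 1:
                    continue
                ng = g_cost[node] + 1.0            # unit cost per grid step
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), nb))
        return None                                # goal unreachable

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(astar(grid, (0, 0), (3, 3)))

Replacing h(n) by 0 turns this into breadth-first-style uniform-cost search; using only h(n) for the priority gives the greedy search described above.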

Potential field:
- Creates a field or gradient across the robot's map that directs the robot to the goal position
- The robot is treated as a point mass; it is attracted by the goal while being repulsed by the obstacles that are known in advance
- This is more than just path planning: the resulting field is also a control law; the robot can always determine its next required action based on the field

Potential field path planning: concept and example
- Generate a potential field function U(q) from attracting and repulsing fields; the fields are summed up and the functions must be differentiable
- Generate the artificial force field F(q):  F(q) = −∇U(q) = −∇U_att(q) − ∇U_rep(q)
- The robot speed is set proportional to the force F(q)
Attractive potential field:
- Parabolic function of the Euclidean distance to the goal:
  U_att(q) = ½·k_att·ρ²_goal(q) = ½·k_att·||q − q_goal||²
- The attracting force converges linearly towards 0 at the goal:
  F_att(q) = −∇U_att(q) = −k_att·(q − q_goal)
Repulsive potential field:
- Should generate a barrier around all obstacles (strong if close, no influence if far):
  U_rep(q) = ½·k_rep·(1/ρ(q) − 1/ρ0)²   if ρ(q) ≤ ρ0,   and 0 if ρ(q) ≥ ρ0
  F_rep(q) = −∇U_rep(q) = k_rep·(1/ρ(q) − 1/ρ0)·(1/ρ²(q))·(q − q_obst)/ρ(q)   if ρ(q) ≤ ρ0,   and 0 if ρ(q) ≥ ρ0
  (ρ(q): distance to the obstacle, ρ0: influence distance of the obstacle)
- The local minima problem exists
Extended potential field method:
- Introduces a rotation potential field and a task potential field
- Rotation potential field: the force is also a function of the robot's orientation relative to the obstacles (e.g. when driving parallel to a wall the repulsive force is smaller, because there is no risk of hitting the wall)
- Task potential field: filters out the obstacles that should not influence the robot's movements (i.e. only obstacles in front of the robot are considered)
Harmonic potentials:
- The movement is similar to that of fluid particles (hydrodynamic analogy)
- Ensures that there are no local minima, but is complicated
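A numeric sketch of the attractive/repulsive field defined above for a point robot in the plane, followed by a simple descent along the force; gains, influence distance ρ0, obstacle positions and the step-size cap are arbitrary example values.

    import numpy as np

    K_ATT, K_REP, RHO_0 = 1.0, 0.5, 1.0        # gains and obstacle influence distance

    def force(q, q_goal, obstacles):
        """F(q) = F_att(q) + F_rep(q) for a point robot at q (2D numpy arrays)."""
        f = -K_ATT * (q - q_goal)              # attractive part: -grad U_att
        for q_obs in obstacles:
            diff = q - q_obs
            rho = np.linalg.norm(diff)
            if 0.0 < rho <= RHO_0:             # repulsion only inside the influence zone
                f += K_REP * (1.0 / rho - 1.0 / RHO_0) * (1.0 / rho**2) * (diff / rho)
        return f

    q_goal = np.array([5.0, 5.0])
    obstacles = [np.array([2.0, 2.2]), np.array([3.5, 4.0])]

    q = np.array([0.0, 0.0])
    for _ in range(300):                       # follow the field with small steps
        step = 0.05 * force(q, q_goal, obstacles)
        n = np.linalg.norm(step)
        if n > 0.2:                            # cap the step so a force spike near an
            step = step * (0.2 / n)            # obstacle cannot throw the robot far
        q = q + step
    print(q)    # ends near the goal unless it got stuck in a local minimum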

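Since the VFH cost function is given explicitly, a stripped-down version is easy to sketch. This ignores the certainty-grid update and histogram smoothing of the real algorithm, treats the three cost terms as angular differences, and the sector count, threshold and weights a, b, c are assumptions for illustration.

```python
import numpy as np

def polar_histogram(points, n_sectors=72):
    """Build a polar obstacle-density histogram from obstacle points (x, y)
    given in the robot frame; closer points contribute more (simple 1/d weighting)."""
    hist = np.zeros(n_sectors)
    for x, y in points:
        angle = np.arctan2(y, x) % (2 * np.pi)
        sector = int(angle / (2 * np.pi) * n_sectors) % n_sectors
        hist[sector] += 1.0 / max(np.hypot(x, y), 1e-3)
    return hist

def angle_diff(a, b):
    """Smallest absolute difference between two angles (radians)."""
    return abs((a - b + np.pi) % (2 * np.pi) - np.pi)

def select_direction(hist, target_dir, wheel_dir, prev_dir,
                     threshold=0.5, a=3.0, b=1.0, c=1.0):
    """Pick the free sector minimising
    G = a*|target - candidate| + b*|wheel - candidate| + c*|previous - candidate|."""
    n = len(hist)
    candidates = [2 * np.pi * (i + 0.5) / n for i in range(n) if hist[i] < threshold]
    if not candidates:
        return None                      # no opening below the obstacle threshold
    return min(candidates,
               key=lambda th: a * angle_diff(th, target_dir)
                            + b * angle_diff(th, wheel_dir)
                            + c * angle_diff(th, prev_dir))
```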
Bubble band concept:
Bubble = maximum free space which can be reached without any risk of collision, generated using the distance to the object and a simplified model of the environment
Bubbles are used to form a band of bubbles which connects the start point with the goal point
The global path is planned before moving (global map and path planner required)
If an unexpected obstacle is encountered, the robot's motion can be planned with knowledge of the free space and by minimizing the bubble-band tension, i.e. the robot's deflection (the closer the robot sticks to the original path, the better)
Basic curvature velocity method (CVM):
Takes the actual kinematic and dynamic constraints of the robot into account
Obstacles and robot are put into the velocity space (ω, v); the robot is expected to travel on arcs with curvature c = ω / v
Obstacles are modeled as circles for performance reasons, and they exclude some possible curvatures
Limitation: obstacles as circles, travel only on arcs
Lane curvature velocity method (LCM):
Not only arcs are considered: lanes are calculated and the one with the best properties is chosen
Better performance in narrow areas (corridors, passing doors)
Local dynamic window approach:
The kinematics of the robot is taken into account by searching a well-chosen velocity space; the robot moves only on arcs, at least for one time step
Around the actual velocities, a window of velocities achievable in the next time step is searched for optimal parameters; this dynamic window is then reduced, keeping only those velocity tuples that ensure the vehicle can come to a stop before hitting an obstacle
The window is rectangular, due to independent translation / rotation capabilities
The new motion is chosen using an objective function (prefers fast forward motion, maintenance of large distances to obstacles and alignment to the goal heading); a minimal sketch follows at the end of this page's notes:
O = a * heading(v, ω) + b * velocity(v, ω) + c * dist(v, ω)
Global dynamic window approach:
Adds NF1 (grassfire) to the objective function O; NF1 is only calculated on a selected rectangular region directed from the robot toward the goal
Schlegel approach:
Variation of the dynamic window approach: Cartesian grid, motion on circular arcs; takes the robot shape into account; calculates the distance to collision between single obstacle points and the robot; real-time achieved by a pre-calculated lookup table
ASL approach:
Global path generated in advance
Local path-planning (NF1) + obstacle avoidance (dynamic window) + bubble band (smooth trajectories)
The resulting path is converted to an elastic band (not taking kinematics into account, robot can turn on the spot); an enhanced dynamic window (considering the shape of the robot) takes care of moving the robot along the path; real-time because only the maximum speed is calculated
Nearness diagram: improvement of VFH
Gradient method: fast recalculation of the wavefront propagation / takes into account closeness to obstacles / allows generating continuous interpolations of the gradient direction at any given point of the grid
Adding dynamic constraints: use of the ego-dynamic space / transforms obstacles into distances that depend on the braking constraints and sampling time of the underlying obstacle avoidance method
Behavior based: difficult to introduce a precise task / reachability of the goal not provable
Fuzzy / Neuro-Fuzzy: learning required / difficult to generalize
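The dynamic window objective O = a*heading + b*velocity + c*dist is concrete enough for a short sketch. The window limits, the weights, the normalisation of the three terms and the two callbacks dist_fn / heading_fn are assumptions made for this illustration; a real implementation would evaluate the candidate arcs against an obstacle map.

```python
import numpy as np

def dynamic_window(v, w, dt, a_max, alpha_max, v_max, w_max):
    """Velocities reachable within one time step given acceleration limits."""
    return (max(0.0, v - a_max * dt), min(v_max, v + a_max * dt),
            max(-w_max, w - alpha_max * dt), min(w_max, w + alpha_max * dt))

def admissible(v, w, dist, a_max, alpha_max):
    """Keep only velocity pairs that allow stopping before the nearest obstacle."""
    return v <= np.sqrt(2 * dist * a_max) and abs(w) <= np.sqrt(2 * dist * alpha_max)

def choose_velocity(v, w, dt, dist_fn, heading_fn,
                    a=0.8, b=0.1, c=0.1, a_max=0.5, alpha_max=1.0,
                    v_max=1.0, w_max=2.0, samples=11):
    """Maximise O = a*heading(v,w) + b*velocity(v,w) + c*dist(v,w) over the window.
    dist_fn(v, w) -> distance to the closest obstacle along the arc,
    heading_fn(v, w) -> alignment with the goal heading in [0, 1]."""
    v_lo, v_hi, w_lo, w_hi = dynamic_window(v, w, dt, a_max, alpha_max, v_max, w_max)
    best, best_score = (0.0, 0.0), -np.inf      # stop if nothing admissible
    for v_c in np.linspace(v_lo, v_hi, samples):
        for w_c in np.linspace(w_lo, w_hi, samples):
            d = dist_fn(v_c, w_c)
            if not admissible(v_c, w_c, d, a_max, alpha_max):
                continue
            score = a * heading_fn(v_c, w_c) + b * (v_c / v_max) + c * min(d, 1.0)
            if score > best_score:
                best, best_score = (v_c, w_c), score
    return best
```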
6.3 Navigation Architecture: Temporal / control decompositions, examples
Keep software modular!
During a project the robot hardware or the environment can change drastically
Testing the software exhaustively in simulation before using it on the physical hardware becomes possible
Decompositions: identify axes along which we can justify the discrimination of robot software into distinct modules

Temporal decomposition:
Distinguishes between processes that have varying real-time and non-real-time demands; tasks are decomposed according to their temporal requirements
Sensor response time: time for a module to change its output given a changing input
Temporal depth: classifies processes by their amount of looking back and looking ahead
Spatial locality: classifies the spatial impact of a module (PID wheel-speed control is very local, whereas path planning is global)
Context specificity: how much context the modules use to produce their output (lowest-level modules tend to produce outputs directly as a result of their sensor inputs)
Control decomposition:
Identifies the way in which each module's output contributes to the overall robot control outputs
The modules are modeled as a number of blocks, somehow linked together and acting on the symbolic module r, which represents the robot and the environment; this r-block has as input the control outputs of the modules and outputs what the robot perceives
Serial decomposition: all modules in a serial configuration; good predictability and verifiability, since the state and outputs of each module depend entirely on the inputs it receives from the module upstream
Parallel decomposition: all modules in parallel; the question is how each module impacts the system (a toy comparison of the two parallel variants follows at the end of this page's notes)
Temporal switching (switched parallel): only one module can act on the system at any one time instant; problem: switching can happen fast => oscillating robot
Mixed parallel: control is shared at any time between multiple modules; problem: combining multiple recommendations mathematically does not guarantee an outcome that is globally superior; problem: verification of robot performance is extremely difficult
Case studies: tiered robot architectures
General tiered architecture:
Based on temporal decomposition
Executive layer: activation of behaviors, failure recognition, re-initializing the planner
Path planning represents the top-level, non-real-time element using all global information. At real time, the PID controller checks the motion. The real-time controller also contains different behaviors that can form a switched or mixed parallel architecture. The executive activates the right behaviors based on the information received from the planner. It also recognizes failure and can re-initiate the planner. It would also contain all tactical decision-making as well as the robot's short-term memory, i.e. the localization and mapping modules. A similar architecture was already designed in 1969 for Shakey!
Two-tiered architecture for off-line planning:
Simplest case: plan first, then execute (no integration at all); very inflexible towards new/unknown environments or unmapped obstacles
Two useful cases:
Static route-based applications: can be used in factory or warehouse settings, which are completely static and known (e.g. a robot following colored lines on the floor, with path planning done beforehand)
Extreme reliability demands: reliability constraints can also make offline planning necessary: online path planning can take a long time (exponential in the complexity of the problem), so robots with tight temporal constraints can, at the expense of memory, have many plans stored (e.g. space shuttle missions with many predetermined conditional plans for every case, so that no time is spent on decision making in space)
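The difference between the switched and the mixed parallel decomposition can be made concrete with a toy example. The (v, ω) commands, module names and weights below are invented purely for illustration; the blended output also hints at why combining recommendations mathematically is not guaranteed to be globally superior.

```python
def switched_parallel(commands, active):
    """Temporal switching: exactly one module acts on the robot at a time."""
    return commands[active]

def mixed_parallel(commands, weights):
    """Mixed parallel: the control output is a weighted blend of all modules.
    commands: dict name -> (v, w); weights: dict name -> non-negative weight (positive sum)."""
    total = sum(weights.values())
    v = sum(weights[name] * cmd[0] for name, cmd in commands.items()) / total
    w = sum(weights[name] * cmd[1] for name, cmd in commands.items()) / total
    return v, w

# Example: obstacle avoidance wants to turn, goal seeking wants to go straight.
commands = {"avoid": (0.1, 0.8), "goal": (0.5, 0.0)}
print(switched_parallel(commands, "avoid"))                   # only one module in control
print(mixed_parallel(commands, {"avoid": 0.7, "goal": 0.3}))  # blended recommendation
```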

Three-tiered episodic planning architecture:
Most popular architecture at the moment, since it can incorporate data gathered on the way; the planner is triggered only when needed (e.g. blockage, failure)
The executive module decides when to replan a path; encountering an obstacle, for example, would be such a trigger; replanning could also be triggered once a certain amount of environmental data has been gathered (a minimal sketch of such a replanning loop follows these notes)
Example: the commercial Cye robot gets a set of goal/way points and replans at each waypoint, incorporating the data gathered on the way to its current location
These systems usually have a global map and a short-term local map
Integrated planning and execution architecture:
Everything is integrated; no temporal decoupling between planner and executive layer, i.e. no more temporal distinction between the planner and the execution module
Very fast planning algorithms are needed, so that planning can be done within one executive cycle
Stentz designed such an architecture, shown to work on a large off-road vehicle travelling at high speeds; he used an optimized grassfire algorithm that can be executed during one basic control-loop cycle
This kind of architecture is optimal, since at every control cycle the robot performs an action planned by a global planner that includes all the information available at that moment
If the environment gets very large, this architecture becomes limited; faster hardware will push this kind of architecture forward
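A minimal sketch of the episodic-planning control loop described above. Every interface here (planner, executive, controller, the robot object and its methods) is hypothetical and only illustrates when each tier runs; it is not the architecture of any specific system mentioned in the notes.

```python
def episodic_navigation(planner, executive, controller, robot, goal):
    """Three-tiered episodic planning: the (slow, global) planner runs only when
    the executive decides a replan is needed; the controller runs every cycle."""
    path = planner(robot.global_map, robot.pose, goal)        # non-real-time tier
    while not robot.at(goal):
        observation = robot.sense()
        robot.local_map.update(observation)                   # short-term memory
        if executive.replan_needed(path, robot.local_map):    # e.g. blockage, enough new data
            path = planner(robot.global_map, robot.pose, goal)
        command = controller(path, robot.pose, observation)   # real-time tier (behaviors / PID)
        robot.apply(command)
```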
