Two laser scanners raw sensory data fusion for objects tracking using Inter-Rays uncertainty and a Fixed Size assumption.


12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009.

Paweł Kmiotek (a,b) and Yassine Ruichek (a)
(a) Systems and Transportation Laboratory, University of Technology of Belfort-Montbéliard, France. pawel.kmiotek@utbm.fr, yassine.ruichek@utbm.fr
(b) Department of Computer Science, AGH University of Science and Technology, Kraków, Poland

Abstract - This paper presents a fusion method for objects tracking using two Laser Range Finders (LRF). The tracking is based on the Extended Kalman filter. Tracked objects are represented by an Oriented Bounding Box (OBB). To improve the objects' state estimation, two paradigms are introduced. The first one concerns the Inter-Rays (IR) uncertainty, which accounts for the fact that the raw data points representing the extremities of an extracted OBB do not coincide with the real object's extremities. The second paradigm, called the Fixed Size assumption, assumes that an object's size does not change during the tracking. It is expressed by the fact that the track representing an object changes its size only depending on the IR uncertainty. The fusion technique benefits from the increased perception angular resolution obtained by using two LRFs, and takes place in the early stage of the measurement extraction from the raw data points. Experimental results are presented to demonstrate the reliability of the two-LRF based fusion method, especially for far objects.

1 INTRODUCTION

In the last decade, many research programs have been launched to study the concept of intelligent vehicles and their integration in the city of the future. In this framework, the Systems and Transportation Laboratory of the University of Technology of Belfort-Montbéliard (France) is working to develop a vehicle having the ability to navigate autonomously in various urban environments.
The research developments are based on an experimental platform consisting of an electrical vehicle with automatic control, equipped with several sensors and communication interfaces. To reach the objective, the first primary task is to develop a perception system for detecting, localising and tracking objects in this type of environment. In this paper, the emphasis is put on the tracking of compact dynamic objects using laser sensory data. Representation of dynamic objects is crucial for tracking and trajectory planning. In the tracking literature, points with elliptical uncertainty are used for representing objects' position [1][2]. This representation is good enough for obstacle detection, collision warning or driving assistance systems in well-structured environments like highways [2][3]. In urban areas, there are fewer constraints on the objects' movements. Thus, for the task of autonomous navigation in demanding urban areas, these representation methods are not sufficient. An Oriented Bounding Box (OBB) [4][5][6] provides a good approximation of the size, shape and orientation angle of dynamic objects, with a good data compression ratio. In general, laser range sensors have a small angular resolution. To overcome this limitation, the authors propose to enrich the OBB model with an Inter-Rays (IR) uncertainty and a Fixed Size (FS) assumption. The IR uncertainty is developed to handle the fact that the raw data points representing the extremities of the extracted OBB do not coincide with the real object's extremities. The idea of the FS assumption is to consider that an object's size does not change during the tracking. The introduction of these two paradigms increases the tracking system reliability through a better estimation of the object's size and center position.
Even with the improvement brought by the IR uncertainty and the FS assumption, a system equipped with a single LRF does not perform well for far objects, because of its limited spatial resolution. One cannot obtain a correct estimation of the object's size, and the other information, such as orientation, speed and position, is very uncertain. A multi-sensor configuration is often used to increase the robustness and accuracy of the environment representation [2][3]. Multiple sensors can provide redundant information, complementary information, or both. By increasing the perception resolution, a two-LRF configuration (complementary aspect) allows obtaining a better estimation of the objects' state at long ranges. The proposed tracking system is based on the Extended Kalman Filter (EKF) with the Discrete White Noise Acceleration Model (DWNA) [11]. There are two main approaches of fusion using the KF: measurements fusion and tracks fusion. In the first approach, one can distinguish two variants: Weighted Measurement Fusion (WMF) and Merge Measurement Fusion (MMF). In the second approach, one can find: Track-To-Track Fusion, Modified Track-To-Track Fusion, and Track-To-Track Fusion with fused predictions [7][11]. In [8], the authors used two LRFs and the WMF method was chosen. This approach, however, takes into account only the redundancy aspect of the two-LRF configuration and, thus, does not perform well for far objects. This paper presents a two-LRF based fusion method, which takes into account the complementary aspect of the multi-sensor configuration in terms of angular resolution. In the context of the KF, the method takes place in the stage of measurement extraction (raw data points fusion).

The paper is organized as follows. Section 2 presents the OBB representation for dynamic objects, with the IR uncertainty paradigm and the FS assumption. The data association technique is described in section 3. The tracking model is explained in section 4. In section 5 the fusion method is presented. Before concluding, experimental results are presented in section 6.

2 OBJECT REPRESENTATION

2.1 OBB based model for object representation

Urban environments are characterised by limited spaces available for navigation, and there are few constraints on the objects' movements. In these conditions, a geometrical representation of dynamic objects is necessary. The oriented bounding box (OBB) is a way of representing the objects' geometry with a sufficient approximation for the purposes of navigation. The OBB based representation is described by two vectors z (1) and σ_z (2). The first one represents the OBB geometry and includes the center coordinates cx, cy, the orientation angle θ and the size dx, dy. The second vector represents the uncertainties on the components of the vector z.

z = [cx, cy, θ, dx, dy]^T (1)

σ_z = [σ_cx, σ_cy, σ_θ, σ_dx, σ_dy]^T (2)

To construct the OBB based measurement, a specific method is used. The OBB construction method consists of the four following main steps. The first step is to find a contour of the tracked objects using a semi convex-hull technique [9].
In the second step, a method based on the Rotating Calipers (RC) technique [10] is used to construct an OBB which is best aligned to the object's contour. The third step consists of the uncertainty computation. Finally, the fourth step concerns the application of the IR uncertainty paradigm and the FS assumption. In this paper, we focus on the fourth step. The previous steps are described in detail in [4].

2.2 Inter-Rays uncertainty

An important aspect of OBB extraction is the fact that the raw data points representing the extremities of the extracted OBB do not coincide with the real object's extremities (see Figure 1). In the figure, minx, miny, maxx, maxy are respectively the minimum x coordinate, the minimum y coordinate, the maximum x coordinate and the maximum y coordinate of the extracted OBB. The line Lr (respectively Lr+n) crosses the point maxx (respectively maxy) and is perpendicular to the OBB side to which maxx (respectively maxy) belongs.

[Figure 1: Inter-Rays uncertainty paradigm.]

The Inter-Rays (IR) estimates of the real object's extremity positions and their variances are added to the OBB's size and the OBB's size uncertainty. The real object's extremities are situated between the raw data points delimiting the OBB (maxx, maxy) and the points Pr and Pr+n. Pr (respectively Pr+n) is the intersection point between the ray r (respectively r+n) and the line Lr (respectively Lr+n). With the introduction of the IR uncertainty, the measurement vector z is augmented by the quantities d_IRx, µ_IRx, σ²_IRx (for the OBB's local X axis) and d_IRy, µ_IRy, σ²_IRy (for the OBB's local Y axis). Considering the OBB's local X axis, the real object's extremity position is uniformly distributed with the mean µ_IRx, which is equal to half of the IR line segment length d_IRx. The IR line segment is defined by the points maxx and Pr.

To fulfil the Kalman Filter assumption, the distribution of the real object's extremity position is approximated by a normal distribution with the mean µ_IRx and the variance σ²_IRx, which is set to (d_IRx / N_σ)². N_σ is the number of sigmas and defines the confidence interval of the approximated distribution, which is made equal to the IR line segment length d_IRx (see Figure 2). N_σ is set to 6.

[Figure 2: Normal distribution approximation.]

The measurement Inter-Rays values z[µ_IRx] and z[σ²_IRx] are used in each iteration of the tracking algorithm to correct the size of the OBB measurement [4]. The correction equations are expressed as follows:

z[dx] = z_perc[dx] + z[µ_IRx] (3)

z[σ²_dx] = z_perc[σ²_dx] + z[σ²_IRx] (4)

where z_perc is the perceived measurement and z is the corrected measurement used for tracking. The same process is applied for the OBB's local Y axis. It may happen that the IR line segment length d_IRx is large, which may cause a great overestimation and tracking instability. To avoid this situation, the IR line segment length d_IRx is limited to a certain value.

2.3 Fixed Size assumption

The idea of the fixed size (FS) assumption is based on the fact that, in general, an object's size does not change during the tracking. However, due to the LRF's limited resolution and the change of the relative distance and orientation of the observed object, the measurements of the object's size vary in time. The principle of the FS assumption is that the size of the track representing the tracked object can change depending on the IR uncertainty. The FS algorithm takes place in each iteration of the tracking, after the track prediction and the measurement extraction. For the following algorithm description, we consider the local OBB's X axis; the same process is applied to the local OBB's Y axis. Having the perceived OBB measurement with the IR line segment length z_perc[d_IRx], we obtain the corrected IR line segment length z[d_IRx] associated with the OBB measurement:

z[d_IRx] = min(z_perc[d_IRx], x̂_k−1[d_IRx]) (5)

where x̂_k−1[d_IRx] is the IR line segment length associated with the track at time k−1. The quantity z[d_IRx] is then memorised in the track x̂_k:

x̂_k[d_IRx] = z[d_IRx] (6)

After using equations (3) and (4), the next step consists of the measurement's size correction by using the following equations:

If z_perc[d_IRx] ≥ x̂_k−1[d_IRx] and z[dx] < x̂_k[dx] then z[dx] = x̂_k[dx] (7)

If z_perc[d_IRx] < x̂_k−1[d_IRx] and z[dx] < x̂_k[dx] then z[dx] = x̂_k[dx] + z_perc[µ_IRx] − x̂_k−1[µ_IRx] (8)

where x̂_k[dx] is the track's predicted size at time k, and x̂_k−1[µ_IRx] is equal to x̂_k−1[d_IRx]/2. The measurement's size correction allows obtaining the best object size perceived up to time k in terms of IR uncertainty.

After correcting the perceived measurement's size, the measurement's center must be appropriately translated. The size correction introduces a translation of the measurement's sides relative to the perceived object's sides, and the center translation is needed to adjust the measurement OBB sides to the perceived object's sides. The updating of the center position is achieved as follows. Firstly, a visibility factor VF_x is computed for the OBB's local X axis. The visibility factor permits computing the center translation coefficient, which is proportional to the difference between the angles of the sides' normals (β_minx and β_maxx):

VF_x = max(β_minx^f, β_maxx^f) / (β_minx^f + β_maxx^f) (9)

where β_minx and β_maxx correspond respectively to the angles between the normals of the OBB's sides minx-side and maxx-side and their radius vectors (see Figure 3). f is a smoothing parameter, which is set experimentally to 4. The visibility factor becomes less sensitive to the angle difference as the smoothing parameter value increases.

[Figure 3: Visibility factor associated to the OBB's local X axis.]

In the second step, the direction factor DF_x associated to the OBB's local X axis is computed. Providing the direction of the center translation, the direction factor is expressed as follows:

DF_x = 1, if β_maxx > β_minx (10a)
DF_x = −1, if β_maxx < β_minx (10b)

In the last step, the difference between the perceived size z_perc[dx] and the corrected size z[dx] is calculated:

Δdx = z[dx] − z_perc[dx] (11)

Finally, the measurement's center translation is expressed as follows:

z[cx] = z[cx] + VF_x · DF_x · Δdx, z[σ²_cx] = z[σ²_dx] (12)
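As an illustration, the per-axis computations of sections 2.2 and 2.3 can be sketched as follows. This is a minimal sketch, not the authors' implementation: the function names (`inter_ray_moments`, `fs_correct_size`, `center_translation`) are hypothetical, and the sign convention used for the reconstructed equation (8) is an assumption.

```python
def inter_ray_moments(d_ir, n_sigma=6.0):
    """Normal approximation of the uniformly distributed extremity
    position over an inter-ray segment of length d_ir (section 2.2)."""
    mu = d_ir / 2.0                      # mean: half the segment length
    var = (d_ir / n_sigma) ** 2          # n_sigma interval == segment length
    return mu, var


def fs_correct_size(z_perc_dx, z_perc_d_ir, track_dx, track_d_ir):
    """Fixed Size correction of the measured size along one local axis
    (eqs. (5)-(8)); returns (corrected size, corrected IR length)."""
    mu_perc, _ = inter_ray_moments(z_perc_d_ir)
    mu_track, _ = inter_ray_moments(track_d_ir)
    d_ir = min(z_perc_d_ir, track_d_ir)          # eq. (5)
    z_dx = z_perc_dx + mu_perc                   # eq. (3)
    if z_dx < track_dx:                          # eqs. (7)-(8)
        if z_perc_d_ir >= track_d_ir:
            z_dx = track_dx                      # keep the best size so far
        else:
            # Tighter IR segment seen: swap the stored IR mean for the
            # new one (sign convention assumed for eq. (8)).
            z_dx = track_dx + mu_perc - mu_track
    return z_dx, d_ir


def center_translation(beta_min, beta_max, z_perc_dx, z_dx, f=4.0):
    """Center shift induced by the size correction (eqs. (9)-(12))."""
    vf = max(beta_min**f, beta_max**f) / (beta_min**f + beta_max**f)
    df = 1.0 if beta_max > beta_min else -1.0    # eq. (10)
    return vf * df * (z_dx - z_perc_dx)          # eqs. (11)-(12)
```

For example, a perceived side of 2.0 m with a 0.4 m inter-ray segment, matched against a track storing 2.5 m with a 0.6 m segment, yields a corrected size of 2.4 m and memorises the tighter 0.4 m segment.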

3 DATA ASSOCIATION

One of the most important tasks of autonomous navigation in urban areas is the tracking of dynamic objects. Data association, which is closely related to the objects' representation and the sensory data, is a crucial part of the tracking process. In this section, a data association methodology suitable for the OBB representation and laser scanner data is proposed. The emphasis is put on the association of raw data points of coalescing objects. Since geometrical features are taken into account, only objects previously recognised as separate ones can be tracked correctly.

[Figure 4: Data association schema.]

The data association algorithm is composed of the following stages (see Figure 4): preliminary association (raw data points clustering), track to cluster correlation, and raw data points to track association. The preliminary association is based on distance thresholding: points belong to the same cluster if the Euclidean distance between them is below a certain threshold. Each cluster is represented by an Axis Aligned Bounding Box (AABB). The raw data points clustering uses two thresholds: a general one and a neighbouring points one. Applied to the points produced by neighbouring rays, the neighbouring threshold is greater than the general one, which is used for all non-neighbouring points.

After the raw data points are clustered into AABBs, track to cluster correlation takes place. A track is correlated with a cluster if the track's OBB intersects the cluster's AABB. If a track does not intersect any cluster, it is correlated with the closest cluster. There are three possible outputs of track to cluster correlation: a cluster can be correlated with zero, one, or two or more tracks. These cases represent respectively the following situations: appearance of a new object, tracking of a separated object, and multi-object tracking (see Figure 4).

Based on the results of the previous step, raw data points to track association follows. In this stage, the raw data points positively associated with a track create a measurement. Each measurement is in the OBB format (see (1) and (2)). In the first situation (appearance of a new object), all the points are used to create the measurement. In the second one, Mahalanobis distance based gating is used to associate raw data points with a track; non-associated points are processed to create new tracks. In the last case, a method based on the Nearest-Neighbour principle is used.

4 TRACKING

The object's state estimation is done by means of an Extended Kalman Filter (EKF). All values of the track's state vector are expressed in the local ego-vehicle coordinate system. Tracks are represented by the augmented OBB state vector x_k:

x_k = [cx, ċx, cy, ċy, θ, θ̇, dx, dy]^T (13)

Since the tracking is done from a dynamic platform, odometry information is used to increase the tracking accuracy. The state change of the ego-vehicle is represented as the differences of position Δx, Δy and angle Δγ between consecutive time instants. Thus, the input to the state transition equation is defined as:

u_k = [Δx, Δy, Δγ] (14)

The Discrete White Noise Acceleration Model (DWNA) [11] is used to describe the objects' kinematics and the process noise.
Thus, taking into account the odometry information, the track state transition is modelled as follows [4]:

x̂_k|k−1 = A(Δx, Δy, Δγ) F x̂_k−1 + B u_k + G v_k (15)

where F is the standard DWNA transition matrix, B is the odometry-input model, G represents the noise gain matrix, and v_k is the process noise, defined with a Gaussian distribution:

v_k = [Δċx, Δċy, Δθ̇, σ̂_dx, σ̂_dy], v_k ∼ N(0, Q_k) (16)

where

Q_k = G v_k G^T (17)

with σ̂_dx and σ̂_dy the process errors for the OBB sizes dx and dy respectively. The prediction covariance matrix is:

P_k|k−1 = A_x(x̂_k−1) F P_k−1 F^T A_x^T(x̂_k−1) + Q_k (18)

where P_k−1 is the estimation covariance matrix. The observation equation can be written as follows:

z_k = H x̂_k|k−1 + w_k (19)

where H is the observation model and w_k, which represents the measurement noise, is defined with a Gaussian distribution:

w_k ∼ N(0, R), R = σ²_z I_5,5 (20)

where I_5,5 is the identity matrix.
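To make the DWNA prediction concrete, here is a rough per-axis sketch. It is not the authors' implementation: the time step `T` and the noise intensity `q` are assumed values, and the ego-motion compensation A(Δx, Δy, Δγ) and odometry input are omitted.

```python
import numpy as np

def dwna_predict(x, P, T=0.1, q=0.5):
    """One Kalman prediction step for a [position, velocity] pair
    under the Discrete White Noise Acceleration model."""
    F = np.array([[1.0, T],
                  [0.0, 1.0]])                 # constant-velocity transition
    Q = q * np.array([[T**4 / 4, T**3 / 2],
                      [T**3 / 2, T**2]])       # DWNA process-noise covariance
    x_pred = F @ x                             # state prediction
    P_pred = F @ P @ F.T + Q                   # covariance prediction
    return x_pred, P_pred

x0 = np.array([0.0, 1.0])       # object at the origin, moving at 1 m/s
P0 = np.eye(2)
x1, P1 = dwna_predict(x0, P0)   # x1 = [0.1, 1.0]
```

The same block is applied independently to the (cx, ċx), (cy, ċy) and (θ, θ̇) pairs of the state vector (13), while the sizes dx, dy follow a random-walk model driven by σ̂_dx, σ̂_dy.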

5 FUSION METHODOLOGY

The proposed representation model performs well except for objects poorly represented by raw data points. This situation occurs for far objects. Indeed, the number of laser rays colliding with an object is inversely proportional to the distance and proportional to the LRF angular resolution. Since the increase of the LRF angular resolution is limited, the number of laser rays colliding with objects decreases with the distance. Hence, at a certain range, the object state estimation becomes very uncertain or even impossible to obtain. To overcome this limitation, more LRF sensors can be used. A multiple LRF configuration provides a higher perception angular resolution, and thus a better object state estimation can be achieved. Furthermore, interlacing rays allows an additional size estimation refinement by utilising the Inter-Rays uncertainty.

KF based fusion methods can be divided into two groups: measurements fusion and tracks fusion. In the case of far objects, none of the general approaches fits. In [8], two LRFs were fused by using the Weighted Measurement Fusion (WMF) method [7]. In this method, OBB measurements are extracted from the raw data points of each sensor, and the OBB measurements coming from the two sensors are then fused. However, this method takes into account only the redundancy aspect of the two-LRF configuration and does not benefit from the increased perception angular resolution. Thus, it does not perform well for far objects. A method taking into account the complementary aspect of the multi-sensor configuration must operate on the raw sensory data. The proposed approach takes advantage of the following aspects of the increased perception angular resolution: more raw data points per object and a lower distance between laser rays. To benefit from the first aspect, the raw data points coming from different sensors must be merged to extract an OBB measurement.

The first step of the whole tracking system consists of data association. Raw data points association is performed for each sensor separately, and the raw data points are regrouped in clusters. The number of clusters correlated with a track is equal to the number of sensors. During the points clustering, the online semi convex-hull construction takes place (see [9]). The points constructing each semi convex-hull are sorted according to their angular coordinate. To construct the fused semi convex-hull from the semi convex-hulls correlated with a track, the following algorithm is performed. It starts by inserting the two points with the smallest angular coordinates into a new semi convex-hull to be constructed. To choose the point with the smallest angular coordinate, we consider only the first points of all the semi convex-hulls, since the points of each semi convex-hull are sorted. The point being inserted is deleted from its original semi convex-hull. In each iteration, a new point with the minimum angular coordinate is inserted into the semi convex-hull being constructed. For each point insertion, the convexity condition is checked and, if it is violated, the existing semi convex-hull is recalculated (see [9]).

The constructed semi convex-hull then serves as an input for the Rotating Calipers based OBB extraction method. After the OBB extraction, the Inter-Rays (IR) based size refinement starts. In the case of a single LRF, the distance between rays increases with the distance from the sensor. In the multi-sensor case, the inter-rays distance varies between 0 and d_LRF, where d_LRF is the inter-rays distance of a single LRF. This IR distance variation allows refining the size of perceived objects, where the refinement level corresponds to the relative position between the objects and the sensors. The IR uncertainty computation for multiple sensors is similar to the single sensor case (see 2.2). The only difference between the two configurations is that in the multiple sensor case the d_IR values are computed for each LRF and then the smallest is chosen.
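The fused semi convex-hull construction described above is, at its core, a k-way merge of angularly sorted point lists. Below is a minimal sketch using Python's `heapq.merge`, together with the per-sensor selection of the smallest d_IR; the point layout `(angle, x, y)` and the function names are assumptions, and the convexity re-check with hull recalculation of [9] is omitted.

```python
import heapq

def merge_hulls(hulls):
    """Merge per-sensor semi convex-hulls, each already sorted by
    angular coordinate, into one angularly sorted point sequence.
    The convexity check performed after each insertion in the real
    algorithm is omitted here."""
    return list(heapq.merge(*hulls, key=lambda p: p[0]))

def smallest_d_ir(d_ir_per_sensor):
    """Multi-sensor IR refinement: keep the smallest inter-ray
    segment length computed over all LRFs."""
    return min(d_ir_per_sensor)

# Two sensors, points given as (angle_rad, x, y)
hull_a = [(0.10, 5.0, 0.5), (0.30, 4.9, 1.5)]
hull_b = [(0.20, 5.1, 1.0), (0.40, 4.8, 2.0)]
merged = merge_hulls([hull_a, hull_b])   # interlaced by angle
```

Because each per-sensor hull is already sorted, only the heads of the lists need to be compared at each step, which matches the incremental insertion described in the text.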
To correctly choose the rays r and r+n of each sensor, the coordinates of the extreme point (e.g. maxy) must be expressed in the local sensor coordinate system.

6 RESULTS

The evaluation is based on a software platform developed to simulate the sensors and the multiple objects tracking process. The simulator permits flexible changing of all the sensors' parameters and mounting positions, which allows testing the developed algorithms with different sensor configurations. In the simulator, laser range finder (LRF), LIDAR, stereovision and odometry sensors are implemented. To test the proposed approach, a single LRF and a two-LRF configuration are evaluated and compared. In the first configuration, a Laser Range Finder (LRF) is mounted in front of the instrumented vehicle. In the second case, two LRFs are mounted in parallel in front of the vehicle with a horizontal interspace of m. The step angle and the angle range of the LRFs are set similarly to the real sensor parameters. The sensor range is set to m and the range uncertainty σ_ρ is set to 0.5 m.

To show the interest of using the IR uncertainty and the FS assumption, a first scenario is used, where a tracked vehicle runs in circles in front of the instrumented vehicle. Figures 5 and 6 show the trajectory of the tracked vehicle without and with the IR uncertainty and the FS assumption, respectively. One can see that the second approach outperforms the first one in terms of the object's center position estimation. The better center position estimation is obtained thanks to the more reliable estimation of the object's size; this is due to the proportional relation between the size and the center of the OBB. The places where the measurements present a great deviation from the real object center trajectory correspond to the situations where only one side of the object can be perceived by the sensor. Thus, only one of the object's dimensions is available.
One can see that the usage of the FS assumption, which stores the object size, allows obtaining a reliable estimation even in the cases when only one of the object's sides is seen.

To evaluate and compare the one LRF based tracking with the two LRFs based one, a second scenario is used. It corresponds to a vehicle travelling towards the instrumented one, according to the trajectory illustrated in Figure 7.

[Figure 5: Tracked vehicle trajectory without IR uncertainty and FS assumption (in the instrumented vehicle's local coordinate system).]
[Figure 6: Tracked vehicle trajectory with IR uncertainty and FS assumption (in the instrumented vehicle's local coordinate system).]
[Figure 7: Vehicle trajectory (in the instrumented vehicle's local coordinate system).]

One can see in Figures 8, 9, 12, 14, 15, 18 and 19 that the single LRF based tracking provides a bad state estimation when the vehicle is far. However, the performance of this method increases as the distance between the sensor and the tracked vehicle decreases. The two LRFs based method behaves similarly, but with a better vehicle state estimation. The fusion of the two LRFs allows obtaining the same performance as for the single LRF but at greater distances (see Figures 10, 11, 13, 16, 17 and 20). One can see in Figures 14 and 15 that the IR uncertainty µ_IR stays constant at the beginning of the tracking (when the vehicle is far). This is due to the IR line segment length d_IRx limitation, as mentioned in section 2.2. In our test, the limit is set to meters.

In Figures 8 and 9, showing the center position errors for the one LRF configuration, one can see great oscillations. This effect is a result of the sensor's low resolution at far distances. To explain the nature of the problem, let us use the example shown in Figure 22. In the example, the real object moves to the right, which can be seen as the change of its position at the different time instants. The measurement, however, stays at the same place due to the low laser rays resolution.
If the object continues its movement, it will be detected by a new raw data points configuration and, hence, the measurement will change its position. This effect takes place all the time during the tracking of the object. Its intensity is proportional to the laser rays resolution and the velocity of the object: the lower the resolution and the velocity are, the more prominent the effect becomes, since the period when the measurement is static increases. Thus, at the beginning of the scenario, when the tracked object is far and its speed is low, the object's movement is perceived as a jerking one. The use of the KF smooths the estimated velocity. However, at low speed, when the position of the measurement stays unchanged for a long time, the estimated velocity presents great oscillations. The introduction of the second LRF increases the laser rays resolution, and thus the oscillation effect is significantly reduced (see Figures 23, 24, 20 and 21).

[Figure 8: One LRF - object's center position error (X coordinate).]

[Figure 9: One LRF - object's center position error (Y coordinate).]
[Figure 10: Two LRFs - object's center position error (X coordinate).]
[Figure 11: Two LRFs - object's center position error (Y coordinate).]
[Figure 12: One LRF - object's orientation angle error.]

7 CONCLUSIONS

A two-LRF based fusion method for objects tracking has been presented. An Oriented Bounding Box model is used to represent the tracked objects. Enriched by the Inter-Rays uncertainty and Fixed Size assumption paradigms, the OBB model performs well with a single LRF, except for far objects, because of the limited angular resolution of the sensor. To overcome this limitation, the authors have proposed to use two LRFs in order to increase the perception angular resolution. The raw data fusion method leads to a better object state estimation. Furthermore, interlacing rays allows an additional size estimation refinement using the IR uncertainty. The experimental results have shown the reliability of the two-LRF based fusion system, especially for far objects, when compared with the usage of a single LRF.

References

[1] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. Academic Press Professional, Inc., 1988.

[2] C. Blanc, L. Trassoudaine, Y. Guilloux, and R. Moreira, "Track to track fusion method applied to road obstacle detection," in Proceedings of the Seventh International Conference on Information Fusion, 2004.

[3] U. Hofmann, A. Rieder, and D. Dickmanns, "Radar and vision data fusion for hybrid adaptive cruise control on highways," Machine Vision and Applications, vol. 14, 2003.

[4] P. Kmiotek and Y. Ruichek, "Representing and tracking of dynamics objects using oriented bounding box and extended Kalman filter," in Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems, 2008.

[5] A. Petrovskaya and S. Thrun, "Model based vehicle tracking for autonomous driving in urban environments," in Proceedings of Robotics: Science and Systems, 2008.

[6] D. Streller, K. Fürstenberg, and K. Dietmayer, "Vehicle and object models for robust tracking in traffic scenes using laser range images," in Proceedings of the IEEE Conference on Intelligent Transportation Systems (ITSC), 2002.

[7] J. Gao and C. Harris, "Some remarks on Kalman filters for the multisensor fusion," Information Fusion, vol. 3, September 2002.

[8] P. Kmiotek and Y. Ruichek, "Multisensor fusion based tracking of coalescing objects in urban environment for an autonomous vehicle navigation," in Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, 2008.

[9] P. Kmiotek and Y. Ruichek, "Objects oriented bounding box based representation using laser range finder sensory data," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety, 2008.

[10] G. Toussaint, "Solving geometric problems with the rotating calipers," in Proc. MELECON, Athens, Greece, 1983.

[11] Y. Bar-Shalom, X. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation. Wiley, New York, 2001.

[Figure 13: Two LRFs - object's orientation angle error.]
[Figure 14: One LRF - Inter-Rays uncertainty µ_IR (X coordinate).]
[Figure 15: One LRF - Inter-Rays uncertainty µ_IR (Y coordinate).]
[Figure 16: Two LRFs - Inter-Rays uncertainty µ_IR (X coordinate).]
[Figure 17: Two LRFs - Inter-Rays uncertainty µ_IR (Y coordinate).]

[Figure 18: One LRF - object's X side size error.]
[Figure 19: One LRF - object's Y side size error.]
[Figure 20: Two LRFs - object's X side size error.]
[Figure 21: Two LRFs - object's Y side size error.]
[Figure 22: Example of the measurement OBB extraction for different object positions at greater distances (small LRF resolutions).]
[Figure 23: Comparison of the velocity estimation between one LRF and two-LRF fusion (X coordinate).]
[Figure 24: Comparison of the velocity estimation between one LRF and two-LRF fusion (Y coordinate).]


More information

Fast Denoising for Moving Object Detection by An Extended Structural Fitness Algorithm

Fast Denoising for Moving Object Detection by An Extended Structural Fitness Algorithm Fast Denoising for Moving Object Detection by An Extended Structural Fitness Algorithm ALBERTO FARO, DANIELA GIORDANO, CONCETTO SPAMPINATO Dipartimento di Ingegneria Informatica e Telecomunicazioni Facoltà

More information

HOW MULTI-FRAME MODEL FITTING AND DIFFERENTIAL MEASUREMENTS CAN IMPROVE LIDAR-BASED VEHICLE TRACKING ACCURACY

HOW MULTI-FRAME MODEL FITTING AND DIFFERENTIAL MEASUREMENTS CAN IMPROVE LIDAR-BASED VEHICLE TRACKING ACCURACY HOW MULTI-FRAME MODEL FITTING AND DIFFERENTIAL MEASUREMENTS CAN IMPROVE LIDAR-BASED VEHICLE TRACKING ACCURACY By Steven J. Chao A THESIS Submitted to Michigan State University in partial fulfillment of

More information

Fusion of Radar and EO-sensors for Surveillance

Fusion of Radar and EO-sensors for Surveillance of Radar and EO-sensors for Surveillance L.J.H.M. Kester, A. Theil TNO Physics and Electronics Laboratory P.O. Box 96864, 2509 JG The Hague, The Netherlands kester@fel.tno.nl, theil@fel.tno.nl Abstract

More information

Scene Text Detection Using Machine Learning Classifiers

Scene Text Detection Using Machine Learning Classifiers 601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department

More information

An Angle Estimation to Landmarks for Autonomous Satellite Navigation

An Angle Estimation to Landmarks for Autonomous Satellite Navigation 5th International Conference on Environment, Materials, Chemistry and Power Electronics (EMCPE 2016) An Angle Estimation to Landmarks for Autonomous Satellite Navigation Qing XUE a, Hongwen YANG, Jian

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Collision Detection II. These slides are mainly from Ming Lin s course notes at UNC Chapel Hill

Collision Detection II. These slides are mainly from Ming Lin s course notes at UNC Chapel Hill Collision Detection II These slides are mainly from Ming Lin s course notes at UNC Chapel Hill http://www.cs.unc.edu/~lin/comp259-s06/ Some Possible Approaches Geometric methods Algebraic techniques Hierarchical

More information

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007 Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem

More information