
In: Proc. Int. Conf. on Advanced Robotics (ICAR'97), Monterey

Active Controlled Exploration of 3D Environmental Models Based on a Binocular Stereo System

Darius Burschka and Georg Färber
Laboratory for Process Control and Real-Time Systems
Technische Universität München, München, Germany
~burschka

Abstract

This paper describes the exploration of indoor environments based on a binocular stereo system mounted on a mobile vehicle. It introduces the algorithms for the control of the vehicle and the camera mount depending on the explored and the a-priori known information about the environment. The controlled vehicle does not follow a precompiled path but finds its goals depending on the perceived information. The resulting environmental model is stored in a multilayer map storing the three-dimensional boundary lines of the objects at their geometrical positions in the world. The line description in the model is adapted to meet the requirements of a successive exploration with a video camera.

Keywords: path planning, environmental models, active camera control, navigation

1 Motivation

Knowledge about the current position and the surrounding environment is essential for the efficient use of an autonomous mobile robot (AMR). The localisation and navigation tasks require a reliable and exact representation of the environment. The stored information is continuously compared to the current sensor reading to compensate for the errors of the dead-reckoning. We have already presented an environmental model handling the a-priori known information about the environment [9]. This model was suitable for navigation in a known environment, supporting different kinds of sensor systems [6, 7]. Exact knowledge about the environment from architect's plans or measurements was necessary to generate this model.

A typical indoor environment changes because of human influence or the operation of other mobile systems. The resulting discrepancies between the stored model and the modified environment may cause misinterpretations resulting in a false pose estimation of the AMR. An extension of our model, able to handle the uncertain information from the sensor during exploration, was presented in [2]. This new approach is based on the dynamic local map (DLM) filtering the explored information. This map stores the information on a low level of abstraction, equivalent to the capabilities of the applied sensor system (section 3.1). The environmental representation in the DLM consists of the boundary line segments of the objects. The missing information about planes aggravates the use of this map for path planning: the problem is to decide, based on this simple description, whether the planes enclosed by the line segments are passable for the vehicle or not (fig. 2). The solution to this problem will be shown in this paper. It presents our exploration system and the way the sensor data is used to control the vehicle.

Figure 1: MARVIN

2 System structure

The dynamic local map (DLM) storing the explored information of a local area is the essential part of the system presented in this paper. This map is constructed based on the data from three sources shown in figure 3:

sensor system - the information from the current sensor reading

PSC¹ - known objects or structures are identified, and missing features are generated as hypothetical lines to be verified by the sensor system (section 3.4)

global model - the a-priori information about the environment and the information from prior missions

Figure 2: These two situations cannot be distinguished without any additional information (a door in the wall vs. a locker standing at a line on the floor).

Figure 3: System structure

Our vehicle MARVIN, based on the TRC platform (Labmate), is equipped with a stereo camera mounted on a pan-tilt-vergence camera mount (fig. 1). All processing is performed on two PCs with Intel Pentium processors running the Linux OS. All modules use RPC² communication to exchange data. This mechanism allows the processing to be distributed among different machines to keep the average load at a uniform level without changes in the program code. The hardware of the vehicle and the camera mount is controlled by two RPC servers allowing communication with different processes, even on different computers. These servers (fig. 3) are commanded by the path planning module, which determines their actions depending on the information stored in the DLM (section 4.2). The current position and orientation can be requested directly from the server.

The sensor information from the stereo system is matched in the sensor feature extraction module to restore the three-dimensional description of the extracted line segments. These features are stored in the dynamic local map (DLM) to improve their accuracy and to filter out misinterpretations. The stored information is used by the sensor system for the fast establishment of known correspondences between the features in the two camera images [3] and for the recalibration of the cameras. The recalibration is important for the exploration, because exact knowledge about the position in the world is fundamental for the map construction. The vibrations of the cameras change their orientation relative to each other, deteriorating the quality of the extracted information. These errors can be corrected as will be shown in section 3.3.

¹ Predictive Spatial Completion - a module reconstructing planes and known structures from the data stored in the DLM
² Remote Procedure Call - interprocess communication allowing communication between processes, even on different machines

3 Environmental model

The "world" of the AMR can be subdivided into local operating areas (rooms, proximities of machines, docking stations, etc.) relevant for specific tasks. These areas can be referenced relative to significant structures which can easily be recognized by the sensor system. These structures define the origins of the corresponding coordinate systems. These local areas are stored in the global map, which holds the information between the particular missions. But there are also areas which are not relevant for a mission or change continuously and should not be stored permanently. In those areas the local map is only used for collision avoidance.

The information stored in the DLM changes continuously because of the limited view angle and accuracy of the sensor. The inserted information is matched in a three-dimensional matching process to the already known content and stored efficiently to reduce the storage space requirements of the map. On the other hand, the access time to the stored information must be low to allow a closed-loop operation between the sensor and the map. The structure of the DLM is designed to cope with these contradictory requirements.
3.1 Internal structure

The DLM has a multilayer structure to meet the requirements for short access time and simple update of the stored information (fig. 4) [2]. We have tested different index structures storing the geometrical descriptions of the environment for their access times and efficiencies [9, 11]. Access structures for extended spatial objects are surveyed in [10]. The spatial trees subdivide the world depending on the objects' distribution. They proved to have the shortest access times and lowest storage space requirements because of the adaptation of their structure to the environment.

The DLM uses an octree to store the environmental description. The octree subdivides a voxel³ into eight equal parts, which can further be subdivided until the stored elements are enclosed by the smallest possible cube. This increases the selectivity of the access functions: they supply only information stored in voxels intersected by the current sensor cone. A smaller voxel size results in a better approximation of the visible space, so only the visible elements are delivered. It is difficult to exchange parts of the tree when a new space segment is approached, because the recomputation of all branches of the octree would be necessary. Therefore, the upper layer of our map has a grid structure. Each grid element contains an octree storing a local part of the environment. If the AMR approaches a new space segment, only the corresponding grid elements must be replaced.

Figure 4: Internal structure of the DLM.

The size of the DLM is adjusted to the typical size of the local operating area. The map has a virtual mapping of the world coordinates to the grid elements to reduce the data transfers within it (fig. 4). The location in the "real world" is quantized by the size of a voxel. This so-called virtual coordinate is mapped⁴ to the size of the grid array, resulting in its real coordinate. The real coordinates identify the voxel's location, so every grid element may correspond to different locations in the real world, i.e. the world is mapped onto a torus. Every voxel contains the virtual coordinates of the represented real-world location. This is similar to the tag field in computer caches.

³ a rectangular cube enclosing a space segment
⁴ the mapping is done using a modulus operation
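The cache-like mapping from world coordinates to grid elements can be illustrated with a short sketch. The following Python fragment is an illustration only: the names, the voxel size and the grid dimension are assumptions, and the modulo scheme is our reconstruction of the mechanism described above, not the original DLM implementation.

```python
# Minimal sketch (assumption): a grid of octree-backed elements addressed like a cache.

VOXEL_SIZE = 0.1   # metres per voxel (assumed)
GRID_DIM   = 16    # grid elements per axis (assumed)

class GridElement:
    def __init__(self):
        self.tag = None       # virtual coordinate currently represented ("cache tag")
        self.octree = None    # local octree with the environmental description

grid = [[[GridElement() for _ in range(GRID_DIM)]
         for _ in range(GRID_DIM)] for _ in range(GRID_DIM)]

def virtual_coord(world_xyz):
    """Quantize a world position by the voxel size -> virtual coordinate."""
    return tuple(int(c // VOXEL_SIZE) for c in world_xyz)

def grid_element(virtual):
    """Map the virtual coordinate onto the finite grid (modulus -> torus)."""
    real = tuple(v % GRID_DIM for v in virtual)   # "real" coordinate
    elem = grid[real[0]][real[1]][real[2]]
    if elem.tag != virtual:
        # The element currently represents another part of the world and has to be
        # exchanged before use (analogous to a cache miss / tag mismatch).
        elem.tag = virtual
        elem.octree = None    # placeholder: load or rebuild the local octree here
    return elem
```

In this picture, approaching a new space segment only invalidates the grid elements whose tag no longer matches, which mirrors the replacement strategy described above.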

3.2 Line representation

The DLM is the source of the most recent information about the environment. This information can be used for different purposes, such as sensor data processing, object recognition and path planning. The line segments are stored with additional information about their source and quality. They are described by their endpoints P1, P2. The accuracy e_x is attached to each endpoint separately, because in the case of the stereo camera it depends on the distance from the sensor and on the orientation of the projected line in the image. The distance from the sensor is often different for the two endpoints. The resulting error is described by an error sphere around the estimated endpoint (fig. 5).

Figure 5: Line representation (endpoints P1, P2 with error spheres e1, e2 and sector map S).

The sector map describes the directions from which the line segment was seen. The line segments are additionally described by their confidence and age values. The confidence c describes the reliability of the real existence of the line. If a new feature can be matched to one stored in the DLM, this value is increased according to the following equation:

    c(x + \Delta x) = 1 - e^{-k (x + \Delta x)}, \qquad 0 \le \Delta x \le 1
    c(x + \Delta x) = 1 - [1 - c(x)] \, e^{-k \Delta x}    (1)

The value \Delta x describes the confidence in the current update (0 -> unsure, 1 -> sure). The factor k determines the speed at which the confidence is changed and depends on the quality of the feature extraction algorithm. Line segments resulting from wrong correspondences are also stored in the map. These line segments cannot be verified from other positions. Each time such a segment appears in the local view of the sensor and cannot be verified, the age of the segment is decreased. The feature is removed from the map if its age falls below a threshold. This method also allows vanished obstacles to be removed from the map.
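To make the update rule concrete, the following Python fragment sketches how a stored line segment's confidence and age could be maintained according to equation (1). The class layout, the factor k and the age parameters are illustrative assumptions, not the paper's actual data structures.

```python
import math
from dataclasses import dataclass

@dataclass
class LineSegment:
    p1: tuple          # endpoint P1 (x, y, z)
    p2: tuple          # endpoint P2 (x, y, z)
    e1: float          # error sphere radius around P1
    e2: float          # error sphere radius around P2
    confidence: float = 0.0
    age: float = 10.0  # assumed initial age budget

K = 0.5                # assumed speed factor of the confidence update
AGE_DECREMENT = 1.0    # assumed penalty when the segment cannot be verified
AGE_THRESHOLD = 0.0    # segment is removed below this age

def update_on_match(seg: LineSegment, dx: float) -> None:
    """Equation (1): raise the confidence by the quality dx of the current update (0..1)."""
    seg.confidence = 1.0 - (1.0 - seg.confidence) * math.exp(-K * dx)

def update_on_missed_verification(seg: LineSegment) -> bool:
    """The segment was in view but could not be verified: decrease its age.
    Returns True if the segment should be removed from the map."""
    seg.age -= AGE_DECREMENT
    return seg.age < AGE_THRESHOLD
```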
Important information about a line segment is stored in a bitmap describing its sources. The possible sources for a feature are: the global map, the sensor system and the PSC. This information is used to evaluate the features for use in different tasks. For example, hypothetical features generated by the PSC should not be used for path planning or localisation, because their real existence is not yet verified. The bitmap describes the history of a line. This information is very useful: it helps, for example, to identify lines from the global model that were verified or rejected in the current sensor reading.
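The source bitmap can be represented with simple bit flags. The sketch below only illustrates that idea, assuming three flags and a hypothetical filter applied before path planning; the actual encoding in the DLM is not specified in the paper.

```python
# Assumed bit assignment for the source bitmap of a line segment.
SRC_GLOBAL_MAP = 0b001   # feature originates from the a-priori global model
SRC_SENSOR     = 0b010   # feature was observed by the stereo sensor system
SRC_PSC        = 0b100   # hypothetical feature generated by the PSC

def usable_for_path_planning(source_bitmap: int) -> bool:
    """Exclude purely hypothetical PSC features that were never confirmed by the sensor."""
    if source_bitmap & SRC_PSC and not source_bitmap & (SRC_SENSOR | SRC_GLOBAL_MAP):
        return False
    return True

# Example: a PSC hypothesis later confirmed by the sensor keeps both bits set,
# which also records the history of the line.
line_sources = SRC_PSC | SRC_SENSOR
assert usable_for_path_planning(line_sources)
```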

3.3 Interaction with the sensor system

We use this information to recalibrate the sensor system and to correct the drifts of the dead-reckoning. The procedure is quite simple and shows the usefulness of the DLM. In each step the sensor system asks the map for the already known line segments in the local view. The returned lines are sorted according to their bitmap contents, confidence values and accuracies. All lines originating from the global model or with high confidence and accuracy are projected onto the image planes of the sensors and matched with the current readings. The resulting matches are used for the calibration of the extrinsic camera parameters [13]. The resulting values are used to correct the orientation errors of the cameras or the drifts of the vehicle's dead-reckoning. The remaining lines are matched in the same way to the extracted image line segments in both cameras. This step reduces the set of matchable lines to the still unknown lines, which are matched using geometric constraints [5]. If there are several matching candidates, they are all stored in the DLM to be verified from a different position, exploiting the movement of the vehicle. The DLM is used to accelerate the sensor data processing and as an alternative to a third camera [1]: it acts as a "reference camera" supplying three-dimensional information to the verification process.

3.4 Interaction with the PSC

A new approach is the interaction between the local map and the predictive spatial completion module (PSC). The PSC scans the map and tries to reconstruct planes and objects from the information explored by the sensor system [4]. An important result of this interaction is the set of hypothetical lines generated as a completion of assumed planes and known objects. These lines are stored back into the DLM to be verified by the sensor system in consecutive sensor readings. They are used to establish known correspondences as described in section 3.3 and help to find correct correspondences in ambiguous situations.
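The prioritized use of stored lines described in section 3.3, which the PSC hypotheses also feed into, can be summarized in a small sketch. The sorting keys and the split into a calibration set and a correspondence set follow the text above; the data fields and thresholds are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KnownLine:
    from_global_model: bool   # flag from the source bitmap
    confidence: float         # value maintained by equation (1)
    accuracy: float           # derived from the endpoint error spheres

def split_for_matching(known_lines: List[KnownLine]) -> Tuple[List[KnownLine], List[KnownLine]]:
    """Order the lines returned by the DLM for the current view and split them into a set
    used to calibrate the extrinsic camera parameters and a set that only constrains the
    remaining correspondence search (section 3.3)."""
    ranked = sorted(known_lines,
                    key=lambda l: (l.from_global_model, l.confidence, l.accuracy),
                    reverse=True)
    calibration = [l for l in ranked
                   if l.from_global_model
                   or (l.confidence > 0.8 and l.accuracy > 0.9)]   # assumed thresholds
    remaining = [l for l in ranked if l not in calibration]
    return calibration, remaining
```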
4 Exploration

The focus of this paper is the exploration of an unknown or partially known environment. There are two different ways to explore the environment, depending on the current task and the level of knowledge about the environment: the exploration can be done during a normal mission or be an exclusive task. We subdivide the path planning module (fig. 3) into two parts with different levels of knowledge about the abilities of the sensor system and the current mission.

The higher level module - the "strategist" - is responsible for the strategy of how to explore the whole "world" considering the time restrictions of the current mission. This module knows how much time it has to explore a local part of the environment. It does not know how to control the sensors; it does not even know what kind of sensors are used. It uses two commands to communicate with the lower module - the "navigator" - responsible for the vehicle and sensor control (a sketch of this interface follows at the end of this section). The first command describes the coordinates of the next goal and the time to reach this goal, and gives hints whether the vehicle may explore or not (exploration flag). The strategist is allowed to specify the starting point as the goal. This special case can be used in two situations: the vehicle is waiting for another vehicle and should stay at its current position, or the vehicle has some time to explore the environment but should return to the starting point afterwards. In unknown environments this is often the only known free point at the beginning. The exploration flag determines the behavior. The second command is used to rate the explored areas. Only the navigator can access the DLM, and only it is able to rate the quality of the stored information; the strategist does not know anything about the sensors. After the exploration of a local area the strategist may choose the next goal depending on the ratings of the possible goals proposed by the navigator and on the global mission goal.

The lower level module - the "navigator" - operates in a local area and has no knowledge about global goals or missions. It knows the abilities of the used sensors and controls them. In our case the navigator controls the camera mount and the vehicle. It also triggers the sensor data processing, which is done while the vehicle does not move, to reduce errors caused by vibrations. The binocular stereo system does not need any movement to reconstruct the three-dimensional information. The navigator interprets the explored information to estimate the next goals for the exploration (section 4.2).
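The two-command interface between strategist and navigator can be summarized in a short sketch. The message fields below follow the description above; the concrete types, units and encoding are assumptions, since the paper does not give the actual interface definition.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GoalCommand:
    """First command: where to go, how much time is available, and whether exploring is allowed."""
    goal_xy: Tuple[float, float]   # coordinates of the next goal (may equal the start point)
    time_budget_s: float           # time the navigator has to reach the goal
    may_explore: bool              # exploration flag

@dataclass
class AreaRating:
    """Second command (navigator -> strategist): rating of a possible goal in the explored area."""
    candidate_xy: Tuple[float, float]
    expected_gain: float           # expected improvement of the map if explored next

def choose_next_goal(ratings: List[AreaRating], mission_bias) -> Tuple[float, float]:
    """The strategist picks the next goal from the navigator's ratings and the global mission goal.
    mission_bias is a hypothetical function weighting a candidate against the mission."""
    best = max(ratings, key=lambda r: r.expected_gain + mission_bias(r.candidate_xy))
    return best.candidate_xy
```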

4.1 Estimation of free space

The information stored in the DLM consists of single three-dimensional lines representing the objects' boundaries (fig. 6). There is no information about planes or objects. Therefore, a new method was developed to estimate the free space based on this kind of data. The algorithm is based on the assumption that the current vehicle position is part of the free space. This space is restricted by the objects surrounding the vehicle, whose boundaries are stored in the DLM. The space is subdivided into three regions.

Figure 6: CAD model of an environment and the corresponding content of the DLM. The gray lines have insufficient confidence for path planning.

The first region lies above the vehicle and contains all lines which cause no collisions with the AMR. The second region extends from the vehicle's height down to some centimeters above the ground and contains obstacles causing a certain collision with the vehicle. The third region covers the range passable for the vehicle (lines on the floor and small bumps). The interesting lines lie in the second and third regions. The space between the vehicle and those lines is assumed to be free.

It is difficult to decide whether a line on the floor is a real obstacle or not. The information stored in the DLM does not support any hidden-line removal. The vehicle decides to move across such a line if there are further lines behind it and no line in the second region is found. If an area behind a wall is already explored, the DLM occasionally supplies information about it even though it is invisible in the current sensor reading. Only the sensor can decide the visibility from a given position, but wrong correspondences may lead to wrong decisions. The solution to this problem is an additional access function to the DLM that filters the current sensor reading without adding any further information not contained in the current view. Only the features from the current sensor view that could be verified in the DLM are used to solve the described problem (fig. 7).

Figure 7: Free space estimation.

4.2 Generation of goals

Section 3.2 describes the additional information stored with each line segment. This information is used to determine the possible goals for the next step. All hypothetical features stored by the PSC and uncertain lines detected by the sensor system are clustered, and each cluster is rated according to parameters describing the expected improvement if the given cluster is explored next. The "behavior" of the vehicle is determined by the results of this step: the covered path can be minimized, or the inspection of the hypothetical lines can be preferred. The choice of the parameters changes the "behavior". The vehicle can decide to explore a fast but uncertain model of the environment, if the costs for movements are low and uninspected areas in the map promise a high increase of the information content, or it can try to verify the detected features from another side, if the costs for movement are high and the stored hypotheses have a high priority. The detected clusters are encircled in figure 8.

Figure 8: Path planning in the environment shown in figure 6.

The clusters determine the regions of interest in the local area. The goal point for the vehicle and the view angle for the cameras are computed to reach an optimal distance from the obstacle, in order to achieve the best results with the applied sensor.
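A minimal sketch of the rating step described above: clusters of uncertain or hypothetical lines are scored by trading expected information gain against movement cost, and the weighting determines the resulting "behavior". The scoring function and the weights are illustrative assumptions, not the actual parameters used on MARVIN.

```python
from dataclasses import dataclass
from typing import List, Tuple
import math

@dataclass
class Cluster:
    center_xy: Tuple[float, float]   # position of the clustered uncertain/hypothetical lines
    expected_gain: float             # expected improvement of the map if explored next
    hypothesis_priority: float       # importance of verifying the contained PSC hypotheses

def rate_cluster(c: Cluster, vehicle_xy, w_gain, w_hypo, w_move) -> float:
    """Higher rating -> cluster should be explored earlier.
    A small w_move favors a fast but uncertain model of the environment; large w_move and
    w_hypo favor re-observing the stored hypotheses from another side."""
    movement_cost = math.dist(vehicle_xy, c.center_xy)
    return w_gain * c.expected_gain + w_hypo * c.hypothesis_priority - w_move * movement_cost

def next_region_of_interest(clusters: List[Cluster], vehicle_xy,
                            w_gain=1.0, w_hypo=0.5, w_move=0.2) -> Cluster:
    return max(clusters, key=lambda c: rate_cluster(c, vehicle_xy, w_gain, w_hypo, w_move))
```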

4.3 Path planning

The unsteady diffusion equation strategy is used to plan the path in the extracted free space [12]. The following equation describes the diffusion from the starting point x_0 to the goal x_G:

    \frac{\partial u}{\partial t} = a^2 \nabla^2 u - g \, u, \qquad u(t, x_G) = 1, \; x_G \in \mathbb{R}^n    (2)

The distribution of the concentration u(t, x) for a given room structure and the resulting path is shown in figure 9.

Figure 9: The distribution of the concentration u(t, x) and the planned path.

The resulting paths are approximated by straight line segments generated by algorithms used in computer vision. The pixel chain is first subdivided at the point where it deviates most from the straight line joining its endpoints. Subdivision repeats recursively until the edge is reduced to short segments representing the minimal way length [14]. The algorithm can be optimized for minimal deviation from the planned path or for optimal way lengths (fig. 10).

Figure 10: Path approximation with straight line segments.

The free space is rasterized with a 10 cm x 10 cm grid for which the diffusion is computed. We use a non-oscillating formulation (equation 4) to compute the diffusion:

    \tilde{u}_{k+1,r} = \frac{1}{1 + M} \left( \sum_{m=1}^{M} u_{k,m} + u_{k,r} \right) - g \, u_{k,r}    (3)

    u_{k+1,r} = \begin{cases} \text{eqn. (3)} & \text{for } r \in \Omega_0 \\ 0 & \text{for } r \in \Omega_1 \\ 1 & \text{for } r = r_G \end{cases}    (4)

The value M describes the number of neighbors of a grid element, Ω_0 is the free space, Ω_1 are the grid elements containing obstacles, and r_G is the goal.

5 Results

The applied diffusion equation strategy requires a high computational effort, but it generates paths at a secure distance from the objects in the world. It is an iterative approach, and the quality of the generated paths increases with the number of computed iterations. The time restrictions require a compromise between the quality and the speed of the path planning. Figure 11 shows the generated paths for different numbers of iterations. The path generation takes ca. 2.5 s for a field containing 8000 grid elements of size 10 cm x 10 cm on a Pentium processor at 90 MHz.

Figure 11: Generated paths after 100 and 400 iterations.

The DLM helps to navigate in an unknown environment and gives valuable hints about the local area. The stored information is used to control the vehicle and the camera orientation. The introduced strategy allows a maximum increase of the information content and helps to explore the local area efficiently.

6 Future work

The introduced system operates in a global coordinate system. It is difficult to keep the position error of the modelled objects small over long distances: operation far away from the origin of the coordinate system is afflicted with errors in the dead-reckoning and the modelled world. We are now changing our models to local maps of the environment referenced to significant objects in the local area instead of a global reference. This step requires a robust object recognition to identify these reference objects, so that the system can switch to the appropriate coordinate system when a new area is approached. The object recognition operating on three-dimensional features stored in the DLM is already implemented. A topological modeling is aspired to, where the distance between the particular local maps does not need to be known exactly.
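For reference, the non-oscillating update from section 4.3 (equations 3 and 4) can be written as a compact grid iteration. The following Python sketch assumes a 2D grid with 4-neighborhoods and an illustrative damping value g; it is a reconstruction of the scheme, not the implementation used on MARVIN.

```python
import numpy as np

def plan_diffusion(free: np.ndarray, goal: tuple, iterations: int = 400, g: float = 0.01):
    """Non-oscillating diffusion iteration (eqns. 3 and 4) on a 2D grid.
    free[r] is True for free-space cells (Omega_0) and False for obstacle cells (Omega_1).
    Returns the concentration field u after the given number of iterations."""
    u = np.zeros(free.shape, dtype=float)
    u[goal] = 1.0
    for _ in range(iterations):
        # Sum over the M = 4 direct neighbors (grid border padded with zeros).
        padded = np.pad(u, 1)
        neighbor_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                        padded[1:-1, :-2] + padded[1:-1, 2:])
        M = 4
        u_tilde = (neighbor_sum + u) / (1.0 + M) - g * u   # equation (3)
        u = np.where(free, u_tilde, 0.0)                   # equation (4): obstacles stay 0
        u[goal] = 1.0                                      # equation (4): goal stays 1
    return u

def descend_to_goal(u: np.ndarray, start: tuple):
    """Follow the steepest ascent of the concentration from the start cell towards the goal."""
    path, r = [start], start
    for _ in range(u.size):
        neighbors = [(r[0] + dr, r[1] + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r[0] + dr < u.shape[0] and 0 <= r[1] + dc < u.shape[1]]
        nxt = max(neighbors, key=lambda n: u[n])
        if u[nxt] <= u[r]:
            break
        path.append(nxt)
        r = nxt
    return path
```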

Acknowledgment

The work presented in this paper was supported by the Deutsche Forschungsgemeinschaft as part of an interdisciplinary research project on "Information Processing in Autonomous Mobile Robots" (SFB 331).

References

[1] N. Ayache and F. Lustman. Fast and Reliable Passive Trinocular Stereovision. Proceedings 1. Int. Conf. on Computer Vision, pages 165-243.
[2] D. Burschka and C. Eberst. Exploration of Unknown or Partially Known Environments. In 2. Asian Conference on Computer Vision, pages (II) 727-731, Dec.
[3] D. Burschka, C. Eberst, and C. Robl. Vision Based Model Generation for Indoor Environments. Proc. IEEE Int. Conf. on Robotics and Automation, pages 1940-1945.
[4] C. Eberst and J. Sicheneder. Generation of hypothetical landmarks supporting fast object recognition with autonomous mobile robots. In Proc. IEEE/RSJ Int. Conf. on Intelligent Robot Systems, Nov.
[5] Oliver Faugeras. Three-Dimensional Computer Vision. The MIT Press, Cambridge, Massachusetts.
[6] A. Hauck and N. O. Stöffler. A hierarchic world model supporting video-based localisation, exploration and object identification. In 2. Asian Conference on Computer Vision, Singapore, 5.-8. Dec., pages (III) 176-180.
[7] J. Horn and A. Ruß. Localization System for a Mobile Robot based on a 3D-Laser-Range-Camera and an Environmental Model. Proc. IEEE Int. Conf. on Intelligent Vehicles, Paris.
[8] Xavier Lebegue and J. K. Aggarwal. Automatic Creation of Architectural CAD Models. Proceedings: Second CAD-based Vision Workshop, Seven Springs, PA, pages 82-89, February 8-.
[9] Günter Magin, Achim Ruß, Darius Burschka, and Georg Färber. A dynamic 3D environmental model with real-time access functions for use in autonomous mobile robots. Robotics and Autonomous Systems, 14:119-131.
[10] B. Ch. Ooi. Efficient Query Processing in Geographic Information Systems. Lecture Notes in Computer Science No. 471, Springer Verlag.
[11] Achim Ruß. An Environmental Model with Real-Time Access Functions for Use in Autonomous Mobile Robots operating in Complex Structured Manufacturing Environments. ROVPIA '94, Ipoh, May.
[12] G. Schmidt and K. Azarm. Mobile Robot Navigation in a Dynamic World Using an Unsteady Diffusion Equation Strategy. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 642-647.
[13] Roger Y. Tsai. A Versatile Camera Calibration Technique for High Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses. IEEE Journal of Robotics and Automation, RA-3(4):323-344, August.
[14] Ch. M. Williams. An efficient algorithm for the piecewise linear approximation of planar curves. Computer Graphics and Image Processing, 8:286-293.


Efficient Surface and Feature Estimation in RGBD Efficient Surface and Feature Estimation in RGBD Zoltan-Csaba Marton, Dejan Pangercic, Michael Beetz Intelligent Autonomous Systems Group Technische Universität München RGB-D Workshop on 3D Perception

More information

STEREO-VISION SYSTEM PERFORMANCE ANALYSIS

STEREO-VISION SYSTEM PERFORMANCE ANALYSIS STEREO-VISION SYSTEM PERFORMANCE ANALYSIS M. Bertozzi, A. Broggi, G. Conte, and A. Fascioli Dipartimento di Ingegneria dell'informazione, Università di Parma Parco area delle Scienze, 181A I-43100, Parma,

More information

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the

More information

Lecture 17: Solid Modeling.... a cubit on the one side, and a cubit on the other side Exodus 26:13

Lecture 17: Solid Modeling.... a cubit on the one side, and a cubit on the other side Exodus 26:13 Lecture 17: Solid Modeling... a cubit on the one side, and a cubit on the other side Exodus 26:13 Who is on the LORD's side? Exodus 32:26 1. Solid Representations A solid is a 3-dimensional shape with

More information

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Welcome to the lectures on computer graphics. We have

More information

Learning Semantic Environment Perception for Cognitive Robots

Learning Semantic Environment Perception for Cognitive Robots Learning Semantic Environment Perception for Cognitive Robots Sven Behnke University of Bonn, Germany Computer Science Institute VI Autonomous Intelligent Systems Some of Our Cognitive Robots Equipped

More information

An Automatic Method for Adjustment of a Camera Calibration Room

An Automatic Method for Adjustment of a Camera Calibration Room An Automatic Method for Adjustment of a Camera Calibration Room Presented at the FIG Working Week 2017, May 29 - June 2, 2017 in Helsinki, Finland Theory, algorithms, implementation, and two advanced applications.

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Finite-Resolution Simplicial Complexes

Finite-Resolution Simplicial Complexes 1 Finite-Resolution Simplicial Complexes Werner Hölbling, Werner Kuhn, Andrew U. Frank Department of Geoinformation Technical University Vienna Gusshausstrasse 27-29, A-1040 Vienna (Austria) frank@geoinfo.tuwien.ac.at

More information

EE565:Mobile Robotics Lecture 3

EE565:Mobile Robotics Lecture 3 EE565:Mobile Robotics Lecture 3 Welcome Dr. Ahmad Kamal Nasir Today s Objectives Motion Models Velocity based model (Dead-Reckoning) Odometry based model (Wheel Encoders) Sensor Models Beam model of range

More information