In: Proc. of the 3rd Asian Conference on Computer Vision, Hong Kong, Vol. I, January 1998.

Identification of 3D Reference Structures for Video-Based Localization*

Darius Burschka and Stefan A. Blum
Laboratory for Process Control and Real-Time Systems
Technische Universität München
D-80333 München, Germany

Abstract. The bootstrap problem of self-localization in indoor environments is a demanding task for initial localization and topological navigation. The ability to determine the position in an a priori known or already explored environment allows unsupervised use of mobile robots in environments such as private households. This paper presents our approach to identifying a given set of known reference structures in a three-dimensional map of the local environment. This map is constructed from the data extracted by a line-based stereo camera system mounted on a mobile vehicle. We present the method used to identify objects and to compute the vehicle's position in a world frame.

1 Motivation

Navigation in indoor environments requires dependable knowledge about the robot's position and a precise model of the environment. Dynamic changes in the environment caused by human influence or the operation of other autonomous mobile vehicles increase the differences between the sensor data and the model prediction that is based on an a priori model of the environment. The a priori model becomes gradually useless [4, 5] as the environment changes. The explored information is often incomplete and uncertain. The significance of the correct interpretation decreases compared to other possible matches. Therefore, continuous exploration of the changes in the environment is necessary. In [2], a three-dimensional model capable of storing the changing sensor data during an exploration of the environment was presented. The three-dimensional lines are stored at their geometrical position and are used for prediction of future sensor readings or for obstacle avoidance.
All objects are referenced to a local structure visible in this region. The size of this local region is restricted by the range of the applied sensor or the structure of the environment. For example, the walls of a room can reduce the possible size of the local region defined by the sensor range.

* The work presented in this paper was supported by the Deutsche Forschungsgemeinschaft as a part of an interdisciplinary research project on "Information Processing in Autonomous Mobile Robots" (SFB 331).
2 Sensor Feature Extraction

2.1 Geometrical Constraints

The three-dimensional information can be retrieved from a pair of images if the correspondence between extracted elements (e.g., lines) is established and the exact position of the cameras is known. The number of possible matching candidates can be reduced by applying constraints such as the epipolar constraint, uniqueness, and continuity [3]. Our sensor system computes the 3D information for the endpoints of the detected lines [1]. It is almost impossible to align the two optical systems precisely and stably in a parallel arrangement. This orientation error must be taken into account to get accurate results. Most algorithms compute the geometry of the epipolar lines for a given situation. We propose an alternative approach that transforms the original data into the parallel case with minimum computation. For example, with the projected image coordinates x_p = x/z and y_p = y/z, a rotation φ around the y-axis results in the following transformation of the image coordinates:

x_p' = (x cos φ + z sin φ) / (−x sin φ + z cos φ) = (x_p cos φ + sin φ) / (−x_p sin φ + cos φ),
y_p' = y / (−x sin φ + z cos φ) = y_p / (−x_p sin φ + cos φ).    (1)

The rotation around the z-axis results in a simple rotation of the image coordinates. The advantage of these equations is that the world coordinates of the projected obstacles may remain unknown. We use the equations (1) to transform the camera system to the parallel-cameras case. The initial orientation errors are estimated in an off-line calibration of extrinsic and intrinsic camera parameters [7]. The orientation of the cameras may change during a mission due to vibrations and camera-repositioning errors. It is corrected in an on-line re-calibration method based on the explored information [2].

2.2 Virtual 3D Sensor

In our system we use a model of a "virtual 3D sensor" situated between the two cameras. The properties of this sensor can be achieved by different real sensors.
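The transformation to the parallel case can be sketched directly from equation (1); the function below is a minimal sketch assuming normalized pinhole coordinates x_p = x/z, y_p = y/z:

```python
import math

def rectify_point(xp, yp, phi):
    """Map projected image coordinates (xp, yp) = (x/z, y/z) to the
    coordinates they would have after rotating the camera by phi
    around the y-axis (equation (1)).  Note that the world
    coordinates (x, y, z) never need to be known."""
    denom = -xp * math.sin(phi) + math.cos(phi)
    xp_new = (xp * math.cos(phi) + math.sin(phi)) / denom
    yp_new = yp / denom
    return xp_new, yp_new
```

For φ = 0 the point is returned unchanged, which is the consistency check one would expect for already-parallel cameras.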
It is possible to replace the stereo system by any other sensor capable of reconstructing three-dimensional lines.

3 Dynamic Local Feature Map

3.1 Possible Input Sources

The Dynamic Local Map (DLM) stores a local region at the abstraction level of the applied sensor system to support topological modeling. The data stored in the DLM come from different sources [2]. The most important source is the sensor system, which registers the recent changes in the environment. Another source is a global model of the environment that stores reliable information generated from CAD models or previously verified lines. The map also stores hypothetical features of the environment generated by the Predictive Spatial Completion (PSC) module. These hypothetical features are based on statistics and are used to control the path planning and to support the sensor system.
3.2 Data Formats

Currently, we store only line-shaped features described by their three-dimensional endpoints and orientation. This information is refined in consecutive steps. In addition, each feature in the map is described by its confidence and accuracy. This information is used in navigation tasks to decide which features should be employed for localization to reduce the errors caused by 3D lines resulting from false correspondences in the stereo system.

3.3 Internal Structure

Flaws in the sensor feature extraction can be reduced by comparing newly explored data with the former sensor readings stored in the DLM. The DLM stores a local region of the environment described by the three-dimensional lines. Multiple and partially contradictory requirements forced us to develop a multi-level indexing structure. The upper layers consist of a two-dimensional grid containing octrees to store the explored features efficiently and to minimize time-consuming memory transfers for feature updates or localization changes [1, 2]. This structure adapts to the features' distribution in the local environment.

3.4 Update of the Stored Information

Our fast reconstruction of the 3D information is based on fast interaction with the DLM. The possible match candidates for a given feature exceeding a specified quality value are stored in the DLM. Therefore, the DLM also stores false information. In consecutive sensor readings this information is verified from different positions. False features are deleted if they cannot be verified. Therefore, it is important that our sensor data processing not be free-running, but be triggered from the planning instance. In this way, sensor readings from different positions are guaranteed. Multiple readings from the same position would also stabilize the false features. This procedure permits the handling of moving objects in the environment. Each endpoint is described by its precision. The precision p of a detected endpoint depends on its distance from the camera.
The precision of an endpoint describes its maximum position error and is adjusted each time the feature can be verified. The feature is specified by a confidence value c for its existence. The confidence value c is modified with an exponential function

c(f) = 1 − e^{−g(f_old + f)} = 1 − (1 − c_old) · e^{−g·f}

each time it can be matched. The value f describes the confidence in the current step. It may vary from 1 (only one matching candidate) to 0 (useless information). The value g describes the speed at which this value is changed. It must be adapted to the applied matching algorithm.

4 Recognition

Our approach is based on a hypothesize-and-test strategy underlying an interpretation tree. Because of the high computational cost of these algorithms, several tests for truncation are added, including rigidity and visibility constraints. Therefore, a relevant-first search is implied. In our application, a set O = {O_1, O_2, …, O_r, …, O_R} of relevant objects that are described with 3D line
segments is presumed. Therefore, an object O_r consists of a set of segment lines S_r = {S_1, S_2, …, S_j, …, S_m}². Our approach to identifying these a priori known reference structures from a list of line features S = {S_1, S_2, …, S_i, …, S_n} delivered from the DLM is composed of two parts: an off-line object preprocessing to gain a minimal set of relevant line-segment triplets, and an on-line identification triggered by these triplets.

4.1 Preprocessing

Our approach to preprocessing the reference objects is to find some of their properties that are not explicitly derivable from their geometric description. We propose to use only the most relevant features for the generation of an object's hypotheses. Grouping three linearly independent line features at a time seems to be suitable for this application (section 4.2). The visibility of the groups is decided from a set of synthetic projections P = {P_1, P_2, …, P_p, …, P_Z} of the object's CAD model generated for different aspects. A virtual camera is positioned at different locations on a spherical surface enclosing the model. A facility to exclude regions on the sphere's surface is provided. The visibility of each single line segment is determined for each projection with a Z-buffering algorithm. Because only discrete points on the surrounding sphere are selected, a binary criterion v for the visibility of a line feature is sufficient:

v_pjr = 1, if 70% of object O_r's segment j is visible in projection p; 0, otherwise.    (2)

We created a simple heuristic calculation of a rating R_j for each line segment S_j, consisting of a rating r_l of the segment length l_j, an evaluation of the information content derived from a term A_jj³, and the number of aspects in which the segment is visible over all projections:

R_j = r_l(l_j) · (ld(mx_j / A_jj) + 1) · (1/Z) Σ_{p=1}^{Z} v_pj.    (3)

The obtained visibility information is used for the generation of triple line groups for each object.
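The binary visibility criterion (2) and the visibility term of the segment rating (3) can be sketched as follows; the per-projection visible fraction is assumed to be delivered by the Z-buffer pass, and the 70% threshold follows the reconstruction of (2):

```python
def visibility(visible_fraction, threshold=0.7):
    """Binary visibility criterion (2): a segment counts as visible in a
    projection when at least 70% of it survives the Z-buffer test."""
    return 1 if visible_fraction >= threshold else 0

def visibility_score(fractions):
    """Averaged visibility (1/Z) * sum_p v_pj over all Z synthetic
    projections -- the last factor of the segment rating (3)."""
    return sum(visibility(f) for f in fractions) / len(fractions)
```

A segment visible in three of four synthetic projections thus contributes a visibility factor of 0.75 to its rating.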
At least one visible segment triplet must exist in each projection P_p. The number T of chosen triplets should be kept as small as possible⁴. A generated triplet is supposed to describe the sight of an object uniquely. The distances and angles between the line segments should be as large as possible, combined with a high rating R_j. If possible, a segment line should be used only once for triplet generation, because even highly rated segment lines may not be detected, due to poor illumination for example. This leads to another heuristic rating for each triplet combination T̂_u:

U_u = (R_u1 + R_u2 + R_u3) · |det(M_u)| · (d_u1,u2 + d_u2,u3 + d_u3,u1),    (4)

² In case of uniqueness, the index r will be left out in the following.
³ For the occlusion check (described below), a plane-based description is also necessary.
⁴ The cost of the on-line identification is proportional to T.
where M_u is the matrix of the unit direction vectors and d_uk,ul is the distance between the two line segments Ŝ_uk and Ŝ_ul⁵. An iterative strategy is applied to extract the best triplet set, reducing stepwise the number of projections and resulting in 3 to 9 relevant triplets.

4.2 Identification

Clustering of Image Features. The lines delivered from the DLM are clustered first. The applied clustering algorithm groups neighboring, connected segments, tolerating small errors. Clusters consisting of only a few line segments are deleted. The remaining position clusters are processed in another clustering step to build groups of approximately parallel segments, called direction clusters. At least three direction clusters must emerge from each position cluster; otherwise the corresponding position cluster is deleted. This condition is necessary for a unique determination of the object's pose.

Initial Guess for the Object's Pose. All emerged segment triplets are searched, primarily to build a correspondence tree for each position cluster without pose restrictions. Neighborhood relationships such as the angle and distance between two segment lines [6] are criteria for establishing correspondences between image and reference segments. Matching candidates for the second and third feature of a group must be searched only in certain direction clusters, reducing the number of required comparisons. A prehypothesis is established if all three reference segments of a triplet could be associated. Based on the prehypothesis we are able to guess the pose and orientation of the reference object. We define two 3×3 matrices K and K′ that store the unit direction vectors of the image and the reference triplets. A transformation between the reference frame and the image frame can be expressed as a matrix R = K K′⁻¹, which is, in the best case, an orthogonal rotation matrix, in which small errors are tolerated. R is normalized in order to compute the three rotational degrees of freedom (DOF).
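The initial orientation guess from a matched triplet can be sketched as follows. Normalizing R to the nearest orthogonal matrix via an SVD is an assumption here; the paper states only that R is normalized, not how:

```python
import numpy as np

def guess_rotation(image_dirs, reference_dirs):
    """Estimate the rotation between a reference triplet and an image
    triplet from their unit direction vectors, following the matrix
    construction R = K K'^-1 described in the text, then projecting
    onto the closest orthogonal rotation matrix."""
    K = np.column_stack(image_dirs)       # 3x3, image triplet directions
    Kp = np.column_stack(reference_dirs)  # 3x3, reference triplet directions
    R = K @ np.linalg.inv(Kp)             # maps reference frame to image frame
    U, _, Vt = np.linalg.svd(R)           # orthogonalize: tolerate small errors
    R_orth = U @ Vt
    if np.linalg.det(R_orth) < 0:         # guard against a reflection solution
        U[:, -1] *= -1
        R_orth = U @ Vt
    return R_orth
```

With noise-free directions the estimate reproduces the true rotation exactly; with small measurement errors the SVD step absorbs the deviation from orthogonality.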
The remaining three translational DOF are calculated by simply using a reference point for each reference triplet and associated image triplet. The translation can be determined by T = a + S − b, where a denotes the offset between the segment triplet's center and the object's center in the object frame. S is the offset between the position of the center C_T of the model segment triplet in the object frame and the center C_T′ of the associated triplet in the image space. Because b = R a, the translation can be written as T = S + (I − R) a, where I is the identity matrix.

Hypothesis Generation. The probability of the existence of a certain object depends on the existence and, especially, on the accuracy of the associations of the predicted features. A segment-wise linear accuracy function a(e) is defined by the orientation accuracy a_φ(φ), the longitudinal accuracy a_l(u), and the parallel accuracy a_p(d).

⁵ Note that the number of possible combinations is about 0.125 m³, assuming that the probability that a single line feature is visible in a single projection is about 0.5.
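The accuracy evaluation can be sketched as follows; the linear ramp profiles below are hypothetical (the paper does not give the exact shapes of a_φ, a_l, and a_p), but the product combination is the one used for the total accuracy of a pairing:

```python
def linear_accuracy(error, max_error):
    """Hypothetical segment-wise linear accuracy term: 1 for a perfect
    match, falling linearly to 0 at the tolerated maximum error."""
    return max(0.0, 1.0 - abs(error) / max_error)

def total_accuracy(phi, u, d, phi_max, u_max, d_max):
    """Total accuracy A_ji of a pairing: the product of the orientation,
    longitudinal, and parallel accuracy terms.  Any zero factor makes
    the whole product zero, i.e. the association is rejected."""
    return (linear_accuracy(phi, phi_max)
            * linear_accuracy(u, u_max)
            * linear_accuracy(d, d_max))
```

Multiplying the three terms means a pairing is only kept when all three error measures stay within their tolerances, matching the rule that only non-zero accuracy values effect an association.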
A total accuracy A_ji of a pairing of a predicted line segment S̃_j and an image line segment S_i is obtained by

A_ji = a_φ(φ) · a_l(u) · a_p(d).    (5)

Any non-zero accuracy value effects an association between S_j and S_i, which is kept. The obtained accuracy of a model line segment S_j is denoted as A_j. A probability P of the object's prehypothesis is calculated by

P = ( Σ_{j=1}^{m} R_j A_j v_p*j ) / ( Σ_{j=1}^{m} R_j v_p*j ).    (6)

A probability P′ of the hypothesis is created with v_p*j := 1 ∀j by using (6). Only if P and P′ exceed certain thresholds will a hypothesis be generated.

Hypothesis Pruning. Hypotheses belonging to the same object are fused if they are consistent with respect to their poses. Then their probability is recalculated. Often, two or more hypotheses are generated for different but similar objects at a similar pose. Even if an object has only moderate symmetries, different hypotheses for its orientation may be created. In our approach, the object's hypothesis with the highest probability survives. The threshold for recognition P_reg has to be chosen high in order to keep the probability of misdecisions low.

Correction of the Recognized Object's Pose. The method of gradient descent is applied to minimize a squared error sum S(t_x, t_y, t_z, α, β, γ), which is defined by

S = (1/m) Σ_{i=1}^{m} (|M_i1 B_i1|² + |M_i2 B_i2|²) = (1/m) Σ_{i=1}^{m} (d_i1² + d_i2²),    (7)

where m is the number of predicted model segment lines to which any image line can be associated⁶.

Results of Recognition. The object recognition algorithm was tested on several objects by measuring the hypothesis' probability under different degrees of distortion. Here, a parallel shift error distorting the image features within a range of [0, p_max] and an orientation error of their directions within a range of [0, δ_max] were simulated. The maximal errors were varied between 0 cm and 15 cm in 5 cm increments and between 0° and 12° in 2° increments, respectively.
In order to normalize the results and keep them independent of a certain aspect, all features of an object's model were registered into the DLM. The hypothesis' probability was averaged over 25 measurements at a time; the bounds for the parallel accuracy were selected as 1 cm ≤ d_min ≤ 5 cm, and the orientation accuracy as 5° ≤ φ_min.

⁶ m is used only as a scale factor and has no influence on the result of the gradient descent method.

The threshold for
the prehypothesis' probability P was chosen to be zero to enable the desired behavior as stated above. Fig. 1 shows the testing results for the object "quader" (a) and the object "cubicle" (c). The influence of a threshold was also examined, whereby the value 0.75 has to be exceeded for a hypothesis' probability to count as a recognized object.

Fig. 1. Probability of the objects' hypotheses: hypothesis probability (a, c) and recognition rate (b, d) over the simulated maximal errors p_max and δ_max.

5 Localization

The pose estimation of the mobile robot is performed by comparing an object's pose referred to the exploration frame with its reference pose in the landmark description. A recognized object Ô_b consistent with its stored position and orientation (|z_r − z_l| ≤ z_max, |α_r − α_l| ≤ α_max and |φ_r − φ_l| ≤ φ_max)⁷ generates a local hypothesis L_b. Fig. 2 shows the relationships between the frame of a local map and the exploration frame for a single uniquely recognized object. The transformation parameters (X, Y, φ) stored in the local hypothesis are calculated as:

(X)   (X_l)   (cos φ  −sin φ) (x_r)
(Y) = (Y_l) − (sin φ   cos φ) (y_r),    φ = α_l − α_r.    (8)

Fig. 2. Affine transformations between the local map frame, the exploration frame, the mobile robot, and the recognized object.

Hence, if local hypotheses can be generated for similar transformation parameters, an algorithm to fuse those candidates will be applied. If both of the following conditions are true, two local hypotheses L_i and L_j will be combined into a new local hypothesis denoted as L_ij:

D_ij = |(X_i, Y_i)ᵀ − (X_j, Y_j)ᵀ| ≤ D_max  and  φ_ij = |φ_i − φ_j| ≤ φ_max.    (9)

The thresholds D_max and φ_max can be chosen depending on the required accuracy of the localization. The newly generated combined local hypothesis L_ij is provided with weighted mean values. Its quality is defined by

Q_ij = (Q_i + Q_j) · (1 − D_ij / (4 D_max) − φ_ij / (4 φ_max)).    (10)

⁷ r-indexed parameters are recognized parameters and l-indexed are stored landmark parameters.
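The consistency test (9) and the quality update (10) can be sketched as follows; combining the poses as quality-weighted means is an assumption, since the paper states only that weighted mean values are used:

```python
import math

def fuse_local_hypotheses(h_i, h_j, d_max, phi_max):
    """Fuse two local hypotheses (X, Y, phi, Q) according to the
    consistency conditions (9) and the quality update (10).
    Returns None when the hypotheses are not consistent."""
    xi, yi, phi_i, qi = h_i
    xj, yj, phi_j, qj = h_j
    d_ij = math.hypot(xi - xj, yi - yj)   # position difference, eq. (9)
    phi_ij = abs(phi_i - phi_j)           # orientation difference, eq. (9)
    if d_ij > d_max or phi_ij > phi_max:
        return None                       # not consistent, keep separate
    w = qi + qj                           # quality-weighted mean pose
    x = (qi * xi + qj * xj) / w
    y = (qi * yi + qj * yj) / w
    phi = (qi * phi_i + qj * phi_j) / w
    q = (qi + qj) * (1.0 - d_ij / (4.0 * d_max) - phi_ij / (4.0 * phi_max))
    return (x, y, phi, q)
```

Two identical hypotheses fuse into one whose quality is the sum of the individual qualities, so mutually confirming landmarks raise the combined quality.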
This fusing procedure is repeated until all local hypotheses are combined. As a result, a set of combined and uncombined local hypotheses W = {L_1, L_2, …, L_b, …, L_B} emerges. The highest-qualified local hypothesis is denoted as L_b* = (X*, Y*, φ*). The resulting transformation parameters (X*, Y*, φ*) are used to transform the stored DLM content. To determine the 3 DOFs (X_m, Y_m, φ_m) of the mobile robot referred to the local map frame from (X*, Y*, φ*), an affine transformation similar to (8) is applied.

Fig. 3. The local hypotheses in an exemplary scenario.

This strategy was tested in a project room consisting of various table groups, walls, cubicles, and other items (Fig. 3). The four landmarks A–D were stored a priori in the representation of the robot. The object recognition algorithm delivered five identified objects (1)–(5), whereby object (5) was a misinterpretation⁸. As a result, eight local hypotheses emerged. Landmarks A and B produced correct local hypotheses in each case. Because landmarks C and D are related to the same object and three interpretations were created, six further local hypotheses were generated.

⁸ The recognized object was a smaller table that did not belong to the a priori knowledge of the scenario.

6 Future Work

We plan to enhance particularly the precision of the explored information by fusing camera data with data retrieved from the laser range finder. A mission expert for topological navigation will be developed to use the presented tools.

References

1. D. Burschka, C. Eberst, and C. Robl. Vision Based Model Generation for Indoor Environments. In ICRA'97, pages 1940–1945, 1997.
2. D. Burschka and G. Färber. Active Controlled Exploration of 3D Environmental Models Based on a Binocular Stereo System. In ICAR'97, pages 971–977, Monterey, California, USA, July 1997.
3. Olivier Faugeras. Three-Dimensional Computer Vision. The MIT Press, Cambridge, Massachusetts, 1993.
4. A. Hauck and N. O. Stöffler. A hierarchic world model supporting video-based localisation, exploration and object identification. In 2nd Asian Conference on Computer Vision, Singapore, 5.–8. Dec. 1995, pages (III) 176–180.
5. Günter Magin, Achim Ruß, Darius Burschka, and Georg Färber. A dynamic 3D environmental model with real-time access functions for use in autonomous mobile robots. Robotics and Autonomous Systems, 14:119–131, 1995.
6. G. Stockman. Object recognition and localization via pose clustering. Computer Vision, Graphics, and Image Processing, 40:361–387, 1987.
7. Roger Y. Tsai. A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses. IEEE Journal of Robotics and Automation, RA-3(4):323–344, August 1987.
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationPrecise Omnidirectional Camera Calibration
Precise Omnidirectional Camera Calibration Dennis Strelow, Jeffrey Mishler, David Koes, and Sanjiv Singh Carnegie Mellon University {dstrelow, jmishler, dkoes, ssingh}@cs.cmu.edu Abstract Recent omnidirectional
More informationROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW
ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,
More informationMeasurement and Precision Analysis of Exterior Orientation Element Based on Landmark Point Auxiliary Orientation
2016 rd International Conference on Engineering Technology and Application (ICETA 2016) ISBN: 978-1-60595-8-0 Measurement and Precision Analysis of Exterior Orientation Element Based on Landmark Point
More informationSeminar Heidelberg University
Seminar Heidelberg University Mobile Human Detection Systems Pedestrian Detection by Stereo Vision on Mobile Robots Philip Mayer Matrikelnummer: 3300646 Motivation Fig.1: Pedestrians Within Bounding Box
More informationA thesis submitted in partial fulllment of. the requirements for the degree of. Bachelor of Technology. Computer Science and Engineering
R N O C A thesis submitted in partial fulllment of the requirements for the degree of Bachelor of Technology in Computer Science and Engineering Rahul Bhotika Anurag Mittal Supervisor : Dr Subhashis Banerjee
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationCamera Calibration with a Simulated Three Dimensional Calibration Object
Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek
More informationStability Study of Camera Calibration Methods. J. Isern González, J. Cabrera Gámez, C. Guerra Artal, A.M. Naranjo Cabrera
Stability Study of Camera Calibration Methods J. Isern González, J. Cabrera Gámez, C. Guerra Artal, A.M. Naranjo Cabrera Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería
More informationLIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION
F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,
More informationZ (cm) Y (cm) X (cm)
Oceans'98 IEEE/OES Conference Uncalibrated Vision for 3-D Underwater Applications K. Plakas, E. Trucco Computer Vision Group and Ocean Systems Laboratory Dept. of Computing and Electrical Engineering Heriot-Watt
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationTopological Mapping. Discrete Bayes Filter
Topological Mapping Discrete Bayes Filter Vision Based Localization Given a image(s) acquired by moving camera determine the robot s location and pose? Towards localization without odometry What can be
More informationSensor Modalities. Sensor modality: Different modalities:
Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature
More information3D Reconstruction of a Hopkins Landmark
3D Reconstruction of a Hopkins Landmark Ayushi Sinha (461), Hau Sze (461), Diane Duros (361) Abstract - This paper outlines a method for 3D reconstruction from two images. Our procedure is based on known
More informationArm coordinate system. View 1. View 1 View 2. View 2 R, T R, T R, T R, T. 12 t 1. u_ 1 u_ 2. Coordinate system of a robot
Czech Technical University, Prague The Center for Machine Perception Camera Calibration and Euclidean Reconstruction from Known Translations Tomas Pajdla and Vaclav Hlavac Computer Vision Laboratory Czech
More informationModel-based segmentation and recognition from range data
Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationMOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS
MOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS ZHANG Chun-sen Dept of Survey, Xi an University of Science and Technology, No.58 Yantazhonglu, Xi an 710054,China -zhchunsen@yahoo.com.cn
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationCompositing a bird's eye view mosaic
Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition
More informationDISTANCE MEASUREMENT USING STEREO VISION
DISTANCE MEASUREMENT USING STEREO VISION Sheetal Nagar 1, Jitendra Verma 2 1 Department of Electronics and Communication Engineering, IIMT, Greater Noida (India) 2 Department of computer science Engineering,
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationCamera Model and Calibration
Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationComputer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.
Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview
More informationand implemented in parallel on a board with 3 Motorolla DSP's. This board, part of the DMA machine, has been designed and built jointly by INRIA
In Vision-based Vehicle Guidance, Springer, New York, 1992 edited by I. Masaki, Chapter 13, pages 268--283 Obstacle Avoidance and Trajectory Planning for an Indoors Mobile Robot Using Stereo Vision and
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More informationRobot Localization based on Geo-referenced Images and G raphic Methods
Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,
More informationReal-Time Self-Localization in Unknown Indoor Environments using a Panorama Laser Range Finder
Real-Time Self-Localization in Unknown Indoor Environments using a Panorama Laser Range Finder Tobias Einsele Laboratory for Process Control and Real Time Systems Prof Dr Ing Georg Färber Technische Universität
More informationSimultaneous Localization and Mapping
Sebastian Lembcke SLAM 1 / 29 MIN Faculty Department of Informatics Simultaneous Localization and Mapping Visual Loop-Closure Detection University of Hamburg Faculty of Mathematics, Informatics and Natural
More information3D Models and Matching
3D Models and Matching representations for 3D object models particular matching techniques alignment-based systems appearance-based systems GC model of a screwdriver 1 3D Models Many different representations
More informationLecture 9: Epipolar Geometry
Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2
More informationStereo-Based Obstacle Avoidance in Indoor Environments with Active Sensor Re-Calibration
Stereo-Based Obstacle Avoidance in Indoor Environments with Active Sensor Re-Calibration Darius Burschka, Stephen Lee and Gregory Hager Computational Interaction and Robotics Laboratory Johns Hopkins University
More informationHorus: Object Orientation and Id without Additional Markers
Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky
More informationA General Framework for Image Retrieval using Reinforcement Learning
A General Framework for Image Retrieval using Reinforcement Learning S. Srisuk 1, R. Fooprateepsiri 3, M. Petrou 2, S. Waraklang 3 and K. Sunat 1 Department of Computer Engineering 1, Department of Information
More information1998 IEEE International Conference on Intelligent Vehicles 587
Ground Plane Obstacle Detection using Projective Geometry A.Branca, E.Stella, A.Distante Istituto Elaborazione Segnali ed Immagini - CNR Via Amendola 166/5, 70126 Bari Italy e-mail: [branca,stella,distante]@iesi.ba.cnr.it
More informationRecognition. Clark F. Olson. Cornell University. work on separate feature sets can be performed in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 907-912, 1996. Connectionist Networks for Feature Indexing and Object Recognition Clark F. Olson Department of Computer
More informationA High Speed Face Measurement System
A High Speed Face Measurement System Kazuhide HASEGAWA, Kazuyuki HATTORI and Yukio SATO Department of Electrical and Computer Engineering, Nagoya Institute of Technology Gokiso, Showa, Nagoya, Japan, 466-8555
More informationCOMPUTER AND ROBOT VISION
VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California
More information3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT
3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT V. M. Lisitsyn *, S. V. Tikhonova ** State Research Institute of Aviation Systems, Moscow, Russia * lvm@gosniias.msk.ru
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationTowards the completion of assignment 1
Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?
More information[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera
[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera Image processing, pattern recognition 865 Kruchinin A.Yu. Orenburg State University IntBuSoft Ltd Abstract The
More informationMatching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.
Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.
More informationLong-term motion estimation from images
Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras
More informationBuilding Reliable 2D Maps from 3D Features
Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;
More informationLaboratory for Computational Intelligence Main Mall. The University of British Columbia. Canada. Abstract
Rigidity Checking of 3D Point Correspondences Under Perspective Projection Daniel P. McReynolds danm@cs.ubc.ca David G. Lowe lowe@cs.ubc.ca Laboratory for Computational Intelligence Department of Computer
More informationMarcel Worring Intelligent Sensory Information Systems
Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video
More informationFrom Structure-from-Motion Point Clouds to Fast Location Recognition
From Structure-from-Motion Point Clouds to Fast Location Recognition Arnold Irschara1;2, Christopher Zach2, Jan-Michael Frahm2, Horst Bischof1 1Graz University of Technology firschara, bischofg@icg.tugraz.at
More informationCamera Calibration for a Robust Omni-directional Photogrammetry System
Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,
More informationA Stochastic Environment Modeling Method for Mobile Robot by using 2-D Laser scanner Young D. Kwon,Jin.S Lee Department of Electrical Engineering, Poh
A Stochastic Environment Modeling Method for Mobile Robot by using -D Laser scanner Young D. Kwon,Jin.S Lee Department of Electrical Engineering, Pohang University of Science and Technology, e-mail: jsoo@vision.postech.ac.kr
More informationExploiting Depth Camera for 3D Spatial Relationship Interpretation
Exploiting Depth Camera for 3D Spatial Relationship Interpretation Jun Ye Kien A. Hua Data Systems Group, University of Central Florida Mar 1, 2013 Jun Ye and Kien A. Hua (UCF) 3D directional spatial relationships
More informationAutomatic Feature Extraction of Pose-measuring System Based on Geometric Invariants
Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Yan Lin 1,2 Bin Kong 2 Fei Zheng 2 1 Center for Biomimetic Sensing and Control Research, Institute of Intelligent Machines,
More informationProduction of Video Images by Computer Controlled Cameras and Its Application to TV Conference System
Proc. of IEEE Conference on Computer Vision and Pattern Recognition, vol.2, II-131 II-137, Dec. 2001. Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System
More informationImproving Vision-Based Distance Measurements using Reference Objects
Improving Vision-Based Distance Measurements using Reference Objects Matthias Jüngel, Heinrich Mellmann, and Michael Spranger Humboldt-Universität zu Berlin, Künstliche Intelligenz Unter den Linden 6,
More informationUser Interface. Global planner. Local planner. sensors. actuators
Combined Map-Based and Case-Based Path Planning for Mobile Robot Navigation Maarja Kruusmaa and Bertil Svensson Chalmers University of Technology, Department of Computer Engineering, S-412 96 Gothenburg,
More informationBinocular Stereo Vision. System 6 Introduction Is there a Wedge in this 3D scene?
System 6 Introduction Is there a Wedge in this 3D scene? Binocular Stereo Vision Data a stereo pair of images! Given two 2D images of an object, how can we reconstruct 3D awareness of it? AV: 3D recognition
More informationRegistration of Moving Surfaces by Means of One-Shot Laser Projection
Registration of Moving Surfaces by Means of One-Shot Laser Projection Carles Matabosch 1,DavidFofi 2, Joaquim Salvi 1, and Josep Forest 1 1 University of Girona, Institut d Informatica i Aplicacions, Girona,
More informationUsing Mean Shift Algorithm in the Recognition of Industrial Data Matrix Codes
Using Mean Shift Algorithm in the Recognition of Industrial Data Matrix Codes ION-COSMIN DITA, VASILE GUI, MARIUS OTESTEANU Politehnica University of Timisoara Faculty of Electronics and Telecommunications
More information