VISION-BASED HANDLING WITH A MOBILE ROBOT
STEFAN BLESSING
TU München, Institut für Werkzeugmaschinen und Betriebswissenschaften (iwb), München, Germany, bl@iwb.mw.tu-muenchen.de

STEFAN LANSER, CHRISTOPH ZIERL
TU München, Institut für Informatik, Forschungsgruppe Bildverstehen (FG BV), München, Germany, {lanser,zierl}@informatik.tu-muenchen.de

ABSTRACT

Mobile systems are becoming more and more important in the area of modern manufacturing. In order to handle an object with a manipulator mounted on an autonomous mobile system (AMS) within a changing environment, the object has to be identified and its 3D pose relative to the manipulator has to be determined with sufficient accuracy, because in general its exact position is not known a priori. The object recognition unit of the presented system accomplishes this 3D pose estimation task using a single CCD camera mounted in the gripper exchange system of a mobile robot. The reliability of the results is checked by an independent fault-detection unit. A recovery unit handles most of the possible faults autonomously, increasing the availability of the system.

KEYWORDS: vision-based handling, autonomous mobile system, 3D object recognition, fault detection and recovery.

INTRODUCTION

The automation of manipulation tasks in manufacturing environments is often based on industrial robots. To make this automation more profitable, a mobile robot can be used to perform manipulations at different places, wherever it is needed. Within a joint research project¹ towards the development of autonomous mobile systems located at the TU München, the mobile robot MobRob has been developed to fulfil manipulation tasks autonomously, even in a changing environment. The required autonomy increases the demands on the sensors of such systems, because both the position of the autonomous mobile system (AMS) and the pose of the object to be grasped are affected by uncertainty.
In order to handle an object with a manipulator mounted on an AMS, the object has to be identified and its 3D pose relative to the manipulator has to be determined with sufficient accuracy. The presented vision-based object recognition system uses images taken from the camera mounted in the gripper exchange system of the mobile robot (see Fig. 1). The system architecture is shown in Fig. 2. The vision-based object recognition unit consists of a recognition and a localization module described in the following section.

¹ This work was supported by the Deutsche Forschungsgemeinschaft within the Sonderforschungsbereich 331, "Informationsverarbeitung in autonomen, mobilen Handhabungssystemen", projects L9 and M2.
Figure 1. (a) MobRob (Mobile Robot) at the Institut für Werkzeugmaschinen und Betriebswissenschaften (iwb) with (b) a CCD camera mounted in the gripper exchange system of the robot. (c) Vision-based grasping of a workpiece.

Figure 2. Closed-loop architecture of the presented grasping system.

The fault-detection module of the error-handling unit presented in the subsequent section analyzes the grabbed image and the result of the object localization. If there are any faults obstructing the correct handling of the object, the recovery module tries to clear the fault autonomously, increasing the availability of the system.

VISION-BASED OBJECT RECOGNITION

In this section the object recognition unit of the proposed system is briefly described. Based on a single image from a calibrated CCD camera, it identifies objects and computes their 3D pose relative to the robot manipulator. For a general introduction to this field of research see e.g. [1] or [9].

Calibration

In order to obtain the 3D object pose from the grabbed video image, the internal camera parameters (mapping the 3D world into pixels) as well as the external camera parameters (pose of the CCD camera relative to the manipulator) have to be determined with sufficient accuracy.

Camera Model. The camera model describes the projection of a 3D point P_W in the scene into the 2D pixel [c, l]^T of the video image of the CCD camera. The proposed approach uses the model of a pinhole camera with radial distortions [11]: it includes a rotation matrix R describing the orientation and a vector T describing the position of the camera in the world (external parameters), and the internal parameters b (effective focal length), κ (distortion coefficient), S_x and S_y (scaling factors), and [C_x, C_y]^T (image center).

Figure 3. The calibration table mounted on the mobile robot seen from different viewpoints with known relative movements of the manipulator.

Figure 4. (a) Estimation of the camera pose (R_C, T_C) relative to the tool center point (hand-eye calibration) based on known relative movements (R_M^k, T_M^k) of the manipulator. (b) Each triangle of the tessellated Gaussian sphere defines a 2D view of an object.

Internal Camera Parameters. In the first stage of the calibration process, the internal camera parameters b, κ, S_x, S_y, and [C_x, C_y]^T are computed by simultaneously evaluating images of a 2D calibration table with N circular marks taken from K different viewpoints, see Fig. 3. This multiview calibration [5] minimizes the distances between the projected 3D midpoints of the marks and the corresponding 2D points in the video images. The 3D pose of the camera (R, T) is estimated during the minimization process. Thus, only the model of the calibration table itself has to be known a priori.

Hand-Eye Calibration. Once the internal camera parameters have been determined, the 3D pose of the camera relative to the tool center point is estimated in the second stage of the calibration process (hand-eye calibration). In the case of a camera mounted on the manipulator of a mobile robot, the 3D pose of the camera (R, T) is the composition of the pose of the robot (R_V, T_V), the relative pose of the manipulator (R_M, T_M), and the relative pose of the camera (R_C, T_C), see Fig. 4(a).
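As an illustration, the projection of the camera model above can be sketched as follows. This is a minimal sketch: the parameter names follow the text, but the simple polynomial radial term and all numeric values are assumptions for illustration, not the exact distortion model of [11].

```python
def project(p_world, R, T, b, kappa, sx, sy, cx, cy):
    """Project a 3D point P_W into the 2D pixel [c, l] of the video image."""
    # external parameters: rotate and translate into the camera frame
    x, y, z = (sum(R[i][j] * p_world[j] for j in range(3)) + T[i]
               for i in range(3))
    # ideal pinhole projection with effective focal length b
    u, v = b * x / z, b * y / z
    # radial distortion (simple polynomial term as a stand-in)
    d = 1.0 + kappa * (u * u + v * v)
    # scaling factors and image center map onto the pixel grid
    return u * d / sx + cx, v * d / sy + cy

# example: undistorted camera looking along the z-axis from 10 units away
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
c, l = project([1.0, 2.0, 0.0], identity, [0.0, 0.0, 10.0],
               b=1.0, kappa=0.0, sx=1.0, sy=1.0, cx=100.0, cy=100.0)
```

During calibration, the same projection is evaluated for the circular marks of the calibration table, and the parameters are adjusted to minimize the reprojection error.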
Performing controlled movements (R_M^k, T_M^k) of the manipulator similar to [12], (R_C, T_C) can be determined by minimizing

    e(x) = Σ_{k=1}^{K} Σ_{i=1}^{N} || s_i^k × c_i(M_i; x; R_M^k, T_M^k) ||² → min

with s_i^k the normalized vector of the line of sight through the 2D point m_i^k in the k-th video image and c_i(M_i; x; ...) the 3D midpoint of a mark on the calibration table transformed into the camera coordinate system; for details see [5]. Since the used 2D calibration table is mounted on the mobile robot itself, the manipulator can move to the different viewpoints for the multiview calibration automatically. Thus, the calibration can be accomplished in only a few minutes.

3D Pose Estimation

Our object recognition system uses a priori known models of 3D objects, which are generated in an offline process, and a single intensity image of the scene. The pose estimation is performed in two steps: first, hypotheses of visible objects and their rough pose are generated by a recognition module; in a second step, these hypotheses are verified and refined by a localization module. For details see [4]. In case of multiple instances of the same object appearing in a scene, this process can be iterated. After each iteration, the image features already mapped to previously detected objects are eliminated.

Model Generation. Using a tessellated Gaussian sphere, each object is represented by a set of up to 320 normalized perspective 2D views, see Fig. 4(b). This model generation process is based on a geometric model of the environment described in [10]. The underlying boundary representation (B-Rep) of a polyhedral 3D object can be derived from a CAD modeler. The comparability between the highly detailed CAD model and the extracted image features (which are limited by the resolution of the CCD camera) is ensured by simulating the image preprocessing on the model features.

Object Recognition. The aim of the object recognition module is to identify objects and to determine their rough 3D pose by searching for the appropriate 2D model view matching the image.
This is done by establishing correspondences between image lines extracted from the CCD image and model lines from a 2D view of an object. First, a set of associations is built. An association is defined as a quadruple (I_j, M_i, v, c_a), where I_j is an image feature, M_i is a model feature, v is one of the 320 2D model views of an object, and c_a is a confidence value of the correspondence between I_j and M_i. This value can be obtained by traversing aspect-trees [7] or by a simple geometrical comparison of the features incorporating topological constraints. In order to select the "correct" view, the associations are used to build hypotheses {(object, A_i, v_i, c_i)}. For each 2D view v_i, all corresponding associations with sufficient confidence are considered. From this set of associations, the subset A_i with the highest rating forming a consistent labeling of image features is selected. The confidence value c_i depends on the confidence values of the included associations and the percentage of mapped model features. The result of the described recognition process is a ranked list of possible hypotheses (see Fig. 5(a)), which are verified and refined by the localization module.

Object Localization. In the case of a successful verification, the localization module computes a modified hypothesis where some correspondences may be changed based on the viewpoint consistency constraint. This refinement of correspondences can be
accelerated by computing specific search spaces in the video image [3]. By aligning model and image lines, the final object pose (R, t) with full 6 DOF is computed using a weighted least squares technique similar to [6]. If only coplanar features are visible which are seen from a large distance compared to the size of the object (Fig. 5), the 6 DOF estimation is quite unstable, because some of the pose parameters are highly correlated. In this case, a priori knowledge of the orientation of the manipulator with respect to the ground plane of the object might be used to determine two angular degrees of freedom. Naturally, this approach decreases the flexibility of the system: tilted objects cannot be handled any longer. A more flexible solution is the use of a second image of the scene taken from a different viewpoint with known relative movement of the manipulator (motion stereo). By simultaneously aligning the model to both images, the flat minimum of the 6 DOF estimation can be avoided. Note that for well-structured objects with some non-coplanar model features, a 6 DOF estimation based on a single video image yields good results as well.

Figure 5. (a) Extracted image lines of a toolbox with the object Iwbdeckel. Projection of object Iwbdeckel into the original video image according to the (b) initial and (c) refined pose estimation.

Figure 6. Typical problems encountered by vision-based systems in a manufacturing environment: reflections, blurredness, contrast, wrong image part, hidden object.

ERROR-HANDLING

In a manufacturing environment there are some factors aggravating vision-based pose estimation. The fault-detection module detects these faults, also using additional information not available to the object recognition unit. In case of a detected failure, the recovery module is activated in order to overcome the problem autonomously.
Typical Problems in a Manufacturing Environment

Applying vision-based object recognition methods in a manufacturing environment is affected by a wide range of disturbing factors, e.g. reflections and shadows due to specific illumination conditions and surface characteristics, or objects of which only a fraction is visible, see Fig. 6. In general, these faults result in additional or missing edges in the image, obstructing the interpretation. This may lead to different pose hypotheses all compatible with the extracted image features, or to a complete failure of the object recognition unit. Most of these spurious hypotheses can be detected by exploiting external information like the expected distance to the object. On the other hand, it is very difficult to determine the reason for a failure as listed in Fig. 6. However, in some specific environments, e.g. a toolbox with a homogeneous surface, low-level indicators analyzing selected image characteristics can be used to detect these problems.

Recovery Strategies

Corresponding to the various faults listed in Fig. 6, the system can choose between different strategies to clear a detected fault:

- Considering the next pose hypothesis (object recognition unit)
- Adapting parameters for the image preprocessing (object recognition unit)
- Adapting parameters for the image interpretation (object recognition unit)
- Adapting the aperture or focus of the CCD camera (robot guiding system)
- Moving the manipulator to a more suitable position (robot guiding system)
- Reporting to the external error-handling (manufacturing control system)

A controlled movement of the manipulator can be used to increase the accuracy of a successful pose estimation as well, see the previous section.

Figure 7. The error-handling unit consists of fault indicators and recovery operators connected by the recovery planning module.

Structure of the Error-Handling Unit

Most of the faults spoiling the object recognition are a superposition of several basic faults, which makes their detection even more difficult. The definition of some basic faults leads to specialized error-handling modules.
These modules consist of an indicator to detect a specific fault and a recovery operator to clear it, see Fig. 7. Based on the results of all indicators analyzing the current image, as well as the result and confidence of the localization, a decision is made whether the results are reliable enough to grasp the object. Otherwise, the detected fault has to be cleared by the system. In this case, based on the results obtained, the recovery planning module generates a plan to clear the fault. The plan is executed by the recovery operators, which together with an indicator form a fault-specific error-handling module.
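The decision and recovery planning described above can be sketched as follows. This is a hypothetical sketch: the fault names, operator names, and the confidence threshold are illustrative assumptions, not the system's actual identifiers or interfaces.

```python
# Hypothetical mapping from basic faults (cf. Fig. 6) to recovery
# operators; names are illustrative, not the system's actual identifiers.
RECOVERY_OPERATORS = {
    "reflections":      "adapt_image_preprocessing",  # object recognition unit
    "blurredness":      "adapt_focus",                # robot guiding system
    "low_contrast":     "adapt_aperture",             # robot guiding system
    "wrong_image_part": "move_manipulator",           # robot guiding system
    "hidden_object":    "move_manipulator",           # robot guiding system
}

def decide_and_plan(indicator_results, localization_conf, threshold=0.8):
    """Grasp if no indicator fired and the localization is confident
    enough; otherwise return a recovery plan for the detected faults."""
    faults = [name for name, fired in indicator_results.items() if fired]
    if not faults and localization_conf >= threshold:
        return "grasp", []
    plan = [RECOVERY_OPERATORS.get(f, "report_external_error_handling")
            for f in faults]
    # with no specific fault detected, fall back to the next pose hypothesis
    return "recover", plan or ["next_hypothesis"]
```

In the real system, such a plan would be executed step by step by the fault-specific recovery operators before the scene is analyzed again.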
Figure 8. Successful localization of two workpieces of the type Iwbdeckel in a toolbox: (a) the detected image lines, (b) the first Iwbdeckel found in the image, and (c) the other Iwbdeckel found.

Figure 9. Example of successful error-handling: (a) no reliable pose estimation of the workpiece due to invisible image features, (b) video image after a controlled movement of the camera mounted on the manipulator, and (c) final pose estimation of the workpiece.

ROBUST POSE ESTIMATION WITH ERROR-HANDLING

In Fig. 2 the structure of the whole system is shown. The sequence is started by the robot guiding system [8], instructing the object recognition unit to detect and localize an object. After grabbing a video image, the object recognition unit generates hypotheses about the 3D pose of the object as described in the second section. At the same time, the fault-detection indicators analyze the image and forward their results to the decision module. Considering the results of the object recognition and the fault-detection indicators, a decision is made whether the pose estimation is reliable and can be returned to the robot guiding system, see the previous section. Fig. 8 shows an example of a successful 3D pose estimation. If the result of the pose estimation is considered to be unreliable, a recovery plan is generated automatically. Depending on the chosen recovery plan, the next hypotheses of the object recognition unit are tested, the object recognition is re-parameterized, or a request to move the manipulator or to adapt the camera parameters is sent to the robot guiding system. This sequence is iterated (closed loop) until a reliable pose estimation for grasping the object is found or no internal error-handling is possible, see Fig. 9. In case of a successful pose estimation, the 3D simulation system Usis [2] is activated to perform collision-free grasp planning for the manipulation process.
Finally, this online-generated robot program is downloaded and executed, see Fig. 1(c).
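The closed-loop sequence of Fig. 2 can be summarized as the following sketch, in which hypothetical callables stand in for the object recognition unit, the fault-detection indicators, and the recovery operators; the iteration limit and confidence threshold are illustrative assumptions.

```python
def closed_loop_grasp(recognize, detect_faults, recover, max_attempts=5):
    """Iterate recognition and recovery (closed loop) until a pose
    estimate is reliable enough for grasping, or give up and leave the
    problem to the external error-handling."""
    for _ in range(max_attempts):
        pose, confidence = recognize()   # hypotheses + localization result
        faults = detect_faults()         # fault-detection indicators
        if not faults and confidence >= 0.8:
            return pose                  # handed on to grasp planning
        if not recover(faults):          # recovery plan could not be executed
            break
    return None                          # report to external error-handling
```

A returned pose would then be passed to the grasp planning of the 3D simulation system; a `None` result corresponds to reporting the failure to the manufacturing control system.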
SUMMARY AND FUTURE WORK

An architecture for a grasping system on an autonomous mobile robot was presented, consisting of a vision-based object recognition unit and an explicit error-handling unit. Based on a priori known models, the object recognition unit identifies objects in single video images and determines their 3D pose (all 6 DOF). The aim of the error-handling unit is to detect and to clear possible failures, increasing the availability of the whole system. Future research will focus on improving the fault indicators and the corresponding recovery strategies. For intelligent error-handling, a database should be integrated into the error-handling system, storing information from all recognitions and error-handling procedures. With this knowledge-based error detection, faults will then be detected and their causes inferred more reliably.

REFERENCES

1. T. Y. Young (Ed.). Handbook of Pattern Recognition and Image Processing: Computer Vision, volume 2. Academic Press, Inc.
2. D. Kugelmann. Autonomous Robotic Handling Applying Sensor Systems and 3D Simulation. In IEEE International Conference on Robotics and Automation, volume 1, pages 196-201. IEEE Computer Society Press.
3. S. Lanser and T. Lengauer. On the Selection of Candidates for Point and Line Correspondences. In International Symposium on Computer Vision, pages 157-162. IEEE Computer Society Press.
4. S. Lanser, O. Munkelt, and C. Zierl. Robust Video-based Object Recognition using CAD Models. In U. Rembold, R. Dillmann, L. O. Hertzberger, and T. Kanade, editors, Intelligent Autonomous Systems IAS-4, pages 529-536. IOS Press.
5. S. Lanser and Ch. Zierl. Robuste Kalibrierung von CCD-Sensoren für autonome, mobile Systeme. In R. Dillmann, U. Rembold, and T. Lüth, editors, Autonome Mobile Systeme, Informatik aktuell, pages 172-181. Springer-Verlag.
6. D. G. Lowe. Fitting Parameterized Three-Dimensional Models to Images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(5):441-450.
7. O. Munkelt. Aspect-Trees: Generation and Interpretation. CVGIP: Image Understanding, 61(3):365-386, May.
8. K. Pischeltsrieder. Steuerung autonomer mobiler Roboter in der Produktion. iwb Forschungsberichte. Springer-Verlag. To appear.
9. A. R. Pope. Model-Based Object Recognition. Technical report 94-04, University of British Columbia, January.
10. N. O. Stöffler and T. Troll. Model Update by Radar- and Video-based Perceptions of Environmental Variations. In International Symposium on Robotics and Manufacturing. ASME Press, New York. To appear.
11. R. Y. Tsai. An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision. In Computer Vision and Pattern Recognition, pages 364-374. IEEE Computer Society Press.
12. C. C. Wang. Extrinsic Calibration of a Vision Sensor Mounted on a Robot. IEEE Transactions on Robotics and Automation, 8(2):161-175, April 1992.
More informationInterpretation of Urban Surface Models using 2D Building Information Norbert Haala and Claus Brenner Institut fur Photogrammetrie Universitat Stuttgar
Interpretation of Urban Surface Models using 2D Building Information Norbert Haala and Claus Brenner Institut fur Photogrammetrie Universitat Stuttgart Geschwister-Scholl-Strae 24, 70174 Stuttgart, Germany
More informationCalibration and Synchronization of a Robot-Mounted Camera for Fast Sensor-Based Robot Motion
IEEE Int. Conf. on Robotics and Automation ICRA2005, Barcelona, Spain, April 2005 Calibration and Synchronization of a Robot-Mounted Camera for Fast Sensor-Based Robot Motion Friedrich Lange and Gerd Hirzinger
More informationAN EFFICIENT BINARY CORNER DETECTOR. P. Saeedi, P. Lawrence and D. Lowe
AN EFFICIENT BINARY CORNER DETECTOR P. Saeedi, P. Lawrence and D. Lowe Department of Electrical and Computer Engineering, Department of Computer Science University of British Columbia Vancouver, BC, V6T
More information(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than
An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,
More informationModel-Based Segmentation of Impression Marks
Model-Based Segmentation of Impression Marks Christoph Brein Institut für Mess- und Regelungstechnik, Universität Karlsruhe (TH), D-76128 Karlsruhe, Germany ABSTRACT Impression marks are commonly found
More informationCamera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah
Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu Reference Most slides are adapted from the following notes: Some lecture notes on geometric
More informationZero Robotics Autonomous Space Capture Challenge Manual
Zero Robotics Autonomous Space Capture Challenge Manual v1.3 1 Introduction 1.1 Conventions Vectors All vectors in this document are denoted with a bold face font. Of special note is the position vector
More informationCarlo Tomasi John Zhang David Redkey. coordinates can defeat the most sophisticated algorithm. for a good shape and motion reconstruction system.
Preprints of the Fourth International Symposium on Experimental Robotics, ISER'95 Stanford, California, June 0{July, 995 Experiments With a Real-Time Structure-From-Motion System Carlo Tomasi John Zhang
More informationImage Processing Methods for Interactive Robot Control
Image Processing Methods for Interactive Robot Control Christoph Theis 1, Ioannis Iossifidis 2 and Axel Steinhage 3 1,2 Institut für Neuroinformatik, Ruhr-Univerität Bochum, Germany 3 Infineon Technologies
More informationCIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM
CIS 580, Machine Perception, Spring 2015 Homework 1 Due: 2015.02.09. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Camera Model, Focal Length and
More informationCamera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah
Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu VisualFunHouse.com 3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com) 3D
More informationThere have been many wide-ranging applications of hybrid dynamic systems, most notably in manufacturing systems and network protocols. To date, they h
Proceedings of SYROCO '97, Nantes France ROBUST DISCRETE EVENT CONTROLLER SYNTHESIS FOR CONSTRAINED MOTION SYSTEMS David J. Austin, Brenan J. McCarragher Department of Engineering Faculty of Engineering
More informationBIN PICKING APPLICATIONS AND TECHNOLOGIES
BIN PICKING APPLICATIONS AND TECHNOLOGIES TABLE OF CONTENTS INTRODUCTION... 3 TYPES OF MATERIAL HANDLING... 3 WHOLE BIN PICKING PROCESS... 4 VISION SYSTEM: HARDWARE... 4 VISION SYSTEM: SOFTWARE... 5 END
More informationwith respect to some 3D object that the CAD model describes, for the case in which some (inexact) estimate of the camera pose is available. The method
Error propagation for 2D{to{3D matching with application to underwater navigation W.J. Christmas, J. Kittler and M. Petrou Vision, Speech and Signal Processing Group Department of Electronic and Electrical
More informationVision-based Mobile Robot Localization and Mapping using Scale-Invariant Features
Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features Stephen Se, David Lowe, Jim Little Department of Computer Science University of British Columbia Presented by Adam Bickett
More informationLOCALIZATION OF FACIAL REGIONS AND FEATURES IN COLOR IMAGES. Karin Sobottka Ioannis Pitas
LOCALIZATION OF FACIAL REGIONS AND FEATURES IN COLOR IMAGES Karin Sobottka Ioannis Pitas Department of Informatics, University of Thessaloniki 540 06, Greece e-mail:fsobottka, pitasg@zeus.csd.auth.gr Index
More informationDepth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth
Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze
More informationFace Tracking Implementation with Pose Estimation Algorithm in Augmented Reality Technology
Available online at www.sciencedirect.com Procedia - Social and Behavioral Sciences 57 ( 2012 ) 215 222 International Conference on Asia Pacific Business Innovation and Technology Management Face Tracking
More informationRigid Body Motion and Image Formation. Jana Kosecka, CS 482
Rigid Body Motion and Image Formation Jana Kosecka, CS 482 A free vector is defined by a pair of points : Coordinates of the vector : 1 3D Rotation of Points Euler angles Rotation Matrices in 3D 3 by 3
More informationBinocular Stereo Vision. System 6 Introduction Is there a Wedge in this 3D scene?
System 6 Introduction Is there a Wedge in this 3D scene? Binocular Stereo Vision Data a stereo pair of images! Given two 2D images of an object, how can we reconstruct 3D awareness of it? AV: 3D recognition
More informationTOWARDS AN IMAGE UNDERSTANDING ARCHITECTURE FOR A SITUATED ARTIFICIAL COMMUNICATOR C. Bauckhage, G.A. Fink, G. Heidemann, N. Jungclaus, F. Kummert, S.
TOWARDS AN IMAGE UNDERSTANDING ARCHITECTURE FOR A SITUATED ARTIFICIAL COMMUNICATOR C. Bauckhage, G.A. Fink, G. Heidemann, N. Jungclaus, F. Kummert, S. Posch, H. Ritter, G. Sagerer, D. Schluter University
More informationAll human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them.
All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them. - Aristotle University of Texas at Arlington Introduction
More informationAnd. Modal Analysis. Using. VIC-3D-HS, High Speed 3D Digital Image Correlation System. Indian Institute of Technology New Delhi
Full Field Displacement And Strain Measurement And Modal Analysis Using VIC-3D-HS, High Speed 3D Digital Image Correlation System At Indian Institute of Technology New Delhi VIC-3D, 3D Digital Image Correlation
More information3D Computer Vision. Structure from Motion. Prof. Didier Stricker
3D Computer Vision Structure from Motion Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Structure
More informationRecognizing Buildings in Urban Scene of Distant View ABSTRACT
Recognizing Buildings in Urban Scene of Distant View Peilin Liu, Katsushi Ikeuchi and Masao Sakauchi Institute of Industrial Science, University of Tokyo, Japan 7-22-1 Roppongi, Minato-ku, Tokyo 106, Japan
More informationLearning Indexing Functions for 3-D Model-Based Object. Recognition. Jerey S. Beis and David G. Lowe. University of British Columbia
Learning Indexing Functions for 3-D Model-Based Object Recognition Jerey S. Beis and David G. Lowe Dept. of Computer Science University of British Columbia Vancouver, B.C., Canada email: beis@cs.ubc.ca
More informationHomogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.
Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is
More informationHUMAN COMPUTER INTERFACE BASED ON HAND TRACKING
Proceedings of MUSME 2011, the International Symposium on Multibody Systems and Mechatronics Valencia, Spain, 25-28 October 2011 HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING Pedro Achanccaray, Cristian
More informationPerception and Action using Multilinear Forms
Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract
More informationClassifier C-Net. 2D Projected Images of 3D Objects. 2D Projected Images of 3D Objects. Model I. Model II
Advances in Neural Information Processing Systems 7. (99) The MIT Press, Cambridge, MA. pp.949-96 Unsupervised Classication of 3D Objects from D Views Satoshi Suzuki Hiroshi Ando ATR Human Information
More information3D Model Acquisition by Tracking 2D Wireframes
3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract
More information3D Reconstruction from Scene Knowledge
Multiple-View Reconstruction from Scene Knowledge 3D Reconstruction from Scene Knowledge SYMMETRY & MULTIPLE-VIEW GEOMETRY Fundamental types of symmetry Equivalent views Symmetry based reconstruction MUTIPLE-VIEW
More informationCourse 23: Multiple-View Geometry For Image-Based Modeling
Course 23: Multiple-View Geometry For Image-Based Modeling Jana Kosecka (CS, GMU) Yi Ma (ECE, UIUC) Stefano Soatto (CS, UCLA) Rene Vidal (Berkeley, John Hopkins) PRIMARY REFERENCE 1 Multiple-View Geometry
More informationRealtime Object Recognition Using Decision Tree Learning
Realtime Object Recognition Using Decision Tree Learning Dirk Wilking 1 and Thomas Röfer 2 1 Chair for Computer Science XI, Embedded Software Group, RWTH Aachen wilking@informatik.rwth-aachen.de 2 Center
More informationCHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION
CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More information2 Abstract In this paper, we showhow an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method
1 Calibration of an Active Binocular Head Sheng-Wen Shih y, Yi-Ping Hung z, and Wei-Song Lin y y Institute of Electrical Engineering, National Taiwan University, Taipei, Taiwan. z Institute of Information
More informationImage Transformations & Camera Calibration. Mašinska vizija, 2018.
Image Transformations & Camera Calibration Mašinska vizija, 2018. Image transformations What ve we learnt so far? Example 1 resize and rotate Open warp_affine_template.cpp Perform simple resize
More informationComplex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors
Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual
More informationβ R x cl z cl d L α L α R x ir f R γ R γ L x il
Visual Determination of 3D Grasping Points on Unknown Objects with a Binocular Camera System Alexa Hauck, Johanna Ruttinger, Michael Sorg, Georg Farber Lab. for Process Control and Real-Time Systems Technische
More informationRobotics - Projective Geometry and Camera model. Marcello Restelli
Robotics - Projective Geometr and Camera model Marcello Restelli marcello.restelli@polimi.it Dipartimento di Elettronica, Informazione e Bioingegneria Politecnico di Milano Ma 2013 Inspired from Matteo
More informationTechniques. IDSIA, Istituto Dalle Molle di Studi sull'intelligenza Articiale. Phone: Fax:
Incorporating Learning in Motion Planning Techniques Luca Maria Gambardella and Marc Haex IDSIA, Istituto Dalle Molle di Studi sull'intelligenza Articiale Corso Elvezia 36 - CH - 6900 Lugano Phone: +41
More informationMarcel Worring Intelligent Sensory Information Systems
Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video
More informationUsing temporal seeding to constrain the disparity search range in stereo matching
Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department
More informationMULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES
MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada
More informationDepth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy
Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Sharjeel Anwar, Dr. Shoaib, Taosif Iqbal, Mohammad Saqib Mansoor, Zubair
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationA Hierarchical Statistical Framework for the Segmentation of Deformable Objects in Image Sequences Charles Kervrann and Fabrice Heitz IRISA / INRIA -
A hierarchical statistical framework for the segmentation of deformable objects in image sequences Charles Kervrann and Fabrice Heitz IRISA/INRIA, Campus Universitaire de Beaulieu, 35042 Rennes Cedex,
More informationModel Based Pose Estimation. from Uncertain Data. Thesis submitted for the degree \Doctor of Philosophy" Yacov Hel-Or
Model Based Pose Estimation from Uncertain Data Thesis submitted for the degree \Doctor of Philosophy" Yacov Hel-Or Submitted to the Senate of the Hebrew University in Jerusalem (1993) ii This work was
More informationestimate of change in the motion direction and vertical segment correspondence provides for precise 3D information about objects close to the robot. A
MOBILE ROBOT NAVIGATION AND SCENE MODELING USING STEREO FISH-EYE LENS SYSTEM SHISHIR SHAH AND J. K. AGGARWAL Computer and Vision Research Center Department of Electrical and Computer Engineering, ENS 522
More information