Perception and Control for a Humanoid Robot using Vision
Dept. of Electrical and Computer Systems Engineering, Academic Research Forum (Feb 11, 2003)

Perception and Control for a Humanoid Robot using Vision

Geoffrey Taylor and Lindsay Kleeman
Department of Electrical and Computer Systems Engineering, Monash University, Clayton 3800 Victoria, Australia
{Geoffrey.Taylor;Lindsay.Kleeman}@eng.monash.edu.au

Abstract

This paper provides a summary of our research towards developing a humanoid robot capable of performing interactive manipulation tasks in a domestic/office environment. Visual sensing makes important contributions to all areas of task specification, planning and actuation by enabling the robot to interact through gesture recognition, recognize and locate objects and obstacles, and employ visual feedback for robust control of robotic limbs. The steps involved in the sensing, planning and actuation of a typical grasping task are described and demonstrated experimentally on our upper-torso humanoid platform. Accurate sensing of range data in a domestic/office environment required the development of a novel laser stripe scanner to overcome the limitations of existing sensors. Our scanner uses stereo measurements to eliminate sensor noise, spurious reflections and cross-talk from other robots, and can capture registered colour/range measurements of arbitrary objects in ambient indoor light. Task planning is performed by processing the range data to identify and model objects of interest using simple geometric primitives, and calculating a suitable grasp. Finally, we present a position-based visual servoing framework that provides the robot with automatic hand-eye calibration, allowing accurate and robust placement of the limbs. The fusion of these components allows the humanoid robot to locate and grasp a class of a priori unknown objects in its workspace.

1. Introduction

Humanoid robotics has enjoyed a great deal of attention from both popular culture and the research community.
This popularity may be due to the perception that humanoids are a natural vehicle by which mechatronics will find application outside assembly lines and in the wider community. The office/domestic environment is clearly designed for the anthropomorphic form, and humanoid modes of communication (speech, gestures, etc.) facilitate a natural interface for human-robot interaction. Thus, humanoid robots allow humans and machines to work cooperatively without training or modifications to infrastructure. It is only recently that the first steps have been taken towards a fully autonomous humanoid robot [1, 2, 3].

Figure 1. Metalman: an experimental upper-torso humanoid robot platform.

Current research projects have focused on specific aspects such as human-computer interaction, mobility, cognition and learning. In contrast, the objectives of this research are deliberately task-driven: to develop a domestic humanoid robot capable of aiding an elderly or disabled person in performing daily tasks. Such a domestic robot faces a number of distinct challenges. Task specifications are likely to be ad hoc in nature as users find random tasks for the robot to perform, so the robot must not require special operating or initial conditions. Task planning in a cluttered environment may involve obstacle avoidance in addition to manipulating specific targets. Importantly, the robot must be capable of interacting and performing tasks at the same speed as its human companion. While a fully autonomous robot would benefit from a multi-modal sensory framework (touch, vision, smell, etc.), we have focused our attention on visual perception, as it provides a basis for addressing significant aspects such as recognizing and locating objects, feedback control of the robot limbs, and the ability to interact through gesture recognition.
This paper summarizes our contributions to the field of visual perception that enable an experimental humanoid robot to perform the basic task of recognizing and grasping an a priori unknown object. Figure 1 shows the experimental humanoid robot platform used in this research. The arms are approximately anthropomorphic in configuration and scale, and consist of two 6-DOF Puma 260 robots with 1-DOF prosthetic hands. Red LEDs used for tracking are attached to the hands and actuated via HC11 microcontrollers. Vision is provided by a pair of PAL cameras on a Biclops pan/tilt/verge robotic head. The cameras capture stereo images at PAL frame rate and half-PAL resolution. A laser stripe generator is mounted above the cameras, consisting of a 5 mW laser diode module and a cylindrical lens to generate a vertical stripe, and a DC motor with an optical encoder to drive the stripe across a scene. Motor control and encoder measurements are implemented on a PIC microcontroller. The hardware components are coordinated via serial links from a dual 2.2 GHz Intel Xeon PC, which also performs image processing.

This paper is structured to follow the data acquisition, processing and actuation steps involved in grasping an unknown object. Section 2 describes the novel laser stripe scanner developed to acquire accurate and robust depth/colour measurements. Section 3 overviews the object recognition and grasp planning algorithms that bridge perception and control. Finally, Section 4 describes a position-based visual servoing framework enabling the robot to accurately and robustly execute a grasp.

2. Robust Laser Stripe Scanning

A crucial aspect of the robot's task is the acquisition of dense and reliable colour/range measurements. Passive stereo is usually associated with humanoid sensing, but the accuracy and reliability of current techniques often depend on the content of the scene.
Laser stripe range sensing offers a computationally efficient alternative, but poses unique challenges when used on a domestic robot; the sensor must operate in normal ambient light and be capable of rejecting sensor noise, spurious reflections and cross-talk from other robots. Existing single-camera scanners have limited application in robotics due to an inability to distinguish the laser stripe from these spurious noise mechanisms. A number of robust scanning methods have been proposed in other work to overcome this problem [4, 5, 6, 7], but these suffer from issues including assumed scene structure, inability to capture colour, and lack of error recovery. This led the authors to develop a robust stereo laser stripe scanner which provides dense, accurate colour/range measurements for robotic applications [8].

Figure 2 illustrates the basic operation of our scanner. A vertical laser stripe is projected into the scene, and its position is measured on the stereo image planes. Triangulation allows the 3D location of each point on the stripe to be recovered, and a complete depth image is obtained by sweeping the laser across the scene. Noise rejection is achieved by exploiting redundancy in the stereo measurements to disambiguate the light stripe from other features. Correct error modelling also allows the scanner to achieve more optimal reconstructions than existing configurations. Furthermore, a simple framework exists for on-line calibration of the system parameters using measurements of an arbitrary non-planar target; thus, the sensor may be calibrated at any time during normal operation.

Figure 2. Stereoscopic stripe scanner.

After capturing a stereo image of the stripe, each scanline is processed using edge filters to determine a set of candidate stripe locations on the left and right image planes, x_Li and x_Ri.
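The per-scanline processing can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (an idealized rectified stereo rig with the given baseline and focal length, and the error distance d² defined in the text), not the paper's implementation; all names are ours.

```python
import numpy as np

def reconstruct_stripe_point(left_cands, right_cands, y, f, baseline,
                             alpha, beta, gamma, threshold):
    """One scanline of a stereo stripe scanner (sketch): pick the
    left/right candidate pair most consistent with the laser plane
    (alpha, beta, gamma come from on-line calibration), validate it
    against a threshold, then triangulate the surviving pair."""
    best, best_d2 = None, np.inf
    for xl in left_cands:
        for xr in right_cands:
            d2 = (xl + alpha * xr + beta * y + gamma * f) ** 2 / (alpha ** 2 + 1)
            if d2 < best_d2:
                best, best_d2 = (xl, xr), d2
    if best is None or best_d2 > threshold:
        return None                       # no valid stripe on this scanline
    xl, xr = best
    disparity = xl - xr
    if disparity <= 0:
        return None
    Z = f * baseline / disparity          # simple triangulation; the paper
    X = xl * Z / f                        # instead computes an optimal
    Y = y * Z / f                         # reconstruction on the laser plane
    return np.array([X, Y, Z])
```

Candidates rejected by the threshold simply yield no range sample, which mirrors the scanner's behaviour of leaving holes rather than accepting spurious reflections.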
Most candidates result from noise in the detection process, except for a single pair corresponding to the actual stripe, x_L and x_R. The valid measurements are identified as the pair which minimizes the following error distance over all left/right candidates:

    d² = (x_Li + α·x_Rj + β·y + γ·f)² / (α² + 1)

where f is the camera focal length, and α, β and γ are related to the laser plane position, determined by on-line calibration. A further validation step requires the minimum error to be below a fixed threshold. The above function is related to the distance on the image plane between a measurement pair and the projection of the corresponding optimal reconstruction on the laser plane. The optimal 3D reconstruction is only calculated for valid measurements x_L and x_R.

To compare the performance of our robust scanner with other proposed methods, we implemented our system along with two common techniques on the same experimental platform. The first comparative method was a simple single-camera scanner without any optimal or robust properties, and the second was a robust method which requires consensus between two independent single-camera reconstructions for validation. In the latter case, the 3D data from the two independent sensors was averaged to obtain the final reconstruction. Figure 3 shows the test scene designed to assess the robustness of the three scanning methods in rejecting cross-talk and reflections. The scene is scanned in normal indoor light and contains typical domestic objects, while a mirror creates a reflected image to simulate the effect of
cross-talk and specularities.

Figure 3. Mirror experiment arrangement.

The resulting colour/range scans from the single scanner, double scanner and our robust stereo method are shown in Figure 4. The inability of the single-camera scanner to distinguish the laser from its reflection is clear from Figure 4(a), while the lack of data points in Figure 4(b) arises from the inability of the double-camera method to resolve ambiguities. Conversely, our method produces dense, robust colour/range data suitable for further processing by the robot.

The reconstruction accuracy of the three methods was compared by examining the depth variance from 200 samples of a single point on the laser stripe. The experiment was repeated for target points at varying distances to reveal the effect of range on reconstruction error, and the results are shown in Figure 5. As expected, simple single-camera triangulation using either the left or right camera produces a significantly larger error than the robust methods, which use optimal estimation from stereo measurements. The accurate error model used in our method results in the lowest error variance.

3. Object Recognition and Grasp Planning

Once the measurements have been acquired, the robot must localize objects of interest and plan the manipulation task. The range/colour scan of a typical scene shown in Figure 6(a) will be used as an example in the following discussion. In this case the system fits textured rectangular prisms to the colour/range data, although more recent versions have been expanded to include the recognition of cylinders, spheres and cones. The use of texture allows the robot to distinguish objects by colour in addition to geometry; in Figure 6(a) the robot is given the task of locating and grasping the yellow box. The work presented in this section was published recently in [9].
The first step in scene analysis is to segment and parameterise the raw range data into planar regions. A surface normal is determined for each small patch of points using least-squares plane fitting [10].

Figure 4. Experimental results for the mirror experiment: (a) single-camera scan; (b) double-camera scan; (c) robust stereoscopic scan.

Figure 5. Measured depth variance (error variance in mm² against encoder count) for the robust, double-camera and single-camera (left and right) methods.

The data is then segmented into planar regions using a two-step process: an initial segmentation designed to over-segment the data, followed by region merging. The initial segmentation is similar to the technique presented in [11], based on the familiar sequential binary connectivity algorithm. Each neighbouring pair of range points is tested for coplanarity, and coplanar pairs are assigned an identical initial region label. The initial regions are then selectively merged using an iterative boundary cost minimization algorithm [12, 13]. For each pair of regions sharing a common boundary, the residual error of a plane fitted to both regions is calculated. The pair of regions with the minimum combined error is merged, and the process is repeated until the minimum error exceeds a threshold. Merging only a single pair of regions at each iteration ensures a controlled growth of the total error for all regions. Figure 6(b) demonstrates the result of applying the segmentation algorithm to the range data shown in Figure 6(a).

After segmentation, planes describing distinct convex objects may be grouped by examining the boundaries between neighbouring planes (convex edge or concave corner). A generic box is modelled as three pairs of parallel planes, with each pair parameterised by a common normal and two perpendicular distances to the origin. The system must be able to view at least two sides of the box to determine all parameters of the model. The visible faces of the box are used to determine the surface normal vectors and up to three distance parameters.
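The least-squares plane fit that underlies the segmentation step can be sketched with the standard scatter-matrix method; the function name and the residual definition below are our choices for illustration, not taken from [10].

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a patch of range points: the normal
    is the direction of least variance, i.e. the singular vector of the
    centred point set with the smallest singular value. Returns the
    normal, the centroid (a point on the plane) and the mean squared
    distance of the points from the fitted plane."""
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                          # least-variance direction
    residual = s[-1] ** 2 / len(points)      # mean squared plane distance
    return normal, centroid, residual
```

The same residual can serve as the boundary cost during region merging: fit a plane to the union of two adjacent regions and merge the pair whose combined residual is smallest.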
The distance parameters for the hidden rear faces of the box are calculated by fitting planes to the edges of the visible faces using a numerical method similar to the Hough transform. The final step in model construction is to extract surface textures from the colour component of the laser scan. The colour/range data is projected onto the plane associated with each visible face, tessellated into triangles and rendered onto a texture map. Figure 6(c) shows the final 3D rendered models of the boxes extracted from the scan in Figure 6(a).

Figure 6. Experimental results for object detection and grasp planning: (a) raw range/colour data; (b) extracted planes labelled with uniform colour; (c) extracted box models and planned grasp.
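The Hough-like estimation of a hidden face's distance parameter might look like the following one-dimensional voting sketch: since the rear face shares the known normal of its visible opposite face, only a scalar distance remains to be voted on. The function name, binning strategy and parameters are our assumptions.

```python
import numpy as np

def rear_face_distance(edge_points, normal):
    """Estimate the distance parameter of a hidden rear face by voting:
    project the edge points of the visible faces onto the known face
    normal and pick the most-supported signed distance (a 1-D analogue
    of the Hough transform)."""
    projections = edge_points @ normal        # signed distance per point
    hist, bin_edges = np.histogram(projections, bins=50)
    best = np.argmax(hist)                    # most-supported distance bin
    return 0.5 * (bin_edges[best] + bin_edges[best + 1])
```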
After the object has been modelled and localized, a grasp planner calculates the pose of the gripper for a stable grasp. We adopt a typical approach to grasp planning: generate a set of candidate grasps using heuristics, then apply appropriate criteria to select the best grasp. Calculation of candidate grasps is partly based on the ideas developed in [14]. The force applied by the fingers should be normal to the gripped surface to minimize the effect of unknown surface friction, and the object should be grasped near the centre of mass to minimize load torque when lifted. These rules are easily applied to a box under the reasonable assumption of uniformly distributed mass, resulting in two distinct grasps. For each grasp, we calculate the transformation that aligns the fingertips of the robot gripper with the candidate contact points and check that the pose is physically realizable. When both candidate grasps are reachable, we simply choose the preferred grasp as the one that minimizes the angle between the wrist and forearm of the robot. The wireframe model of the robot gripper in Figure 6(c) shows the final planned grasp for the yellow box.

4. Visual Servoing

The visual servoing component of the system is based on our work first published in [15]. Visual servoing describes the feedback control of a robot using measurements from a camera, and proposed techniques are commonly classified as image-based, position-based or hybrid, depending on how the control error is formulated. Our position-based visual servoing technique overcomes common problems including recovery from loss of feedback, handling large initial errors, and planning motions in Cartesian space. Furthermore, our technique allows visual servoing to be performed without knowledge of the transformation between the camera frame and robot base frame, provided the kinematics of the robot arm are reasonably well known.
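The grasp selection rule described above (feasibility check, then minimum wrist/forearm angle) reduces to a small routine. Here `reachable` and `wrist_forearm_angle` are hypothetical callables standing in for the arm's inverse-kinematics checks; they are not part of the paper's code.

```python
def choose_grasp(candidate_grasps, reachable, wrist_forearm_angle):
    """Select among heuristic grasp candidates: discard poses the arm
    cannot realize, then prefer the grasp that minimizes the angle
    between wrist and forearm. Returns None if no grasp is feasible."""
    feasible = [g for g in candidate_grasps if reachable(g)]
    if not feasible:
        return None
    return min(feasible, key=wrist_forearm_angle)
```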
This is achieved by measuring the pose of the robot gripper directly, and formulating the pose error relative to the gripper. Figure 7 illustrates the basic visual servoing task. Artificial cues in the form of red LEDs are attached to the gripper to simplify image processing and increase tracking robustness. The positions of the LEDs in the gripper frame G are manually calibrated and form an internal model which is used to determine the gripper's pose. At each iteration of the control loop, the positions of the visible LEDs are measured in the camera frame C and processed using a Kalman filter to provide an optimal estimate of the gripper pose. A simple proportional control law is used for visual servoing, based on the translation and rotation required to move the gripper to the target pose T, expressed in the current frame of the gripper G. These differential pose parameters are passed to the Puma controller, which calculates the appropriate joint motions.

Figure 7. Visual servoing framework.

At the commencement of a new servoing task, the initial state of the gripper is determined autonomously. First, the cameras scan the workspace to find the LEDs, and the Kalman filter is initialised with the average measured position. The LEDs are then flashed individually to provide unambiguous position measurements, which are processed by the Kalman filter to estimate the initial pose of the gripper. During servoing, loss of visual feedback is minimized by actively tracking the motion of the gripper with the robotic head. However, if tracking is lost, the initialisation procedure provides an automatic recovery mechanism. Figure 8 shows the robot executing the grasping task planned in Figure 6(c).
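One iteration of the proportional control law can be sketched as follows. The gain value and function signature are illustrative assumptions, and the rotation error is assumed to be expressed as an axis-angle vector in the current gripper frame.

```python
import numpy as np

def servo_step(pose_error_translation, pose_error_rotation, gain=0.2):
    """Proportional visual servo step (sketch): command a fraction of
    the measured gripper-to-target pose error, expressed in the current
    gripper frame G. The resulting differential pose parameters would be
    handed to the arm controller, which solves for joint motions."""
    dt = gain * np.asarray(pose_error_translation, dtype=float)
    dr = gain * np.asarray(pose_error_rotation, dtype=float)  # axis-angle
    return dt, dr
```

Because the error is re-measured from the tracked LEDs at every iteration, the loop converges even under hand-eye calibration error, which is what makes the self-calibrating behaviour possible.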
A view through the right camera in Figure 8(b) shows the current and target positions of the gripper, and the location of the box (overlaid as wireframe models) during the servoing task. In this experiment, the entire process of data acquisition, modelling and visually servoed grasping required about 90 seconds.

5. Summary and Future Work

We have presented a fusion of self-calibrated visual servoing, robust laser stripe scanning and object modelling to enable our experimental humanoid robot to perform simple grasping tasks. The significance of the system is that target objects may be a priori unknown, making the robot suitable for performing ad hoc tasks in an office/domestic environment. Work has already progressed on expanding the variety of objects that can be recognized and grasped by the robot. However, the issue of obstacle avoidance still needs to be addressed by implementing a motion planner to ensure that the arm avoids any collisions while moving towards its target. We also intend to exploit the colour and geometric information in the textured models to track the target while servoing, and thus improve grasping robustness and accuracy. This project is funded by the Strategic Monash University Research Fund for the Humanoid Robotics: Perception, Intelligence and Control project at IRRC.
Figure 8. Experimental results for visually servoed grasping: (a) initial pose of robot and target object; (b) view through right camera of tracked gripper and box; (c) successful completion of grasping task.

References

[1] B. Adams et al., "Humanoid robots: A new kind of tool," IEEE Intelligent Systems, vol. 15, no. 4.
[2] K. Hirai, "The Honda humanoid robot: Development and future perspectives," Industrial Robot, vol. 26, no. 4.
[3] A. Price, R. Jarvis, R. A. Russell, and L. Kleeman, "A lightweight plastic robotic humanoid," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2000.
[4] J. Haverinen and J. Röning, "An obstacle detection system using a light stripe identification based method," in Proc. IEEE International Joint Symposium on Intelligence and Systems, 1998.
[5] M. Magee, R. Weniger, and E. A. Franke, "Location of features of known height in the presence of reflective and refractive noise using a stereoscopic light-striping approach," Optical Engineering, vol. 33, no. 4, April.
[6] J. Nygards and Å. Wernersson, "Specular objects in range cameras: Reducing ambiguities by motion," in Proc. IEEE Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems, 1994.
[7] E. Trucco et al., "Calibration, data consistency and model acquisition with a 3-D laser striper," Int. Journal of Computer Integrated Manufacturing, vol. 11, no. 4.
[8] G. Taylor and L. Kleeman, "Grasping unknown objects with a humanoid robot," in Proc. Australasian Conference on Robotics and Automation.
[9] G. Taylor, L. Kleeman, and Å. Wernersson, "Robust colour and range sensing for robotic applications using a stereoscopic light stripe scanner," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems.
[10] O. D. Faugeras, Three Dimensional Computer Vision: A Geometric Viewpoint, MIT Press.
[11] I. Stamos and P. K. Allen, "3-D model construction using range and image data," in Proc. International Conference on Computer Vision and Pattern Recognition, 2000, vol. 1.
[12] J. R. Goldschneider and A. Q. Li, "Variational segmentation by piecewise facet models with application to range imagery," in Proc. IEEE International Conference on Image Processing, 2001, vol. 1.
[13] D. Cobzas and H. Zhang, "Planar patch extraction with noisy depth data," in Proc. Third International Conference on 3D Digital Imaging and Modeling, 2001.
[14] G. Smith, E. Lee, K. Goldberg, K. Böringer, and J. Craig, "Computing parallel-jaw grips," in Proc. International Conference on Robotics and Automation, 1999.
[15] G. Taylor and L. Kleeman, "Flexible self-calibrated visual servoing for a humanoid robot," in Proc. Australasian Conference on Robotics and Automation, 2001.
More informationHumanoid Manipulation
Humanoid Manipulation Tamim Asfour Institute for Anthropomatics, Computer Science Department, Humanoids and Intelligence Systems Lab (Prof. Dillmann) wwwiaim.ira.uka.de www.sfb588.uni-karlsruhe.de KIT
More informationInverse Kinematics. Given a desired position (p) & orientation (R) of the end-effector
Inverse Kinematics Given a desired position (p) & orientation (R) of the end-effector q ( q, q, q ) 1 2 n Find the joint variables which can bring the robot the desired configuration z y x 1 The Inverse
More informationMCE/EEC 647/747: Robot Dynamics and Control. Lecture 1: Introduction
MCE/EEC 647/747: Robot Dynamics and Control Lecture 1: Introduction Reading: SHV Chapter 1 Robotics and Automation Handbook, Chapter 1 Assigned readings from several articles. Cleveland State University
More informationProcessing 3D Surface Data
Processing 3D Surface Data Computer Animation and Visualisation Lecture 15 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing
More informationMargarita Grinvald. Gesture recognition for Smartphones/Wearables
Margarita Grinvald Gesture recognition for Smartphones/Wearables Gestures hands, face, body movements non-verbal communication human interaction 2 Gesture recognition interface with computers increase
More informationRobot Vision without Calibration
XIV Imeko World Congress. Tampere, 6/97 Robot Vision without Calibration Volker Graefe Institute of Measurement Science Universität der Bw München 85577 Neubiberg, Germany Phone: +49 89 6004-3590, -3587;
More informationAutomatic 3-D 3 D Model Acquisition from Range Images. Michael K. Reed and Peter K. Allen Computer Science Department Columbia University
Automatic 3-D 3 D Model Acquisition from Range Images Michael K. Reed and Peter K. Allen Computer Science Department Columbia University Introduction Objective: given an arbitrary object or scene, construct
More informationVisual Perception for Robots
Visual Perception for Robots Sven Behnke Computer Science Institute VI Autonomous Intelligent Systems Our Cognitive Robots Complete systems for example scenarios Equipped with rich sensors Flying robot
More informationMultiple View Geometry
Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric
More informationRobotic Grasping Based on Efficient Tracking and Visual Servoing using Local Feature Descriptors
INTERNATIONAL JOURNAL OF PRECISION ENGINEERING AND MANUFACTURING Vol. 13, No. 3, pp. 387-393 MARCH 2012 / 387 DOI: 10.1007/s12541-012-0049-8 Robotic Grasping Based on Efficient Tracking and Visual Servoing
More informationIFAS Citrus Initiative Annual Research and Extension Progress Report Mechanical Harvesting and Abscission
IFAS Citrus Initiative Annual Research and Extension Progress Report 2006-07 Mechanical Harvesting and Abscission Investigator: Dr. Tom Burks Priority Area: Robotic Harvesting Purpose Statement: The scope
More informationBehavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism
Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Sho ji Suzuki, Tatsunori Kato, Minoru Asada, and Koh Hosoda Dept. of Adaptive Machine Systems, Graduate
More informationL1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming
L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the
More informationMICRO-CONTROLLER BASED ROBOT ARM WITH THREE-DIMENSIONAL REACH
- 111 - MICRO-CONTROLLER BASED ROBOT ARM WITH THREE-DIMENSIONAL REACH R.A.D.M.P.Ranwaka 1, T. J. D. R. Perera, J. Adhuran, C. U. Samarakoon, R.M.T.P. Rajakaruna ABSTRACT Department of Mechatronics Engineering,
More information3D-2D Laser Range Finder calibration using a conic based geometry shape
3D-2D Laser Range Finder calibration using a conic based geometry shape Miguel Almeida 1, Paulo Dias 1, Miguel Oliveira 2, Vítor Santos 2 1 Dept. of Electronics, Telecom. and Informatics, IEETA, University
More information3D Modeling of Objects Using Laser Scanning
1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models
More informationA Best Next View selection algorithm incorporating a quality criterion
A Best Next View selection algorithm incorporating a quality criterion Nikolaos A. Massios, Robert B. Fisher, Intelligent Autonomous Systems, Department of Artificial Intelligence, University of Amsterdam,
More informationHuman Upper Body Pose Estimation in Static Images
1. Research Team Human Upper Body Pose Estimation in Static Images Project Leader: Graduate Students: Prof. Isaac Cohen, Computer Science Mun Wai Lee 2. Statement of Project Goals This goal of this project
More informationRobust Perception and Control for Humanoid Robots in Unstructured Environments using Vision
Robust Perception and Control for Humanoid Robots in Unstructured Environments using Vision Geoffrey Taylor BSc in Applied Mathematics and Physics (1998) BE(HONs) in Electrical and Computer Systems Engineering
More informationFace Recognition At-a-Distance Based on Sparse-Stereo Reconstruction
Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,
More informationMULTI-SENSORY SYNERGIES IN HUMANOID ROBOTICS
International Journal of Humanoid Robotics World Scientific Publishing Company MULTI-SENSORY SYNERGIES IN HUMANOID ROBOTICS R. Andrew Russell, Geoffrey Taylor, Lindsay Kleeman and Anies H Purnamadjaja
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationAMR 2011/2012: Final Projects
AMR 2011/2012: Final Projects 0. General Information A final project includes: studying some literature (typically, 1-2 papers) on a specific subject performing some simulations or numerical tests on an
More informationDept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan
An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi
More informationOmni Stereo Vision of Cooperative Mobile Robots
Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)
More informationLighting- and Occlusion-robust View-based Teaching/Playback for Model-free Robot Programming
Lighting- and Occlusion-robust View-based Teaching/Playback for Model-free Robot Programming *Yusuke MAEDA (Yokohama National University) Yoshito SAITO (Ricoh Corp) Background Conventional Teaching/Playback
More informationCSc Topics in Computer Graphics 3D Photography
CSc 83010 Topics in Computer Graphics 3D Photography Tuesdays 11:45-1:45 1:45 Room 3305 Ioannis Stamos istamos@hunter.cuny.edu Office: 1090F, Hunter North (Entrance at 69 th bw/ / Park and Lexington Avenues)
More informationResearch Subject. Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group)
Research Subject Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group) (1) Goal and summary Introduction Humanoid has less actuators than its movable degrees of freedom (DOF) which
More informationLIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION
F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,
More informationStructure from Motion. Prof. Marco Marcon
Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)
More informationBuilding Reliable 2D Maps from 3D Features
Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;
More informationACE Project Report. December 10, Reid Simmons, Sanjiv Singh Robotics Institute Carnegie Mellon University
ACE Project Report December 10, 2007 Reid Simmons, Sanjiv Singh Robotics Institute Carnegie Mellon University 1. Introduction This report covers the period from September 20, 2007 through December 10,
More informationRobot Localization based on Geo-referenced Images and G raphic Methods
Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,
More informationCeilbot vision and mapping system
Ceilbot vision and mapping system Provide depth and camera data from the robot's environment Keep a map of the environment based on the received data Keep track of the robot's location on the map Recognize
More informationMulti-View Stereo for Static and Dynamic Scenes
Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.
More informationMOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS
MOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS ZHANG Chun-sen Dept of Survey, Xi an University of Science and Technology, No.58 Yantazhonglu, Xi an 710054,China -zhchunsen@yahoo.com.cn
More informationCamera Calibration for a Robust Omni-directional Photogrammetry System
Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,
More informationIndustrial Robots : Manipulators, Kinematics, Dynamics
Industrial Robots : Manipulators, Kinematics, Dynamics z z y x z y x z y y x x In Industrial terms Robot Manipulators The study of robot manipulators involves dealing with the positions and orientations
More informationAccurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion
007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,
More informationMOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE
Head-Eye Coordination: A Closed-Form Solution M. Xie School of Mechanical & Production Engineering Nanyang Technological University, Singapore 639798 Email: mmxie@ntuix.ntu.ac.sg ABSTRACT In this paper,
More informationImage Based Reconstruction II
Image Based Reconstruction II Qixing Huang Feb. 2 th 2017 Slide Credit: Yasutaka Furukawa Image-Based Geometry Reconstruction Pipeline Last Lecture: Multi-View SFM Multi-View SFM This Lecture: Multi-View
More informationSensor Modalities. Sensor modality: Different modalities:
Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationMETRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS
METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires
More informationInverse Kinematics Analysis for Manipulator Robot With Wrist Offset Based On the Closed-Form Algorithm
Inverse Kinematics Analysis for Manipulator Robot With Wrist Offset Based On the Closed-Form Algorithm Mohammed Z. Al-Faiz,MIEEE Computer Engineering Dept. Nahrain University Baghdad, Iraq Mohammed S.Saleh
More informationMethod for designing and controlling compliant gripper
IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Method for designing and controlling compliant gripper To cite this article: A R Spanu et al 2016 IOP Conf. Ser.: Mater. Sci.
More informationShading Languages. Seminar Computer Graphics. Markus Kummerer
Shading Languages Markus Kummerer ABSTRACT Shading Languages provide a highly flexible approach for creating visual structures in computer imagery. The RenderMan Interface provides an API for scene description,
More informationPlanning, Execution and Learning Application: Examples of Planning for Mobile Manipulation and Articulated Robots
15-887 Planning, Execution and Learning Application: Examples of Planning for Mobile Manipulation and Articulated Robots Maxim Likhachev Robotics Institute Carnegie Mellon University Two Examples Planning
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed
More information3D shape from the structure of pencils of planes and geometric constraints
3D shape from the structure of pencils of planes and geometric constraints Paper ID: 691 Abstract. Active stereo systems using structured light has been used as practical solutions for 3D measurements.
More informationMapping textures on 3D geometric model using reflectance image
Mapping textures on 3D geometric model using reflectance image Ryo Kurazume M. D. Wheeler Katsushi Ikeuchi The University of Tokyo Cyra Technologies, Inc. The University of Tokyo fkurazume,kig@cvl.iis.u-tokyo.ac.jp
More informationCOMPUTER AND ROBOT VISION
VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California
More informationGeometrical Feature Extraction Using 2D Range Scanner
Geometrical Feature Extraction Using 2D Range Scanner Sen Zhang Lihua Xie Martin Adams Fan Tang BLK S2, School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798
More informationOn-line and Off-line 3D Reconstruction for Crisis Management Applications
On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationActive Stereo Vision. COMP 4900D Winter 2012 Gerhard Roth
Active Stereo Vision COMP 4900D Winter 2012 Gerhard Roth Why active sensors? Project our own texture using light (usually laser) This simplifies correspondence problem (much easier) Pluses Can handle different
More information3-Dimensional Object Modeling with Mesh Simplification Based Resolution Adjustment
3-Dimensional Object Modeling with Mesh Simplification Based Resolution Adjustment Özgür ULUCAY Sarp ERTÜRK University of Kocaeli Electronics & Communication Engineering Department 41040 Izmit, Kocaeli
More information