Trajectory Reconstruction with NURBS Curves for Robot Programming by Demonstration

Jacopo Aleotti, Stefano Caselli, Giuliano Maccherozzi
RIMLab - Robotics and Intelligent Machines Laboratory
Dipartimento di Ingegneria dell'Informazione, University of Parma, Italy
E-mail: {aleotti,caselli}@ce.unipr.it, giuliano.macchero@libero.it

Abstract - Service robots require simple programming techniques allowing users with little or no technical expertise to integrate new tasks in a robotic platform. A promising solution for the automatic acquisition of robot behaviors is the Programming by Demonstration (PbD) paradigm. Its aim is to let robot systems learn new behaviors from a human operator's demonstration. This paper describes a virtual reality based PbD system for pick-and-place and manipulation tasks. The system recovers smooth robot trajectories from single or multiple user demonstrations, thereby overcoming sensor noise and human inconsistency problems. More specifically, we investigate the benefits of reconstructing the human hand trajectory with NURBS curves by means of a best-fit data smoothing algorithm. Experiments involving object transportation while avoiding obstacles in the workspace show the viability and effectiveness of the approach.

Index Terms - Robot Programming by Demonstration, NURBS curves, Virtual Reality.

I. INTRODUCTION

The development of effective personal and home service robots is becoming one of the major goals in the context of human-robot interaction. This challenge requires systems enabling robot programming for users with little or no technical competence. A promising solution that has been proposed for the automatic acquisition of robot behaviors is Programming by Demonstration (PbD) [6]. PbD offers a way to build user-friendly interfaces with natural methods of interaction. For manipulation tasks, a PbD system provides the user with a conceptually simple way to instruct a robotic platform just by showing how to solve a certain task, provided that the necessary initial knowledge has been given to the system.

PbD systems can be classified into two main categories depending on the way the demonstration is carried out. The most general way is performing the demonstration in the real environment [6], [18], [10], [22], [23]. This approach requires object recognition techniques and routines for human gesture segmentation. At the current state of the art, however, it can deal only with highly structured environments including a small set of known objects. An alternative approach involves performing the demonstration in a simulated environment [16], [8], [9], [1]. The use of a simulated virtual environment to directly demonstrate the task can help to overcome some difficulties. First, tracking the user's actions within a simulated environment is easier than in a real environment. Second, there is no need for object recognition, since the state of the manipulated objects is known in advance. Third, while performing the demonstration the user can always control the position of the virtual cameras through which the virtual world is viewed. Finally, the virtual environment can be augmented with operator aids such as graphical or other synthetic fixtures [15], and force feedback. A major difficulty of this approach is that, in cluttered virtual environments such as robotic assemblies, fine manipulation tasks must be tracked with high fidelity.

Learning is one of the main topics of interest in PbD and can be pursued either at the task level or at the trajectory level.
The present work focuses on trajectory learning. Previous research [1] focused on the development of a PbD platform handling basic assembly tasks, where an operator wearing a dataglove with a 3D tracker demonstrates the tasks in a virtual environment. The imitation strategy was aimed at recognizing a sequence of hand-object actions such as pick-and-place, stacking, and peg-in-hole, without reproducing the exact movements of the hand. The recognized task was then performed in a simulated environment for validation and, possibly, in a real workspace exploiting basic straight-line movements of the end effector. This approach, however, is not feasible in the presence of obstacles in the workspace.

The present work aims at generating a smooth path that approximates the actual motion of the grasped object. The method allows the system to fit data sets containing one or multiple example trajectories. As stated in [3], data smoothing yields several advantages for characterizing human motion, since it removes noise from tracking sensors and reduces jitter and unwanted movements. NURBS (Non-Uniform Rational B-Spline) curves are used for trajectory reconstruction since they have proven to be among the most effective parametric curves both for path planning of 2D mobile robots [17] and for 3D curve approximation [12]. NURBS curves provide a flexible way to represent both standard analytic and free-form curves, their evaluation is fast, and they can be manipulated either by directly modifying the position of points lying on the curve or by indirectly changing the configuration of the control points. Moreover, NURBS curves can exploit powerful geometric tools such as knot insertion and removal.

The rest of the paper is organized as follows. Section 2

reviews the state of the art regarding a sample of related PbD and motion learning research. Section 3 introduces the structure and the main components of the system. Section 4 shows some experimental results that demonstrate the potential of the proposed approach. Section 5 closes the paper, summarizing the work and discussing future investigations.

II. RELATED WORK

Ude and Dillmann [20] proposed a PbD system where a stereo vision system is used to measure the trajectory of the objects being manipulated by the user. Smoothing vector splines are utilized to reconstruct trajectories either in Cartesian or in joint coordinates.

Riley et al. [14] focused on motion synthesis of dance movements for a humanoid robot. The approach includes collection of example human movements, handling of marker occlusions, extraction of motion parameters, and trajectory generation. In [19] the same authors presented a method for the formulation of joint trajectories based on B-spline basis functions.

Lee [7] described a spline smoother for finding a best-fit trajectory from multiple examples of a given physical motion that can simultaneously fit position and velocity information. The system was tested on handwriting motions.

Yang et al. [21] described a system for skill learning with application to telerobotics. Human skills are represented as a parametric model using a Hidden Markov Model (HMM), and the best action sequence can be selected from all previously measured actions. Experiments were carried out involving position trajectory learning in Cartesian space.

Delson and West [3] investigated a method for generating a robot trajectory from multiple human demonstrations for both 2D and 3D tasks. The range of human inconsistency is used to define an obstacle-free region. The generated robot trajectory is guaranteed to avoid obstacles and is shorter than any of the demonstrations.

Ogawara et al. [11] described an approach for the estimation of essential interactions for manipulation tasks and applied an HMM-based clustering algorithm for recognizing and describing human behaviors.

Billard and Schaal [2] proposed a biologically inspired model of human imitation. A dynamic simulation of a humanoid avatar was implemented and the system could learn the principal features of a 3-DOF arm trajectory.

Drumwright and Mataric [4], [5] introduced a method for performing variations on a given demonstrated behavior with the aid of an interpolation mechanism. The method allows recognition of free-space movements for humanoid robots. This approach can achieve a more accurate interpolation of the examples than the one proposed in this paper, which is based on a global approximation algorithm. However, our method has fewer constraints, since the approach in [4], [5] requires an appropriate manual segmentation of the example trajectories into time-frames.

The main difference between the previously cited works and our approach lies in the choice of the mathematical tool used for curve fitting. Our solution appears to be the only one based on NURBS curves. Moreover, our method can be applied for fitting both single and multiple trajectories, as will be shown in the experiments. The resulting curve can also be easily manipulated in the presence of collisions.

III. OVERVIEW OF THE SYSTEM

The PbD system described in this paper handles basic manipulation operations in a 3D virtual environment. The system targets task-level program acquisition in a block world.
An operator, wearing a dataglove with a 3D tracker, performs a manipulation task in the virtual environment. The virtual scene simulates the actual workspace and displays the relevant assembly components. The system recognizes, from the user's actions, a sequence of high-level hand-object operations. In detail, during the demonstration phase the user can provide multiple examples for each single operation. The movements of the grasped object are tracked; then, the system transforms each group of recorded trajectories that correspond to the same operation into a smooth curve, which is finally translated into a set of commands for a robot manipulator. After the demonstration phase the recognized task is performed in a simulated environment for validation. Then, if the operator agrees with the simulation, the task is executed in the real environment, referring to the actual object locations in the workspace. The end effector is programmed to follow the generated curve by applying an inverse kinematics algorithm.

It must be pointed out that the method for trajectory approximation proposed in this paper is demonstrator-dependent and is not guaranteed to generate collision-free trajectories. However, the operator can take advantage of the preventive simulation feature of the PbD environment to avoid unsafe path reconstructions. Moreover, the user interface allows editing of the computed trajectories by manipulating the parameters of the parametric curve, as will be shown in Section IV.

Figure 1 shows the main components of the PbD testbed. The actual robot controlled by the PbD application is a 6-DOF Puma 5 manipulator. A vision system (currently operating in 2D) is exploited to recognize the objects in the real workspace and detect their initial configurations. The whole application is built on top of a CORBA-based framework which interconnects clients and servers while providing transparent access to the various heterogeneous subsystems.

Fig. 1. Architecture of the programming by demonstration system.
Fig. 2. The experimental setup.
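The execution step described above can be summarized in a minimal sketch (Python is used here purely for illustration; the actual system drives the Puma arm through RCCL over the CORBA infrastructure). The `waypoints`, `solve_ik` and `send_joint_command` names are hypothetical placeholders, not part of the system's interfaces.

```python
import numpy as np

def execute_curve(waypoints, solve_ik, send_joint_command):
    """Drive the end effector along a reconstructed trajectory.

    `waypoints` is an (N, 3) array of Cartesian points sampled from the
    fitted curve; `solve_ik` and `send_joint_command` stand in for the
    manipulator's inverse-kinematics routine and controller interface,
    neither of which is shown in the paper.
    """
    q = None
    for p in np.asarray(waypoints):
        q = solve_ik(p, seed=q)   # seed each solve with the previous solution
        send_joint_command(q)     # joint-space set-point for the controller
```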

A. Demonstration interface

A CyberTouch VR glove, from Immersion Corporation Inc., has been used throughout the experiments to track hand motions and gestures. This device is a tactile-feedback instrumented glove with 18 resistive sensors for joint-angle measurements and 6 vibrotactile actuators. The glove comprises two bend sensors on each finger, four abduction sensors, plus sensors measuring thumb crossover, palm arch, wrist flexion and wrist abduction. One vibrotactile actuator is located on the back of each finger, and one additional actuator is located in the palm. Each actuator can be individually programmed to vary the strength of its vibrations.

The system also comprises a FasTrack 3D motion tracking device (from Polhemus, Inc.). The FasTrack is an electromagnetic sensor that computes the position and orientation of a small receiver mounted on the wrist of the CyberTouch (i.e., on the wrist of the operator performing the task) as it moves through space. The human operator uses the glove and tracker combination as input devices. For demonstration purposes, the operator's gestures are directly mapped to an anthropomorphic 3D model of the hand, which is shown in the simulated workspace along with the objects.

In the demonstration setup, the virtual environment is built upon the Virtual Hand Toolkit (VHT) provided by Immersion Corporation. VHT exploits a Haptic Scene Graph (HSG) to deal with geometrical information in an effective manner. Scene graphs are data structures providing high-level descriptions of environment geometries. Each element of the HSG can be easily imported in VHT through a VRML parser included in the library. To achieve dynamic interaction between the virtual hand and the objects in the scene, VHT allows objects to be grasped. A collision detection algorithm (V-Clip) generates collision information between the hand and the objects, including the surface normal at the collision point. A grasp state is computed by VHT based on the contact normals; when the condition for a grasped object is no longer satisfied, the object is released. The system can also incorporate different kinds of virtual fixtures that help the user in grasping and releasing objects, such as vibrotactile, auditory, and visual feedback.

B. Trajectory reconstruction

The system analyzes the demonstration provided by the human operator through the task planner module (Figure 1). Segmentation of human actions is performed by means of a simple algorithm based on changes in the grasping state: a new operation is generated whenever a grasped object is released.

In general, the specific trajectory followed by the user for a given object motion action should be learned, as it carries useful information about navigation and obstacle avoidance strategies, preferential approach directions, and so on. On the other hand, user motions are noisy, often inconsistent, and may include irrelevant deviations and pauses.
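As an illustration of the grasp-based segmentation rule, the following sketch splits a stream of tracked samples into operations whenever a grasped object is released. The frame format (object identifier plus grasped-object position, with None while the hand is empty) is a hypothetical simplification chosen for the example, not the data structure actually used by the task planner.

```python
def segment_operations(frames):
    """Split a demonstration into per-operation trajectories.

    `frames` is a sequence of (grasped_object_id, position) samples taken at a
    constant rate; grasped_object_id is None while no object is held.  A new
    operation is closed whenever a previously grasped object is released.
    """
    operations, current, held = [], [], None
    for obj, pos in frames:
        if obj is not None:
            current.append(pos)                 # object in hand: record motion
            held = obj
        elif held is not None:                  # grasp -> release transition
            operations.append((held, current))
            current, held = [], None
    if held is not None:                        # demonstration ended mid-grasp
        operations.append((held, current))
    return operations
```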

NURBS curves are the mathematical tool exploited in this work for trajectory approximation. NURBS have become popular in the CAD/CAM community as standard primitives for high-end 3D modelling applications and simulation environments. They provide the following advantages compared to other parametric curve formulations: their manipulation is easy, their evaluation is fast and computationally stable, and they are invariant under geometric transformations such as translation, rotation and affine transformations.

A NURBS curve [12] is a vector-valued piecewise rational polynomial function of the form

C(u) = \frac{\sum_{i=0}^{n} w_i P_i N_{i,p}(u)}{\sum_{i=0}^{n} w_i N_{i,p}(u)}, \qquad a \le u \le b \qquad (1)

where the w_i are scalars called weights, the P_i are the control points, and the N_{i,p}(u) are the p-th degree B-spline basis functions defined recursively as

N_{i,0}(u) = \begin{cases} 1 & \text{if } u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}

N_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i} \, N_{i,p-1}(u) + \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}} \, N_{i+1,p-1}(u) \qquad (2)

where the u_i are real numbers called knots that act as breakpoints, forming a knot vector U = \{u_0, \ldots, u_m\} with u_i \le u_{i+1} for i = 0, \ldots, m.

The PbD system exploits the NURBS++ library (http://libnurbs.sourceforge.net/), which provides classes, data structures and algorithms to manipulate NURBS curves. As stated before, the complete task consists of several operations; for every operation the system tracks each demonstration provided by the user, sampling at a constant rate the (x, y, z) coordinates of the grasped object. After a number of trials, the resulting data set can be written as (x_{t,s}, y_{t,s}, z_{t,s}), where t is the trial index and s indicates the index of the sample within the trial. The data points are then transformed into the reference frame of the simulation environment and sorted into a single list of points p_i = (x_i, y_i, z_i) according to their Euclidean distance to the starting point p of the current movement, so that \|p_0 - p\| \le \|p_1 - p\| \le \ldots \le \|p_n - p\|. The starting point is known, given the configuration of the objects in the environment. Finally, approximation algorithms are applied to the sorted data set to find the best-fit NURBS curve. Two approximation algorithms have been investigated [13]: the first is based on a least squares method, while the second is a global approximation algorithm with a specified error bound.
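For reference, equations (1) and (2) translate directly into code. The sketch below (plain NumPy, not the NURBS++ routines used by the system) evaluates the Cox-de Boor recursion and the rational combination; the quarter-circle test at the bottom is a standard example of an exact rational quadratic representation, not one of the paper's trajectories.

```python
import numpy as np

def basis(i, p, u, U):
    """B-spline basis function N_{i,p}(u) of eq. (2), Cox-de Boor recursion."""
    if p == 0:
        # half-open knot spans, with the usual special case at the right end
        if U[i] <= u < U[i + 1] or (u == U[-1] and U[i] < U[i + 1] == U[-1]):
            return 1.0
        return 0.0
    left = right = 0.0
    if U[i + p] > U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, P, w, U, p):
    """Evaluate C(u) of eq. (1) for control points P, weights w, knots U."""
    num, den = np.zeros(len(P[0])), 0.0
    for i in range(len(P)):
        Nip = w[i] * basis(i, p, u, U)
        num += Nip * np.asarray(P[i], dtype=float)
        den += Nip
    return num / den

# Quarter circle as a quadratic NURBS: every evaluated point has unit norm.
P = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
w = [1.0, np.sqrt(2.0) / 2.0, 1.0]
U = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
for u in np.linspace(0.0, 1.0, 5):
    assert abs(np.linalg.norm(nurbs_point(u, P, w, U, p=2)) - 1.0) < 1e-12
```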

IV. EXPERIMENTS

The capabilities of the PbD system have been evaluated in assembly experiments consisting of pick-and-place operations on a small cubic box (shown in yellow or in clear shading in the following pictures) in a workspace comprising a working plane and two static obstacles (a larger rectangular box and a cylinder). We describe three of these experiments in the following.

The first experiment (Figure 3) consists of a single demonstration. The user is required to grasp the small cubic box and move it to its final configuration on the working plane, located on the other side of the obstacles, while avoiding them. In Figure 3 the image on the left is a snapshot of the virtual environment during the demonstration phase; the image in the center shows the simulation environment with the approximating NURBS drawn as a blue (dark) line (the line was rendered using the NURBS interface provided by OpenGL); the center of the grasped object follows the curve. The image on the right shows the real workspace. Figure 4 shows a diagram with the resulting NURBS and the points sampled during the demonstration. The tight fitting of the sampled points highlights the effectiveness of the approximation algorithm.

Fig. 3. Demonstration interface, task simulation and real workspace for experiment 1.
Fig. 4. Generated NURBS and sampled points for experiment 1.

The second experiment comprised four demonstrations of the same task in an environment similar to that of experiment 1 (Figure 8 shows the simulation of the task). Across the four demonstrations the user followed quite different paths to reach the final position, and one of the trajectories had strong wiggles. Figure 5 shows the resulting NURBS and the points sampled from each trial (drawn with different symbols). The spatial sampling of the curve was kept low, resulting in an average of 7 points for each demonstration. These results show the effectiveness of the least squares algorithm with sparse data, since the best-fit trajectory is smooth and correctly removes the jitter of the individual demonstrations.

Fig. 5. Generated NURBS and sampled points for experiment 2 with sparse data sets (least squares algorithm).
Fig. 8. Task simulation for experiment 2.

The same experiment has been exploited to assess the robustness of this algorithm with a dense data set, as shown in Figure 7. In this case an average of 26 samples was collected for each demonstration. The resulting NURBS is not satisfactory since it has wobbles, in particular in the central part of the trajectory, where the differences between the four user paths are remarkable.

Fig. 7. Generated NURBS and sampled points for experiment 2 with a dense data set (least squares algorithm).

To deal with dense data sets, a modified NURBS-based algorithm has been tested. The algorithm is based on a global approximation method that accepts an error bound as an input parameter.
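A compact way to reproduce the merge-sort-and-fit pipeline of Section III-B is sketched below. It pools the samples from several demonstrations, sorts them by Euclidean distance to the known starting point, and fits a smoothing cubic B-spline with SciPy; this is the non-rational special case of a NURBS, and the smoothing factor s only stands in for the error bound of the global approximation algorithm in [13], so the numbers are illustrative rather than those of the system.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_operation(trials, start, smoothing=25.0, n_waypoints=100):
    """Fit one smooth curve to several demonstrations of the same operation.

    trials: list of (N_t, 3) arrays with the grasped-object positions of each
            demonstration; start: known starting point of the movement.
    """
    pts = np.vstack(trials)
    # sort the pooled samples by distance to the starting point, as in the paper
    pts = pts[np.argsort(np.linalg.norm(pts - np.asarray(start), axis=1))]
    # drop near-duplicate consecutive samples, which upset the spline fitter
    keep = np.ones(len(pts), dtype=bool)
    keep[1:] = np.linalg.norm(np.diff(pts, axis=0), axis=1) > 1e-6
    pts = pts[keep]
    # smoothing B-spline fit; a larger `smoothing` trades fidelity for smoothness
    tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], s=smoothing, k=3)
    u = np.linspace(0.0, 1.0, n_waypoints)
    return np.column_stack(splev(u, tck))   # (n_waypoints, 3) way-points
```

Loosening the smoothing factor plays the same qualitative role as raising the error bound: the fitted curve stops chasing the scatter between inconsistent demonstrations and settles on a single smooth path.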

Figure 9 shows the results obtained with the same data set of experiment 2 and an error bound of 5 cm.

Fig. 9. Generated NURBS and sampled points for experiment 2 (global approximation algorithm with error bound).

From the above and other similar experiments, we derive that if the error bound is higher than a certain threshold, the generated NURBS is smooth. User motion variability indicates that the robot can actually choose a path within a range of free space. The NURBS approximation with error bound takes advantage of this freedom to compute a smooth trajectory. Therefore, the global approximation algorithm should be preferred when the input data set is dense and there are many points close to each other.

A final experiment consisted of a single demonstration, as in the first one. While performing the task, the user (deliberately, in this case) moved the grasped object into an obstacle, as shown in Figure 6 (left). At the end of the simulation, the user decided to manually modify the resulting trajectory to amend the colliding path segment. The PbD system provides an interactive editor that can be used to directly manipulate the generated NURBS by changing the position of selected points on the curve. The new curve is redrawn in real time. The final NURBS can be saved and used as input for the execution phase of the task in the real environment.

Fig. 6. Experiment 3: demonstration phase (left image) and NURBS editor (right image).

V. CONCLUSIONS

In this paper we have described a PbD system based on a virtual reality teaching interface. We have investigated the effectiveness of a NURBS-based trajectory reconstruction algorithm that approximates both single and multiple demonstrations. This method allows noise reduction and filtering of unwanted movements. Although the generated paths are not guaranteed to be collision free, the user can take advantage of tools such as a preventive simulation for testing the task and an interactive editor for manipulating the trajectories. Future work will include the possibility of combining trajectory reconstruction with the clustering of trajectories into qualitatively different paths using a geometric approach. A Hidden Markov Model based algorithm will also be investigated to evaluate the consistency of the trajectories within each cluster.

ACKNOWLEDGMENT

This research is partially supported by MIUR (Italian Ministry of Education, University and Research) under project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components). We thank Monica Reggiani for her contribution to the initial part of this research.

REFERENCES

[1] J. Aleotti, S. Caselli, and M. Reggiani. Leveraging on a virtual environment for robot programming by demonstration. Robotics and Autonomous Systems, 47(2-3):153-161, 2004.
[2] A. Billard and S. Schaal. Robust learning of arm trajectories through human demonstration. In Int'l Conf. on Intelligent Robots and Systems, pages 734-739, Maui, Hawaii, USA, Oct.-Nov. 2001.
[3] N. Delson and H. West. Robot programming by human demonstration: The use of human inconsistency in improving 3D robot trajectories. In IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, pages 1248 12, September 1994.
[4] E. Drumwright, O. Jenkins, and M. Mataric. Exemplar-Based Primitives for Humanoid Movement Classification and Control. In Int'l Conf. on Robotics and Automation, New Orleans, LA, April 2004.
[5] E. Drumwright and M. Mataric. Generating and recognizing free-space movements in humanoid robots. In IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, pages 1672-1678, Las Vegas, Nevada, October 2003.
[6] K. Ikeuchi and T. Suehiro. Towards an assembly plan from observation, Part I: Task recognition with polyhedral objects. IEEE Trans. Robot. Automat., 10(3):368-385, 1994.
[7] C.H. Lee. A Phase Space Spline Smoother for Fitting Trajectories. IEEE Trans. on Systems, Man, and Cybernetics - Part B: Cybernetics, 34(1):346 6, February 2004.
[8] E. Lloyd, J. S. Beis, D. K. Pai, and D. G. Lowe. Programming Contact Tasks Using a Reality-Based Virtual Environment Integrated with Vision. IEEE Trans. Robot. Automat., 15(3):423-434, June 1999.
[9] H. Ogata and T. Takahashi. Robotic Assembly Operation Teaching in a Virtual Environment. IEEE Trans. Robot. Automat., 10(3):391-399, June 1994.
[10] K. Ogawara, S. Iba, T. Tanuki, H. Kimura, and K. Ikeuchi. Acquiring hand-action models by attention point analysis. In IEEE Int'l Conf. on Robotics and Automation, ICRA 2001, pages 4 4, May 2001.
[11] K. Ogawara, J. Takamatsu, H. Kimura, and K. Ikeuchi. Modeling manipulation interactions by hidden Markov models. In Int'l Conf. on Intelligent Robots and Systems, pages 1096-1101, Lausanne, Switzerland, October 2002.
[12] L. Piegl. On NURBS: A Survey. IEEE Computer Graphics and Applications, 11(1): 71, January 1991.
[13] L. Piegl and W. Tiller. The NURBS Book (2nd ed.). Springer-Verlag, New York, NY, USA, 1997.
[14] M. Riley, A. Ude, and C.G. Atkeson. Methods for motion generation and interaction with a humanoid robot: Case studies of dancing and catching. In AAAI and CMU Workshop on Interactive Robotics and Entertainment, Pittsburgh, Pennsylvania, April 2000.
[15] C. Sayers. Remote Control Robotics. Springer-Verlag, New York, USA, 1999.
[16] T. Takahashi and T. Sakai. Teaching robot's movement in virtual reality. In IEEE/RSJ Int'l Workshop on Intelligent Robots and Systems, IROS 1991, November 1991.
[17] N. Tatematsu and K. Ohnishi. Tracking motion of mobile robot for moving target using NURBS curve. In Int'l Conf. on Industrial Technology, pages 2 249, San Francisco, CA, December 2003.
[18] H. Tominaga and K. Ikeuchi. Acquiring Manipulation Skills through Observation. In IEEE Int'l Conf. on Multisensor Fusion and Integration for Intelligent Systems, pages 7-12, August 1999.
[19] A. Ude, C.G. Atkeson, and M. Riley. Planning of joint trajectories for humanoid robots using B-spline wavelets. In Int'l Conf. on Robotics and Automation, pages 2223-2228, San Francisco, CA, April 2000.
[20] A. Ude and R. Dillmann. Trajectory reconstruction from stereo image sequences for teaching robot paths. In Int'l Symp. on Industrial Robots, pages 7 414, Tokyo, Japan, November 1993.
[21] J. Yang, Y. Xu, and C.S. Chen. Hidden Markov Model Approach to Skill Learning and Its Application to Telerobotics. IEEE Trans. on Robotics and Automation, 10(5):621-631, October 1994.
[22] R. Zöllner, O. Rogalla, R. Dillmann, and M. Zöllner. Understanding Users' Intention: Programming Fine Manipulation Tasks by Demonstration. In IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, IROS 2002, September 2002.
[23] R. D. Zollner and R. Dillmann. Using multiple probabilistic hypothesis for programming one and two hand manipulation by demonstration. In Int'l Conf. on Intelligent Robots and Systems, pages 2926-2931, Las Vegas, NV, October 2003.