Exploration and Imitation for the Acquisition of Object Action Complexes

Exploration and Imitation for the Acquisition of Object Action Complexes on Humanoids
Tamim Asfour, Aleš Ude, Andrej Gams, Damir Omrčen, Kai Welke, Alexander Bierbaum, Martin Do, Pedram Azad, and Rüdiger Dillmann
Karlsruhe Institute of Technology, Institute for Anthropomatics, Humanoids and Intelligence Systems Lab, wwwiaim.ira.uka.de
Jožef Stefan Institute, Department of Automatics, Biocybernetics, and Robotics, Ljubljana, Slovenia, http://www.ijs.si/
Workshop, Humanoids 2009, 07.12.2009, Paris, France

ARMAR-III in a Kitchen Environment

Limitations and shortcuts
Objects:
- Complete model knowledge (shape, color, texture)
- Only a visual representation is used
- How to learn new objects?
- How to acquire multisensory representations of objects?
Actions:
- Engineering approaches as placeholders for learned primitive actions
- How to learn new actions?
- How to adapt actions to new situations?
- How to chain different actions to achieve complex tasks?

Underlying Control Architecture
- High level (reasoning, planning, language): aOACs (semantic memory) with symbolic representations; rule learning, change detection, semantic graphs, generalization, abstraction; planning with action grammars (PKS) yielding planned actions
- Mid level (recognition, memory consolidation, action selection, language): iOACs (episodic memory) such as grasp densities, pushing for grasping, filling and pouring; objects, relations, actions, effects; task intention
- Low level (online sensorimotor processing): vision (attention, object learning, detection and recognition, action recognition, expectation verification, high-level/compound features, segmentation, grouping, learning), haptics (low-level percepts, proprioception, change detection, grasp reflexes), and action synthesis (DMPs, HMMs, grasp recognition and mapping, hand-eye coordination, grasp reflex, box decomposition, motor primitives, motor commands)

In this talk
Autonomous Exploration:
- Visually guided haptic exploration
- Visual object exploration and search
Coaching and Imitation:
- Learning from observation
- Goal-directed imitation

Available skills
- Direct kinematics, inverse kinematics, position/force control
Hand:
- Detection of contact and objectness
- Assessment of object deformability
- Precision grasps: distance between fingertips
- Power grasps: distance between fingertips and palm

Multisensory object exploration
- Fusion of tactile, proprioceptive, and visual sensor data with a five-fingered hand
- Verification of object size
- Verification of object deformability

Tactile Object Exploration
- Potential-field approach to guide the robot hand (TCP) along the object surface (sketched below)
- Oriented 3D point cloud from contact data
- Face extraction from the 3D point cloud
- Geometric feature filter pipeline: parallelism, minimum face size, mutual visibility, face distance
- Association between objects and actions (grasps)
- Symbolic grasps (grasp affordances)
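
As a rough illustration of the potential-field idea named above, the sketch below drives the hand's tool center point (TCP) toward unexplored parts of the object: previously sensed contacts act as repulsive charges while their centroid attracts the hand. This is a minimal sketch, not the original implementation; the function name, gains, and data layout are assumptions.

```python
import numpy as np

def exploration_direction(tcp, contacts, k_att=1.0, k_rep=0.5):
    """Potential-field direction for tactile exploration (illustrative).

    tcp      -- current 3-D tool-center-point position of the hand
    contacts -- (N, 3) array of previously sensed contact points
    """
    contacts = np.asarray(contacts, dtype=float)

    # Weak attraction toward the object (centroid of known contacts).
    f_att = k_att * (contacts.mean(axis=0) - tcp)

    # Repulsion away from already explored contact points, so the
    # hand is pushed toward unexplored regions of the surface.
    diffs = tcp - contacts                       # (N, 3)
    dists = np.linalg.norm(diffs, axis=1) + 1e-9
    f_rep = k_rep * (diffs / dists[:, None] ** 3).sum(axis=0)

    force = f_att + f_rep
    return force / (np.linalg.norm(force) + 1e-9)  # unit direction
```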

Active Visual Object Exploration and Search
- Generation of visual representations through exploration
- Application of the generated representations in recognition tasks

Active Visual Object Exploration
- Exploration of unknown objects

Exploration
- Exploration of unknown objects
- Background-object and hand-object segmentation
- Generation of different views through manipulation

Representation
Aspect graph:
- Multi-view, appearance-based representation
- Each node corresponds to one view
- Edges describe neighborhood relations
Feature pool:
- Compact representation of views with prototypes
- Grouping based on visual similarity
- Vector quantization through incremental clustering (see the sketch below)
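
As a sketch of the vector-quantization step mentioned above: each incoming view feature either refines its nearest prototype or, if too dissimilar, opens a new one. The class name, threshold, and learning rate are illustrative assumptions, not the original implementation.

```python
import numpy as np

class IncrementalVQ:
    """Vector quantization by incremental clustering (illustrative)."""

    def __init__(self, threshold=0.5, lr=0.1):
        self.prototypes = []        # compact feature pool
        self.threshold = threshold  # maximum distance to join a cluster
        self.lr = lr                # prototype adaptation rate

    def add_view(self, feature):
        """Assign one view feature to a prototype, creating one if needed."""
        feature = np.asarray(feature, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(feature - p) for p in self.prototypes]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:
                # Visually similar view: move the prototype toward it.
                self.prototypes[i] += self.lr * (feature - self.prototypes[i])
                return i
        # Dissimilar to all prototypes: start a new cluster.
        self.prototypes.append(feature.copy())
        return len(self.prototypes) - 1
```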

Active Visual Search
- Object search using the perspective and foveal cameras of the Karlsruhe Humanoid Head
- Scene memory: integration of object hypotheses in an egocentric representation

Learning by Autonomous Exploration: Pushing
- Learning of actions on objects (pushing)
- Learning the relationship between the point and angle of push (point of contact, angle of contact) and the actual object movement (x_vel, y_vel, φ_vel), expressed in object and world coordinates
- Using this knowledge to find the appropriate point and angle of push to bring an object to a goal

Pushing within the Formalized OAC Concept
Requirements:
- Initial motor knowledge: the robot needs to know how to move the pusher along straight lines
- Assumed visual processing: segmentation and localization of objects
The pushing OAC (oac_push) learns its prediction function T, implemented as a feedforward neural network. This network represents a forward model for the object movements recorded with each pushing action. Learning proceeds by exploration.
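
A minimal sketch of such a forward model, with a small scikit-learn feedforward network standing in for the original one. The input/output layout, file names, and network size are assumptions; the point is only that the model predicts object motion from push parameters and can then be queried to choose a push.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical exploration data:
#   X: one row per push, [contact_x, contact_y, push_angle]
#   Y: observed object motion per push, [x_vel, y_vel, phi_vel]
X = np.load("push_inputs.npy")
Y = np.load("push_outcomes.npy")

# Feedforward network as forward model: push parameters -> object motion.
forward_model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000)
forward_model.fit(X, Y)

def choose_push(candidates, desired_motion):
    """Pick the candidate push whose predicted outcome is closest
    to the desired object motion (inverse use of the forward model)."""
    pred = forward_model.predict(candidates)
    errors = np.linalg.norm(pred - desired_motion, axis=1)
    return candidates[int(np.argmin(errors))]
```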

Object-independent pushing (generalization across objects)
- Learning the relationship between the point and angle of push and the actual movement of an object
- Direct association between the binarized object image and the object's response to the applied pushing action
- Using this knowledge to find the appropriate point and angle of push to bring an object to a goal
Pushing for grasping

In this talk
Autonomous Exploration:
- Visually guided haptic exploration
- Visual object exploration and search
Coaching and Imitation:
- Learning from observation
- Goal-directed imitation

Observation, Reproduction, Generalization

Build a library of motor primitives
- Coaching
- Markerless human motion tracking and object tracking
- Guiding

Build a library of motor primitives
Master Motor Map (MMM) as an interface for the transfer of motor knowledge between different embodiments

Action representation using DMPs

Canonical system:
$$\tau \dot{u} = -\alpha_u u$$

Nonlinear function:
$$f(u) = \frac{\sum_i \psi_i(u)\, w_i\, u}{\sum_i \psi_i(u)}, \qquad \psi_i(u) = e^{-h_i (u - c_i)^2}$$

Transformation system:
$$\tau \dot{v} = K(g - x) - Dv + (g - x_0) f(u), \qquad \tau \dot{x} = v$$

The canonical system provides the phase u, the nonlinear function f(u) shapes the movement, and the transformation system outputs the target trajectory y, ẏ, ÿ. The weights w_i are obtained by locally weighted learning; the parameters x_0, g, τ, w_i are generalized by Gaussian process regression.
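
The system above can be integrated numerically in a few lines. The following one-dimensional Euler integration is a minimal sketch with illustrative gains (K, D, α_u), not the original code:

```python
import numpy as np

def integrate_dmp(w, c, h, x0, g, tau, K=100.0, D=20.0,
                  alpha_u=2.0, dt=0.002):
    """Euler integration of the discrete DMP equations above (1-D).

    w, c, h -- weights, centers, and widths of the Gaussian kernels
    x0, g   -- start and goal positions; tau -- temporal scaling
    """
    x, v, u = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (u - c) ** 2)
        f = np.dot(psi, w) * u / (psi.sum() + 1e-10)    # f(u)
        v += dt / tau * (K * (g - x) - D * v + (g - x0) * f)
        x += dt / tau * v
        u += dt / tau * (-alpha_u * u)                  # canonical system
        traj.append(x)
    return np.array(traj)
```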

Learning from multiple examples
To relate actions to goals, we need to observe more than one movement.
Example movements $M_i$ are encoded by a sequence of trajectory points $\{p_{ij}, v_{ij}, a_{ij}\}$ at times $\{t_{ij}\}$, $j = 1, \dots, n_i$, with associated goals (query points) $q_i$.

Parameters to be estimated
To generate a new control policy, we need to estimate the DMP parameters $w$, $g$, and $\tau$ as a function of the parameters $q$ that characterize the task:
$$F : q \mapsto (w, g, \tau)$$
Gaussian process regression is used to associate the query points with the goal of an action, its frequency, and its timing.

Parameters to be estimated
To generate a new control policy, we need to estimate the DMP parameters $w$, $g$, and $\tau$ as a function of the parameters $q$ that characterize the task:
$$F : q \mapsto (w, g, \tau)$$
Locally weighted regression is used to estimate the shape parameters in new situations (see the sketch below).
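
A hedged sketch of this generalization step, using scikit-learn's Gaussian process regression to map query points to goal and timing parameters (the shape parameters w would come from locally weighted regression, which is omitted here). Data layout and file names are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical training set extracted from the example movements:
#   Q: query points q_i (e.g. goal positions), G: DMP goals g_i,
#   TAU: temporal scaling factors tau_i
Q = np.load("queries.npy")
G = np.load("goals.npy")
TAU = np.load("taus.npy")

gpr_goal = GaussianProcessRegressor(kernel=RBF()).fit(Q, G)
gpr_tau = GaussianProcessRegressor(kernel=RBF()).fit(Q, TAU)

def generalize(q_new):
    """Predict DMP goal and timing for an unseen query point."""
    q_new = np.atleast_2d(q_new)
    return gpr_goal.predict(q_new)[0], gpr_tau.predict(q_new).item()
```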

Reaching and active vision
- 3D active vision; errors corrected by GPR
- No knowledge about the kinematics is assumed; the kinematics of the goal configuration is learned from the data
- The generated DMPs avoid the table

Perceptual feedback
The properties of DMPs allow us to easily modify the final reaching position on-line.

Accuracy for reaching and grasping
Difference between the means: [1.6, 4.2, 7.6] cm. Systematic modeling errors are successfully corrected.

Grasping
It was not necessary to track the hand to correct modeling errors (vision + kinematics).

Coaching periodic motion
- The first system that does not need the frequency to be specified beforehand
- The system allows training with the teacher in the control loop

Periodic movements
- Extract the frequency and learn the waveform
- Adaptive frequency oscillators for frequency extraction (see the sketch below)
- Incremental regression for waveform learning (periodic Dynamic Movement Primitives)
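
The frequency-extraction step can be sketched with a single adaptive phase oscillator in the style of Righetti and Ijspeert, which locks onto the basic frequency of the demonstrated signal. Gains and the initial frequency are illustrative, and this simplified stand-in omits the waveform learning:

```python
import numpy as np

def extract_frequency(signal, dt=0.01, K=20.0, omega0=2 * np.pi):
    """Adaptive frequency oscillator (illustrative):
        phi'   = omega - K * e * sin(phi)
        omega' =       - K * e * sin(phi)
    where e is the mean-removed demonstration signal."""
    e = signal - np.mean(signal)
    phi, omega = 0.0, omega0
    for y in e:
        coupling = K * y * np.sin(phi)
        phi += dt * (omega - coupling)
        omega += dt * (-coupling)
    return omega  # adapted angular frequency in rad/s
```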

Generalization of Periodic Movements
- The data must first be processed to obtain the optimal frequency for each example motion
- To match the phases between the training trajectories, each example trajectory must end in the same configuration
- When using recursive learning with a forgetting factor, we need to ensure that all examples are parsed with matching phases (see the sketch below)
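
For the recursive learning with a forgetting factor mentioned in the last item, a standard recursive least-squares update is sketched below; it adapts the waveform weights sample by sample while discounting old data. Variable names and the value of the forgetting factor are illustrative:

```python
import numpy as np

def rls_update(w, P, psi, target, lam=0.995):
    """One recursive least-squares step with forgetting factor lam.

    w   -- current weight vector of the periodic basis functions
    P   -- current inverse covariance matrix
    psi -- basis activations at the current phase
    """
    psi = psi.reshape(-1, 1)
    gain = P @ psi / (lam + psi.T @ P @ psi)
    error = target - w @ psi.ravel()        # prediction error
    w = w + gain.ravel() * error
    P = (P - gain @ psi.T @ P) / lam        # discount old samples
    return w, P
```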

Periodic movements: Wiping

Training

Training Data
- The frequencies need to be estimated when acquiring the data
- The height difference is used as a query parameter

Generalization Performance

Changing the Height of the Drums

Sequencing of discrete DMPs
- On-line learning of DMPs for reach, transport, and retreat
- Associating semantic information with DMPs: sequencing of movement primitives, planning

Thanks to the PACO-PLUS Consortium
PACO-PLUS: Perception, Action and Cognition through Learning of Object Action Complexes, www.paco-plus.org
- Kungliga Tekniska Högskolan, Sweden: J. O. Eklundh, D. Kragic
- University of Göttingen, Germany: F. Wörgötter
- Aalborg University, Denmark: V. Krüger
- University of Karlsruhe, Germany: R. Dillmann, T. Asfour
- ATR, Kyoto, Japan: M. Kawato & G. Cheng
- University of Liège, Belgium: J. Piater
- University of Southern Denmark: N. Krüger
- University of Edinburgh, United Kingdom: M. Steedman
- Jožef Stefan Institute, Slovenia: A. Ude
- Leiden University, Netherlands: B. Hommel
- Consejo Superior de Investigaciones Científicas, Spain: C. Torras & J. Andrade-Cetto

This work has been conducted within the EU Cognitive Systems project PACO-PLUS (www.paco-plus.org), funded by the European Commission (www.cognitivesystems.eu), in strong collaboration with the German Humanoid Research project SFB 588 (www.sfb588.uni-karlsruhe.de), funded by the German Research Foundation (DFG), and the EU Cognitive Systems project GRASP (www.grasp-project.eu), funded by the European Commission.
Thank you...