Object Action Complexes (OACs, pronounced oaks)

1 CoTeSys ROS Fall School on Cognition-enabled Mobile Manipulation, 4 November 2010, Munich. Object Action Complexes (OACs, pronounced "oaks"). Tamim Asfour and the PACO-PLUS Consortium, Karlsruhe Institute of Technology, Institute for Anthropomatics, Germany. Institute for Anthropomatics, Department of Informatics. KIT: University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association.

2 Building Humanoids = Building Human-Centered Technologies. Assistants/companions for people of different ages, in different situations, activities and environments, in order to improve the quality of life. Key technologies for future robotic systems. Experimental platforms to study theories from other disciplines.

3 Ultimate goals. Integrated, complete 24/7 humanoid robot systems, rich with sensorimotor capabilities and able to act and interact in human-centered environments and to perform a variety of tasks, as an indispensable requirement for implementing cognitive capabilities in technical systems. Reproducible complete humanoid systems in terms of mechanical design, mechatronics, hardware and software architecture. Autonomous humanoid systems.

4 Humanoids at KIT: ARMAR, 2000; ARMAR-II, 2002; ARMAR-IIIa, 2006; ARMAR-IIIb, 2008. Collaborative Research Center 588: Humanoid Robots – Learning and Cooperating Multimodal Robots (SFB 588). Funded by the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft). karlsruhe.de/

5 ARMAR I and ARMAR II. HURO 1999, ICAR 1999, Humanoids 2000, 2001, IROS 2003, 2004, ISER 2004, STAR 2005. First demonstrator of the SFB 588. Demo at CeBIT.

6 ARMAR-IIIa and ARMAR-IIIb. 7-DOF head with foveated vision: 2 cameras in each eye, 6 microphones. 7-DOF arms: position, velocity and torque sensors, 6D force/torque sensors, sensitive skin. 8-DOF hands: pneumatic actuators, weight 250 g, holding force 2.5 kg. 3-DOF torso. 2 embedded PCs, 10 DSP/FPGA units. Holonomic mobile platform: 3 laser scanners, 3 embedded PCs, 2 batteries. A fully integrated autonomous humanoid system.

7 ARMAR IV: 63 DOF, 170 cm, 70 kg.

8 Three key questions. Grasping and manipulation in human-centered and open-ended environments. Learning through observation of humans and imitation of human actions. Interaction and natural communication. SFB 588, Karlsruhe.

9 Grasping is an integral part for humanoids in the real world.

10 Grasping and manipulation system. Offline analysis: grasp planning; global model database (object models); grasp set for each object. Execution: vision system (object recognition and localization); arm path planner; grasp selection and execution.

11 Object recognition and localization. Colored objects (IROS 2006, IROS 2009): segmentation by color; appearance-based recognition using a global approach; model-based generation of view sets; combination of stereo vision and stored orientation information for 6D pose estimation. Textured objects (Humanoids 2006, IROS 2009): recognition using local features; calculation of consistent features with respect to the pose of the object using the Hough transform; 2D localization using image point correspondences; 6D pose estimation using stereo vision. Integrating Vision Toolkit. Correspondences between learned view and view in scene.

12 Object modeling: KIT object model database. Generation of 3D point clouds using a laser scanner. Post-processing with triangulation results in high-quality meshes. Stored in several file formats (Open Inventor, VRML, Wavefront). Generation of multiple object views using stereo cameras. Developed within the German Humanoid Robotics Project (SFB 588). Web database.

13 Object representation

14 Offline grasp analysis. Feasible grasps for every object are computed offline and stored together with the object models. A grasp is defined by: grasp type, grasp starting point, approach direction, hand orientation. Simulation environment: GraspStudio and OpenGrasp (opengrasp.sourceforge.net). Grasp types: hook, cylindrical, spherical, pinch, tripod.
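The grasp description on this slide (type, starting point, approach direction, hand orientation) can be sketched as a plain data record; the field names and sample values below are illustrative assumptions, not the actual GraspStudio/OpenGrasp schema.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GraspCandidate:
    # Illustrative structure for one precomputed grasp; field names are
    # assumptions, not the actual GraspStudio/OpenGrasp format.
    grasp_type: str                                       # e.g. "cylindrical", "pinch"
    start_point: Tuple[float, float, float]               # grasp starting point (object frame)
    approach_dir: Tuple[float, float, float]              # approach direction (unit vector)
    hand_orientation: Tuple[float, float, float, float]   # quaternion (w, x, y, z)

# Offline, a set of such candidates is stored per object model:
cup_grasps = [
    GraspCandidate("cylindrical", (0.0, 0.0, 0.05), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)),
    GraspCandidate("pinch", (0.04, 0.0, 0.08), (0.0, -1.0, 0.0), (1.0, 0.0, 0.0, 0.0)),
]
```

At runtime, grasp selection would pick from this precomputed set once the vision system has localized the object.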

15 Execution: position-based visual servoing. [Block diagram: the target object pose (visual estimation) and the hand pose – position from visual estimation, orientation from kinematic approximation – are compared; the pose error Δx is mapped to joint velocities via differential kinematics, θ̇ = J⁺(θ) Δx, and sent to the joint controller. The visually estimated hand pose corrects the kinematic estimate at each step: x_tcp(t+1) = x_kinematic(t+1) + x_vision(t) − x_kinematic(t).]
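The control law on this slide, joint velocities obtained by applying the pseudoinverse Jacobian to the visually estimated pose error, can be sketched as follows. This is a minimal position-only sketch; `servo_step` and the toy 2-link Jacobian are illustrative, not part of any robot API.

```python
import numpy as np

def servo_step(J, x_hand, x_target, gain=1.0):
    """One position-based visual-servoing update (sketch).

    J        : 3xN (here 2x2) Jacobian of hand position w.r.t. joint angles
    x_hand   : current hand position, estimated visually
    x_target : target (pre-grasp) position from object localization
    Returns the joint-velocity command theta_dot = J^+ * gain * (x_target - x_hand).
    """
    dx = gain * (np.asarray(x_target) - np.asarray(x_hand))
    return np.linalg.pinv(J) @ dx

# Toy planar 2-link arm (link lengths 1 and 1) stretched out at theta = (0, 0):
# hand at (2, 0); Jacobian rows are dx/dtheta and dy/dtheta.
J = np.array([[0.0, 0.0],
              [2.0, 1.0]])
cmd = servo_step(J, x_hand=[2.0, 0.0], x_target=[2.0, 0.2])
# cmd moves the hand upward toward the target using the least-norm joint motion
```

In the full scheme on the slide, `x_hand` would be the vision-corrected TCP estimate rather than the raw kinematic one.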

16 Interactive tasks in a kitchen environment: object recognition and localization; vision-based grasping; hybrid position/force control; vision-based self-localization; combining force and vision for door opening and closing tasks; learning new objects, persons and words; collision-free navigation; audio-visual user tracking and localization; multimodal human-robot dialogs; speech recognition for continuous speech. Software framework: MCA2.

17 Bimanual grasping and manipulation: stereo vision for object recognition and localization; visual servoing for dual-hand grasping; zero-force control for teaching of grasp poses (Humanoids 2009); loosely coupled dual-arm tasks; tightly coupled dual-arm tasks.

18 Limitations and shortcuts. Problem: representational differences between high-level AI planning and low-level robotics/vision – continuous robotics vs. discrete AI planning. Previous efforts have resulted in largely ad hoc solutions. Claim: Object Action Complexes (OACs) can be used as a lingua franca to bridge this representational divide. Formalize the requirements for an artificial system to approach some level of cognitive complexity: grounding of entities in the sensorimotor domain; data structures which can be used on all levels. EU project PACO-PLUS: paco-plus.org

19 Main paradigm: objects and actions are inseparably intertwined. Visually based object recognition fails – visual information is sparse and limited. Activity involving the object decreases the uncertainty about the object's nature considerably! CMU Graphics Lab Motion Capture Database.

20 Intuitions on Object Action Complexes. Affordances (J. J. Gibson): objects afford actions. Object Action Complexes (OACs): actions define the meaning of objects, and objects suggest actions. OACs are associations of objects and affordances. Affordances can be expressed by STRIPS rules comprising preconditions and deletions/additions.
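A minimal sketch of an affordance written as a STRIPS rule with precondition, delete and add lists, as suggested on this slide; the predicate names ("graspable", "hand_empty", ...) are illustrative, not taken from the PACO-PLUS system.

```python
# STRIPS-style affordance rule as a dict of predicate sets.
# Predicate names are illustrative.

def applicable(rule, state):
    """A rule applies if its preconditions are a subset of the state."""
    return rule["pre"] <= state

def apply_rule(rule, state):
    """STRIPS semantics: remove the delete list, add the add list."""
    assert applicable(rule, state)
    return (state - rule["delete"]) | rule["add"]

grasp_cup = {
    "pre":    {"graspable(cup)", "hand_empty"},
    "delete": {"hand_empty"},
    "add":    {"in_hand(cup)"},
}

state = {"graspable(cup)", "hand_empty", "on(cup, table)"}
state = apply_rule(grasp_cup, state)
# state now contains "in_hand(cup)" and no longer "hand_empty"
```

Chaining such rules is exactly what makes OACs usable by a symbolic planner on top of the continuous execution layer.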

21 Object perspective OAC: thing & attributes → movement → thing & changed attributes. Objects & actions have arisen if this process is successful. Code similarity emerges only through the fact that both codes (human neuronal code, robot code) describe the same physical entity; OACs are code-independent. Success? Measured against: consistency with the world; novelty, drives, etc.

22 Action perspective OAC: movement & attributes → thing → movement & changed attributes. Objects & actions have arisen if this process is successful. Code similarity emerges only through the fact that both codes (human neuronal code, robot code) describe the same physical entity; OACs are code-independent. Success? Measured against: consistency with the world; novelty, drives, etc.

23 Object, action and task perspective. Object perspective: container (full) – tilting → container (empty). Action perspective: grasp pen – pinch grasp; grasp bottle – power grasp; grasp cup – handle grasp; grasp cup – from-top grasp. Task perspective: drink; put on table. Each evaluated by: success? – consistency with the world; novelty, drives, etc.

24 Formalization of OACs

25 Six design principles. Attributes [P1]: aspects of the world required for and affected by the OAC; downward congruency between levels; attributes are measured, and this measurement can be noisy or wrong. Prediction [P2]: expectation of change through the OAC; usually defined on subspaces of the total state space. Execution [P3]: every OAC must be performable in the real world; this requires an embodiment.

26 Six design principles. Evaluation [P4]: evaluating the differences between the predicted state and the actually observed state; downward congruency must guarantee that the results are interpretable at the OAC's own level. Learning [P5]: state and action representations are dynamic entities that are refined and extended by learning. Measuring reliability [P6]: embodied physical experiences ("experiments") with actions, predictions and outcomes deliver the basis for learning.

27 Formal definition of an OAC. Krüger et al., "A Formal Definition of Object Action Complexes and Examples at Different Levels of the Processing Hierarchy" (available at paco-plus.org).

28 Functions associated with OACs. In addition to id, T and M, the following class functions need to be defined for each OAC. Furthermore, we define a system level and other functions for each OAC.
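Following the formal definition in Krüger et al., where an OAC consists of an identifier, a prediction function T over an attribute space, and statistics M measuring its reliability, a minimal sketch could look like this. Field and method names are assumptions for illustration, not the paper's exact interface.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class OAC:
    """Sketch of the OAC triple (id, T, M): an identifier, a prediction
    function T mapping an attribute-space state to the state expected
    after execution, and statistics M measuring how reliable those
    predictions have been. Names are illustrative."""
    oac_id: str
    T: Callable[[Dict], Dict]   # prediction over the attribute space
    M: Dict[str, float] = field(default_factory=lambda: {"trials": 0, "successes": 0})

    def record(self, success: bool):
        # P6: embodied experiments update the reliability statistics.
        self.M["trials"] += 1
        self.M["successes"] += int(success)

    def reliability(self) -> float:
        return self.M["successes"] / self.M["trials"] if self.M["trials"] else 0.0

push = OAC("oac_push", T=lambda s: {**s, "moved": True})
push.record(True)
push.record(False)
```

Evaluation (P4) would compare `T(state_before)` against the observed state after execution and feed the result into `record`.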

29 The learning cycle

30 Limitations and shortcuts. Objects: complete model knowledge (shape, color, texture); only a visual representation is used. How to learn new objects? How to acquire multi-sensory representations of objects? Actions: kinematics-based approaches as placeholders for learned primitive actions. How to learn new actions? How to adapt actions to new situations? How to chain different actions to achieve complex tasks?

31 How to relax the limitations in our scenario? Autonomous exploration: visually guided haptic exploration; visual object exploration and search; learning actions on objects. Coaching and imitation: learning from observation; goal-directed imitation.

32 Hand: available skills. Direct/inverse kinematics. Position/force control. Detection of contact and objectness. Assessment of deformability and object size. Failed grasp → no object. Precision grasps: distance between fingertips. Power grasps: distance between fingertips and palm.

33 Tactile object exploration. Potential-field approach to guide the robot hand along the object surface. Oriented 3D point cloud from contact data. Extraction of faces from the 3D point cloud in a geometric feature filter pipeline: parallelism, minimum face size, face distance, mutual visibility. Association between objects and actions (grasps): symbolic grasps (grasp affordances).

34 Examples: Bottle

35 Visually guided haptic exploration on ARMAR, in simulation. Physics extension for Open Inventor/VRML: modeling of complex mechanical systems; modeling of virtual sensors; VMC-based inverse kinematics.

36 Active visual object exploration and search. Generation of visual representations through exploration. Application of the generated representations in recognition tasks.

37 Visual exploration of unknown objects

38 Visual exploration of unknown objects. Background-object and hand-object segmentation. Generation of different views through manipulation.

39 Representation. Aspect graph: multi-view appearance-based representation; each node corresponds to one view; edges describe neighborhood relations. Feature pool: compact representation of views with prototypes; grouping based on visual similarity; vector quantization through incremental clustering.

40 Active visual search. Object search using the perspective and foveal cameras. Scene memory: integration of object hypotheses into an egocentric representation. ICRA 2010, Humanoids 2009, ICRA.

41 Learning by autonomous exploration: pushing. Learning of actions on objects (pushing): learning the relationship between the point and angle of push and the actual movement of an object. [Figure: point and angle of contact in the object frame (x_obj, y_obj); actual object movement (x_vel, y_vel, Φ_vel) in the world frame (x_world, y_world).] Use this knowledge to find the appropriate point and angle of push to bring an object to a goal.

42 Pushing within the OAC concept. Requirements: initial motor knowledge – the robot needs to know how to move the pusher along straight lines; assumed visual processing – segmentation and localization of objects. The pushing OAC oac_push learns its prediction function, which is implemented as a feedforward neural network. This network represents a forward model for the object movements that have been recorded with each pushing action.
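A minimal sketch of such a forward model: a tiny feedforward network, trained by gradient descent, mapping (point of contact, angle of push) to the observed object velocity. Network size, learning rate and the sample data are illustrative assumptions, not the values used on the real robot.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer network: (point, angle) -> (x_vel, y_vel, phi_vel).
# Sizes are illustrative.
W1 = rng.normal(0, 0.1, (8, 2)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (3, 8)); b2 = np.zeros(3)

def predict(push):
    """Forward model: predicted object velocity for a push parameterization."""
    h = np.tanh(W1 @ push + b1)
    return W2 @ h + b2

def train_step(push, observed_vel, lr=0.05):
    """One gradient step on the squared prediction error (backprop by hand)."""
    global W1, b1, W2, b2
    h = np.tanh(W1 @ push + b1)
    err = (W2 @ h + b2) - observed_vel          # prediction error
    W2 -= lr * np.outer(err, h); b2 -= lr * err
    dh = (W2.T @ err) * (1 - h**2)              # backprop through tanh
    W1 -= lr * np.outer(dh, push); b1 -= lr * dh
    return float(err @ err)

# Each executed push yields a training sample (P5/P6: learn from experience).
push, vel = np.array([0.3, 1.2]), np.array([0.05, 0.01, 0.2])
errors = [train_step(push, vel) for _ in range(200)]
```

To push an object toward a goal, one would then search over (point, angle) for the `predict` output best aligned with the desired object motion.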

43 Pushing for grasping. Object-independent pushing (generalization across objects). Learning the relationship between the point and angle of push and the actual movement of an object: direct association between the binarized object image and the response of the object to the applied pushing action. Use this knowledge to find the appropriate point and angle of push to bring an object to a goal. Joint work with Damir Omrcen and Ales Ude (Humanoids 2009).

44 Learning of actions on objects: pushing for grasping. Learning the relationship between the point and angle of push and the actual movement of an object. [Figure: point and angle of contact in the object frame (x_obj, y_obj); actual object movement (x_vel, y_vel, Φ_vel) in the world frame.] Use this knowledge to find the appropriate point and angle of push to bring an object to a goal. Joint work with Damir Omrcen and Ales Ude (Humanoids 2009).

45 How to relax the limitations in our scenario? Autonomous exploration: visually guided haptic exploration; visual object exploration and search; learning actions on objects. Coaching and imitation: learning from observation; goal-directed imitation.

46 Learning from human observation: Observation – Generalization – Reproduction.

47 Stereo-based 3D human motion capture. Capture of 3D human motion based only on the image input from the cameras of the robot's head. Approach: hierarchical particle filter framework; localization of hands and head using color segmentation and stereo triangulation; fusion of 3D positions and edge information; half of the particles are sampled using inverse kinematics. Features: automatic initialization; 30 fps real-time tracking on a 3 GHz CPU with 640x480 images; smooth tracking of real 3D motion. Integrating Vision Toolkit.

48 Reproduction. Tracking of human and object motion. Visual servoing for grasping. Generalization?

49 Action representation. Hidden Markov Models (HMMs) (Humanoids 2006, IJHR 2008): extract key points (KPs) in the demonstration; determine key points that are common across multiple demonstrations (common key points, CKPs); reproduction through interpolation between CKPs. Dynamic movement primitives (DMPs) (ICRA 2009; Ijspeert, Nakanishi & Schaal, 2002): trajectory formulation using canonical systems of differential equations; parameters are estimated using locally weighted regression. Spline-based representations (Humanoids 2007): fifth-order splines, corresponding to minimum-jerk trajectories, encode the trajectories; time-normalize the example trajectories; determine common knot points so that all example trajectories are properly approximated (similar to via-point/key-point calculation).

50 Action representation using DMPs.
Canonical system: τ u̇ = −α u
Nonlinear function: f(u) = Σ_i ψ_i(u) w_i u / Σ_i ψ_i(u), with ψ_i(u) = exp(−h_i (u − c_i)²)
Transformation system: τ v̇ = K(g − x) − D v + (g − x₀) f, τ ẋ = v
Learning target: f_target = (−K(g − y) + D ẏ + τ ÿ) / (g − x₀); the weights w_i are fitted by locally weighted learning. The canonical system provides the phase u to the nonlinear function, whose output f drives the transformation system producing y, ẏ, ÿ.

51 Action representation using DMPs (learning from a demonstration). Same canonical system, nonlinear function and transformation system as on the previous slide. Given a demonstrated trajectory (positions, velocities and accelerations) together with x₀ and g, the target forcing term f_target is computed, and locally weighted learning fits the weights w_i so that f(u) approximates f_target.
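The DMP equations above can be integrated numerically; the sketch below uses simple Euler steps and illustrative gain values. With all weights set to zero, the system reduces to a critically damped point attractor that converges on the goal g.

```python
import numpy as np

def integrate_dmp(x0, g, w, c, h, K=100.0, D=20.0, alpha=4.0, tau=1.0, dt=0.002):
    """Euler integration of the discrete DMP:
       tau*du = -alpha*u
       tau*dv = K(g - x) - D*v + (g - x0)*f(u),  tau*dx = v
    with f(u) = sum_i psi_i(u) w_i u / sum_i psi_i(u),
         psi_i(u) = exp(-h_i (u - c_i)^2).
    Gain values are illustrative, not tuned for a real robot."""
    u, x, v, traj = 1.0, x0, 0.0, [x0]
    for _ in range(int(1.5 * tau / dt)):
        psi = np.exp(-h * (u - c) ** 2)
        f = (psi @ w) * u / (psi.sum() + 1e-10)   # forcing term
        v += dt / tau * (K * (g - x) - D * v + (g - x0) * f)
        x += dt / tau * v
        u += dt / tau * (-alpha * u)              # phase decays to 0
        traj.append(x)
    return np.array(traj)

# Ten basis functions spread over the phase u in (0, 1]; zero weights
# reproduce a plain point attractor converging on the goal g.
c = np.linspace(1.0, 0.0, 10)
h = np.full(10, 50.0)
w = np.zeros(10)
traj = integrate_dmp(x0=0.0, g=1.0, w=w, c=c, h=h)
```

Nonzero weights, fitted from f_target by locally weighted regression, shape the transient of `traj` to match the demonstration while convergence to g is preserved.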

52 Learning from observation: library of motor primitives. Markerless human motion tracking; object tracking. Action representation: dynamic movement primitives for generating discrete and periodic movements; adaptation of the dynamic systems to allow sequencing of movement primitives; associating semantic information with DMPs; sequencing of movement primitives; planning.

53 Learning from observation: periodic movements (wiping). Extract the frequency and learn the waveform; incremental regression for waveform learning. Joint work with Andrej Gams and Ales Ude (Humanoids 2010). Ude, Gams, Asfour, Morimoto: Task-Specific Generalization of Discrete and Periodic Dynamic Movement Primitives. IEEE Transactions on Robotics, vol. 26, no. 5, October 2010.

54 Learning from observation: human grasp recognition, mapping and reproduction. Joint work with Javier Romero and Danica Kragic; Humanoids.

55 API of the ARMARs. The robot interface (API) allows access to the robot's sensors and actuators via C++ variables and method calls. Skills – implemented atomic capabilities of the robot: SearchForObject, Grasp, Place, HandOver, Open/Close door. Tasks – combinations of several skills for a specific purpose: Bring object from, Put object on, Stack objects. Software framework: MCA2. Layers: Scenario – Tasks (Task 1 ... Task m) – Skills (Skill 1 ... Skill n) – Robot Interface – Robot Hardware.
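The skill/task layering described above can be sketched as plain function composition. The function names mirror the slide's skill names, but the bodies are stand-ins, not the actual MCA2 robot interface.

```python
# Sketch of the layered API: skills are atomic robot capabilities behind
# the robot interface; tasks combine several skills for a purpose.
# These bodies are stand-ins, not the actual MCA2 API.

def search_for_object(obj):
    print(f"Skill SearchForObject({obj})")
    return True  # would query the vision system

def grasp(obj):
    print(f"Skill Grasp({obj})")
    return True  # would run grasp selection and execution

def hand_over(obj):
    print(f"Skill HandOver({obj})")
    return True  # would hand the object to the user

def bring_object(obj):
    """Task = combination of skills, executed in sequence; each skill
    reports success so the task can abort early on failure."""
    return search_for_object(obj) and grasp(obj) and hand_over(obj)

ok = bring_object("cup")
```

The short-circuiting `and` chain gives the simplest form of execution monitoring: a failed skill stops the task, which is exactly the hook a symbolic planner needs.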

56 Connecting high-level task planning to execution. PKS planner (a STRIPS-like planner). Joint work with Ron Petrick and Mark Steedman, University of Edinburgh.

57 OAC relation to existing concepts and representations. OACs combine three elements: the object- (and situation-) oriented concept of affordance (Gibson 1950; Duchon, Warren and Kaelbling 1998; Stoytchev 2005; Fitzpatrick et al. 2003; Sahin et al. 2007; Grupen et al. 2007; Gorniak and Roy 2007); the representational and computational efficiency for planning and execution monitoring (the original frame problem) of STRIPS rules (Fikes and Nilsson 1971; Shahaf and Amir 2006; Shahaf, Chang and Amir 2006; Kuipers et al. 2006; Modayil and Kuipers 2007); the logical clarity of the situation/event calculus (McCarthy and Hayes 1969; Kowalski and Sergot 1986; Reiter 2001; Poole 1993; Richardson and Domingos 2006; Steedman 2002). Machine learning is involved in all three.

58 What OACs add. OACs are congruent between levels of representation. This is necessary for them to be both grounded and modular (Brooks 1986), and it is crucial for plan monitoring to deal with non-determinism and failure. It follows that OACs are initially defined bottom-up, but with more innate structure assumed than in other work (e.g. the EU projects MACS and RobotCub), and with higher levels subsequently proposing new composite macro-actions top-down. In a robot system, the formalization of OACs enforces that: actions are always grounded; the agent's model of actions and their effect on the world is subject to permanent modification and improvement; relevant data for generalization and bootstrapping processes are generated and memorized while the system is operating.

59 OACs vs. affordances. Affordances are unidirectional: objects afford actions. OACs are bidirectional: objects afford actions, and actions suggest objects. OACs can be chained (new complex OACs from simpler OACs; tasks from skills = planning). OACs (unlike affordances) are fully formalized and (partly) implemented.

60 More grasping OACs: box decomposition for grasping; pre-grasp manipulation; grasping unknown objects.

61 Grasping known objects: a box-based approach. Object shape approximation through box decomposition [Huebner et al. 2008]. Grasp hypothesis generation on objects, evaluated in the GraspIt! grasp simulator. Part-based grasping: heuristic selection (geometric reduction & grasp quality learning); shape approximation; final grasp for the task "show object". Joint work with Danica Kragic and Kai Huebner, KTH, Sweden; ICAR.

62 Pre-grasp manipulation. Flat objects on the table are difficult to grasp; pre-grasp manipulation adjusts the object before the final grasp. Joint work with Lillian Chang and Nancy Pollard (CMU), Humanoids 2010. Interface between MCA2 and ROS.

63 Grasping unknown objects. Co-planarity relations between visual entities define potential grasping affordances. Surprising result: a success rate between 30 and 40% is already achievable by such a simple mechanism. One reason is a high-level mechanism for hypothesis rejection through motion planning. There is an autonomous success evaluation based on force/haptic information: collision, no success, unstable, stable. Joint work with Norbert Krüger and Mila Popovic, University of Southern Denmark.

64 Thanks to the PACO-PLUS Consortium. PACO-PLUS: Perception, Action and Cognition through Learning of Object-Action Complexes (paco-plus.org). Kungliga Tekniska Högskolan, Sweden: J. O. Eklundh, D. Kragic. University of Göttingen, Germany: F. Wörgötter. Aalborg University, Denmark: V. Krüger. University of Karlsruhe, Germany: R. Dillmann, T. Asfour. ATR, Kyoto, Japan: M. Kawato & G. Cheng. University of Liège, Belgium: J. Piater. University of Southern Denmark: N. Krüger. University of Edinburgh, United Kingdom: M. Steedman. Jozef Stefan Institute, Slovenia: A. Ude. Leiden University, Netherlands: B. Hommel. Consejo Superior de Investigaciones Científicas, Spain: C. Torras & J. Andrade-Cetto.

65 Thanks. Humanoids at KIT: Rüdiger Dillmann, Stefan Ulbrich, David Gonzalez, Manfred Kröhnert, Niko Vahrenkamp, Julian Schill, Kai Welke, Ömer Terlemez, Alex Bierbaum, Martin Do, Stefan Gärtner, Markus Przybylski, Tamim Asfour, Isabelle Wappler, Pedram Azad (not in the picture), Paul Holz (not in the picture). Collaborators: members of SFB 588, PACO-PLUS and GRASP; Ales Ude (JSI, Slovenia), James Kuffner (CMU, USA), Danica Kragic (KTH, Sweden), Norbert Krüger (Denmark), Stefan Schaal and Peter Pastor (USC, USA), Nancy Pollard (CMU), Atsuo Takanishi (Waseda University, Japan), Gordon Cheng (TUM, Munich), Erhan Oztop (ATR, Japan), Florentin Wörgötter (BCCN, Germany), Mark Steedman (Edinburgh, UK).

66 Thank you for your attention. This work has been conducted within: the German humanoid research project SFB 588, funded by the German Research Foundation (DFG); the EU Cognitive Systems projects, funded by the European Commission: PACO-PLUS (paco-plus.org) and GRASP (grasp-project.eu).


More information

Semantic Grasping: Planning Robotic Grasps Functionally Suitable for An Object Manipulation Task

Semantic Grasping: Planning Robotic Grasps Functionally Suitable for An Object Manipulation Task Semantic ing: Planning Robotic s Functionally Suitable for An Object Manipulation Task Hao Dang and Peter K. Allen Abstract We design an example based planning framework to generate semantic grasps, stable

More information

Active Multi-View Object Search on a Humanoid Head

Active Multi-View Object Search on a Humanoid Head 2009 IEEE International Conference on Robotics and Automation Kobe International Conference Center Kobe, Japan, May 12-17, 2009 Active Multi-View Object Search on a Humanoid Head Kai Welke, Tamim Asfour

More information

Robots Towards Making Sense of 3D Data

Robots Towards Making Sense of 3D Data Nico Blodow, Zoltan-Csaba Marton, Dejan Pangercic, Michael Beetz Intelligent Autonomous Systems Group Technische Universität München RSS 2010 Workshop on Strategies and Evaluation for Mobile Manipulation

More information

CARE-O-BOT-RESEARCH: PROVIDING ROBUST ROBOTICS HARDWARE TO AN OPEN SOURCE COMMUNITY

CARE-O-BOT-RESEARCH: PROVIDING ROBUST ROBOTICS HARDWARE TO AN OPEN SOURCE COMMUNITY CARE-O-BOT-RESEARCH: PROVIDING ROBUST ROBOTICS HARDWARE TO AN OPEN SOURCE COMMUNITY Dipl.-Ing. Florian Weißhardt Fraunhofer Institute for Manufacturing Engineering and Automation IPA Outline Objective

More information

Advances in Robot Programming by Demonstration

Advances in Robot Programming by Demonstration Künstl Intell (2010) 24: 295 303 DOI 10.1007/s13218-010-0060-0 FACHBEITRAG Advances in Robot Programming by Demonstration Rüdiger Dillmann Tamim Asfour Martin Do Rainer Jäkel Alexander Kasper Pedram Azad

More information

Robot learning for ball bouncing

Robot learning for ball bouncing Robot learning for ball bouncing Denny Dittmar Denny.Dittmar@stud.tu-darmstadt.de Bernhard Koch Bernhard.Koch@stud.tu-darmstadt.de Abstract For robots automatically learning to solve a given task is still

More information

A Framework for Visually guided Haptic Exploration with Five Finger Hands

A Framework for Visually guided Haptic Exploration with Five Finger Hands A Framework for Visually guided Haptic Exploration with Five Finger Hands A. Bierbaum, K. Welke, D. Burger, T. Asfour, and R. Dillmann University of Karlsruhe Institute for Computer Science and Engineering

More information

Learning Semantic Environment Perception for Cognitive Robots

Learning Semantic Environment Perception for Cognitive Robots Learning Semantic Environment Perception for Cognitive Robots Sven Behnke University of Bonn, Germany Computer Science Institute VI Autonomous Intelligent Systems Some of Our Cognitive Robots Equipped

More information

Anibal Ollero Professor and head of GRVC University of Seville (Spain)

Anibal Ollero Professor and head of GRVC University of Seville (Spain) Aerial Manipulation Anibal Ollero Professor and head of GRVC University of Seville (Spain) aollero@us.es Scientific Advisor of the Center for Advanced Aerospace Technologies (Seville, Spain) aollero@catec.aero

More information

Object Classification in Domestic Environments

Object Classification in Domestic Environments Object Classification in Domestic Environments Markus Vincze Aitor Aldoma, Markus Bader, Peter Einramhof, David Fischinger, Andreas Huber, Lara Lammer, Thomas Mörwald, Sven Olufs, Ekaterina Potapova, Johann

More information

Bimanual Grasp Planning

Bimanual Grasp Planning Bimanual Grasp Planning Nikolaus Vahrenkamp, Markus Przybylski, Tamim Asfour and Rüdiger Dillmann Institute for Anthropomatics Karlsruhe Institute of Technology (KIT) Adenauerring 2, 76131 Karlsruhe, Germany

More information

Teaching a robot to perform a basketball shot using EM-based reinforcement learning methods

Teaching a robot to perform a basketball shot using EM-based reinforcement learning methods Teaching a robot to perform a basketball shot using EM-based reinforcement learning methods Tobias Michels TU Darmstadt Aaron Hochländer TU Darmstadt Abstract In this paper we experiment with reinforcement

More information

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,

More information

Online movement adaptation based on previous sensor experiences

Online movement adaptation based on previous sensor experiences 211 IEEE/RSJ International Conference on Intelligent Robots and Systems September 25-3, 211. San Francisco, CA, USA Online movement adaptation based on previous sensor experiences Peter Pastor, Ludovic

More information

Humanoid Robotics. Inverse Kinematics and Whole-Body Motion Planning. Maren Bennewitz

Humanoid Robotics. Inverse Kinematics and Whole-Body Motion Planning. Maren Bennewitz Humanoid Robotics Inverse Kinematics and Whole-Body Motion Planning Maren Bennewitz 1 Motivation Planning for object manipulation Whole-body motion to reach a desired goal configuration Generate a sequence

More information

Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction

Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction Peter Kaiser 1, Eren E. Aksoy 1, Markus Grotz 1, Dimitrios Kanoulas 2, Nikos G. Tsagarakis 2, and Tamim Asfour 1

More information

Instant Prediction for Reactive Motions with Planning

Instant Prediction for Reactive Motions with Planning The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 2009 St. Louis, USA Instant Prediction for Reactive Motions with Planning Hisashi Sugiura, Herbert Janßen, and

More information

Learning Hand Movements from Markerless Demonstrations for Humanoid Tasks

Learning Hand Movements from Markerless Demonstrations for Humanoid Tasks Learning Hand Movements from Markerless Demonstrations for Humanoid Tasks Ren Mao, Yezhou Yang, Cornelia Fermüller 3, Yiannis Aloimonos, and John S. Baras Abstract We present a framework for generating

More information

Prediction of action outcomes using an object model

Prediction of action outcomes using an object model The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Prediction of action outcomes using an object model Federico Ruiz-Ugalde, Gordon Cheng,

More information

Accelerating Synchronization of Movement Primitives: Dual-Arm Discrete-Periodic Motion of a Humanoid Robot

Accelerating Synchronization of Movement Primitives: Dual-Arm Discrete-Periodic Motion of a Humanoid Robot 5 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Congress Center Hamburg Sept 8 - Oct, 5. Hamburg, Germany Accelerating Synchronization of Movement Primitives: Dual-Arm Discrete-Periodic

More information

Visual Servoing on Unknown Objects

Visual Servoing on Unknown Objects Visual Servoing on Unknown Objects Design and implementation of a working system JAVIER GRATAL Master of Science Thesis Stockholm, Sweden 2012 Visual Servoing on Unknown Objects Design and implementation

More information

Grasp Affordances from Multi-Fingered Tactile Exploration using Dynamic Potential Fields

Grasp Affordances from Multi-Fingered Tactile Exploration using Dynamic Potential Fields Grasp Affordances from Multi-Fingered Tactile Exploration using Dynamic Potential Fields Alexander Bierbaum, Matthias Rambow, Tamim Asfour and Rüdiger Dillmann Institute for Anthropomatics University of

More information

Efficient Motion and Grasp Planning for Humanoid Robots

Efficient Motion and Grasp Planning for Humanoid Robots Efficient Motion and Grasp Planning for Humanoid Robots Nikolaus Vahrenkamp and Tamim Asfour and Rüdiger Dillmann Abstract The control system of a robot operating in a human-centered environments should

More information

Visual Servoing on Unknown Objects

Visual Servoing on Unknown Objects Visual Servoing on Unknown Objects Xavi Gratal, Javier Romero, Jeannette Bohg, Danica Kragic Computer Vision and Active Perception Lab Centre for Autonomous Systems School of Computer Science and Communication

More information

Distance-based Kernels for Dynamical Movement Primitives

Distance-based Kernels for Dynamical Movement Primitives Distance-based Kernels for Dynamical Movement Primitives Diego ESCUDERO-RODRIGO a,1, René ALQUEZAR a, a Institut de Robotica i Informatica Industrial, CSIC-UPC, Spain Abstract. In the Anchoring Problem

More information

Learning of a ball-in-a-cup playing robot

Learning of a ball-in-a-cup playing robot Learning of a ball-in-a-cup playing robot Bojan Nemec, Matej Zorko, Leon Žlajpah Robotics Laboratory, Jožef Stefan Institute Jamova 39, 1001 Ljubljana, Slovenia E-mail: bojannemec@ijssi Abstract In the

More information

3D Simultaneous Localization and Mapping and Navigation Planning for Mobile Robots in Complex Environments

3D Simultaneous Localization and Mapping and Navigation Planning for Mobile Robots in Complex Environments 3D Simultaneous Localization and Mapping and Navigation Planning for Mobile Robots in Complex Environments Sven Behnke University of Bonn, Germany Computer Science Institute VI Autonomous Intelligent Systems

More information

Table of Contents. Chapter 1. Modeling and Identification of Serial Robots... 1 Wisama KHALIL and Etienne DOMBRE

Table of Contents. Chapter 1. Modeling and Identification of Serial Robots... 1 Wisama KHALIL and Etienne DOMBRE Chapter 1. Modeling and Identification of Serial Robots.... 1 Wisama KHALIL and Etienne DOMBRE 1.1. Introduction... 1 1.2. Geometric modeling... 2 1.2.1. Geometric description... 2 1.2.2. Direct geometric

More information

Semantic RGB-D Perception for Cognitive Robots

Semantic RGB-D Perception for Cognitive Robots Semantic RGB-D Perception for Cognitive Robots Sven Behnke Computer Science Institute VI Autonomous Intelligent Systems Our Domestic Service Robots Dynamaid Cosero Size: 100-180 cm, weight: 30-35 kg 36

More information

Image Processing Methods for Interactive Robot Control

Image Processing Methods for Interactive Robot Control Image Processing Methods for Interactive Robot Control Christoph Theis 1, Ioannis Iossifidis 2 and Axel Steinhage 3 1,2 Institut für Neuroinformatik, Ruhr-Univerität Bochum, Germany 3 Infineon Technologies

More information

Skill. Robot/ Controller

Skill. Robot/ Controller Skill Acquisition from Human Demonstration Using a Hidden Markov Model G. E. Hovland, P. Sikka and B. J. McCarragher Department of Engineering Faculty of Engineering and Information Technology The Australian

More information

Visual Servoing for Floppy Robots Using LWPR

Visual Servoing for Floppy Robots Using LWPR Visual Servoing for Floppy Robots Using LWPR Fredrik Larsson Erik Jonsson Michael Felsberg Abstract We have combined inverse kinematics learned by LWPR with visual servoing to correct for inaccuracies

More information

Adaptive Motion Planning for Humanoid Robots

Adaptive Motion Planning for Humanoid Robots Adaptive Motion Planning for Humanoid Robots Nikolaus Vahrenkamp, Christian Scheurer, Tamim Asfour, James Kuffner and Rüdiger Dillmann Institute of Computer Science and Engineering The Robotics Institute

More information

Mobile Manipulation A Mobile Platform Supporting a Manipulator System for an Autonomous Robot

Mobile Manipulation A Mobile Platform Supporting a Manipulator System for an Autonomous Robot Mobile Manipulation A Mobile Platform Supporting a Manipulator System for an Autonomous Robot U.M. Nassal, M. Damm, T.C. Lueth Institute for Real-Time Computer Systems and Robotics (IPR) University of

More information

Generalization of Example Movements with Dynamic Systems

Generalization of Example Movements with Dynamic Systems 9th IEEE-RAS International Conference on Humanoid Robots December 7-1, 29 Paris, France Generalization of Example Movements with Dynamic Systems Andrej Gams and Aleš Ude Abstract In the past, nonlinear

More information

10/25/2018. Robotics and automation. Dr. Ibrahim Al-Naimi. Chapter two. Introduction To Robot Manipulators

10/25/2018. Robotics and automation. Dr. Ibrahim Al-Naimi. Chapter two. Introduction To Robot Manipulators Robotics and automation Dr. Ibrahim Al-Naimi Chapter two Introduction To Robot Manipulators 1 Robotic Industrial Manipulators A robot manipulator is an electronically controlled mechanism, consisting of

More information

Bio-inspired Learning and Database Expansion of Compliant Movement Primitives

Bio-inspired Learning and Database Expansion of Compliant Movement Primitives 215 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids) November 3-5, 215, Seoul, Korea Bio-inspired Learning and Database Expansion of Compliant Movement Primitives Tadej Petrič 1,2,

More information

Synthesizing Goal-Directed Actions from a Library of Example Movements

Synthesizing Goal-Directed Actions from a Library of Example Movements Synthesizing Goal-Directed Actions from a Library of Example Movements Aleš Ude, Marcia Riley, Bojan Nemec, Andrej Kos, Tamim Asfour and Gordon Cheng Jožef Stefan Institute, Dept. of Automatics, Biocybernetics

More information

Robust Real-time Stereo-based Markerless Human Motion Capture

Robust Real-time Stereo-based Markerless Human Motion Capture Robust Real-time Stereo-based Markerless Human Motion Capture Pedram Azad, Tamim Asfour, Rüdiger Dillmann University of Karlsruhe, Germany azad@ira.uka.de, asfour@ira.uka.de, dillmann@ira.uka.de Abstract

More information

Execution of a Dual-Object (Pushing) Action with Semantic Event Chains

Execution of a Dual-Object (Pushing) Action with Semantic Event Chains Execution of a Dual-Object (Pushing) Action with Semantic Event Chains Eren Erdal Aksoy 1, Babette Dellen 1,2, Minija Tamosiunaite 1 and Florentin Wörgötter 1 1 Bernstein Center for Computational Neuroscience

More information

Efficient Motion Planning for Humanoid Robots using Lazy Collision Checking and Enlarged Robot Models

Efficient Motion Planning for Humanoid Robots using Lazy Collision Checking and Enlarged Robot Models Efficient Motion Planning for Humanoid Robots using Lazy Collision Checking and Enlarged Robot Models Nikolaus Vahrenkamp, Tamim Asfour and Rüdiger Dillmann Institute of Computer Science and Engineering

More information

Motion Recognition and Generation for Humanoid based on Visual-Somatic Field Mapping

Motion Recognition and Generation for Humanoid based on Visual-Somatic Field Mapping Motion Recognition and Generation for Humanoid based on Visual-Somatic Field Mapping 1 Masaki Ogino, 1 Shigeo Matsuyama, 1 Jun ichiro Ooga, and 1, Minoru Asada 1 Dept. of Adaptive Machine Systems, HANDAI

More information

Robust Vision-Based Detection and Grasping Object for Manipulator using SIFT Keypoint Detector

Robust Vision-Based Detection and Grasping Object for Manipulator using SIFT Keypoint Detector Proceedings of the 2014 International Conference on Advanced Mechatronic Systems, Kumamoto, Japan, August 10-12, 2014 Robust Vision-Based Detection and Grasping Object for Manipulator using SIFT Keypoint

More information

Humanoid Robotics. Inverse Kinematics and Whole-Body Motion Planning. Maren Bennewitz

Humanoid Robotics. Inverse Kinematics and Whole-Body Motion Planning. Maren Bennewitz Humanoid Robotics Inverse Kinematics and Whole-Body Motion Planning Maren Bennewitz 1 Motivation Plan a sequence of configurations (vector of joint angle values) that let the robot move from its current

More information

The Organization of Cortex-Ganglia-Thalamus to Generate Movements From Motor Primitives: a Model for Developmental Robotics

The Organization of Cortex-Ganglia-Thalamus to Generate Movements From Motor Primitives: a Model for Developmental Robotics The Organization of Cortex-Ganglia-Thalamus to Generate Movements From Motor Primitives: a Model for Developmental Robotics Alessio Mauro Franchi 1, Danilo Attuario 2, and Giuseppina Gini 3 Department

More information

Combining Harris Interest Points and the SIFT Descriptor for Fast Scale-Invariant Object Recognition

Combining Harris Interest Points and the SIFT Descriptor for Fast Scale-Invariant Object Recognition Combining Harris Interest Points and the SIFT Descriptor for Fast Scale-Invariant Object Recognition Pedram Azad, Tamim Asfour, Rüdiger Dillmann Institute for Anthropomatics, University of Karlsruhe, Germany

More information

COS Lecture 13 Autonomous Robot Navigation

COS Lecture 13 Autonomous Robot Navigation COS 495 - Lecture 13 Autonomous Robot Navigation Instructor: Chris Clark Semester: Fall 2011 1 Figures courtesy of Siegwart & Nourbakhsh Control Structure Prior Knowledge Operator Commands Localization

More information

Introduction to Information Science and Technology (IST) Part IV: Intelligent Machines and Robotics Planning

Introduction to Information Science and Technology (IST) Part IV: Intelligent Machines and Robotics Planning Introduction to Information Science and Technology (IST) Part IV: Intelligent Machines and Robotics Planning Sören Schwertfeger / 师泽仁 ShanghaiTech University ShanghaiTech University - SIST - 10.05.2017

More information

Inverse KKT Motion Optimization: A Newton Method to Efficiently Extract Task Spaces and Cost Parameters from Demonstrations

Inverse KKT Motion Optimization: A Newton Method to Efficiently Extract Task Spaces and Cost Parameters from Demonstrations Inverse KKT Motion Optimization: A Newton Method to Efficiently Extract Task Spaces and Cost Parameters from Demonstrations Peter Englert Machine Learning and Robotics Lab Universität Stuttgart Germany

More information

AMR 2011/2012: Final Projects

AMR 2011/2012: Final Projects AMR 2011/2012: Final Projects 0. General Information A final project includes: studying some literature (typically, 1-2 papers) on a specific subject performing some simulations or numerical tests on an

More information

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza ETH Master Course: 151-0854-00L Autonomous Mobile Robots Summary 2 Lecture Overview Mobile Robot Control Scheme knowledge, data base mission

More information

LUMS Mine Detector Project

LUMS Mine Detector Project LUMS Mine Detector Project Using visual information to control a robot (Hutchinson et al. 1996). Vision may or may not be used in the feedback loop. Visual (image based) features such as points, lines

More information

Applying the Episodic Natural Actor-Critic Architecture to Motor Primitive Learning

Applying the Episodic Natural Actor-Critic Architecture to Motor Primitive Learning Applying the Episodic Natural Actor-Critic Architecture to Motor Primitive Learning Jan Peters 1, Stefan Schaal 1 University of Southern California, Los Angeles CA 90089, USA Abstract. In this paper, we

More information

Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields

Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields 8 8 th IEEE-RAS International Conference on Humanoid Robots December 1 ~ 3, 8 / Daejeon, Korea Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields Dae-Hyung

More information

CS283: Robotics Fall 2016: Sensors

CS283: Robotics Fall 2016: Sensors CS283: Robotics Fall 2016: Sensors Sören Schwertfeger / 师泽仁 ShanghaiTech University Robotics ShanghaiTech University - SIST - 23.09.2016 2 REVIEW TRANSFORMS Robotics ShanghaiTech University - SIST - 23.09.2016

More information

6-dof Eye-vergence visual servoing by 1-step GA pose tracking

6-dof Eye-vergence visual servoing by 1-step GA pose tracking International Journal of Applied Electromagnetics and Mechanics 52 (216) 867 873 867 DOI 1.3233/JAE-16225 IOS Press 6-dof Eye-vergence visual servoing by 1-step GA pose tracking Yu Cui, Kenta Nishimura,

More information

Movement Learning and Control for Robots in Interaction

Movement Learning and Control for Robots in Interaction Movement Learning and Control for Robots in Interaction Dr.-Ing. Michael Gienger Honda Research Institute Europe Carl-Legien-Strasse 30 63073 Offenbach am Main Seminar Talk Machine Learning Lab, University

More information

Imitation of Human Motion on a Humanoid Robot using Non-Linear Optimization

Imitation of Human Motion on a Humanoid Robot using Non-Linear Optimization Imitation of Human Motion on a Humanoid Robot using Non-Linear Optimization Martin Do, Pedram Azad, Tamim Asfour, Rüdiger Dillmann University of Karlsruhe, Germany do@ira.uka.de, azad@ira.uka.de, asfour@ira.uka.de,

More information

Last update: May 6, Robotics. CMSC 421: Chapter 25. CMSC 421: Chapter 25 1

Last update: May 6, Robotics. CMSC 421: Chapter 25. CMSC 421: Chapter 25 1 Last update: May 6, 2010 Robotics CMSC 421: Chapter 25 CMSC 421: Chapter 25 1 A machine to perform tasks What is a robot? Some level of autonomy and flexibility, in some type of environment Sensory-motor

More information

Movement Imitation with Nonlinear Dynamical Systems in Humanoid Robots

Movement Imitation with Nonlinear Dynamical Systems in Humanoid Robots Movement Imitation with Nonlinear Dynamical Systems in Humanoid Robots Auke Jan Ijspeert & Jun Nakanishi & Stefan Schaal Computational Learning and Motor Control Laboratory University of Southern California,

More information

ArchGenTool: A System-Independent Collaborative Tool for Robotic Architecture Design

ArchGenTool: A System-Independent Collaborative Tool for Robotic Architecture Design ArchGenTool: A System-Independent Collaborative Tool for Robotic Architecture Design Emanuele Ruffaldi (SSSA) I. Kostavelis, D. Giakoumis, D. Tzovaras (CERTH) Overview Problem Statement Existing Solutions

More information

VisGraB: A Benchmark for Vision-Based Grasping

VisGraB: A Benchmark for Vision-Based Grasping VisGraB: A Benchmark for Vision-Based Grasping Gert Kootstra, Mila Popović 2, Jimmy Alison Jørgensen 3, Danica Kragic, Henrik Gordon Petersen 3, and Norbert Krüger 2 Computer Vision and Active Perception

More information

Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018

Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018 Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018 Asaf Moses Systematics Ltd., Technical Product Manager aviasafm@systematics.co.il 1 Autonomous

More information

Utilizing Semantic Interpretation of Junctions for 3D-2D Pose Estimation

Utilizing Semantic Interpretation of Junctions for 3D-2D Pose Estimation Utilizing Semantic Interpretation of Junctions for 3D-2D Pose Estimation Florian Pilz 1, Yan Shi 1,DanielGrest 1, Nicolas Pugeault 2, Sinan Kalkan 3, and Norbert Krüger 4 1 Medialogy Lab, Aalborg University

More information

Validation of Whole-Body Loco-Manipulation Affordances for Pushability and Liftability

Validation of Whole-Body Loco-Manipulation Affordances for Pushability and Liftability Validation of Whole-Body Loco-Manipulation Affordances for Pushability and Liftability Peter Kaiser, Markus Grotz, Eren E. Aksoy, Martin Do, Nikolaus Vahrenkamp and Tamim Asfour Abstract Autonomous robots

More information

Dealing with Scale. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Dealing with Scale. Stephan Weiss Computer Vision Group NASA-JPL / CalTech Dealing with Scale Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline Why care about size? The IMU as scale provider: The

More information