
BEHAVIOR COOPERATION BASED ON MARKERS AND WEIGHTED SIGNALS

JUKKA RIEKKI*
University of Oulu, Oulu, Finland
jpr@ee.oulu.fi

YASUO KUNIYOSHI
Autonomous Systems Section, Electrotechnical Laboratory
Tsukuba, Japan
kuniyosh@etl.go.jp

ABSTRACT
In this paper we propose markers for coordinating behavior cooperation. Markers ground task-related data on the sensor data flow. Behaviors command markers by specifying weights for the different possible command parameter values. Cooperation is achieved by combining the commands sent to the same marker. We also discuss multi-agent cooperation and propose a scheme for learning cooperative behaviors. Markers have an important role in multi-agent cooperation and learning. We present experiments on a control system that enables a real robot to help other robots in transferring objects.

KEYWORDS: behavior-based, markers, behavior coordination, multi-agent cooperation

INTRODUCTION
The behavior-based architecture has been applied successfully to mobile robot control [1,2]. However, the tasks performed by these robots have been rather simple. The behavior-based architecture does not scale well to complex tasks, because such tasks require more powerful behavior coordination methods than the behavior-based architecture offers. The scaling problem is discussed in more detail by Tsotsos [3].

In this paper we propose Samba, a behavior-based architecture with powerful coordination methods. The key idea is to use markers, which describe tasks and ground task-related data on the sensor data flow. Behaviors execute tasks by commanding markers. Commands contain weights for the different possible values of command parameters. These weights are used in arbitrating behaviors. Markers are applied in coordinating cooperating behaviors - also in multi-agent cooperation - and in learning cooperative behaviors.

The architecture was inspired by the work of Brooks [1] and Chapman [4]. We have extended Chapman's work by attentive behaviors for the 3D domain and by integrating image space data with mobile space data. Brill et al. [5] have also reported a marker-based architecture. The major difference between these architectures and ours is that in our system, markers have arbitration functionality.

* This research was done at the Electrotechnical Laboratory. The research was supported by the Science and Technology Agency program of the Japanese government.

The arbitration method was inspired by the distributed command arbitration method reported by Payton et al. [6]. We have generalized the method for more complex commands, for arbitrating markers, and for the image space. In recent work, Rosenblatt & Thorpe [7] also extend the distributed command arbitration method to more complex commands.

In the following chapters we describe our architecture and discuss cooperation. We also describe the experiments done so far.

THE SAMBA ARCHITECTURE
The Samba architecture contains Sensor, Actuator, Marker, Behavior, and Arbiter modules. The architecture is presented in Figure 1. The control system is connected to the external world through sensor and actuator modules. Markers connect task-related environment features to behaviors. Actuators and markers are the resources of the control system. Behaviors execute tasks by sending commands to the markers and actuators. Arbiters resolve the conflicts among the behaviors commanding the same resource.

[Figure 1. The Samba architecture.]

Markers coordinate behaviors at several stages of task execution. First, markers activate the behaviors needed to perform the task. Secondly, markers share data among the behaviors and focus their attention on the important environment features. Thirdly, markers arbitrate behaviors: each marker has an arbiter that combines the commands sent by the behaviors. Actuators have similar arbiters. Finally, markers control cooperating behaviors. When several tasks can be executed in parallel, the commands sent to a marker are combined.

Signals
Modules communicate by sending signals to each other. A signal specifies weights for the different possible values of some data fields. The number and type of the data fields are not restricted. A weight is a real number in the range [-1.0, 1.0]. The data fields can be grouped, and weights can be specified for these groups separately.

In the simplest case, a signal contains one set of values. The interpretation is that these values have the maximum weight and the rest of the possible values have zero weights. Alternatively, the one set of values can have a weight in the range [0.0, 1.0]. This weight can be given by a filter producing the values, or as a function of a parameter that has some relation to the values. Such a weight function is illustrated in Figure 2. The weight increases linearly from 0.0 to 1.0 when the parameter value increases from the minimum value to the maximum value. The parameter can be, for example, camera speed for a signal describing an image processing result.

[Figure 2. A weight function.]

In the general case, the weights are specified for all possible sets of data field values. For a position in two-dimensional space, the weights form a three-dimensional weight surface. Figure 3 shows a simple weight surface.

[Figure 3. A weight surface.]
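As a concrete illustration of these two cases, the sketch below implements the linear weight function of Figure 2 and one possible two-dimensional weight surface. The function names and the Gaussian shape of the surface are our own assumptions for illustration; the paper does not fix them.

```python
# A minimal sketch of Samba-style weighted signals (names and shapes are ours, not the paper's).
import math

def ramp_weight(p, p_min, p_max):
    """Linear weight function of Figure 2: 0.0 at p_min, 1.0 at p_max, clamped in between."""
    if p_max == p_min:
        return 1.0
    return max(0.0, min(1.0, (p - p_min) / (p_max - p_min)))

def weight_surface(center, sigma, grid):
    """A weight surface over 2D positions (Figure 3 shows one such surface).
    We assume a Gaussian bump around `center`; the paper does not specify the shape."""
    cx, cy = center
    return {(x, y): math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            for (x, y) in grid}

grid = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
surface = weight_surface(center=(2, -1), sigma=1.5, grid=grid)
print(ramp_weight(0.3, 0.0, 1.0))      # 0.3
print(max(surface, key=surface.get))   # (2, -1): the position with the maximum weight
```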

Markers
A marker connects a task-related environment feature to motor actions. A marker is bound to a feature either by a behavior or automatically based on sensor data. Binding a marker can be interpreted as binding task parameters to environment features. Binding activates the behavior executing the task.

A feature is indexed by its position in the environment. The feature position can be described either in the egocentric coordinate system (mobile space markers) or in the image coordinate system (image space markers). The feature position is updated automatically based on observations of the feature. Mobile space markers are also updated based on ego-motion. Markers are also used to describe goals for the agent, as positions relative to the feature position. Further, feedback data describes the state of the task. Feedback is sent by the behavior executing the task.

A marker specifies weights for the different possible feature positions. The weights are initialized when the marker is bound. After that, the marker updates the weights based on the observations of the feature. The weights decay over time. The maximum weight is the activation level of the marker. When it decreases below a threshold, the marker deactivates itself. A marker also specifies weights for the goal positions.

Behaviors
A behavior transforms input signals received from sensors and markers into commands to actuators and markers (that is, to resources). A behavior first calculates its activation level based on the input weights and the previous activation level. If the activation level exceeds a threshold, the behavior transforms the input signals into output signals. The activation level describes the importance of the behavior. The weights of the output reflect the activation level. A behavior also reports the progress of the task by sending a state signal to the corresponding marker.

The system contains two types of behaviors: motor behaviors and purposive behaviors. Motor behaviors control actuators based on signals received from sensors and markers. Purposive behaviors execute tasks by controlling markers. When task conditions are fulfilled, a purposive behavior binds the markers needed in task execution to environment features. It also fills in task-specific data related to the features. For each marker there is either a motor behavior that controls an actuator based on the marker data, or a lower-level purposive behavior that decomposes the task described by the marker further and binds the corresponding markers. A purposive behavior monitors the execution of the task by analyzing sensor data and the marker feedback data. Figure 4 illustrates task execution.
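The marker life cycle described in the Markers subsection above (weights initialized on binding, updated on observations, decaying over time, with the maximum weight acting as the activation level) can be sketched roughly as follows. The decay factor, threshold, and update rule are illustrative assumptions, not values from the paper.

```python
# Rough sketch of a marker's weight bookkeeping; constants and update rule are assumptions.

class Marker:
    def __init__(self, weights, decay=0.9, threshold=0.1):
        self.weights = dict(weights)   # position -> weight, set when the marker is bound
        self.decay = decay
        self.threshold = threshold
        self.active = True

    def observe(self, position, weight):
        """Update the weight of a feature position based on a new observation."""
        self.weights[position] = max(self.weights.get(position, 0.0), weight)

    def step(self):
        """Decay all weights; deactivate when the activation level falls below the threshold."""
        self.weights = {p: w * self.decay for p, w in self.weights.items()}
        if self.activation() < self.threshold:
            self.active = False            # the marker deactivates itself

    def activation(self):
        """The maximum weight is the activation level of the marker."""
        return max(self.weights.values(), default=0.0)

m = Marker({(1, 0): 0.8, (1, 1): 0.4})
for _ in range(30):
    m.step()
print(m.activation(), m.active)   # activation has decayed below the threshold -> inactive
```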

[Figure 4. Task execution.]

Arbiters
An arbiter combines the commands that behaviors send to a resource. Each input command has a gain that can be modified dynamically. The arbiter multiplies the weights of the commands by the gains, sums the weights, and selects the command parameter values having the maximum weights. Then, the arbiter sets the weights of the combined command based on the maximum weight and its neighborhood. For an actuator arbiter we specify a second element that adds the commands from stabilizing behaviors to the arbitrated command. The stabilizing behaviors take other actuators into account. For example, the cameras can be stabilized by subtracting the amount of agent rotation from the camera commands.

COOPERATION

Single Agent
As behaviors are capable of executing only simple tasks, many behaviors must cooperate to perform a complex task. Cooperation is implemented using markers and weighted signals. The signal weights specify how much each behavior affects the resulting command. The behaviors produce separate weights for each marker position. The arbiter combines the positions separately.
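A minimal sketch of the arbitration step described above: each command's weights are scaled by its gain and summed per parameter value, and the value with the maximum summed weight wins. The data layout is our assumption, and we omit the final step of shaping the combined weights around the maximum's neighborhood, which is not detailed here.

```python
# Sketch of Samba-style command arbitration (gains and data layout are our assumptions).
from collections import defaultdict

def arbitrate(commands):
    """commands: list of (gain, {parameter_value: weight}) pairs sent by behaviors.
    Returns the winning parameter value and the combined weight map."""
    combined = defaultdict(float)
    for gain, weights in commands:
        for value, w in weights.items():
            combined[value] += gain * w        # gain-scaled weights are summed per value
    winner = max(combined, key=combined.get)   # value with the maximum summed weight
    return winner, dict(combined)

# Two behaviors command the same resource with different gains.
pose    = (1.0, {"left": 0.2, "ahead": 0.9})
unblock = (0.5, {"left": 0.8, "ahead": 0.1})
print(arbitrate([pose, unblock]))   # 'ahead' wins with summed weight 0.95 (left: 0.6)
```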
Multiple Agents
Cooperation among multiple agents is based on markers initialized by visual observations. A marker triggers the behaviors performing the cooperative task. Thus, cooperation among multiple agents corresponds to the process of controlling an agent's own behaviors. This cooperation by observation is discussed in more detail by Kuniyoshi et al. [8].

Cooperation among multiple agents can also be learned. An agent can learn tasks by imitating other agents. Imitation consists of seeing, understanding, and doing [9]. The agent observes another agent when it performs the task and represents the observed actions by its own actions. The agent also represents the preconditions for the actions by its own sensor and marker values. After this, it can start to execute the task together with the other agents.

Commands for markers and actuators form a natural representation for observed actions. Once a set of actions has been observed, the creation of a new behavior is straightforward: the behavior outputs the observed set of actions. Further, sensor and marker values are a natural choice for representing the conditions, as these values are also produced when the agent executes the task itself. The new behavior receives the sensor and marker signals that are used in the conditions.

The commands for markers and actuators form the action tree of the agent. A node in this structure describes one action known by the agent: a command to either a marker or an actuator. Nodes describing actuator commands are leaves in the tree. A node describing a marker command has child nodes for the commands produced by the behavior that executes the task described by the marker.
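The action tree can thus be pictured as a simple recursive structure in which leaves are actuator commands and internal nodes are marker commands. The sketch below is a hypothetical rendering of that structure; the node contents and names are ours.

```python
# Hypothetical sketch of the action tree described above.
# Leaves are actuator commands; internal nodes are marker commands.

class ActionNode:
    def __init__(self, command, children=()):
        self.command = command            # e.g. ("actuator", "drive_forward") or ("marker", "Moveto")
        self.children = list(children)    # commands produced by the behavior executing the task

    def is_leaf(self):
        return not self.children          # actuator commands have no children

    def expand(self):
        """Flatten a marker command into the actuator commands it ultimately produces."""
        if self.is_leaf():
            return [self.command]
        return [c for child in self.children for c in child.expand()]

moveto = ActionNode(("marker", "Moveto"), [
    ActionNode(("actuator", "turn_towards_goal")),
    ActionNode(("actuator", "drive_forward")),
])
print(moveto.expand())   # the actuator-level sequence behind the Moveto command
```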

When the agent observes another agent, it first represents the other agent's actions by commands to its own actuators. Then it replaces sequences of commands by a command to a marker, until there are no more sequences of commands in the observed set that match a set of child nodes in the action tree. The remaining set of actions is produced by the new behavior when the conditions for the task are satisfied.

The learned behaviors can also be generalized. When several behaviors produce the same sequence of actions, a new behavior executing the common sequence is created. Also a new marker is created; this marker activates the new behavior. The common sequence of actions is replaced in each behavior by an activation of the new marker.

The complexity of the learning problem is managed by learning gradually. At each step, the agent learns a new way to combine the existing actions. After learning a new action, the action is added to the action tree. When possible, behaviors are generalized. The agent incrementally builds new layers of behaviors and markers on top of the existing ones.

EXPERIMENTS
We have tested the Samba architecture in the application of Posing, Unblocking, and Receiving. The Pure control system controls an agent to help other agents in their transfer tasks. The agent is equipped with a stereo gaze platform. The gaze platform has 2-DOF vergence control. The Zero Disparity Filter (Zdf) extracts from the image data the features that are at the center of both cameras, that is, the features that the cameras are fixated on. Refer to papers [10,11] for more details on the Pure system. Lately we have improved the Pure system by adding weights for signals, activation levels for behaviors, and arbiters for resources. So far we have tested the posing and unblocking tasks.

Posing and Unblocking Separately
In the posing experiment, the agent followed another agent successfully for several minutes. This experiment demonstrates the basic characteristics of our architecture. The active modules are shown in Figure 5. The Pose behavior initializes the Moveto marker, which contains a point bound to an object and a goal point for the agent. The Moveto behavior moves the agent to the goal point and turns it towards the object point. The Moveto marker updates the internal representation of the points continuously based on ego-motion. The Pursuit behavior controls the cameras towards the posed object based on the output of the Zdf module, which is described by the Zdfcent marker. The Moveto marker updates its points based on the fixation point of the cameras.

[Figure 5. Posing task.]

In the unblocking experiment, the agent helped another agent by pushing away an obstacle blocking that agent. This is an example of multi-agent cooperation. When there is an obstacle in the trajectory of another agent, the Unblock behavior controls the Moveto marker in such a way that the agent heads towards the obstacle at a predefined distance and pushes the obstacle away.

As an example of weight calculation, see the weight of the Zdfcent marker in Figure 6. The weight is calculated based on the size of the Zdf region (in pixels) and the speeds of the cameras. The other agent was tracked until approximately 245 seconds. The small valleys in the weight before that moment are caused by variations in the Zdf region size. While an obstacle is being searched for, the weight goes to zero. When the obstacle is found, the weight increases quickly. At the end the weight varies considerably, as the obstacle fills the images.

[Figure 6. The weight of the Zdfcent marker over time [s].]
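The text states only that this weight depends on the Zdf region size and the camera speeds; the concrete formula below is purely our assumption, chosen to reproduce the qualitative behavior described (zero while searching, high while tracking, attenuated by fast camera motion). The function names and constants are hypothetical.

```python
# Illustrative guess at a Zdfcent-style weight; the actual formula is not given here.
def ramp(p, p_min, p_max):
    return max(0.0, min(1.0, (p - p_min) / (p_max - p_min)))

def zdfcent_weight(region_pixels, cam_speed, region_full=400.0, speed_max=2.0):
    """Large fixated region -> high weight; fast camera motion -> less reliable -> lower weight."""
    return ramp(region_pixels, 0.0, region_full) * (1.0 - ramp(abs(cam_speed), 0.0, speed_max))

print(zdfcent_weight(region_pixels=350, cam_speed=0.2))  # tracking well: high weight
print(zdfcent_weight(region_pixels=0,   cam_speed=0.5))  # searching: weight goes to zero
```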

Cooperation of Posing and Unblocking
We tested behavior cooperation also by executing the posing and unblocking tasks in parallel. Both behaviors sent commands to the Moveto marker. This is an example of cooperating behaviors of a single agent. The behaviors cooperate to unblock other agents: Pose keeps the agent near the other agent, so that the area in front of the other agent can be checked periodically and Unblock can remove the obstacles.

In the first version, Unblock simply subsumed Pose when an obstacle was found. The behaviors did not cooperate very well. The reason was that during posing the agent tends to drive behind the other agent, which makes unblocking difficult.

We are currently implementing a new version in which Pose and Unblock send weighted signals to the Moveto marker. The signals are combined by an arbiter. The maximum weights of the combined signal specify the goal and target positions of the Moveto marker. This approach enables Unblock to control the agent towards a good unblocking position while Pose keeps the agent near the other agent. When the area in front of the other agent is known to be free, Unblock sends small weights and thus does not have a big effect on the Moveto marker. But when the known free area shortens, Unblock increases the weights and the goal position shifts toward the good unblocking position.

Figure 7 illustrates the combining of the goal positions. In the signal sent by Pose, the weights are large at a constant distance from the target. Unblock specifies large weights at a constant distance from the future trajectory of the target. The maximum weight of the combined signal specifies the goal position of the Moveto marker.

[Figure 7. An example of combining goal positions: (a) the goal positions sent by Pose, (b) the goal positions sent by Unblock, (c) the combined goal positions.]
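A toy version of the combination in Figure 7, assuming Pose weights positions on a ring at a fixed distance from the target and Unblock weights positions near a point on the target's future trajectory. The shapes, parameters, and the simple additive combination are illustrative assumptions.

```python
# Toy sketch of combining Pose and Unblock goal-position signals (cf. Figure 7).
import math

def ring(center, radius, sharpness, grid):
    """Weights peak at distance `radius` from `center` (Pose: stay near the target)."""
    cx, cy = center
    return {p: math.exp(-sharpness * (math.hypot(p[0] - cx, p[1] - cy) - radius) ** 2)
            for p in grid}

def bump(point, sharpness, grid):
    """Weights peak near `point` (Unblock: a good unblocking position on the trajectory)."""
    px, py = point
    return {p: math.exp(-sharpness * ((p[0] - px) ** 2 + (p[1] - py) ** 2)) for p in grid}

grid = [(x, y) for x in range(-6, 7) for y in range(-6, 7)]
pose    = ring(center=(0, 0), radius=3.0, sharpness=1.0, grid=grid)
unblock = bump(point=(3, 2), sharpness=0.3, grid=grid)

w_unblock = 0.8   # Unblock raises this as the known free area shortens
combined = {p: pose[p] + w_unblock * unblock[p] for p in grid}
goal = max(combined, key=combined.get)
print(goal)       # a compromise: near the Pose ring, pulled toward the unblocking point
```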

DISCUSSION
The lack of powerful behavior coordination methods prevents applying the behavior-based architecture to complex tasks. To solve this problem, we proposed behavior coordination methods based on markers and weighted signals. The weights enable a continuous shift from one behavior to another. Further, weights enable the anticipation of tasks, as behaviors can send commands with small weights when the task conditions have not yet been fully satisfied. We improved the behavior-based architecture bottom-up, without introducing a symbolic model or reasoning based on such a model. As all coordination methods are local, the system is robust and scalable.

These coordination methods can also be used to coordinate the cooperating behaviors of multiple agents. We proposed a method for learning a common task by imitating other agents. As there is no explicit communication between the agents, the multi-agent system is robust and extensible. Further, there is no need for transformations between the representations of the agents.

The arbitration approach was chosen because it is an open mechanism that can easily be modified by other modules, such as learning modules. Cooperation can be adapted based on experiments. Weighting each command enables dynamic arbitration based on the agent and environment state. The use of gains allows further tuning based on additional information.

When we specified the arbiter, we first considered weighted averaging. In this approach the resulting command is constrained by the commands sent by the behaviors. In many cases this is too loose a constraint. For example, when triangulation is used to calculate object positions, the cameras must always be turned towards an object. Also, a marker must be bound to one feature, not to a point between several features. The chosen arbitration method produces commands that are suggested by at least one behavior.
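The triangulation example can be made concrete: averaging two suggested pan angles points the camera between the two objects, fixating neither, while maximum selection always returns an angle that some behavior actually suggested. A minimal illustration (the angles and weights are made up):

```python
# Why weighted averaging is too loose a constraint for camera commands.
suggestions = {-30.0: 0.6, +40.0: 0.5}   # pan angles (deg) suggested by two behaviors

average = sum(a * w for a, w in suggestions.items()) / sum(suggestions.values())
maximum = max(suggestions, key=suggestions.get)

print(average)   # ~1.8 deg: between the two objects, fixating neither
print(maximum)   # -30.0 deg: an angle actually suggested by a behavior
```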

The disadvantage of the chosen arbitration approach is that it is computationally expensive. We are currently investigating methods to implement it efficiently.

In the near future, we will finish the unblocking experiments. We expect these experiments to demonstrate how the arbitration method enables a smooth transition from one behavior to another. After that, we will integrate the components based on optical flow into the Pure system.

We will continue to develop the Samba architecture. More complex systems are needed to test the arbitration method. The proposed learning scheme has to be specified in more detail and implemented. Although the learning scheme contains challenging problems, we believe that it also has great potential as an approach to building complex control systems.

CONCLUSION
In this paper we discussed behavior cooperation. We described how markers and weighted signals can be used to coordinate cooperating behaviors. We discussed cooperating behaviors for both single agents and multiple agents. We proposed a method for learning cooperative behaviors. We tested the architecture with real robots.

ACKNOWLEDGMENTS
The authors would like to thank their colleagues at the Electrotechnical Laboratory for their assistance in this research project. The discussions with Paul Bakker, Polly Pook, and Alex Zelinsky were valuable. Special thanks to Kenji Konaka, who helped in the implementation. Further, we are grateful to Nobuyuki Kita and Sebastien Rougeaux, who contributed the original image processing and robot control programs.

REFERENCES
1. Brooks, R.A. (1986) A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2(1):14-23.
2. Brooks, R.A. (1990) Elephants don't play chess. Robotics and Autonomous Systems, 6(1-2):3-15.
3. Tsotsos, J.K. (1995) Behaviorist intelligence and the scaling problem. Artificial Intelligence, 75:135-160.
4. Chapman, D. (1991) Vision, Instruction, and Action. MIT Press.
5. Brill, F.Z., Martin, W.N. & Olson, T.J. (1995) Markers elucidated and applied in local 3-space. International Symposium on Computer Vision, Coral Gables, Florida, November 21-23, 1995.
6. Payton, D.W., Rosenblatt, J.K. & Keirsey, D.M. (1990) Plan guided reaction. IEEE Transactions on Systems, Man, and Cybernetics, 20(6):1370-1382.
7. Rosenblatt, J.K. & Thorpe, C.E. (1995) Combining goals in a behavior-based architecture. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 95), Pittsburgh, USA, August 1995.
8. Kuniyoshi, Y., Kita, N., Rougeaux, S., Sakane, S., Ishii, M. & Kakikura, M. (1994) Cooperation by observation - the framework and basic task patterns. IEEE International Conference on Robotics and Automation.
9. Kuniyoshi, Y., Inaba, M. & Inoue, H. (1992) Seeing, understanding and doing human task. IEEE International Conference on Robotics and Automation, May 1992, Nice, France.
10. Kuniyoshi, Y., Riekki, J., Ishii, M., Rougeaux, S., Kita, N., Sakane, S. & Kakikura, M. (1994) Vision-based behaviors for multi-robot cooperation. IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS 94), München, Germany, September 1994.
11. Riekki, J. & Kuniyoshi, Y. (1995) Architecture for vision-based purposive behaviors. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 95), Pittsburgh, USA, August 1995.
