Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism

Shoji Suzuki, Tatsunori Kato, Minoru Asada, and Koh Hosoda
Dept. of Adaptive Machine Systems, Graduate School of Engineering,
Osaka University, Suita, Osaka 565-0871, Japan

Abstract

The authors propose an attention control method for an omnidirectional vision system with an active zoom mechanism. The method is implemented by controlling the focal length of the camera, with no pan or tilt mechanism. We mount an omnidirectional vision system with a hyperbolic mirror on a mobile robot and apply Q-learning for behavior acquisition. For a goal-defending behavior in a soccer game, the robot learns the behavior, and the attention control reduces the learning time. In this paper, we explain our method and experiments.

1 Introduction

Omnidirectional vision has been receiving increasing attention as a way to capture the whole view around the imaging system at any instant in time. This enables a wide variety of applications, including autonomous navigation [1], visual surveillance and guidance [2], video conferencing, virtual reality, and site modeling [3]. Since an omnidirectional vision system captures visual information from all directions, no pan or tilt control is necessary. Consequently, most existing methods based on omnidirectional vision have focused on its opto-geometric features to reconstruct 3-D scene structure.

In this paper, we introduce an omnidirectional vision system on a mobile robot enhanced by an active zoom mechanism. We implement zoom servoing [4], by which the target image is captured at a constant position as long as the target moves on the ground plane. This is realized by controlling the focal length of the camera. Owing to the active zoom mechanism, the target motion in the radial direction is canceled, and only circular motions around the image center are observed. This simplifies the image processing and target tracking. We apply the learning method of [6] to a mobile robot with the enhanced omnidirectional vision system and design the state space for it. First, we examine the relationship among the target position in the omnidirectional view, the focal length, and the distance between the robot and the target. Then we set up a control law to realize the zoom servoing. Finally, we design the state space with which the robot obtains the desired behavior based on the reinforcement learning scheme.

2 The Task and the Robot System

Figure 1(a) shows our robot with an omnidirectional vision system, which is installed on a 2-DOF non-holonomic vehicle so that its optical axis coincides with the axis of vehicle rotation. The robot is controlled by a remote host computer via a radio link. The eight actions shown in Figure 1(b), moving toward the left or right, turning to the left or right, and combinations of moving and turning, are installed on the robot and commanded from the remote computer (the action set is sketched in code at the end of this section).

One type of omnidirectional vision system consists of a conic mirror and a TV camera [1]. The vertical axis of the mirror is aligned with the optical axis of the camera, as in Figure 2(a). The projection of the scene onto the image plane is determined by the shape of the mirror and the camera configuration parameters (the height of the camera, the distance between the mirror and the lens, and the focal length), which are designed according to the requirements on the projection. Our omnidirectional vision system has a hyperbolic mirror; a sample of its image is shown in Figure 2(b). The omnidirectional image is transmitted to the remote computer via a video transmitter and processed there.

Figure 1: The robot: (a) overview; (b) the eight actions (front/back, left/right, and turns).

We set up a simple soccer-like game in the RoboCup context [5]. The task of the robot is to block the ball in front of the goal, that is, a goalie task (see Figure 3). In order to keep the goal, the robot has to track the moving ball and move to an appropriate position.
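The action set is small enough to enumerate explicitly. The following Python sketch is our own illustration: the numeric commands (-1, 0, +1) are placeholders for the actual wheel velocities, which the paper does not give.

```python
from itertools import product

# The robot's eight actions (Figure 1(b)): moving left/right, turning
# left/right, and the four move-and-turn combinations.  The command
# values are illustrative placeholders, not the real wheel commands.
MOVE = (-1, 0, +1)   # lateral motion: left, none, right
TURN = (-1, 0, +1)   # rotation: left, none, right

ACTIONS = [(m, t) for m, t in product(MOVE, TURN) if (m, t) != (0, 0)]
assert len(ACTIONS) == 8   # 3 * 3 - 1 = 8 actions
```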

Figure 2: The omnidirectional vision system: (a) the projection geometry (mirror, virtual axis, and image plane); (b) a sample image.

Figure 3: The task of the robot.

3 Active Zoom Control on the Omnidirectional Vision

We assume that the object is on a flat surface and define the coordinates and parameters as in Figure 4(a). Let $P(R, \theta, Z)$ denote a point in the environment and $p(r, \theta)$ its projection on the image plane. The relation between $P$ and $p$ is

$$Z = R \tan\alpha + c + h, \qquad \tan\gamma = \frac{(b^2 + c^2)\sin\alpha - 2bc}{(b^2 - c^2)\cos\alpha}, \qquad r = f \tan\gamma, \tag{1}$$

where $a$ and $b$ are the parameters of the hyperbolic mirror $\frac{R^2}{a^2} - \frac{Z^2}{b^2} = -1$ and $c = \sqrt{a^2 + b^2}$. In our system these parameters are $a^2 = 233.3\,[\mathrm{mm}^2]$, $b^2 = 1135.7\,[\mathrm{mm}^2]$, and $h = 250\,[\mathrm{mm}]$.

Figure 4: The basics of the hyperbolic mirror: (a) the projection geometry, with mirror focus $O_M$, camera focus $O_C$, world point $P(R, Z)$, and image point $p(r, z)$; (b) the relation between the distance $R$ [cm] and the focal length $f$ [mm] for image radii $r = 35$, $40$, and $45$ [pix].

We propose an attention control for an omnidirectional vision system that is based on controlling the focal length of the camera. In general, attention control is achieved by keeping an object at a certain position in the image plane, which is implemented by controlling the pan and tilt angles of the camera. In an omnidirectional vision system, however, matching the object to a target image position becomes complex, since the projection is not simple. We therefore propose an attention control that keeps the object at a certain distance from the center of the image plane. This is simply implemented by controlling the focal length of the camera, as shown in Figure 5.
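To make the projection concrete, here is a minimal Python sketch that evaluates Eq. (1) as reconstructed above. The function name and the use of atan2 are ours, and the angle conventions are assumptions read off Figure 4(a), so treat this as a sanity check of the algebra rather than a verified implementation of the sensor model.

```python
import math

# Parameters from the text: a^2 = 233.3 [mm^2], b^2 = 1135.7 [mm^2],
# camera height h = 250 [mm], and c = sqrt(a^2 + b^2).
A2, B2, H = 233.3, 1135.7, 250.0
B = math.sqrt(B2)
C = math.sqrt(A2 + B2)

def image_radius(R, Z, f):
    """Evaluate Eq. (1) for a world point P(R, Z) and focal length f.

    ASSUMPTION: alpha is the elevation of P seen from the mirror focus
    at height c + h, and gamma is measured as in Figure 4(a); both are
    read off the reconstructed equation, not verified against the paper.
    """
    # Z = R tan(alpha) + c + h  =>  tan(alpha) = (Z - c - h) / R
    alpha = math.atan2(Z - (C + H), R)
    # tan(gamma) = ((b^2 + c^2) sin(alpha) - 2bc) / ((b^2 - c^2) cos(alpha))
    tan_gamma = ((B2 + C * C) * math.sin(alpha) - 2.0 * B * C) / (
        (B2 - C * C) * math.cos(alpha)
    )
    # r = f tan(gamma)
    return f * tan_gamma
```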

Figure 5: Attention control on the omnidirectional vision: changing the focal length $f$ moves the projection $p(r, z)$ of the point $P(R, Z)$ toward the desired image distance ${}^{I}r_d$.

Figure 4(b) shows the relation between the distance of the object from the image center, denoted by ${}^{I}r$, the distance of the object in the environment, denoted by $R$, and the focal length of the camera $f$. We control the focal length of the camera with the following equation:

$$u_f = K\,({}^{I}r_d - {}^{I}r), \tag{2}$$

where $u_f$ is the command to the focal length of the camera, ${}^{I}r_d$ is the desired distance in the image, and ${}^{I}r$ is the current distance in the image. For example, if ${}^{I}r$ is smaller than ${}^{I}r_d$, increasing $f$ brings the projection closer to the desired distance, as shown in Figure 5.
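Equation (2) is an ordinary proportional law, so one control cycle fits in a few lines. The Python sketch below is ours: the gain K, the zoom limits, and the reading of u_f as an increment to the focal length per cycle are illustrative assumptions, not values from the paper.

```python
def zoom_command(r_current, r_desired, K=0.05):
    """Proportional zoom servo of Eq. (2): u_f = K * (r_d - r),
    with both distances measured in the image [pix]."""
    return K * (r_desired - r_current)

# One control cycle (all numbers are illustrative):
f = 8.0                               # current focal length [mm]
r = 35.0                              # measured distance of the ball from the image center [pix]
f += zoom_command(r, r_desired=40.0)  # r < r_d, so f increases (cf. Figure 5)
f = min(max(f, 4.0), 14.0)            # clamp to the zoom range shown in Figure 4(b)
```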

4 Behavior Acquisition by Learning

We apply Q-learning, one of the major reinforcement learning methods, to acquire the behavior. The state space needs to be defined from the images observed by the robot [6]. We define the states as in Table 1; for comparison, we define them both with and without attention control.

For the case without attention control, the states are defined by the direction and distance of the ball and of the goal in the image. We define 8 substates for the direction of the ball, as in Figure 6(a), and 3 substates, far, middle, and near, for its distance, as in Figure 6(b). In addition, we define 3 substates for the change of the direction (clockwise, counterclockwise, and no change) and 3 substates for the change of the distance (farther, closer, and no change). The direction of the goal has 8 substates, defined in the same way as for the ball, and its distance has 2 substates, far and close. The total number of states is 8 × 3 × 3 × 3 × 8 × 2 = 3456.

For the case with attention control, the substates for the distance of the ball can be dropped, since the distance in the image is kept constant. The direction of the ball and the direction and distance of the goal are defined in the same way as above. However, the view of the goal is different: it corresponds to the distance between the ball and the goal, whereas without attention control it corresponds to the distance between the robot and the goal. The total number of states is 320. (A code sketch of this state coding is given below, after Table 1 and Figure 6.)

                                        without attention control   with attention control
  direction of the ball in the image                8                         8
  change of the direction of the ball               3                         3
  distance of the ball in the image                 3                         -
  change of the distance of the ball                3                         -
  direction of the goal in the image                8                         8
  distance of the goal in the image                 2                         2
  total number of states                         3456                       320

Table 1: Substates.

Figure 6: Substates: (a) the 8 directions (front, back, left, right, and the four diagonals); (b) the 3 distances (near, middle, far).
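As an illustration of how the substates of Table 1 combine into a state for tabular Q-learning, here is a short Python sketch. The quantization boundaries, the clockwise/counterclockwise test, the reward, and the learning parameters are our own placeholders; the paper does not specify them.

```python
import math
import random
from collections import defaultdict

N_DIR = 8                  # 8 direction substates (Figure 6(a))
ACTIONS = list(range(8))   # the robot's 8 actions (Section 2)

def direction_bin(theta):
    """Quantize an image direction theta [rad] into one of 8 substates."""
    return int((theta % (2 * math.pi)) / (2 * math.pi) * N_DIR)

def change_bin(d, d_prev):
    """3 substates for the change of direction:
    0 = no change, 1 = clockwise, 2 = counterclockwise."""
    if d == d_prev:
        return 0
    return 1 if (d - d_prev) % N_DIR <= N_DIR // 2 else 2

def state(ball_dir, ball_dir_prev, goal_dir, goal_far):
    """State with attention control: ball direction (8), change of the
    ball direction (3), goal direction (8), goal distance (2)."""
    d = direction_bin(ball_dir)
    return (d, change_bin(d, direction_bin(ball_dir_prev)),
            direction_bin(goal_dir), int(goal_far))

Q = defaultdict(float)                # tabular action values
ALPHA, GAMMA, EPS = 0.25, 0.9, 0.1    # illustrative learning parameters

def update(s, a, reward, s_next):
    """One-step Q-learning backup."""
    best = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best - Q[(s, a)])

def select_action(s):
    """Epsilon-greedy action selection over the learned values."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])
```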

5 Experiments

We performed a simulation to acquire the goal-defending behavior. Figure 7 shows the environment and the initial positions of the robot and the ball. The environment is defined according to the RoboCup middle-size league regulations. The field is 4575 [mm] wide and 4110 [mm] long, equivalent to half of the full field. The goal is 1500 [mm] wide and 600 [mm] high, and the diameter of the ball is 200 [mm]. The goal and the ball are each painted in a single color, blue and red respectively, for easy recognition. The ball is located on a circle defined by the corners and the center of the goal, and the robot is located randomly inside this circle. The ball rolls toward the goal with a constant velocity. A trial finishes when the ball comes into the goal or goes out of the field.

Figure 8 shows the task success rate of the learned behavior after 10000, 20000, 30000, and 40000 trials. With the attention control, the robot learns faster than without it. After learning in simulation, the acquired behavior was implemented on the real robot; Figure 9(a)-(f) shows an example of the real behavior.

Figure 7: The initial positions.

Figure 8: Result: shoot-block rate [%] over 0-40000 trials, with and without attention control.

Figure 9: A sequence of the behavior on the real robot: (a)-(f).

6 Conclusions

We have proposed an attention control for omnidirectional vision based on controlling the focal length of the camera. We applied the system to robot learning, where it helped reduce the learning time. Formulating the relation between the implemented zoom servoing and the learning remains future work.

References

[1] Y. Yagi and S. Kawato. Panoramic scene analysis with conic projection. In Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '90), 1990.

[2] H. Ishiguro. Distributed vision systems: A perceptual information infrastructure for robot navigation. In Proc. of IJCAI-97, pages 36-41, 1997.

[3] V. N. Peri and S. Nayar. Generation of perspective and panoramic video from omnidirectional video. In Proc. of 1997 Image Understanding Workshop, pages 243-245, 1997.

[4] K. Hosoda, H. Moriyama, and M. Asada. Visual servoing utilizing zoom mechanism. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 178-183, 1995.

[5] H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, E. Osawa, and H. Matsubara. RoboCup: A challenge problem for AI. AI Magazine, 18:73-85, 1997.

[6] M. Asada, S. Noda, S. Tawaratsumida, and K. Hosoda. Purposive behavior acquisition for a real robot by vision-based reinforcement learning. Machine Learning, 23:279-303, 1996.