3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera

Shinichi Goto
Department of Mechanical Engineering, Shizuoka University
3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561, Japan
f003001@ipc.shizuoka.ac.jp

Ryosuke Kawanishi, Toru Kaneko
Department of Mechanical Engineering, Shizuoka University
3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561, Japan
{f5945015,tmtkane}@ipc.shizuoka.ac.jp

Atsushi Yamashita
Department of Precision Engineering, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
yamashita@ieee.org

Hajime Asama
Department of Precision Engineering, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
asama@robot.t.u-tokyo.ac.jp

Abstract

Map information is important for path planning and self-localization when mobile robots accomplish autonomous tasks. In unknown environments, they should generate environment maps by themselves. An omnidirectional camera is effective for environment measurement because it has a wide field of view. Traditional methods for measurement with an omnidirectional camera include binocular stereo and motion stereo; however, each method has advantages and disadvantages. In this paper, we aim to improve measurement accuracy by integrating binocular stereo and motion stereo using two omnidirectional cameras installed on a mobile robot. In addition, stereo matching accuracy is improved by considering omnidirectional image distortion. Experimental results show the effectiveness of our proposed method.

1. Introduction

In recent years, with the development of robot technology, autonomous mobile robots have been expected to work in various environments, such as disaster sites and the insides of nuclear reactors. These are often unknown environments. Therefore, mobile robots should measure the environments and generate the maps by themselves.

3D measurement using image data makes it possible to generate environment maps [1]. However, an image acquired by a conventional camera has a limited field of view. To solve this problem, cameras with a wide field of view have been developed, including omnidirectional cameras using a hyperboloid mirror [2, 3]. Taking account of installation on a mobile robot, an omnidirectional camera is suitable because it can capture a surrounding view in a single image. Gluckman et al. [4] showed that an omnidirectional camera is effective for environment measurement. Therefore, we use omnidirectional cameras (Fig. 1), each equipped with a hyperboloid mirror, for environment measurement. Binocular stereo [5, 6] and motion stereo [7] are available for measurement with an omnidirectional camera.

Figure 1. The omnidirectional camera which we use (left); a hyperboloid mirror is attached in front of the camera lens. An omnidirectional image (right), which has a 360-degree horizontal field of view.

Figure 2. Advantages and disadvantages of binocular stereo measurement. Point 1 can be measured with high accuracy; however, it is difficult to measure point 2 with high accuracy.

Figure 3. Advantages and disadvantages of motion stereo measurement. Point 1 can be measured with high accuracy; however, it is difficult to measure point 2 with high accuracy.

Figure 4. Procedure of our proposed method.

The measurement accuracy of binocular stereo depends on the baseline length: the longer the baseline is compared with the distance to a measurement object, the higher the accuracy. However, the measurement accuracy of this method is limited because the baseline cannot be made longer than the robot size. Therefore, the method can measure with high accuracy only near the robot (Fig. 2).

Motion stereo, which uses a pair of images taken with a single camera at different observation points, is equivalent to binocular stereo vision. This method can make the baseline longer without the restriction of the robot size, so objects far from the robot can be measured with higher accuracy than by binocular stereo. However, it is necessary to estimate the camera motions (the relative relations of camera positions and orientations). Structure from motion (SFM), a kind of motion stereo, estimates camera motions and then measures objects [8]. However, it is difficult for SFM to measure objects lying in the baseline direction with high accuracy (Fig. 3). In addition, the scale is unknown in SFM-based approaches.

Therefore, we aim to improve measurement accuracy by integrating binocular stereo and motion stereo using two omnidirectional cameras installed on a mobile robot. In addition, stereo matching accuracy is improved by considering omnidirectional image distortion.

2. Outline

The process of our method is shown in Fig. 4. First, the mobile robot equipped with two omnidirectional cameras acquires an omnidirectional image sequence during its locomotion. Next, using the omnidirectional images, binocular stereo measurement and motion stereo measurement are executed. Finally, both measurement results are integrated to obtain high measurement accuracy. The two omnidirectional cameras are installed parallel on the mobile robot, with the baseline direction perpendicular to the movement direction (Fig. 5).

Figure 5. Camera configuration of our proposed method.

3. Ray Vector Calculation

The coordinate system of our omnidirectional camera is shown in Fig. 6. The omnidirectional camera has a hyperboloid mirror in front of the lens of a conventional camera. A ray heading to image coordinates (u, v) from the camera lens is reflected on the hyperboloid mirror. In this paper, the reflected vector is called a ray vector. The extension lines of all ray vectors intersect at the focus of the hyperboloid mirror. The ray vector is calculated by the following equations:

r = [su, sv, sf - 2c]^T,  (1)

s = \frac{a^2 ( f \sqrt{a^2 + b^2} + b \sqrt{u^2 + v^2 + f^2} )}{a^2 f^2 - b^2 (u^2 + v^2)},  (2)

where a, b and c are the hyperboloid parameters and f is the focal length of the lens.
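For concreteness, Eqs. (1) and (2) translate directly into code. The following is a minimal sketch (assuming NumPy, image coordinates (u, v) already centered on the optical axis, and the relation c = \sqrt{a^2 + b^2} from Fig. 6; the function name is ours):

```python
import numpy as np

def ray_vector(u, v, a, b, f):
    # c is fixed by the mirror geometry: c = sqrt(a^2 + b^2) (Fig. 6)
    c = np.sqrt(a**2 + b**2)
    # Eq. (2): scale factor s
    s = a**2 * (f * np.sqrt(a**2 + b**2) + b * np.sqrt(u**2 + v**2 + f**2)) \
        / (a**2 * f**2 - b**2 * (u**2 + v**2))
    # Eq. (1): r = [su, sv, sf - 2c]^T, normalized to a unit vector
    # starting from the focus of the hyperboloid mirror
    r = np.array([s * u, s * v, s * f - 2.0 * c])
    return r / np.linalg.norm(r)
```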

Figure 6. The coordinate system of the omnidirectional camera. The hyperboloid mirror satisfies \frac{X^2 + Y^2}{a^2} - \frac{Z^2}{b^2} = -1, with c = \sqrt{a^2 + b^2}. A ray vector is defined as a unit vector which starts from the focus of the hyperboloid mirror.

4. Binocular Stereo Measurement

4.1. Epipolar Constraint

It is necessary to detect corresponding points between two images in stereo measurement. In our method, we use template matching to search for corresponding points. We use the epipolar constraint for template matching because the search range can then be limited to the epipolar line. The epipolar line is obtained by projecting the ray vector calculated from a point in one image onto the other image (Fig. 7).

Figure 7. The epipolar line of the omnidirectional image. The epipolar line is obtained by projecting the ray vector.

Figure 8. Images obtained by perspective transformation: an omnidirectional image and the perspective image generated from it.

Figure 9. Projective distortion. The upper perspective image has projective distortion; the lower one does not.

Figure 10. Detection of corresponding points by comparing dissimilarity measures. The dissimilarity measure is calculated by SAD.

4.2. Corresponding Point Acquisition

The omnidirectional image distortion and the projective distortion disturb corresponding point acquisition. Perspective projection conversion is used to solve this problem. The omnidirectional camera with a hyperboloid mirror can convert the omnidirectional image into a perspective projection image (Fig. 8), which is generated by projecting the omnidirectional image onto a projection plane in 3D space. Calculation of the projection plane is important for decreasing these distortions: when the projection plane leans, projective distortion arises in the perspective projection image (Fig. 9). When feature quantities are compared, the dissimilarity measure becomes small if the distortion is small. Therefore, we search for the center position and the gradient of the projection plane that minimize the dissimilarity measure, and corresponding points are acquired from the perspective projection image calculated from that center position and gradient. Let d be the center position of the projection plane and s_x and s_y be the axes of the projection plane.
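Before detailing the search over d, s_x and s_y, the following minimal sketch illustrates the SAD comparison underlying Fig. 10. It is our own illustration, not the paper's implementation: it compares raw image patches along a precomputed list of epipolar-line candidates, whereas the actual method compares perspective-projection patches, and all names are assumptions. The template point is assumed to lie at least `half` pixels from the image border.

```python
import numpy as np

def sad(patch_a, patch_b):
    # Sum of Absolute Differences between two equally sized gray patches
    return np.sum(np.abs(patch_a.astype(np.float32) - patch_b.astype(np.float32)))

def match_along_epipolar(left, right, pt, candidates, half=7):
    # Template around pt in the left image, compared against a window
    # around each candidate point sampled on the epipolar line in the
    # right image; the candidate with minimum SAD dissimilarity wins.
    u, v = pt
    template = left[v - half:v + half + 1, u - half:u + half + 1]
    best, best_score = None, np.inf
    for ur, vr in candidates:
        window = right[vr - half:vr + half + 1, ur - half:ur + half + 1]
        if window.shape != template.shape:
            continue  # candidate window falls outside the image
        score = sad(template, window)
        if score < best_score:
            best, best_score = (ur, vr), score
    return best, best_score
```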

At first, the method calculates d. As the first step, the method acquires images taken with the right and left cameras. Then, feature points are extracted from the left image, and ray vector r_1 is calculated from these feature points. The epipolar line in the right image is calculated from r_1 and the positional relationship between the left and right cameras; d is found by searching along the epipolar line of r_1 in the right image (Fig. 10). In the next step, the ray vector r_2 that passes through each point on the epipolar line is calculated, and a candidate point for d is calculated as the cross point of r_1 and each r_2. In the third step, a projection plane is generated centered on each candidate d; the omnidirectional image is enlarged on the projection plane, and the projection plane is projected to the image plane. After these steps, the right and left images have been converted to perspective projection images at each candidate point of d. By comparing the dissimilarity measures between these images, the d which minimizes the dissimilarity measure is decided. The dissimilarity measure is calculated by SAD (Sum of Absolute Differences). Then, s_x and s_y are rotated at the position of d, and the feature quantity is compared while rotating s_x and s_y. At last, corresponding points which are regarded as the same point in 3D space are obtained.

4.3. 3D Measurement

Each ray vector is calculated from the corresponding points of the two images. Measurement points are obtained as the cross points of the ray vectors.

5. Motion Stereo Measurement

5.1. Point Tracking

To get corresponding points between images in the omnidirectional image sequence, the method extracts feature points in the initial image and then tracks them along the sequence. In our method, we use the Lucas-Kanade tracker algorithm [9] with image pyramid representation [10]. These points are regarded as corresponding points between two images taken before and after the robot movement. When the baseline direction is perpendicular to the movement direction, the other camera appears in the field of view, which might cause error in camera motion estimation. Therefore, in our method, the area where the other camera appears is excluded from each image.

5.2. Camera Motion Estimation

The ray vectors calculated from feature points in the images before and after movement are

r_i = [x_i, y_i, z_i]^T,  r'_i = [x'_i, y'_i, z'_i]^T.  (3)

Then, the essential matrix E that describes the camera positions and orientations is calculated. E satisfies Eq. (4), from which Eq. (5) is obtained:

{r'_i}^T E r_i = 0,  (4)

u_i^T e = 0,  (5)

where u_i and e are expressed by the following equations (Eqs. (6) and (7)):

u_i = [x_i x'_i, y_i x'_i, z_i x'_i, x_i y'_i, y_i y'_i, z_i y'_i, x_i z'_i, y_i z'_i, z_i z'_i]^T,  (6)

e = [e_{11}, e_{12}, e_{13}, e_{21}, e_{22}, e_{23}, e_{31}, e_{32}, e_{33}]^T.  (7)

Then, E is calculated by solving

\min_{\|e\| = 1} \| B e \|,  (8)

B = [u_1, u_2, \ldots, u_n]^T,  (9)

where n is the number of feature points. E is expressed by Eq. (10) as the product of a rotation matrix R and the skew-symmetric matrix T of a translation vector t = [t_x, t_y, t_z]^T [11]:

E = TR,  (10)

T = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix}.  (11)

5.3. 3D Measurement

Each ray vector is calculated from the corresponding points of the before and after images. The measurement point is obtained as the cross point of the ray vectors.
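A minimal sketch of the linear estimation in Eqs. (4)-(9), assuming NumPy and unit ray vectors as input: \min \|Be\| with \|e\| = 1 is solved by SVD, and the translation direction is recovered from Eq. (10) as the left null vector of E (sign and scale remain ambiguous; the full decomposition into R and t with cheirality checks follows the standard procedure in [11]). Function and variable names are ours.

```python
import numpy as np

def estimate_essential_matrix(rays_before, rays_after):
    # Each row of B is u_i from Eq. (6), built from r_i (before) and
    # r'_i (after) so that B e = 0 encodes r'_i^T E r_i = 0 (Eq. (4)).
    rows = []
    for (x, y, z), (xp, yp, zp) in zip(rays_before, rays_after):
        rows.append([x*xp, y*xp, z*xp, x*yp, y*yp, z*yp, x*zp, y*zp, z*zp])
    B = np.asarray(rows)
    # min ||B e|| with ||e|| = 1: right singular vector of B for the
    # smallest singular value; reshape row-major to match Eq. (7).
    _, _, Vt = np.linalg.svd(B)
    E = Vt[-1].reshape(3, 3)
    # Since E = TR with T = [t]_x (Eqs. (10)-(11)), E^T t = 0, so the
    # translation direction is the left singular vector of E for the
    # smallest singular value.
    U, _, _ = np.linalg.svd(E)
    t = U[:, 2]
    return E, t
```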

Figure 11. Scale matching. The point p_i measured by motion stereo (not scaled) is matched to the same point p'_i measured by binocular stereo to obtain the scale m.

The motion stereo method cannot determine the distance t between the two observation points, because a method that uses only images as input obtains no scale information. In our method, the 3D coordinates of the feature points measured by the binocular stereo method have scale information. Therefore, we use the binocular measurement result for scale matching. We measure the 3D coordinates of a point p_i by the motion stereo method; the 3D coordinates of the same point p'_i are measured by the binocular stereo method (Fig. 11). The scale m is calculated by

m p_i = p'_i.  (12)

The other motion stereo measurement data are then scaled by the scale factor m.

6. Measurement Data Integration

The measurement accuracy will be low if an object is close to the baseline direction or far from the camera. Therefore, the measurement data are a mixture of high and low accuracy. We remove the measurement data of low accuracy by taking the derivatives of the measurement result p_i with respect to the image coordinates of the corresponding feature points in the two images, [u_{1,i}, v_{1,i}]^T and [u_{2,i}, v_{2,i}]^T, as the evaluation of the measurement accuracy. Measurement results satisfying Eq. (17) are regarded as good results, where h is a threshold:

g_i = [g_{x,i}, g_{y,i}, g_{z,i}]^T,  (13)

g_{x,i} = \left| \frac{\partial p_{x,i}}{\partial u_{1,i}} \right| + \left| \frac{\partial p_{x,i}}{\partial v_{1,i}} \right| + \left| \frac{\partial p_{x,i}}{\partial u_{2,i}} \right| + \left| \frac{\partial p_{x,i}}{\partial v_{2,i}} \right|,  (14)

g_{y,i} = \left| \frac{\partial p_{y,i}}{\partial u_{1,i}} \right| + \left| \frac{\partial p_{y,i}}{\partial v_{1,i}} \right| + \left| \frac{\partial p_{y,i}}{\partial u_{2,i}} \right| + \left| \frac{\partial p_{y,i}}{\partial v_{2,i}} \right|,  (15)

g_{z,i} = \left| \frac{\partial p_{z,i}}{\partial u_{1,i}} \right| + \left| \frac{\partial p_{z,i}}{\partial v_{1,i}} \right| + \left| \frac{\partial p_{z,i}}{\partial u_{2,i}} \right| + \left| \frac{\partial p_{z,i}}{\partial v_{2,i}} \right|,  (16)

\| g_i \| < h.  (17)

In this way, binocular stereo and motion stereo measurement results are integrated with high accuracy.
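A hedged sketch of the scale matching (Eq. (12)) and the accuracy test (Eqs. (13)-(17)). Here the analytic partial derivatives are replaced by central finite differences, a swap of ours; `triangulate`, a user-supplied function returning the 3D point for a pair of image coordinates, and all other names are assumptions. For a single point pair, `scale_factor` reduces to the least-squares solution of Eq. (12).

```python
import numpy as np

def scale_factor(p_motion, p_binocular):
    # Eq. (12): scale m with m * p_i = p'_i, solved in the
    # least-squares sense as m = (p . p') / (p . p).
    p = np.asarray(p_motion, dtype=float)
    pp = np.asarray(p_binocular, dtype=float)
    return float(np.dot(p, pp) / np.dot(p, p))

def is_accurate(triangulate, uv1, uv2, h, eps=0.5):
    # Numerical stand-in for Eqs. (13)-(17): sum the absolute partial
    # derivatives of the triangulated point with respect to the four
    # image coordinates (central differences), and accept the point
    # only if ||g|| is below the threshold h.
    coords = np.array([*uv1, *uv2], dtype=float)
    g = np.zeros(3)
    for k in range(4):
        lo, hi = coords.copy(), coords.copy()
        lo[k] -= eps
        hi[k] += eps
        dp = (triangulate(hi[:2], hi[2:]) - triangulate(lo[:2], lo[2:])) / (2 * eps)
        g += np.abs(dp)
    return np.linalg.norm(g) < h
```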

7. Experiments

We used two omnidirectional cameras installed on the mobile robot (Fig. 12). The baseline length is 360 mm. The image size is 1920 x 1080 pixels.

Figure 12. The mobile robot equipped with two omnidirectional cameras.

Figure 13. Feature points extracted by the KLT algorithm.

Figure 14. Corresponding points obtained by template matching: with perspective transformation and with panoramic expansion.

7.1. Experiment for Matching Comparison

We compared binocular stereo matching with perspective transformation against matching with panoramic expansion. The distance between the omnidirectional camera and the measurement object is 1 m. We used 50 feature points extracted by the Lucas-Kanade tracker algorithm (Fig. 13). Figure 14 shows the corresponding points obtained by each method: 11 points are mismatched using the panoramic image, whereas 4 points are mismatched by the proposed method. Figure 15 shows a histogram of the matching scores. From these results, the proposed method using perspective transformation can obtain corresponding points with high accuracy.

Figure 15. Histogram of matching scores for panoramic expansion and the proposed method. The vertical axis is the number of corresponding points; the horizontal axis is the similarity [%].

7.2. Measurement Experiment

In this experiment, we measured an indoor environment (Fig. 16) by acquiring omnidirectional images (Fig. 17) with the mobile robot.

Figure 16. Experimental environment.

Figure 17. Omnidirectional stereo image pair acquired by the mobile robot: left image and right image.

The top view of the measurement data by binocular stereo is shown in Fig. 18, where the red marks show the measurement data and the blue marks show the camera position. The measurement data contain error because of error in camera calibration. The top view of the measurement data by motion stereo is shown in Fig. 19, where the yellow marks show the measurement data and the green marks show the camera position.

Figure 18. Measurement data by binocular stereo. The red marks show the measurement data; the blue marks show the camera position.

Figure 19. Measurement data by motion stereo. The yellow marks show the measurement data; the green marks show the camera position.

The measurement data with scale matching are shown in Fig. 20. The top view of the measurement results integrating the binocular stereo data and the motion stereo data is shown in Fig. 21, and the bird's eye view of the integrated results is shown in Fig. 22. Table 1 shows the standard deviations from the least-squares planes 1 and 2 (Fig. 21). These results show that the proposed method can measure the environment with high accuracy. Table 2 shows the processing time of each method: binocular stereo takes 80.78 sec to measure an area of 3 m x 4.5 m x 2 m, and motion stereo takes 6.3 sec to measure an area of 0.5 m x 2.5 m x 1.5 m, using 90 images for the motion stereo measurement.

Table 1. Standard deviation from the least-squares plane.
Plane 1: 9.09 mm
Plane 2: 17.1 mm

8. Conclusions

In this paper, we proposed a three-dimensional measurement method that uses binocular stereo together with motion stereo by means of two omnidirectional cameras installed on a mobile robot. Experimental results showed the effectiveness of the proposed method. As future work, the camera internal parameters should be estimated accurately to improve the measurement accuracy and the processing time, and many points should be used in scale matching to improve the measurement accuracy.

Table. Processing times. Measurement method Processing time Binocular stereo 80.78sec Motion stereo 6.3sec Figure. Bird s eye view of integrated measurement result. The red marks and yellow marks show the measurement data by binocular stereo and motion stereo respectively. The blue marks and green marks show the camera position by binocular stereo and motion stereo respectively. Figure 0. Measurement data with scale matching. The red marks and yellow marks show the measurement data by binocular stereo and motion stereo respectively. The blue marks and green marks show the camera position by binocular stereo and motion stereo respectively. Plane1 Plane References [1] A. J. Davison: Real-Time Simultaneous Localisation and Mapping with a Single Camera, Proceedings of 9th IEEE International Conference on Computer Vision, Vol., pp. 1403-1410, 003. [] C. Geyer and K. Daniilidis: Omnidirectional Video, The Visual Computer, Vol.19, No.6, pp.405-416, 003. [3] R. Bunschoten and B. Krose: Robust Scene Reconstruction from an Omnidirectional Vision System, IEEE Transactions on Robotics and Automation, Vol.19, No., pp.351-357, 003. [4] J. Gluckman and S. K. Nayar: Ego-motion and Omnidirectional Cameras, Proceedings of the 6th International Conference on Computer Vision, pp. 999-1005, 1998. [5] H. Ishiguro, M. Yamamoto and S. Tsuji: Omni-Directional Stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No., pp. 57-6, 199. Figure 1. Top view of integrated measurement result. The red marks and yellow marks show the measurement data by binocular stereo and motion stereo respectively. The blue marks and green marks show the camera position by binocular stereo and motion stereo respectively. Acknowledgements This work was in part supported by MEXT KAKENHI, Grant-in-Aid for Young Scientist (A), 680017. [6] J. Takiguchi, M. Yoshida, A. Takeya, J. Eino and T. Hashizume: High Precision Range Estimation from an Omnidirectional Stereo System, Proceedings of the 00 IEEE/RSJ International Conferenceon Intelligent Robots and Systems, pp. 63-68, 00. [7] R. Kawanishi, A. Yamashita and T. Kaneko: Three- Dimensional Environment Model Construction from an Omnidirectional Image Sequence, Journal of Robotics and Mechatronics, Vol.1, No.5, pp.574-58, 009. [8] P. Chang and M. Hebert: Omni-directional Structure from Motion, Proceedings of the 000 IEEE Workshop on Omnidirectional Vision, pp.17-133, 000. 30

[9] J. Shi and C. Tomasi: "Good Features to Track," Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 593-600, 1994.

[10] J. Y. Bouguet: "Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm," OpenCV, Intel Corporation, 2000.

[11] R. Hartley and A. Zisserman: Multiple View Geometry in Computer Vision, Second Edition, Cambridge University Press, 2004.