A new approach for terrain description in mobile robots for humanitarian demining missions

C. Salinas, M. Armada, P. Gonzalez de Santos
Department of Automatic Control, Industrial Automation Institute, CSIC
Ctra. Campo Real Km, Madrid, Spain

ABSTRACT

Humanitarian demining missions require robust systems such as efficient mobile robots and improved sensors. This application domain involves performing tasks in non-structured scenarios and dynamically changing environments. This paper focuses on a new approach to terrain description based on omnidirectional vision systems. Computer vision systems are widely used in similar applications; however, conventional video cameras have limited fields of view, which restricts them for several applications in robotics. For example, mobile robots often require a full 360º view of their environment in order to perform navigational tasks such as localizing within the environment, identifying landmarks and determining free paths in which to move. Omnidirectional sensors allow the capture of a much wider field of view; they can provide panoramic images of 360º around the robot. Several techniques have been developed for acquiring panoramic images. This work outlines the fundamental principles of three of these techniques and shows a direct application of a low-cost catadioptric omnidirectional vision sensor on board a six-legged robot intended for antipersonnel landmine localization.

1. Introduction

Millions of landmines are still buried all over the world, threatening the economies and particularly the lives of the nations affected by these devices. The detection and removal of antipersonnel landmines has become a global issue. According to recent estimates, landmines kill or injure more than 2,000 civilians per month. Demining is still largely carried out by the manual method proposed several decades ago, a procedure that is risky and slow [1].
Relying on manual work alone, it would take hundreds of years to dispose of all these landmines. Solutions are being explored in different engineering fields. The best course for releasing human operators from this risky task is to apply fully automatic systems; nevertheless, this solution is still far from successful, due to the complexity of applying fully autonomous systems in such unstructured environments. However, there are some aspects of the job that robots can do quite well, like scanning the ground to detect and locate buried mines [2, 7]. This is where robots carrying efficient sensors can play an important role. The automation of an application such as the detection and removal of antipersonnel mines implies the use of autonomous or teleoperated mobile robots. These robots follow a predefined path and send the recorded data to their expert system; when a mine is detected, its position is marked and saved with a predefined probability level, and the detected mine is possibly removed. Several sensors have been adopted over the last years; work on rough-terrain navigation has generally been implemented with high-cost solutions. In general, vision systems are mounted

high in the frame to look down towards the ground. The field of view covers the area between the metal detector and the scanning sensors, and an area can be scanned by moving the robot body itself. Stereovision systems based on image processing are used to provide three-dimensional maps for the detection. The target trajectory is generated from depth information previously acquired from several images of the minefield. A large volume of data is produced, which is inconvenient for trajectory planning, and the calibration of the stereo systems must be carefully performed [3, 4]. These methods usually need offline trajectory planning and are severely limited by the image sensor. Other configurations generally used are 3D raster images consisting of a number of laser spots located on several concentric scan lines [6], combined with ultrasonic sensors, rangefinder sensors and visual servoing systems based on a pan-and-tilt color camera. The consistency of remote sensing is improved by fusing synergistic measurements from different types of detectors [5], which implies the implementation of high-cost systems. In this work, we present a new approach for terrain description based on a low-cost omnidirectional vision system, on board a six-legged robot intended for antipersonnel landmine localization, to improve the efficiency of the tasks involved in automated detection and removal operations. The outline of this paper is as follows. Section 2 introduces the DYLEMA project and explains the main features of the mobile system Silo6. Section 3 details important issues of omnidirectional vision techniques. In Section 4, catadioptric omnivision techniques and the camera central projection model are explained. Section 5 reports some results obtained with the ongoing experimental prototype of the catadioptric omnivision system implemented in the DYLEMA project. Finally, Section 6 presents the main conclusions and future research.
2. The DYLEMA project

The DYLEMA project is devoted to the configuration of a humanitarian demining system that consists of a sensor head, a scanning manipulator and a mobile platform based on a hexapod walking robot (see Figure 1) [8]. The sensor head consists of a commercial mine-detecting set customised with a ground-tracking set to adapt the head to ground irregularities. This ground-tracking set provides adequate information for keeping the manipulator's end-effector at a given height above the ground. To achieve this objective, the manipulator has to track the surface whilst moving the sensor head. In addition, the manipulator has to avoid obstacles in the way of the sensor head, such as big stones, bushes, trees and so on. The sensor head information is also used by the system controller to steer the mobile robot during mine detection missions. It is commonly agreed that an efficient detection system should merge different technologies. The DYLEMA project, however, focuses only on the development of robotic techniques. The development of mine-detecting sensors does not fall within the scope of the project. Therefore, the simplest mine sensor (a metal detector) is considered in this work, just to help detect and locate potential alarms. After a suspect object is detected, its location must be marked in the system database for further analysis and possible deactivation. Additionally, the DYLEMA project includes research on methods for complete coverage of unstructured environments for mobile robot navigation [9] and on sensor integration and control for scanning activities [10].

Figure 1: DYLEMA configuration (magnetic compass, manipulator, GPS antenna, mobile platform and sensor head).

3. Omnidirectional vision systems

During the last decades, researchers in several engineering fields such as applied optics, computer vision and robotics have presented remarkable work related to omnidirectional cameras and their applications [11]. Standard cameras typically have a constrained field of view (about 30 to 60 degrees) and are therefore adequate for observing small local areas. However, there are many applications that require or benefit from observing wider areas than is possible with a conventional camera. For example, mobile robots often require a full 360-degree view of their environment in order to perform navigational tasks such as identifying landmarks, localizing within the environment, and determining free paths in which to move. For these reasons many techniques related to omnidirectional vision systems have been developed. For mobile robots, several specific tasks are essential for navigating in either structured or unstructured environments. The robot must be able to sense its environment and construct a local representation that is sufficient in detail and accuracy to allow the robot to find free paths in which to move. It also must be able to perform localization, i.e., determine its position and orientation within the environment and register this information with its local representation of its surroundings, combined in this case with the landmine detection task. These vision systems consist of omnidirectional sensors, which come in many varieties; in every case, the essential idea of an omnidirectional sensor is to afford a wide field of view. According to their structure, the sensors are classified into three groups: Dioptric cameras, which can acquire wide-angle views of as much as a hemisphere, e.g. the commercial cameras known as fish-eye cameras.
Polydioptric cameras, which are able to provide about 360 degrees of field of view; the typical configuration is composed of multiple overlapping cameras. And the third group, catadioptric cameras, which are able to acquire more than 180 degrees of field of view and are normally composed of perspective cameras and a convex mirror (see Figure 2). The main application of the system (e.g. autonomous mobile robots, surveillance, teleoperation) defines the solution within dioptric, polydioptric or catadioptric systems, with one or two fish-eye or synchronized cameras. Polydioptric sensors have high resolution per viewing angle; the

cameras can be cheaper if they are homemade, but they must be calibrated and synchronized, and commercial cameras are usually expensive. The principal disadvantage is the amount of bandwidth required for the simultaneous acquisition of numerous cameras. In addition to the complexity of manufacturing compact systems and the associated mechanical problems, calibrating multiple synchronized cameras is an important issue. In the case of dioptric cameras, only a single image needs to be acquired at a time, which keeps the acquisition rate low. They are difficult to manufacture, and normally these cameras do not satisfy the constraint of a single effective viewpoint and central projection, which complicates the computation of three-dimensional information. The resolution across the omnidirectional image is not constant, being poor in the peripheral area. Catadioptric sensors also require only a single image capture, and three-dimensional data processing is workable owing to the possibility of satisfying the single-effective-viewpoint constraint. The resolution is lower than that of the original image; however, the wide view angle benefits the estimation algorithms by stabilizing ego-motion estimation, and rotation and translation can be easily distinguished. Starting from two panoramic images, it is possible to reconstruct the surrounding scene area by means of the 360-degree view angle around the camera.

Figure 2: Omnidirectional vision sensors: (a) dioptric, (b) polydioptric and (c) catadioptric.

4. Catadioptric omnidirectional vision systems

Catadioptrics refers to the combination of refracting and reflecting elements, such as lenses and mirrors. The central projection of a convex mirror should be aligned with the optical axis of the camera lens, and the camera center must be placed at the focal point of the mirror; in this way, the intersection of all reflected rays at the focal point of the mirror is assured. This means that the whole catadioptric sensor has a single effective viewpoint (central catadioptric camera).
Remarkable work on designing panoramic catadioptric cameras was presented by Baker [12], Yagi [13] and Svoboda [14]. Several mirror shapes have been used with perspective and orthographic cameras: spherical, hyperbolic and parabolic mirrors (Figure 3).

Figure 3: Projection of (a) spherical, (b) hyperbolic and (c) parabolic mirrors and their corresponding reflected rays.

Considering the information in Figure 3, the reflected rays of spherical mirrors have properties similar to fish-eye lenses: good resolution in the central region but poor peripheral resolution. Parabolic mirrors (Figure 3(c)) were proposed in the late 1990s; they work like a parabolic antenna, their reflected rays being parallel to the rotation axis of the mirror, and they are modelled by coupling an orthographic camera in order to accomplish the single effective viewpoint. Finally, hyperbolic mirrors reflect three-dimensional rays in space through their second focal point, located on the central axis (Figure 3(b)). The projection of a hyperbolic mirror is represented in detail in Figure 4, where a point P in space is reflected by the hyperboloid surface and projected onto the image plane. From point P(X, Y, Z) a ray goes toward the focal point of the mirror, Focus1, and is reflected by the mirror; it is then conducted through the other focal point, Focus2, intersecting the image plane at point p(-x, -y). This relation is satisfied for any point in space in the field of view (360 degrees around the Z axis) of the hyperbolic projection. Consequently, we can obtain an omnidirectional image of the scene on the image plane with a single center of projection defined by Focus1 and Focus2. The solution of the fixed viewpoint constraint and the geometry of convex mirrors are widely explained in [12]. The hyperboloid surface can be obtained by revolving the hyperbola around the Z axis, and it has two focal points, as shown in Figure 4. Using the world coordinate system (X, Y, Z), the hyperboloid surface can be represented by equations (1) and (2):

(X^2 + Y^2)/a^2 - Z^2/b^2 = -1    (1)

c = sqrt(a^2 + b^2)               (2)

where a and b define the shape of the hyperboloid surface and the two focal points lie on the Z axis at distance c from the hyperboloid center.
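As a quick numerical check, equations (1) and (2) can be evaluated directly. The following sketch (shape parameters chosen arbitrarily, for illustration only) computes a point on the mirror sheet and the focal distance:

```python
import math

def mirror_z(x, y, a, b):
    """Height of the mirror sheet satisfying
    (x^2 + y^2)/a^2 - z^2/b^2 = -1 (equation (1), upper sheet z > 0)."""
    return b * math.sqrt(1.0 + (x * x + y * y) / (a * a))

def focal_distance(a, b):
    """Distance c of each focal point from the hyperboloid center,
    c = sqrt(a^2 + b^2) (equation (2))."""
    return math.sqrt(a * a + b * b)

# Arbitrary shape parameters, e.g. in millimetres
a, b = 30.0, 40.0
c = focal_distance(a, b)          # 50.0 for this choice of a, b
z = mirror_z(9.0, 12.0, a, b)     # height of a surface point above the center
# The point (9, 12, z) satisfies equation (1): residual evaluates to -1
residual = (9.0**2 + 12.0**2) / a**2 - z**2 / b**2
```

Any (x, y) substituted into `mirror_z` yields a point on the surface, so the residual of equation (1) is -1 by construction; this is a convenient sanity check when choosing mirror dimensions.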
The system consists of a CCD camera and a hyperboloid mirror; notice that the focal point of the hyperbolic mirror, Focus1,

and the lens center of the camera, Focus2, are fixed at the focal points of the hyperboloid surface, (0, 0, c) and (0, 0, -c), respectively. The axes of the camera and mirror are aligned. The image plane should also be placed at a distance f (the focal length of the camera) from the lens center Focus2, and be parallel to the XY plane.

Figure 4: Hyperbolic projection.

In order to accomplish the image modeling for this catadioptric sensor, it is necessary to obtain the homogeneous transformations among the world, mirror and camera frames. Three reference frames are carefully defined: the world reference system, denoted Ow, in which a 3D point is Xw; the mirror coordinate system, centered at the focus Focus1, in which the corresponding vector is Xm; and the camera coordinate system, centered at Focus2, in which the vector is Xc. The projection between the pinhole camera and the mirror frame follows from equations (1) and (2) and is given by:

m = (1/lambda) K [ cRm ( mRw (Xw - mTw) ) + cTm ]    (3)

where m is the projection to be calculated, lambda is a nonlinear function of Xm, and K is the internal calibration matrix of the camera looking at the mirror. cTm is the center of the mirror expressed in the camera frame, corresponding to (0, 0, c). The rotation between the camera and mirror frames is represented by the matrix cRm. Finally, the configuration of the mirror with respect to the world frame, translation and orientation, is represented by mTw and mRw, respectively. In this way, it is possible to determine the position of three-dimensional points on the image plane (panoramic image) [15]. Moreover, this feature is useful for designing the shape and dimensions of the mirror, in order to improve and maximize the resolution and field of view of our system. Some experiments are shown in the following figures, where three points in the world frame are acquired by hyperbolic mirrors with different configurations.
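The same projection can also be traced purely geometrically: intersect the ray from P toward Focus1 with the hyperboloid, then project the intersection through a pinhole at Focus2. The sketch below does this for axis-aligned frames with illustrative parameters a, b and f; it is a minimal sketch of the geometry, not the project's implementation.

```python
import numpy as np

def catadioptric_project(P, a, b, f):
    """Trace a world point P through a hyperbolic catadioptric sensor.

    Mirror sheet: (x^2+y^2)/a^2 - z^2/b^2 = -1 (upper sheet), foci at
    F1 = (0, 0, c) and F2 = (0, 0, -c) with c = sqrt(a^2 + b^2).  The ray
    from P toward F1 meets the mirror at Q; the reflected ray through F2
    is then imaged by a pinhole at F2 with focal length f.
    """
    c = np.sqrt(a * a + b * b)
    F1 = np.array([0.0, 0.0, c])
    F2 = np.array([0.0, 0.0, -c])
    d = np.asarray(P, dtype=float) - F1              # incoming ray direction
    # Substitute Q = F1 + t*d into the mirror equation -> quadratic in t
    A = (d[0]**2 + d[1]**2) / a**2 - d[2]**2 / b**2
    B = -2.0 * c * d[2] / b**2
    C = 1.0 - c**2 / b**2
    roots = np.roots([A, B, C])
    t = min(r.real for r in roots if r.real > 0)     # nearest hit on the sheet
    Q = F1 + t * d                                   # reflection point on mirror
    r = Q - F2                                       # reflected ray through F2
    return -f * r[:2] / r[2]                         # inverted image point p(-x, -y)
```

The sign flip in the last line reproduces the inversion described above: a point at azimuth theta in space lands at p(-x, -y) on the image plane.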

Figure 5: Simulation of catadioptric sensor image acquisition, configuration 1.

Figure 6: Simulation of catadioptric sensor image acquisition, configuration 2.

In Figure 5, graphic (a) shows the representation of the catadioptric system and the three-dimensional points attached to the world frame, (b) shows the top view of the system, and finally (c) represents the position of these points in a 2D acquired image. Figure 6, like Figure 5, shows the representation of the catadioptric sensor acquisition for a different configuration, where the maximum angle between a reflected ray and the central axis is 140 degrees, which means that the highest resolution of the image is placed in the peripheral area.

5. Experimentation and results

A standard camera mounted on a mobile robot is usually aligned with the robot's forward direction of motion. This is sufficient for vehicles that can move in a constrained set of directions (e.g., a car), where the primary vision tasks typically consist of obstacle detection and avoidance. Nevertheless, a rigid camera with a limited field of view is not ideally suited to robots with omnidirectional motion capability and navigation in unstructured environments, or when other vision tasks must be performed, such as building a local representation, localizing within the environment, or detecting and scanning rough terrain. For these tasks it would be much better to use a vision sensor that can provide panoramic images, i.e., 360-degree images around the robot; these are also referred to as omnidirectional images. In the previous sections we presented several techniques for obtaining omnidirectional images; according to the features required for a mobile robot system such as the Silo6 six-legged robot (Section 2), a catadioptric hyperbolic vision system is adequate and will benefit the tasks performed by the robot. Following the simulations of Section 4, a low-cost catadioptric system has been developed at the Department of Automatic Control. An omnidirectional image acquired by our system and its corresponding panoramic image are presented in Figure 7.
Both images are clearly distorted from our point of view; however, in omnidirectional vision theory it is straightforward to calculate this deformation angle and introduce it into the system.
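The omnidirectional-to-panoramic conversion is essentially a polar-to-rectangular resampling: each panorama column corresponds to one azimuth and each row to one radius. A minimal nearest-neighbour sketch (function name and parameters are illustrative, not taken from the system described here):

```python
import numpy as np

def unwrap_panorama(omni, center, r_min, r_max, width=360):
    """Unwrap an omnidirectional image into a panoramic strip by sampling
    the source image along radial lines (nearest-neighbour interpolation).
    The outer mirror ring is mapped to the top row of the strip."""
    cx, cy = center
    height = r_max - r_min
    pano = np.zeros((height, width), dtype=omni.dtype)
    for col in range(width):
        ang = 2.0 * np.pi * col / width
        for row in range(height):
            r = r_max - row                       # outer ring -> top row
            x = int(round(cx + r * np.cos(ang)))
            y = int(round(cy + r * np.sin(ang)))
            if 0 <= x < omni.shape[1] and 0 <= y < omni.shape[0]:
                pano[row, col] = omni[y, x]
    return pano

# Toy example: a bright radial stripe at azimuth 0 becomes a bright column
omni = np.zeros((64, 64), dtype=np.uint8)
omni[32, 33:50] = 255                  # stripe along +x from center (32, 32)
pano = unwrap_panorama(omni, (32, 32), r_min=5, r_max=15, width=8)
```

Radial lines in the omnidirectional image become vertical columns in the panorama, which is why vertical scene edges survive the unwrapping.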

Figure 7: (a) Omnidirectional and (b) panoramic images.

The azimuth angle of a point P in space appears in the image as the angle given by y/x (Section 4). It can also be easily understood that all points with the same azimuth in space appear on a radial line through the image center. With a hyperboloidal projection, this useful feature makes vertical edges in the environment appear radially in the image (represented by blue and red arrows). By simple geometrical analysis, equations relating a point in space P(X, Y, Z) and its image point on the image plane p(x, y) can be derived as follows:

tan(theta) = Y/X = y/x                                                     (4)

Z = sqrt(X^2 + Y^2) tan(alpha) + c                                         (5)

alpha = tan^-1( ((b^2 + c^2) sin(gamma) - 2bc) / ((b^2 - c^2) cos(gamma)) )  (6)

gamma = tan^-1( sqrt(x^2 + y^2) / f )                                      (7)

where alpha denotes the tilt angle of point P from the horizontal plane and f is the focal length of the camera lens. From equations (4), (6) and (7), the azimuth angle theta and the tilt angle alpha of the line connecting the focal point of the mirror, Focus1, and the point in space P can be obtained from the position of the image point p(x, y) (see Figure 4). This means that the equation of the line connecting Focus1 and P can be determined uniquely from the coordinates of the image point p(x, y), regardless of the location of point P in space. The prototype was tested on board the six-legged robot Silo6; a sequence acquired by the system is presented in Figure 8, where corresponding entities do not vanish out of a limited field of view (enclosed in green). The displacements of such entities vary considerably with different kinds of motion. With this system it is possible to detect objects around the robot while acquiring only a single image at a time. The terrain area around the robot can be easily separated from the peripheral area, which contains trees, bushes, stones and moving objects.
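Equations (4), (6) and (7) can be wrapped into a small routine that recovers the ray direction from an image point. The mirror parameters b, c and the focal length f below are illustrative assumptions, not calibrated values from the prototype:

```python
import math

def ray_direction(x, y, b, c, f):
    """Azimuth theta and tilt alpha of the ray through Focus1 that
    produced image point p(x, y), per equations (4), (6) and (7)."""
    theta = math.atan2(y, x)                            # eq. (4)
    gamma = math.atan(math.hypot(x, y) / f)             # eq. (7)
    alpha = math.atan(((b * b + c * c) * math.sin(gamma) - 2.0 * b * c)
                      / ((b * b - c * c) * math.cos(gamma)))   # eq. (6)
    return theta, alpha

# Illustrative mirror (b, c) and lens (f) parameters
theta, alpha = ray_direction(3.0, 4.0, b=40.0, c=50.0, f=8.0)
```

Note that scaling the image point, e.g. from (3, 4) to (6, 8), leaves theta unchanged: points on the same radial line share the same azimuth, exactly the property used above to detect vertical edges.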

Figure 8: Sequence of the catadioptric system on board the six-legged robot Silo6.

The system is able to detect several obstacles, whether fixed or moving. It is possible to apply image processing techniques such as optical flow to segment these objects and avoid them. The system is also able to track the trajectory of the manipulator (Figure 9) and use vision to correct its movement, especially in situations where two objects are so close that the distance between them is smaller than the sensor head diameter.
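As a crude stand-in for the optical-flow segmentation suggested above, moving objects can be isolated by simple frame differencing; the sketch below is hypothetical (a real system would use a dense optical-flow method on the panoramic sequence):

```python
import numpy as np

def moving_object_mask(prev_frame, frame, threshold=25):
    """Boolean mask of pixels whose intensity changed by more than
    `threshold` between two consecutive grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy sequence: a small bright object appears between two frames
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200
mask = moving_object_mask(prev, curr)    # True only where the object appeared
```

The cast to int16 avoids uint8 wrap-around when subtracting frames; the resulting mask can seed a segmentation of fixed versus moving obstacles around the robot.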

Figure 9: Sequence of manipulator movement detection.

Another important feature is the capability to observe, in a single image, both the environment and the robot itself. The area covered by the image includes the space between the scanning manipulator and the robot. The system also provides a 360-degree view for teleoperated applications, improving the safety of both the operator and the robot.

6. Conclusions

The removal of antipersonnel landmines is a global issue. In this work we presented the possibility of designing a low-cost system based on omnidirectional sensors to improve the efficiency of humanitarian demining tasks. Because these tasks require devices that can automate the location of unexploded ordnance, it is proposed that they be accomplished by using robotic systems capable of carrying scanning sensors over infested fields. The ongoing prototype has very useful features that can benefit several tasks involved in humanitarian demining missions. The robotic system can respond in advance, i.e. to obstacles situated at a distance larger than the manipulator range, and will be capable of making online corrections to its trajectory. Another important point is improved efficiency in the complete coverage of a wider minefield area, since the system has prior knowledge of the terrain and its obstacles. The next step in this research will be to test the catadioptric vision system in specific online tasks on the hexapod mobile platform and to study particular image processing algorithms for terrain description from panoramic images.

Acknowledgements

The DYLEMA project is funded by the Spanish Ministry of Education and Science through grant DIP. This work was supported in part by the Consejería de Educación of the Comunidad de Madrid under grant RoboCity2030 S-0505/DPI/0176. This work is also funded by the Autonomous Community of Madrid through a fellowship for Research Personnel in Training (FPI).

References

[1] J.-D. Nicoud, Vehicles and robots for humanitarian demining, Industrial Robot 24 (2) (1997).
[2] J. Trevelyan, Robots and landmines, Industrial Robot 24 (2) (1997).
[3] P. Bellutta, R. Manduchi, L. Matthies, K. Owens, A. Rankin, Terrain perception for DEMO III, in: Procs. IEEE Intelligent Vehicles Symposium 2000, Detroit, USA, October 2000.
[4] S. Masunaga, K. Nonami, Controlled metal detector mounted on mine detection robot, International Journal of Advanced Robotic Systems 4 (2) (2007).
[5] G. A. Clark, Computer vision and sensor fusion for detecting buried objects, in: Annual Asilomar Conference on Signals, Systems, and Computers (26th), October 1992.
[6] H. Najjaran, A. Goldenberg, Landmine detection using an autonomous terrain-scanning robot, Industrial Robot: An International Journal 32 (3) (2005).
[7] E. Colon, G. De Cubber, H. Ping, J-C. Habumuremyi, H. Sahli, Y. Baudoin, Integrated robotic systems for humanitarian demining, International Journal of Advanced Robotic Systems 4 (2) (2007).
[8] P. Gonzalez de Santos, J.A. Cobano, E. Garcia, J. Estremera, M.A. Armada, A six-legged robot-based system for humanitarian demining missions, Mechatronics 17 (2007).
[9] E. Garcia, P. Gonzalez de Santos, Mobile robot navigation with complete coverage of unstructured environments, Robotics and Autonomous Systems 46 (4) (2004).
[10] E. Garcia, P. Gonzalez de Santos, Hybrid deliberative/reactive control of a scanning system for landmine detection, Robotics and Autonomous Systems 55 (6) (2007).
[11] Y. Yagi, Omnidirectional sensing and its applications, IEICE Trans. Inf. Syst. E82-D (3) (1999).
[12] S. Baker, S.K. Nayar, A theory of single-viewpoint catadioptric image formation, Int. J. of Computer Vision 35 (2) (1999).
[13] Y. Yagi, M. Yachida, Real-time omnidirectional image sensors, International Journal of Computer Vision 58 (3) (2004).
[14] T. Svoboda, T. Pajdla, Epipolar geometry for central catadioptric cameras, Int. J. of Computer Vision 49 (1) (2002).
[15] G.L. Mariottini, D. Prattichizzo, The epipolar geometry toolbox, IEEE Robotics and Automation Magazine 12 (3) (2005).


More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER INTRODUCTION The DIGIBOT 3D Laser Digitizer is a high performance 3D input device which combines laser ranging technology, personal

More information

Omni Stereo Vision of Cooperative Mobile Robots

Omni Stereo Vision of Cooperative Mobile Robots Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)

More information

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1

More information

MOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE

MOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE Head-Eye Coordination: A Closed-Form Solution M. Xie School of Mechanical & Production Engineering Nanyang Technological University, Singapore 639798 Email: mmxie@ntuix.ntu.ac.sg ABSTRACT In this paper,

More information

Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera

Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera Ryosuke Kawanishi, Atsushi Yamashita and Toru Kaneko Abstract Map information is important

More information

CV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more

CV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured

More information

3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit

3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit 3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit 9 Toyomi Fujita and Yuya Kondo Tohoku Institute of Technology Japan 1. Introduction A 3D configuration and terrain sensing

More information

Visual Tracking of Planes with an Uncalibrated Central Catadioptric Camera

Visual Tracking of Planes with an Uncalibrated Central Catadioptric Camera The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 29 St. Louis, USA Visual Tracking of Planes with an Uncalibrated Central Catadioptric Camera A. Salazar-Garibay,

More information

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important. Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

A dioptric stereo system for robust real-time people tracking

A dioptric stereo system for robust real-time people tracking Proceedings of the IEEE ICRA 2009 Workshop on People Detection and Tracking Kobe, Japan, May 2009 A dioptric stereo system for robust real-time people tracking Ester Martínez and Angel P. del Pobil Robotic

More information

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic

More information

arxiv:cs/ v1 [cs.cv] 24 Mar 2003

arxiv:cs/ v1 [cs.cv] 24 Mar 2003 Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging Technical Report arxiv:cs/0303024v1 [cs.cv] 24 Mar 2003 R. Andrew Hicks Department of Mathematics Drexel University

More information

Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments

Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Proc. 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems pp. 31-36, Maui, Hawaii, Oct./Nov. 2001. Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Hiroshi

More information

Light: Geometric Optics (Chapter 23)

Light: Geometric Optics (Chapter 23) Light: Geometric Optics (Chapter 23) Units of Chapter 23 The Ray Model of Light Reflection; Image Formed by a Plane Mirror Formation of Images by Spherical Index of Refraction Refraction: Snell s Law 1

More information

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993 Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video

More information

FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES

FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.

Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG. Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

3D-2D Laser Range Finder calibration using a conic based geometry shape

3D-2D Laser Range Finder calibration using a conic based geometry shape 3D-2D Laser Range Finder calibration using a conic based geometry shape Miguel Almeida 1, Paulo Dias 1, Miguel Oliveira 2, Vítor Santos 2 1 Dept. of Electronics, Telecom. and Informatics, IEETA, University

More information

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and

More information

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,900 116,000 10M Open access books available International authors and editors Downloads Our authors

More information

SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET

SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET Borut Batagelj, Peter Peer, Franc Solina University of Ljubljana Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25,

More information

A Fast Linear Registration Framework for Multi-Camera GIS Coordination

A Fast Linear Registration Framework for Multi-Camera GIS Coordination A Fast Linear Registration Framework for Multi-Camera GIS Coordination Karthik Sankaranarayanan James W. Davis Dept. of Computer Science and Engineering Ohio State University Columbus, OH 4320 USA {sankaran,jwdavis}@cse.ohio-state.edu

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Chapter 12 3D Localisation and High-Level Processing

Chapter 12 3D Localisation and High-Level Processing Chapter 12 3D Localisation and High-Level Processing This chapter describes how the results obtained from the moving object tracking phase are used for estimating the 3D location of objects, based on the

More information

Calibration of a fish eye lens with field of view larger than 180

Calibration of a fish eye lens with field of view larger than 180 CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Calibration of a fish eye lens with field of view larger than 18 Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz REPRINT Hynek

More information

CS4670: Computer Vision

CS4670: Computer Vision CS467: Computer Vision Noah Snavely Lecture 13: Projection, Part 2 Perspective study of a vase by Paolo Uccello Szeliski 2.1.3-2.1.6 Reading Announcements Project 2a due Friday, 8:59pm Project 2b out Friday

More information

Intelligent Robotics

Intelligent Robotics 64-424 Intelligent Robotics 64-424 Intelligent Robotics http://tams.informatik.uni-hamburg.de/ lectures/2013ws/vorlesung/ir Jianwei Zhang / Eugen Richter University of Hamburg Faculty of Mathematics, Informatics

More information

Robot Localization based on Geo-referenced Images and G raphic Methods

Robot Localization based on Geo-referenced Images and G raphic Methods Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,

More information

Light: Geometric Optics

Light: Geometric Optics Light: Geometric Optics 23.1 The Ray Model of Light Light very often travels in straight lines. We represent light using rays, which are straight lines emanating from an object. This is an idealization,

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2

More information

General Physics II. Mirrors & Lenses

General Physics II. Mirrors & Lenses General Physics II Mirrors & Lenses Nothing New! For the next several lectures we will be studying geometrical optics. You already know the fundamentals of what is going on!!! Reflection: θ 1 = θ r incident

More information

A 3-D Scanner Capturing Range and Color for the Robotics Applications

A 3-D Scanner Capturing Range and Color for the Robotics Applications J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,

More information

Image Transformations & Camera Calibration. Mašinska vizija, 2018.

Image Transformations & Camera Calibration. Mašinska vizija, 2018. Image Transformations & Camera Calibration Mašinska vizija, 2018. Image transformations What ve we learnt so far? Example 1 resize and rotate Open warp_affine_template.cpp Perform simple resize

More information

Real-time Security Monitoring around a Video Surveillance Vehicle with a Pair of Two-camera Omni-imaging Devices

Real-time Security Monitoring around a Video Surveillance Vehicle with a Pair of Two-camera Omni-imaging Devices Real-time Security Monitoring around a Video Surveillance Vehicle with a Pair of Two-camera Omni-imaging Devices Pei-Hsuan Yuan, Kuo-Feng Yang and Wen-Hsiang Tsai, Senior Member, IEEE Abstract A pair of

More information

All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them.

All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them. All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them. - Aristotle University of Texas at Arlington Introduction

More information

Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera

Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Kazuki Sakamoto, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama Abstract

More information

521466S Machine Vision Exercise #1 Camera models

521466S Machine Vision Exercise #1 Camera models 52466S Machine Vision Exercise # Camera models. Pinhole camera. The perspective projection equations or a pinhole camera are x n = x c, = y c, where x n = [x n, ] are the normalized image coordinates,

More information

Chapter 26 Geometrical Optics

Chapter 26 Geometrical Optics Chapter 26 Geometrical Optics 26.1 The Reflection of Light 26.2 Forming Images With a Plane Mirror 26.3 Spherical Mirrors 26.4 Ray Tracing and the Mirror Equation 26.5 The Refraction of Light 26.6 Ray

More information

Centre for Digital Image Measurement and Analysis, School of Engineering, City University, Northampton Square, London, ECIV OHB

Centre for Digital Image Measurement and Analysis, School of Engineering, City University, Northampton Square, London, ECIV OHB HIGH ACCURACY 3-D MEASUREMENT USING MULTIPLE CAMERA VIEWS T.A. Clarke, T.J. Ellis, & S. Robson. High accuracy measurement of industrially produced objects is becoming increasingly important. The techniques

More information

A PERCEPTION SYSTEM FOR ACCURATE AUTOMATIC CONTROL OF AN ARTICULATED BUS

A PERCEPTION SYSTEM FOR ACCURATE AUTOMATIC CONTROL OF AN ARTICULATED BUS A PERCEPTION SYSTEM FOR ACCURATE AUTOMATIC CONTROL OF AN ARTICULATED BUS CARLOTA SALINAS 1, HECTOR MONTES 1, 2, MANUEL ARMADA RODRIGUEZ 1 1 Dept. of Automatic Control, Centre for Automation and Robotics

More information

Motion estimation of unmanned marine vehicles Massimo Caccia

Motion estimation of unmanned marine vehicles Massimo Caccia Motion estimation of unmanned marine vehicles Massimo Caccia Consiglio Nazionale delle Ricerche Istituto di Studi sui Sistemi Intelligenti per l Automazione Via Amendola 122 D/O, 70126, Bari, Italy massimo.caccia@ge.issia.cnr.it

More information

Part Images Formed by Flat Mirrors. This Chapter. Phys. 281B Geometric Optics. Chapter 2 : Image Formation. Chapter 2: Image Formation

Part Images Formed by Flat Mirrors. This Chapter. Phys. 281B Geometric Optics. Chapter 2 : Image Formation. Chapter 2: Image Formation Phys. 281B Geometric Optics This Chapter 3 Physics Department Yarmouk University 21163 Irbid Jordan 1- Images Formed by Flat Mirrors 2- Images Formed by Spherical Mirrors 3- Images Formed by Refraction

More information

Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.

Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich. Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Task selection for control of active vision systems

Task selection for control of active vision systems The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October -5, 29 St. Louis, USA Task selection for control of active vision systems Yasushi Iwatani Abstract This paper discusses

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Nuno Gonçalves and Helder Araújo Institute of Systems and Robotics - Coimbra University of Coimbra Polo II - Pinhal de

More information

A Computer Vision Sensor for Panoramic Depth Perception

A Computer Vision Sensor for Panoramic Depth Perception A Computer Vision Sensor for Panoramic Depth Perception Radu Orghidan 1, El Mustapha Mouaddib 2, and Joaquim Salvi 1 1 Institute of Informatics and Applications, Computer Vision and Robotics Group University

More information

On-line and Off-line 3D Reconstruction for Crisis Management Applications

On-line and Off-line 3D Reconstruction for Crisis Management Applications On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be

More information

ÉCOLE POLYTECHNIQUE DE MONTRÉAL

ÉCOLE POLYTECHNIQUE DE MONTRÉAL ÉCOLE POLYTECHNIQUE DE MONTRÉAL MODELIZATION OF A 3-PSP 3-DOF PARALLEL MANIPULATOR USED AS FLIGHT SIMULATOR MOVING SEAT. MASTER IN ENGINEERING PROJET III MEC693 SUBMITTED TO: Luc Baron Ph.D. Mechanical

More information

Chapters 1 9: Overview

Chapters 1 9: Overview Chapters 1 9: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 9: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapters

More information

Chapters 1 7: Overview

Chapters 1 7: Overview Chapters 1 7: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 7: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapter

More information

OMNIDIRECTIONAL STEREOVISION SYSTEM WITH TWO-LOBE HYPERBOLIC MIRROR FOR ROBOT NAVIGATION

OMNIDIRECTIONAL STEREOVISION SYSTEM WITH TWO-LOBE HYPERBOLIC MIRROR FOR ROBOT NAVIGATION Proceedings of COBEM 005 Copyright 005 by ABCM 18th International Congress of Mechanical Engineering November 6-11, 005, Ouro Preto, MG OMNIDIRECTIONAL STEREOVISION SYSTEM WITH TWO-LOBE HYPERBOLIC MIRROR

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

DISTANCE MEASUREMENT USING STEREO VISION

DISTANCE MEASUREMENT USING STEREO VISION DISTANCE MEASUREMENT USING STEREO VISION Sheetal Nagar 1, Jitendra Verma 2 1 Department of Electronics and Communication Engineering, IIMT, Greater Noida (India) 2 Department of computer science Engineering,

More information

A Novel Stereo Camera System by a Biprism

A Novel Stereo Camera System by a Biprism 528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel

More information

CS201 Computer Vision Camera Geometry

CS201 Computer Vision Camera Geometry CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

Chapter 36. Image Formation

Chapter 36. Image Formation Chapter 36 Image Formation Apr 22, 2012 Light from distant things We learn about a distant thing from the light it generates or redirects. The lenses in our eyes create images of objects our brains can

More information

Digital Image Correlation of Stereoscopic Images for Radial Metrology

Digital Image Correlation of Stereoscopic Images for Radial Metrology Digital Image Correlation of Stereoscopic Images for Radial Metrology John A. Gilbert Professor of Mechanical Engineering University of Alabama in Huntsville Huntsville, AL 35899 Donald R. Matthys Professor

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

#65 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR AT TRAFFIC INTERSECTIONS

#65 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR AT TRAFFIC INTERSECTIONS #65 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR AT TRAFFIC INTERSECTIONS Final Research Report Luis E. Navarro-Serment, Ph.D. The Robotics Institute Carnegie Mellon University Disclaimer The contents

More information

Nicholas J. Giordano. Chapter 24. Geometrical Optics. Marilyn Akins, PhD Broome Community College

Nicholas J. Giordano.   Chapter 24. Geometrical Optics. Marilyn Akins, PhD Broome Community College Nicholas J. Giordano www.cengage.com/physics/giordano Chapter 24 Geometrical Optics Marilyn Akins, PhD Broome Community College Optics The study of light is called optics Some highlights in the history

More information

SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR

SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR Juho Kannala, Sami S. Brandt and Janne Heikkilä Machine Vision Group, University of Oulu, Finland {jkannala, sbrandt, jth}@ee.oulu.fi Keywords:

More information

And. Modal Analysis. Using. VIC-3D-HS, High Speed 3D Digital Image Correlation System. Indian Institute of Technology New Delhi

And. Modal Analysis. Using. VIC-3D-HS, High Speed 3D Digital Image Correlation System. Indian Institute of Technology New Delhi Full Field Displacement And Strain Measurement And Modal Analysis Using VIC-3D-HS, High Speed 3D Digital Image Correlation System At Indian Institute of Technology New Delhi VIC-3D, 3D Digital Image Correlation

More information

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm)

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm) Chapter 8.2 Jo-Car2 Autonomous Mode Path Planning (Cost Matrix Algorithm) Introduction: In order to achieve its mission and reach the GPS goal safely; without crashing into obstacles or leaving the lane,

More information

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: , 3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates

More information

Remote Reality Demonstration

Remote Reality Demonstration Remote Reality Demonstration Terrance E. Boult EECS Dept., 19 Memorial Drive West Lehigh Univ., Bethlehem, PA 18015 tboult@eecs.lehigh.edu Fax: 610 758 6279 Contact Author: T.Boult Submission category:

More information

Representing the World

Representing the World Table of Contents Representing the World...1 Sensory Transducers...1 The Lateral Geniculate Nucleus (LGN)... 2 Areas V1 to V5 the Visual Cortex... 2 Computer Vision... 3 Intensity Images... 3 Image Focusing...

More information