Real Time Biologically-Inspired Depth Maps from Spherical Flow

Chris McCarthy, Nick Barnes and Mandyam Srinivasan

Abstract: We present a strategy for generating real-time relative depth maps of an environment from optical flow, under general motion. We achieve this using an insect-inspired hemispherical fish-eye sensor with a 190 degree FOV, and a de-rotated optical flow field. The de-rotation algorithm applied is based on the theoretical work of Nelson and Aloimonos [10]. From this we obtain the translational component of motion, and construct full relative depth maps on the sphere. We examine the robustness of this strategy in both simulation and real-world experiments, for a variety of environmental scenarios. To our knowledge, this is the first demonstrated implementation of the Nelson and Aloimonos algorithm working in real-time, over real image sequences. In addition, we apply this algorithm to the real-time recovery of full relative depth maps. These preliminary results demonstrate the feasibility of this approach for closed-loop control of a robot.

C. McCarthy and N. Barnes are with the Vision, Science and Technology Programme, National ICT Australia, and the Dept. of Information Engineering, The Australian National University. M. Srinivasan is with the Centre for Visual Sciences, The Australian National University.

I. INTRODUCTION

Essential to autonomous navigation is the ability to perceive depth. While absolute measures of distance are useful, they are not necessary for achieving most navigation tasks. Relative measures of distance to surfaces have been shown to be sufficient for autonomously navigating corridors [12], avoiding obstacles [13], [2], and docking with objects in the environment [9], [12]. In addition, obtaining depth maps across a wide field of view provides a means of perceiving environmental structure, which in turn may be used for higher-level navigation tasks such as invoking appropriate navigational subsystems and mapping.

It is well known that biological vision systems perceive depth from a variety of cues, depending on the configuration and the geometry of the eye. Among the potential cues are (a) stereo information, (b) depth from focus, (c) depth from convergence, and (d) depth from optical flow [16]. Insects, with their immobile, fixed-focus eyes and low interocular separation, rely heavily on optical flow cues to obtain depth information [17]. Information from the optical flow generated in the eyes by the insect's motion through the environment is used to (i) navigate safely through narrow gaps, (ii) detect and avoid collisions with objects, (iii) distinguish objects from their immediate backgrounds, and (iv) orchestrate smooth landings [17]. Estimating depth from optical flow is computationally simpler than estimating depth from stereo, and is thus an attractive strategy for the relatively simple nervous systems of insects. Given that many robotic vision systems are equipped with only a single camera, much attention has been given to those cues that do not rely on two or more simultaneous views of the scene, such as optical flow.

In computer vision, the use of optical flow for scene reconstruction has been problematic. While a significant body of theoretical work exists [7], [1], [11], [5], in practice these approaches generally lack the speed and robustness required for a real-time navigation system. One issue is the intolerance of such strategies to noisy flow estimates in local regions [18].
Optical flow estimation is notoriously noisy, and difficult to compute accurately under real-world conditions. An additional issue for depth map recovery is the reliable extraction of the translational component of flow (de-rotation), from which relative depth can be inferred. Many current flow-based depth perception strategies either assume pure translational motion (e.g. [3]), or apply planar models to extract surfaces from the scene (e.g. [13], [14]). While de-rotation algorithms exist, these are largely constrained to a single rotation, or are not fast or robust enough for real-time depth mapping. To generate 3D depth maps from optical flow under general motion, a de-rotation strategy is required for each rotational component. To obtain workable depth maps for navigation from flow, this algorithm must be sufficiently accurate, and must run in real-time. There currently exists no system capable of achieving this in real-world conditions.

There is a growing body of theoretical work suggesting that a spherical projection model, rather than a perspective model, may offer distinct advantages when inferring scene structure and self-motion from optical flow [4], [10]. Nelson and Aloimonos [10] highlight specific advantages gained through the existence of both a focus of expansion (FOE) and a focus of contraction (FOC) in a single spherical image. From such observations, a potentially real-time de-rotation algorithm is derived for the complete recovery of rotational velocity components from the optical flow on a full view sphere. While some theoretical analysis of the algorithm's likely robustness in real-world conditions is provided, to our knowledge there exist no published results to date reporting the algorithm's performance in real-time, over real image sequences. Given the potential of this algorithm to support a variety of navigation tasks, it is of interest to examine its feasibility for the recovery of 3D depth maps in real-time.

In this paper, we report preliminary results obtained from the application of the Nelson and Aloimonos de-rotation algorithm to the task of generating 3D relative depth maps from a spherical sensor. In simulation, we show its application over a full view sphere undergoing general motion. We also present preliminary results over two real image sequences captured during the ground-based motion of a hemispherical view fish-eye camera. We show that the recovery of 3D relative depth maps can be achieved in real-time, without the need for camera calibration.

II. GENERATING DEPTH MAPS FROM SPHERICAL FLOW

We briefly outline the theory for recovering 3D depth maps from the spherical projection of optical flow. For the purposes of de-rotation, it is of interest to consider the component of flow in the direction of great circles about each axis of rotation [10]. We define the position of any point on a view sphere in terms of its angular location on three great circles lying in orthogonal planes perpendicular to each axis, such that:

θ = [ θ_x  θ_y  θ_z ],   (1)

where θ_x, θ_y and θ_z are angles in the direction of each orthogonal great circle, E_x, E_y and E_z, in the range [0, 2π], as shown in Figure 1.

Fig. 1. Optical flow on the view sphere, showing the axes X, Y, Z and the great circles E_x, E_y and E_z.

Using this representation, we may express the component of motion, e(θ), in the direction of these equators as (defined here for E_x):

e_x(θ) = (v cos(α) / R(θ_x)) sin(φ_x - θ_x) + ω_x,   (2)

where ω_x is the rotational velocity component about the X axis, R(θ_x) is the radial depth to the scene point projecting to θ_x, v is the translational velocity of the sphere, α is the angle of the translation vector from the plane of E_x, and φ_x is the direction of the projection of this vector on E_x. It is important to note that the above equation may be defined for any great circle, and is not limited to equators about the X, Y and Z axes. From this we note that flow in the direction of any great circle is affected by only a single rotation: that about the axis perpendicular to the great circle's plane. This observation has led to the development of a full de-rotation algorithm for optical flow on the sphere.

A. De-rotating flow on the sphere

In the late 1980s, Nelson and Aloimonos [10] proposed an algorithm for recovering the full 3D rotational velocities of a view sphere, in real-time, by exploiting three key geometric properties of optical flow on the sphere:
1) the component of flow parallel to any great circle is affected only by the rotational component about its perpendicular axis, thus decoupling it from rotations about orthogonal axes;
2) under pure translation, the FOE and FOC coexist at antipodal points on the sphere, and evenly partition the flow along any great circle connecting these two points into two distinct directions of motion (i.e. clockwise and counter-clockwise);
3) the existence of any rotational motion along a great circle causes the FOE and FOC to converge, thus ensuring the two points will only lie at antipodal locations under pure translation.

The first observation indicates that each component of rotation can be resolved independently, and thus each may be considered in turn. From the second and third observations, Nelson and Aloimonos propose an algorithm for recovering the rotational component of flow, ω, about any great circle of flow e(θ). For reference, we reproduce the algorithm in pseudo code (see Algorithm 1). In words, the algorithm describes a simple search-based strategy for resolving rotation.

Algorithm 1 Nelson and Aloimonos De-rotation Algorithm [10]
 1: for ω_c = ω_min to ω_max do
 2:   D[ω_c] = 0
 3:   for θ_c = 0 to 2π do
 4:     a = e(θ_c) - ω_c
 5:     β = φ - θ_c
 6:     if a < 0 and β < π then
 7:       result = -a
 8:     else if a > 0 and π ≤ β < 2π then
 9:       result = a
10:     else
11:       result = 0
12:     end if
13:     D[ω_c] = D[ω_c] + result
14:   end for
15: end for
16: ω = min_index(D[ω_min : ω_max])
17: return ω
For each discrete point, θ_c, on a circle of flow, e(θ), a range of rotations is searched, where each candidate, ω_c, is used to de-rotate the flow along the circle. After de-rotation, the sum of the residual flow on the circle is taken. Note that the sign of the flow indicates its direction on the circle; therefore, a perfect split of clockwise and counter-clockwise flow will yield a sum of 0. Accounting for noise and quantisation errors, the chosen rotation is therefore the one which yields the smallest sum of flow on the circle after de-rotation.
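The search in Algorithm 1 maps directly onto a few lines of code. Below is a minimal Python sketch of the single-circle search, assuming the caller supplies the tangential flow samples, the direction φ of the translation's projection on the circle (as in the listing), and the candidate rotation range; the function and variable names are ours, not the authors'.

import numpy as np

def derotate_circle(e, phi, omega_candidates):
    # e: tangential flow e(theta_c) sampled at evenly spaced angles on the circle
    # phi: direction (radians) of the translation's projection on the circle
    # omega_candidates: candidate rotational velocities to test
    e = np.asarray(e, dtype=float)
    n = len(e)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    beta = (phi - theta) % (2.0 * np.pi)   # angle of each sample from the assumed FOE
    best_omega, best_cost = None, np.inf
    for omega_c in omega_candidates:
        a = e - omega_c                    # residual flow after removing candidate rotation
        # Residual translational flow should be positive for beta < pi and
        # negative otherwise; accumulate the magnitude of violations (lines 6-12).
        wrong = np.where(beta < np.pi, np.maximum(-a, 0.0), np.maximum(a, 0.0))
        cost = wrong.sum()
        if cost < best_cost:               # keep the candidate with the smallest sum
            best_cost, best_omega = cost, omega_c
    return best_omega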

By applying this algorithm to great circles about each rotational axis, the complete recovery of the sphere's rotation is achieved. After de-rotation, the direction of translation is also given by the line passing through the FOE and FOC. Notably, the algorithm's run-time performance is dictated primarily by the quantisation of discrete locations on the great circle, and the range of possible rotations for each great circle. Given reasonable choices, the algorithm should provide fast execution [10]. Given the encouraging results reported by the authors in simulation, it is of interest to consider the feasibility of this approach for real-time navigation tasks, under real-world conditions.

B. Generating relative depth maps

After de-rotation, all flow vectors follow great circles passing through the FOE and FOC. Thus, we may express the magnitude of the residual translational flow at a discrete location θ on the great circle as:

f(θ) = (v / R(θ)) sin(φ - θ),   (3)

and from this, define the relative distance to any scene point:

R(θ) / v = sin(φ - θ) / f(θ).   (4)

If we know v, then we may obtain an absolute measure of distance; however, for most navigation tasks a relative depth estimate is sufficient. Notably, (4) is only defined where optical flow exists (i.e. f(θ) ≠ 0). Thus, range cannot be reliably measured where there is a lack of texture, or where the flow magnitude tends to zero, such as at the FOE and FOC.

C. Simulation results

Simulation tests were conducted to examine the robustness of depth map estimates obtained using the strategy outlined above. For this, a model of a unit view sphere was defined, and immersed in a virtual 3D boxed space. A ground truth depth map was then obtained over the whole sphere, from which accuracy could be measured. Optical flow fields were computed on the view sphere for multiple sets of twenty randomly chosen translation directions, with randomly chosen components of rotation about each principal axis. Rotational velocity values were within the range [-0.5, 0.5] (rad/unit time). To examine robustness, increasing levels of Gaussian noise were added to the direction of each flow vector for each set of twenty trials. On each equator, 112 discrete points, and 1 possible rotations, were used for de-rotation. Table I provides the mean errors obtained during each simulation run.

TABLE I
SIMULATION ERROR MEASURES

Gauss noise (std dev)   Rotational error (ω_x, ω_y, ω_z)   Trans dir error   Depth error
 0°                                                                            8.9%
 2°                                                                           13.0%
 4°                                                                           25.3%
10°                                                                           39.2%

Rotational errors are given as the mean of the absolute difference between the estimated and true rotational velocities for each simulation run. The translational direction error is given as the mean angular error between the estimated translational direction and ground truth. Depth map estimation errors are given as the mean of relative errors against ground truth. From Table I it can be seen that rotational errors remain stable despite increasing noise levels. Translational direction errors also appear to remain stable. It is important to note that image resolution dictates that estimates of translational direction can only resolve the true direction of motion to within 3.2° of the actual direction, hence the presence of non-zero errors when no noise is introduced.
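Equation (4) turns each de-rotated flow sample directly into a relative depth. The following Python sketch shows that per-sample computation along one great circle; the small-flow cutoff is our own choice (not a value from the paper) and simply marks the region near the FOE and FOC, and textureless areas, as undefined.

import numpy as np

def relative_depth_on_circle(f, phi, min_flow=1e-3):
    # f: residual (translational) flow sampled at evenly spaced angles on the circle
    # phi: estimated direction of translation on the circle (radians)
    # min_flow: below this magnitude, depth is treated as unmeasurable (our choice)
    f = np.asarray(f, dtype=float)
    n = len(f)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    depth = np.full(n, np.nan)                 # undefined where flow vanishes
    valid = np.abs(f) > min_flow
    # Eq. (4): R(theta)/v = sin(phi - theta) / f(theta); abs() guards against
    # small sign inconsistencies in noisy flow.
    depth[valid] = np.abs(np.sin(phi - theta[valid]) / f[valid])
    return depth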
It is clear that depth estimation errors increase with noise, and with errors in the estimated translational direction. Notably, large errors were also found to occur around the FOE and FOC, where flow magnitudes approach zero. Under less precise, real-world conditions, this issue is less likely to influence accuracy, as noise levels prevent flow magnitudes from diminishing to such small values.

III. REAL-WORLD EXPERIMENTS

The 3D depth map algorithm was implemented for use with a single fish-eye camera (Unibrain Fire-i BCL¹) undergoing ground-based motion. Given that motion was approximately planar, only a single hemispherical view was required to account for rotation on the ground plane. The fish-eye camera has a 190 degree FOV, and thus provides a suitable approximation to a hemispherical projection of the scene. In all experiments, full estimates of optical flow were acquired using Lucas and Kanade's gradient-based method [6], in combination with Simoncelli's multi-dimensional paired filter [15] for pre-filtering and gradient estimation. This combination was chosen on the basis of strong performances in a recent comparison of optical flow methods for real-time robot navigation [8]. Two image sequences were constructed for the depth map experiments. In both experiments, the camera's forward velocity was kept approximately constant, while rotation about the axis perpendicular to the ground plane was introduced. No camera calibration was performed, nor any post-filtering of flow estimates, translational direction or relative depth estimates.

¹ Omni-tech Robotics.
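The flow front-end is interchangeable. The experiments above use Lucas and Kanade's method with Simoncelli's filters; as a readily available stand-in for experimentation (a different algorithm, not the authors' pipeline), dense flow can be obtained with OpenCV's Farnebäck implementation, with the parameter values below being illustrative defaults only.

import cv2

def dense_flow(prev_gray, next_gray):
    # prev_gray, next_gray: consecutive grayscale frames (H x W, uint8)
    # Returns an H x W x 2 array of per-pixel (dx, dy) displacements.
    # Positional parameters: pyr_scale=0.5, levels=3, winsize=15, iterations=3,
    # poly_n=5, poly_sigma=1.2, flags=0 (illustrative values, not from the paper).
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)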

On a 2.1 GHz computer, depth maps were generated at a rate of 1.2 updates per second over circular image regions of radius 110 pixels, with an input frame rate of 15 Hz.

A. De-rotation

Given ground-based motion, the Nelson and Aloimonos de-rotation algorithm was implemented for a single rotation, using a circle of evenly distributed points (radius 110 pixels) around the estimated projective centre of the camera. Due to limitations imposed by the size of the camera's image plane, the true great circle could not be used. For planar motion, any concentric circle about the projective centre is sufficient. Figure 2 provides a sample frame showing the output of the de-rotation algorithm over a real image sequence (described later). The blue and yellow points indicate the locations on the circle used for de-rotation: blue points indicate clockwise flow, and yellow, counter-clockwise. The yellow line signifies the estimated direction of translation after de-rotation is applied. The imbalance of partitioned flow indicates the presence of rotation.

Fig. 2. Sample frame showing the circle used to de-rotate the flow. Blue pixels indicate clockwise flow and yellow counter-clockwise. The yellow line indicates the estimated direction of translation after de-rotation.
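For this planar-motion case, the input to the de-rotation search is simply the tangential component of image flow sampled on a circle about the projective centre. A minimal Python sketch of that sampling step is given below; the centre, radius and sample count are illustrative assumptions rather than calibrated values from the experiments.

import numpy as np

def tangential_flow_on_circle(flow, centre, radius, n_points=112):
    # flow: H x W x 2 array of (dx, dy) flow vectors
    # centre: (cx, cy) pixel coordinates of the estimated projective centre
    # radius: circle radius in pixels
    # Returns signed tangential flow values, one per sample point on the circle.
    cx, cy = centre
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(theta)).astype(int), 0, flow.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(theta)).astype(int), 0, flow.shape[0] - 1)
    # Unit tangent direction at each sample point.
    tx, ty = -np.sin(theta), np.cos(theta)
    fx = flow[ys, xs, 0]
    fy = flow[ys, xs, 1]
    return fx * tx + fy * ty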

B. Corridor navigation experiment

The first image sequence was obtained by mounting the camera on-board a mobile robot platform with omnidirectional motion. The camera was fixed as close as possible to the robot's rotation axis, facing upwards. Figure 3 shows sample depth maps obtained during the corridor experiment.

Fig. 3. Sample depth maps obtained on-board the mobile platform (camera facing up). The left column shows the original image and the estimated direction of translation obtained from de-rotation. The middle column shows grayscale relative depth maps computed from the translational flow. The right column shows structure maps, obtained by projecting relative depth estimates into 3D space, then flattened out.

The first column shows the central image (320 × 240 pixels) from the buffered frames used to compute the optical flow for the corresponding depth map. The second column shows a grayscale map of the relative depths of objects in the scene (brighter being closer), estimated from the de-rotated flow field. The third column provides a top-down view of the relative depths of scene points, projected onto the ground plane (we refer to these as structure maps). The centre of the structure map gives the location of the camera. For this, thresholding was applied to extract only the closest surfaces in the scene (and thus omit depth estimates from the ceiling).

The relative depth maps obtained over the corridor navigation sequence provide a good qualitative representation of the environment. An abundance of clear structural cues, resulting from the motion of surface boundaries such as corners, doorways and windows, can be seen. This is particularly evident in Figures 3(c) and (d), where the wall edge in (c), and the column and fire hydrant in (d), are the brightest objects. The structure maps in Figure 3 further support the accuracy of the relative depth measures for inferring scene structure. Most evident is the extraction of free space in the local area about the robot. Notably, some surface areas have only a few depth measures associated with them, due to a lack of measurable flow in the area.
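The structure maps described above amount to a polar-to-Cartesian projection of the relative depths onto the ground plane, keeping only the nearer surfaces. A minimal Python sketch of that projection follows; the grid size, range threshold and all names are our own illustrative choices, not values from the paper.

import numpy as np

def structure_map(rel_depth, grid_size=200, max_range=1.0):
    # rel_depth: relative depths R/v, one per evenly spaced viewing direction
    #            around the camera (NaN where depth is undefined)
    # grid_size: output grid is grid_size x grid_size cells, camera at the centre
    # max_range: relative depth mapped to the edge of the grid (acts as the
    #            near-surface threshold)
    rel_depth = np.asarray(rel_depth, dtype=float)
    n = len(rel_depth)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    grid = np.zeros((grid_size, grid_size), dtype=bool)
    centre = grid_size // 2
    scale = centre / max_range
    valid = np.isfinite(rel_depth) & (rel_depth <= max_range)
    gx = np.clip((centre + rel_depth[valid] * np.cos(theta[valid]) * scale).astype(int),
                 0, grid_size - 1)
    gy = np.clip((centre + rel_depth[valid] * np.sin(theta[valid]) * scale).astype(int),
                 0, grid_size - 1)
    grid[gy, gx] = True                        # mark detected surface cells
    return grid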

C. Cluttered environment experiment

A second sequence was constructed depicting the camera's motion through a cluttered kitchen environment. For this, the camera was hand-held, facing toward the ground plane as it was walked through the kitchen. Figure 4 shows four samples from the results obtained for this sequence.

Fig. 4. Sample frames and depth maps from the kitchen sequence (hand-held camera facing towards the ground plane). The left column shows the original image and the estimated direction of translation obtained from de-rotation. The middle column shows grayscale relative depth maps computed from the translational flow. The right column shows structure maps, obtained by projecting relative depth estimates into 3D space, then flattened out.

The depth maps obtained exhibit less structural definition than in the corridor sequence. This is unsurprising given the unstructured nature of the environment, and the greater abundance of objects in close proximity to the camera. The camera's orientation towards the ground plane appears to significantly improve the extraction of free space from obstructed space, due to the abundance of visual texture. In the corridor sequence, fluorescent lights and significantly more distant surfaces reduced the amount of visual texture available. While the improvement is evident in the grayscale depth maps, it is made particularly clear in structure maps like that shown in Figure 4(a), where a detailed map of free space is provided over the viewing area.

D. Discussion

The quality of the depth maps obtained in both these preliminary experiments is encouraging. While more quantitative testing is needed, these experiments show that basic 3D scene structure can be reliably inferred from optical flow in real-time. At the very least, these results suggest clear distinctions between free and obstructed space can be obtained, thus supporting the feasibility of this strategy for closed-loop control. For a ground-based mobile robot, the entire viewing angle may not be needed to generate depth maps sufficient for navigation. In the case of the corridor sequence, only the peripheral view area need be examined to avoid obstacles around the robot. Consideration of both the robot's physical height and constraints on its motion may be exploited to improve the speed and accuracy (through higher resolution images) of the depth maps generated.

These results suggest the Nelson and Aloimonos de-rotation algorithm is performing well over real-world images. Both sequences depict significant rotations, yet no ill effects of this are evident in the depth maps obtained. While a thorough examination of the algorithm's accuracy over real image sequences is still needed, it is evident from these experiments that the algorithm provides sufficient accuracy to facilitate the real-time recovery of both the direction of ego-motion and complete 3D relative depth maps.

IV. CONCLUSION

In this paper, we have presented a strategy for generating 3D relative depth maps from optical flow, in real-time. In so doing, we have demonstrated, for the first time to our knowledge, the use of the Nelson and Aloimonos de-rotation algorithm in real-time, over real images depicting real-world environments. Results from simulated full general motion of a sphere, and from real-world experiments, suggest this strategy may be a useful base for many navigational subsystems. In addition, these results further support theoretical arguments in favour of a spherical projection when attempting to infer scene structure and self-motion from optical flow.

REFERENCES

[1] G. Adiv, Inherent ambiguities in recovering 3-D motion and structure from a noisy flow field, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 5, 1989.
[2] T. Camus, D. Coombs, M. Herman, and T.-H. Hong, Real-time single-workstation obstacle avoidance using only wide-field flow divergence, in Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 1996.
[3] J. S. Chahl and M. V. Srinivasan, Range estimation with a panoramic visual sensor, Journal of the Optical Society of America A, vol. 14, no. 9, Sep. 1997.
[4] C. Fermüller and Y. Aloimonos, Geometry of eye design: Biology and technology, in Multi Image Search and Analysis, Lecture Notes in Computer Science, T. H. R. Klette and G. Gimelfarb, Eds. Springer Verlag, Heidelberg, 2000.
[5] H. C. Longuet-Higgins and K. Prazdny, The interpretation of a moving retinal image, Proceedings of the Royal Society of London, vol. B208, 1980.
[6] B. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, in Proceedings of the DARPA Image Understanding Workshop, 1984.
[7] S. Maybank, Theory of Reconstruction from Image Motion. Springer, Berlin.
[8] C. McCarthy and N. Barnes, Performance of optical flow techniques for indoor navigation with a mobile robot, in Proceedings of the 2004 IEEE International Conference on Robotics and Automation, 2004.
[9] C. McCarthy and N. Barnes, A robust docking strategy for a mobile robot using flow field divergence, in Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (in press), 2006.
[10] R. C. Nelson and J. Aloimonos, Finding motion parameters from spherical motion fields (or the advantages of having eyes in the back of your head), Biological Cybernetics, vol. 58, 1988.
[11] J. H. Rieger and H. T. Lawton, Processing differential image motion, Journal of the Optical Society of America A, vol. 2, no. 2, 1985.
[12] J. Santos-Victor and G. Sandini, Divergent stereo in autonomous navigation: From bees to robots, International Journal of Computer Vision, vol. 14, no. 2.
[13] J. Santos-Victor and G. Sandini, Uncalibrated obstacle detection using normal flow, Machine Vision and Applications, vol. 9, no. 3.
[14] J. Santos-Victor and G. Sandini, Visual behaviors for docking, Computer Vision and Image Understanding, vol. 67, no. 3.
[15] E. P. Simoncelli, Design of multi-dimensional derivative filters, in Proceedings of the First International Conference on Image Processing, vol. I. Austin, TX, USA: IEEE Signal Processing Society, 1994.
[16] M. V. Srinivasan, How insects infer range from visual motion, in Visual Motion and its Role in the Stabilization of Gaze, F. Miles and J. Wallman, Eds. Elsevier, Amsterdam, 1993.
[17] M. V. Srinivasan and S. W. Zhang, Visual motor computations in insects, Annual Review of Neuroscience, vol. 27, 2004.
[18] A. Verri and T. Poggio, Motion field and optical flow: qualitative properties, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 5, 1989.
