Fusion of Discrete and Continuous Epipolar Geometry for Visual Odometry and Localization
David Tick, Computer Science Dept., University of Texas at Dallas, Dallas, Texas (dqt81@utdallas.edu)
Jinglin Shen, Electrical Engineering Dept., University of Texas at Dallas, Dallas, Texas (sxj96@utdallas.edu)
Nicholas Gans, Electrical Engineering Dept., University of Texas at Dallas, Dallas, Texas (ngans@utdallas.edu)

Abstract: Localization is a critical problem for building mobile robotic systems capable of autonomous navigation. This paper describes a novel visual odometry method to improve the accuracy of localization when a camera is viewing a piecewise planar scene. Discrete and continuous Homography Matrices are used to recover position, heading, and velocity from images of coplanar feature points. A Kalman filter is used to fuse pose and velocity estimates and increase the accuracy of the estimates. Simulation results are presented to demonstrate the performance of the proposed method.

I. INTRODUCTION

Building mobile robotic systems that are capable of real-time, autonomous navigation is a complex, multi-faceted problem. One of the primary aspects of this problem is the task of localization. Localization, or pose estimation, refers to the task of estimating the velocity (both angular and linear), position, and orientation of the robot at any given instant. There are many established ways to approach the task of localization, including wheel odometry [1], inertial sensors [1], GPS [1], sonar [2], and IR/laser-based range-finding sensors [3]. There has also been significant development of localization techniques that are solely vision-based. Visual odometry is a method of localization that uses one or more cameras to continuously capture images or video frames of a scene [4]. The frames are analyzed sequentially using various computer vision techniques. The analysis of these frames estimates the angular and linear velocities of the camera between each time step.
These velocities can then be integrated over time to estimate how the camera has moved. Using this technique, an estimate of a mobile robot's speed, position, and heading (orientation) can be calculated and maintained over time. A pose estimate produced in this manner can be further enhanced by combining visual odometry with traditional wheel odometry and other sensor devices. This can be achieved through various signal processing techniques collectively called sensor fusion. The Kalman filter is particularly useful for fusing signals from multiple sensors and removing errors in localization that occur due to factors such as sensor noise, quantization, flawed process modeling, and sensor bias or drift [1], [5]. Some vision-based localization techniques are designed to calculate the pose of the camera relative to some well-known reference object that appears in each frame [6], [7]. Usually these techniques require the system to have accurate geometric information about the scene and/or some reference object(s) in the scene prior to execution. In many situations, there exists little accurate prior knowledge regarding a scene and the objects therein. In such cases, two frames can be compared with one another based on a set of feature points that exists in both frames. Once a set of feature points is identified in both frames, a homogeneous mapping of the feature points from one frame onto the next must exist. This mapping can be modeled as a linear transformation and encapsulates the rotation and translation of the camera that occurs between the taking of each picture. In practice, the Essential Matrix or Euclidean Homography Matrix is used to estimate a camera's pose in terms of a set of rotational and translational transformations [8], [9]. Over the years, many control systems have been designed that utilize Essential or Homography Matrices in vision-based robotic tasks [10]-[13]. The Euclidean Homography Matrix can be used in both a continuous as well as a discrete form [9].
The Homography Matrix, in its discrete form, has been widely used for many years by the community [14]-[18]. However, there has not been as much work regarding applications of the Homography Matrix in its continuous form. Of particular interest in this paper is the application of the Euclidean Homography Matrix as a means of estimating velocity [19]. If one performs a discrete homography-based estimation of the camera position and orientation (i.e., pose), then one can also integrate the continuous estimate of the velocity at each time step to extract the pose as well. The pose estimate obtained from integrating the continuous homographic estimate of angular velocity must agree with the discretely estimated position of the camera. In this paper, we propose a method that uses a single six-degrees-of-freedom (6-DOF), or eye-in-hand, camera to perform homography-based visual odometry. This system will capture video frames and identify sets of coplanar feature points in a scene. It will estimate the change in pose of the camera by using the discrete form of the Euclidean Homography Matrix. The system will also utilize the continuous form of the Euclidean Homography Matrix to estimate the velocity of the camera. Our system utilizes the well-known Kalman filter to fuse these two estimates and remove error which accrues
over time due to integration, noise, and quantization [5]. Section II clarifies the terminology used in this paper and formally defines and models the problem. Section III explains the proposed approach to homographic visual odometry. We present simulation results in Section IV that illustrate the effectiveness of the proposed system. Finally, in Section V we state the conclusions that we have reached.

II. BACKGROUND

While certain formal conventions exist with respect to the terminology used in vision-based localization, there is no official standard. Thus, it is necessary that we define the terminology used in this paper.

A. Formal Definition and Terminology

Fig. 1 illustrates a camera, with attached reference frame, moving over time. The lowercase subscript attached to each axis label indicates the frame of reference to which that axis belongs. Navigation sensors, including cameras, report measurements in terms of the moving body frame, formally called $F_b$. These measurements are then rotated to obtain localization with respect to the world frame, formally called $F_w$. The z-axis of $F_b$ is oriented along the optical axis, the x-axis is oriented along the horizontal direction of the image plane, and the y-axis is oriented parallel to the vertical direction of the image. The orientation and position (collectively referred to as pose) of a body frame in terms of the world frame is expressed as a function of time by including the index number for a given time step in parentheses. For example, in Fig. 1, $F_b(t_1)$ describes the pose of the body frame $F_b$ at time index $t_1$ as measured from $F_b(t_0)$, while $F_b(t_k)$ describes the pose of $F_b$ at time $t_k$ with respect to $F_b(t_0)$.
The changes in the pose of $F_b$ that occur over the time interval $[t_0, t_1, \ldots, t_{k-1}, t_k]$ are described in terms of a translation vector $T_k \in \mathbb{R}^3$ and a rotation matrix $R_k \in SO(3)$, formally written as $(T_k, R_k)$, where $SO(3)$ denotes the Special Orthogonal group of order three. At any given time $t_k$, the instantaneous linear and angular velocities of the body frame $F_b$ are described as a pair of vectors $(v_k, \omega_k)$, where $v_k \in \mathbb{R}^3$ and $\omega_k \in \mathbb{R}^3$.

[Fig. 1. Translation and rotation $(T_k, R_k)$ of the body/camera frame $F_b$.]

B. Pinhole Camera Model

Fig. 2 illustrates a camera taking two images from two different poses $F^*$ and $F(t)$. $F^*$ is considered a static reference frame, such as the pose at time $t_0 = 0$. Without loss of generality we take $F_w = F^*$. $F(t)$ is considered a moving, or current, frame. The changes which exist between the two poses, as stated above, are encapsulated by $(T, R)$. Fig. 2 also shows $N \geq 4$ feature points that all lie in the plane $\pi_s$.

[Fig. 2. Planar scene and feature points.]

The 3-D coordinates of each feature point as measured from the reference frame of the pose $F(t)$ are defined in terms of a vector $\bar{m}_j(t) \in \mathbb{R}^3$. Similarly, the 3-D coordinates of each feature point as measured from the reference frame of the pose $F^*$ are defined in terms of a vector $\bar{m}^*_j \in \mathbb{R}^3$. Formally, these vectors are given as

$$\bar{m}^*_j = [x^*_j, y^*_j, z^*_j]^T, \quad j \in \{1, \ldots, N\}$$
$$\bar{m}_j(t) = [x_j(t), y_j(t), z_j(t)]^T, \quad j \in \{1, \ldots, N\}.$$

Two different images are captured by the camera at the two poses $F^*$ and $F(t)$. This is modeled by projecting the feature points onto the 2-D image planes $\pi^*$ and $\pi(t)$. The coordinates of the feature points in these 2-D planes are expressed as a normalized set of 3-D coordinates, where depth along the z-axis is set equal to one. The normalized image-plane coordinates that result from this projection, as measured from the reference frame of the pose $F(t)$, are defined in terms of a vector $m_j(t) \in \mathbb{R}^3$.
Similarly, the normalized image-plane coordinates of each feature point as measured from the reference frame of the pose $F^*$ are defined in terms of a vector $m^*_j \in \mathbb{R}^3$. These vectors are expressed as

$$m^*_j = \left[\frac{x^*_j}{z^*_j}, \frac{y^*_j}{z^*_j}, 1\right]^T, \quad j \in \{1, \ldots, N\}$$
$$m_j(t) = \left[\frac{x_j(t)}{z_j(t)}, \frac{y_j(t)}{z_j(t)}, 1\right]^T, \quad j \in \{1, \ldots, N\}.$$
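As a minimal numeric sketch of this normalized projection (the point values below are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical 3-D feature points expressed in the current camera frame F(t);
# rows are [x_j, y_j, z_j] with positive depth z_j.
m_bar = np.array([[ 0.2, -0.1, 2.0],
                  [ 0.4,  0.3, 2.5],
                  [-0.3,  0.2, 1.8],
                  [ 0.1,  0.4, 2.2]])

# Normalized image-plane coordinates m_j = [x_j/z_j, y_j/z_j, 1]^T.
m = m_bar / m_bar[:, 2:3]

# Every normalized point has unit depth by construction.
assert np.allclose(m[:, 2], 1.0)
```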
C. Euclidean Homography

In this work, we focus on the planar homography case; that is, all 3-D feature points lie on the same plane. Consider the two sets of points $\bar{m}^*_j$ and $\bar{m}_j(t)$ on the plane $\pi_s$ mentioned above; the transformation between the two sets is given by [9]

$$\bar{m}_j = R \bar{m}^*_j + T. \tag{1}$$

The relationship in (1) can be rewritten in terms of image points as [8]

$$m_j = \frac{z^*_j}{z_j}\left(R + \frac{1}{d} T n^T\right) m^*_j$$

where $n = [n_x, n_y, n_z]^T$ is the constant unit normal vector of the plane $\pi_s$ measured in $F^*$, and $d$ is the constant distance from the optical center of the camera (i.e., the origin of $F^*$) to the plane $\pi_s$. We assume that $d$ is known in this initial investigation. The matrix

$$H_d = R + \frac{1}{d} T n^T$$

defines what is known as the discrete Homography Matrix. By using the four-point algorithm, the Homography Matrix $H_d(t)$ can be solved to recover the translation vector $T(t)$ and the rotation matrix $R(t)$ [8].

In the continuous case, the image point $m_j(t)$ and its optical flow $\dot{m}_j(t)$ are measured instead of the image pair $m^*_j$ and $m_j(t)$. The time derivative of $\bar{m}_j(t)$ satisfies

$$\dot{\bar{m}}_j = \hat{\omega} \bar{m}_j + v \tag{2}$$

where $\hat{\omega}(t) \in \mathbb{R}^{3 \times 3}$ is the skew-symmetric matrix of $\omega(t)$. The relationship in (2) can be rewritten in terms of image points as [9]

$$\dot{m}_j(t) = \left(\hat{\omega} + \frac{1}{d} v n^T\right) m_j(t).$$

The matrix

$$H_c = \hat{\omega} + \frac{1}{d} v n^T$$

is defined as the continuous Homography Matrix. Similar to the discrete form, a four-point algorithm gives the solution for the linear velocity $v(t)$ and angular velocity $\omega(t)$ of the camera.

III. APPROACH

As described in Section II, the translation and rotation of a camera can be recovered by applying the Homography Matrix method in its discrete form. Similarly, the linear and angular velocities of a camera can be obtained using the continuous form. However, the estimation can be noisy due to possible sensor noise from the camera and numerical error produced during computation of the Homography Matrix.
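The discrete relation can be checked numerically: for coplanar points, $H_d$ maps reference image points to current ones up to the depth-ratio scale. All values below (plane, motion, point positions) are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Plane in the reference frame F*: unit normal n, distance d to the optical center.
n = np.array([0.0, 0.0, 1.0])
d = 2.0

# Four coplanar reference points X* satisfying n^T X* = d (rows are points).
X_ref = np.column_stack([rng.uniform(-0.5, 0.5, 4),
                         rng.uniform(-0.5, 0.5, 4),
                         np.full(4, d)])

# Camera motion (R, T) from the reference pose to the current pose.
th = np.deg2rad(10.0)
R = np.array([[ np.cos(th), 0.0, np.sin(th)],
              [ 0.0,        1.0, 0.0       ],
              [-np.sin(th), 0.0, np.cos(th)]])
T = np.array([0.1, -0.05, 0.2])

X_cur = X_ref @ R.T + T                 # equation (1), applied row-wise
H_d = R + np.outer(T, n) / d            # discrete Homography Matrix

m_ref = X_ref / X_ref[:, 2:3]           # normalized image points in F*
m_cur = X_cur / X_cur[:, 2:3]           # normalized image points in F(t)

# H_d m*_j is proportional to m_j: renormalizing the third coordinate recovers m_j.
Hm = m_ref @ H_d.T
Hm = Hm / Hm[:, 2:3]
assert np.allclose(Hm, m_cur)
```

In practice $H_d$ is estimated from at least four point correspondences (the four-point algorithm) and then decomposed into $R$, $T/d$, and $n$; the check above runs the construction in the opposite, known-geometry direction.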
Integration of the continuous velocity estimate over time should agree with the discrete pose estimate; thus, fusing the velocity and pose estimates will reduce the effect of noise. A Kalman filter is built to perform the sensor fusion, including integration, and to generate the final pose and velocity estimates. The reference feature points $m^*_j$ are taken as the points at the initial time $t_0$. At each time step $t_k$, the feature points $m^*_j$ and $m_j(t_k)$ are used to calculate the Homography Matrices $H_d(t_k)$ and $H_c(t_k)$. The Homography Matrices are decomposed to solve for $R_k$, $T_k$, $v_k$, and $\omega_k$. The system states are the position, orientation, and velocities of the camera. Roll, pitch, and yaw angles (denoted $r(t_k)$, $p(t_k)$, $w(t_k)$) represent the camera's orientation instead of the recovered rotation matrix. The relation between the angle rates $\dot{r}(t_k)$, $\dot{p}(t_k)$, $\dot{w}(t_k)$ and the angular velocity $\omega_k = [\omega_x(t_k), \omega_y(t_k), \omega_z(t_k)]^T$ is given by [20]

$$\dot{p} = \omega_x \cos r - \omega_y \sin r \tag{3}$$
$$\dot{w} = (\omega_x \sin r + \omega_y \cos r) \sec p \tag{4}$$
$$\dot{r} = (\omega_x \sin r + \omega_y \cos r) \tan p + \omega_z. \tag{5}$$

The system equations for the Kalman filter are given by

$$x_k = F_k x_{k-1} + w_k \tag{6}$$
$$y_k = H_k x_k + v_k. \tag{7}$$

In (6), the state vector is given by

$$x_k = [x, y, z, p, w, r, v_x, v_y, v_z, \omega_x, \omega_y, \omega_z]^T \in \mathbb{R}^{12}$$

and $w_k \in \mathbb{R}^{12}$ is a normally distributed random process with zero mean and covariance matrix $Q_k \in \mathbb{R}^{12 \times 12}$. The state transition matrix $F_k \in \mathbb{R}^{12 \times 12}$ and the process covariance matrix $Q_k$ can be found in the Appendix. A random-walk process is used to determine $Q_k$. In $F_k$ and $Q_k$, the term $\Delta t$ is the frame period of the camera, and the various $\sigma$ terms are scaling factors for the states, indicated by subscripts. In (7), the measurement matrix $H_k \in \mathbb{R}^{12 \times 12}$ is an identity matrix, since all components of the state vector are measured.
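A sketch of how equations (3)-(7) fit together, assuming the state ordering given above; the diagonal $Q$ and the specific numeric values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def euler_rate_matrix(p, r):
    """Map body angular velocity [wx, wy, wz] to the angle rates
    [p_dot, w_dot, r_dot] of equations (3)-(5)."""
    return np.array([
        [np.cos(r),             -np.sin(r),             0.0],  # p_dot, eq. (3)
        [np.sin(r) / np.cos(p),  np.cos(r) / np.cos(p), 0.0],  # w_dot, eq. (4)
        [np.sin(r) * np.tan(p),  np.cos(r) * np.tan(p), 1.0],  # r_dot, eq. (5)
    ])

def make_F(p, r, dt):
    """12x12 state transition for x = [pos(3), angles(3), v(3), omega(3)]."""
    F = np.eye(12)
    F[0:3, 6:9] = dt * np.eye(3)                  # position integrates linear velocity
    F[3:6, 9:12] = dt * euler_rate_matrix(p, r)   # angles integrate angular velocity
    return F

def kf_step(x, P, y, F, Q, R_meas):
    """One predict/update cycle of (6)-(7) with identity measurement matrix H_k."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    K = P_pred @ np.linalg.inv(P_pred + R_meas)   # gain; H_k = I simplifies the form
    x_new = x_pred + K @ (y - x_pred)
    P_new = (np.eye(12) - K) @ P_pred
    return x_new, P_new

# One step with a measurement equal to the prediction: the estimate keeps it.
x0 = np.zeros(12); x0[6:9] = 0.1                  # constant linear velocity
P0 = np.eye(12)
F = make_F(p=0.1, r=0.2, dt=0.05)
x1, P1 = kf_step(x0, P0, F @ x0, F, 1e-4 * np.eye(12), 1e-2 * np.eye(12))
assert np.allclose(x1, F @ x0)
```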
The sensor noise vector $v_k \in \mathbb{R}^{12}$ is a normally distributed random process with zero mean and covariance matrix $R_k \in \mathbb{R}^{12 \times 12}$, a diagonal matrix whose diagonal elements are chosen according to the estimated measurement noise for each state vector component. Given the terms in (6) and (7), the Kalman filter is designed and updated in the typical manner [5]. Estimates from the update step provide the pose and velocity estimates that are in turn used for localization of the camera.

IV. RESULTS

In this section, simulation results of the proposed homographic visual odometry method are presented. A single camera observes four static, coplanar feature points. The camera motion is generated so that the feature points always stay in the field of view. The linear velocity is sinusoidal along all degrees of freedom. The angular velocity is zero along the x-axis and sinusoidal along the other two axes. The frame rate of the camera is … frames per second. Discrete and continuous Homography Matrices compute the estimated camera position, orientation, and velocities. Next, these homographic estimates are input into the Kalman filter, which fuses them, producing the final state estimate. The results are then plotted against the camera motion for comparison. In the first simulation, the fused results from the Kalman filter are compared with results which come directly from using only one of the Homography Matrices. Gaussian noise with zero mean and variance equal to .1 is added. The
diagonal values of $R_k$ used in our simulations are [.1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1]. Fig. 3 through Fig. 6 show the estimated camera position, orientation, and linear and angular velocities, respectively. The dashed line (blue) represents the value of the simulated trajectory, and the two solid lines (green and red) represent the estimate obtained directly from the Homography (H) and the fused estimate produced by the Kalman filter (KF). Fig. 3 and Fig. 4 show that, given a random initial state, the estimated values of position and orientation converge toward the trajectory quickly, and thereafter the estimation error remains small for both the Homography and the Kalman-filtered estimates. Fig. 5 and Fig. 6 show that the velocity estimates from the Kalman filter track the values quite well, while the continuous Homography estimate is much noisier. Also, note that a lag exists between the signals and the filtered estimates. Large values of the diagonal elements in the measurement covariance matrix $R_k$ cause this lag. Larger values in $R_k$ make the estimated trajectories smoother, but increase the lag of the Kalman filter. There is a trade-off between the system's robustness against noise and the overall speed of the system. In the second simulation, we investigate the robustness of our proposed method by examining its performance when different amounts of noise are added. Fig. 7 through Fig. 10 once again show the estimated camera position, orientation, and linear and angular velocities, respectively. The dashed line (blue) represents the value of the simulated trajectory, and the two solid lines (green and red) represent the fused estimate produced by the Kalman filter with no noise added (N = 0) and with zero-mean Gaussian noise added. Comparison with the results from the first simulation reveals that the accuracy of the position and orientation estimates is not severely impacted by the introduction of more noise.
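The trade-off controlled by the diagonal of $R_k$ can be seen even in a scalar filter; the sinusoidal signal, noise level, and variances below are illustrative, not the paper's simulation values:

```python
import numpy as np

def run_filter(R_meas, Q=1e-3, steps=500, dt=0.02, noise_std=0.1, seed=1):
    """Scalar random-walk Kalman filter tracking a noisy sinusoid.
    Returns (mse, roughness): squared error vs. the true signal, and the mean
    squared step-to-step change of the estimate (a proxy for residual noise)."""
    rng = np.random.default_rng(seed)
    t = np.arange(steps) * dt
    truth = np.sin(t)
    meas = truth + noise_std * rng.standard_normal(steps)
    x, P = 0.0, 1.0
    est = np.empty(steps)
    for i, y in enumerate(meas):
        P += Q                       # predict under a random-walk model
        K = P / (P + R_meas)         # Kalman gain
        x += K * (y - x)             # update
        P *= (1.0 - K)
        est[i] = x
    return np.mean((est - truth) ** 2), np.mean(np.diff(est) ** 2)

mse_small, rough_small = run_filter(R_meas=0.01)  # trusts measurements: noisy, responsive
mse_large, rough_large = run_filter(R_meas=1.0)   # distrusts measurements: smooth, laggy

assert rough_large < rough_small   # larger R gives a smoother estimate...
assert mse_large > mse_small       # ...at the cost of lag-induced tracking error
```

The same mechanism appears in the full 12-state filter: inflating $R_k$ suppresses the noisy continuous-homography velocity measurements but delays the response to real motion.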
However, the velocity estimates do become significantly noisier as the variance of the sensor noise increases. These results indicate that our proposed system is sensitive to noise in its velocity estimate. There are several approaches that could mitigate this noise sensitivity. One possible solution is to utilize a fixed-interval Kalman smoother [21]. As an example, the effect of this smoother on velocity estimation is shown in Fig. 11 and Fig. 12. Note that the velocity estimates are drastically improved when this method is employed. However, this solution is not usable in real time. Another solution, which is capable of being run in real time, is to add more feature points to the set. While this would likely provide some additional robustness against noise, it should be noted that in this initial investigation we are evaluating performance of the proposed system in the worst-case situation of having only the minimum set of four feature points.

[Fig. 3. Filter vs. DH: position estimate (noise variance = .1).]
[Fig. 4. Filter vs. DH: orientation estimate (noise variance = .1).]
[Fig. 5. Filter vs. CH: linear velocity estimate (noise variance = .1).]

V. CONCLUSION

In this paper, we present a novel visual odometry method for localization of a mobile robotic system. Continuous and discrete forms of the Euclidean Homography Matrix are used
[Fig. 6. Filter vs. CH: angular velocity estimate (noise variance = .1).]
[Fig. 7. Filtered position estimate with camera noise added.]
[Fig. 8. Filtered orientation estimate with camera noise added.]
[Fig. 9. Filtered linear velocity estimate with camera noise added.]
[Fig. 10. Filtered angular velocity estimate with camera noise added.]
[Fig. 11. Smoothed estimate of filtered linear velocity with noise added.]
to recover position and velocity information from camera images. A Kalman filter is used to fuse the velocity and pose estimates. This reduces the effects of noise and improves the estimate. Simulations were performed to explore the performance of the proposed method. When sensor noise is not considered, the estimated positions and velocities show little error from the true signals. Even when noise is introduced to the camera model, the velocity estimates produced by our proposed method are dramatically better than the velocity estimates obtained directly from the continuous Homography Matrix. Carefully choosing the parameters of the Kalman filter allows the user to balance the accuracy and responsiveness of the system. Furthermore, there exist both offline and real-time solutions that mitigate the effects of noise sensitivity.

There are several avenues open for future work. In the short term, experiments will be performed with the presented method. This will require accurate and fast detection and tracking of features. Low levels of tracking error will be important to mitigate sensor noise. In the longer term, a method must be determined to handle large-scale movements of the robot and camera, such that feature points can be allowed to enter and leave the camera field of view without disrupting the estimation. Finally, this method will be incorporated with other localization and odometry methods, including inertial measurement units and wheel encoders.

[Fig. 12. Smoothed estimate of filtered angular velocity with noise added.]

REFERENCES

[1] G. Dudek and M. Jenkin, "Inertial sensors, GPS, and odometry," in Springer Handbook of Robotics, 2008.
[2] L. Kleeman and R. Kuc, "Sonar sensing," in Springer Handbook of Robotics, 2008.
[3] R. B. Fisher and K. Konolige, "Range sensors," in Springer Handbook of Robotics, 2008.
[4] D. Nister, O. Naroditsky, and J. Bergen, "Visual odometry," in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR), 2004, vol. 1, pp. I-652-I-659.
[5] R. G. Brown, Introduction to Random Signal Analysis and Kalman Filtering. John Wiley & Sons, 1983.
[6] D. DeMenthon and L. S. Davis, "Model-based object pose in 25 lines of code," in European Conf. on Computer Vision, 1992.
[7] L. Quan and Z.-D. Lan, "Linear N-point camera pose determination," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 8, pp. 774-780, 1999.
[8] O. D. Faugeras and F. Lustman, "Motion and structure from motion in a piecewise planar environment," Int. J. Pattern Recog. and Artificial Intell., vol. 2, no. 3, pp. 485-508, 1988.
[9] Y. Ma, S. Soatto, J. Kosecka, and S. Sastry, An Invitation to 3-D Vision. Springer, 2004.
[10] E. Malis, F. Chaumette, and S. Boudet, "2-1/2-D visual servoing," IEEE Trans. Robot. Autom., vol. 15, no. 2, pp. 238-250, 1999.
[11] C. Taylor and J. Ostrowski, "Robust vision-based pose control," in Proc. IEEE Int. Conf. Robotics and Automation, 2000.
[12] Y. Fang, D. Dawson, W. Dixon, and M. de Queiroz, "Homography-based visual servoing of wheeled mobile robots," in Proc. IEEE Conf. on Decision and Control, 2002.
[13] Y. Fang, W. E. Dixon, D. M. Dawson, and P. Chawda, "Homography-based visual servo regulation of mobile robots," IEEE Trans. Syst., Man, Cybern., vol. 35, 2005.
[14] Z. Zhang and A. Hanson, "3D reconstruction based on homography mapping," in Proc. ARPA Image Understanding Workshop, Palm Springs, CA, 1996.
[15] N. Daucher, M. Dhome, J. Lapreste, and G. Rives, "Speed command of a robotic system by monocular pose estimate," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1997.
[16] C. Baillard and A. Zisserman, "Automatic reconstruction of planar models from multiple views," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1999.
[17] E. Malis and F. Chaumette, "2-1/2-D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement," Int. J. Computer Vision, vol. 37, no. 1, pp. 79-97, 2000.
[18] K. Okada, S. Kagami, M. Inaba, and H. Inoue, "Plane segment finder: algorithm, implementation and applications," in Proc. IEEE Int. Conf. Robotics and Automation, 2001.
[19] V. Chitrakaran, D. M. Dawson, W. E. Dixon, and J. Chen, "Identification of a moving object's velocity with a fixed camera," Automatica, vol. 41, 2005.
[20] D. H. Titterton and J. L. Weston, Strapdown Inertial Navigation Technology, 2nd ed. Institution of Engineering and Technology, 2004.
[21] H. Rauch, F. Tung, and C. Striebel, "Maximum likelihood estimates of linear dynamic systems," AIAA Journal, vol. 3, no. 8, pp. 1445-1450, 1965.

APPENDIX: KALMAN FILTER MATRICES

The state transition matrix, in block form following the state ordering $[x, y, z, p, w, r, v_x, v_y, v_z, \omega_x, \omega_y, \omega_z]$, is

$$F_k = \begin{bmatrix} I_3 & 0 & \Delta t\, I_3 & 0 \\ 0 & I_3 & 0 & \Delta t\, M(p, r) \\ 0 & 0 & I_3 & 0 \\ 0 & 0 & 0 & I_3 \end{bmatrix}, \quad M(p, r) = \begin{bmatrix} \cos r & -\sin r & 0 \\ \sin r \sec p & \cos r \sec p & 0 \\ \sin r \tan p & \cos r \tan p & 1 \end{bmatrix},$$

where $M(p, r)$ maps the angular velocity to the angle rates of (3)-(5). The process covariance matrix $Q_k$ is obtained from a random-walk model: each state is paired with its rate in a block whose entries are powers of $\Delta t$ scaled by the corresponding factor $\sigma_x$, $\sigma_y$, $\sigma_z$, $\sigma_p$, $\sigma_w$, or $\sigma_r$.
Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know
More information55:148 Digital Image Processing Chapter 11 3D Vision, Geometry
55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationAccurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion
007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,
More informationTask selection for control of active vision systems
The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October -5, 29 St. Louis, USA Task selection for control of active vision systems Yasushi Iwatani Abstract This paper discusses
More information1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License
Title D camera geometry and Its application to circular motion estimation Author(s Zhang, G; Zhang, H; Wong, KKY Citation The 7th British Machine Vision Conference (BMVC, Edinburgh, U.K., 4-7 September
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a
More informationMathematics of a Multiple Omni-Directional System
Mathematics of a Multiple Omni-Directional System A. Torii A. Sugimoto A. Imiya, School of Science and National Institute of Institute of Media and Technology, Informatics, Information Technology, Chiba
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationHomography based visual odometry with known vertical direction and weak Manhattan world assumption
Homography based visual odometry with known vertical direction and weak Manhattan world assumption Olivier Saurer, Friedrich Fraundorfer, Marc Pollefeys Computer Vision and Geometry Lab, ETH Zürich, Switzerland
More informationAgenda. Rotations. Camera models. Camera calibration. Homographies
Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate
More informationA Framework for 3D Pushbroom Imaging CUCS
A Framework for 3D Pushbroom Imaging CUCS-002-03 Naoyuki Ichimura and Shree K. Nayar Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST) Tsukuba,
More informationShort Papers. Adaptive Navigation of Mobile Robots with Obstacle Avoidance
596 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL., NO. 4, AUGUST 997 Short Papers Adaptive Navigation of Mobile Robots with Obstacle Avoidance Atsushi Fujimori, Peter N. Nikiforuk, and Madan M. Gupta
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationCamera and Inertial Sensor Fusion
January 6, 2018 For First Robotics 2018 Camera and Inertial Sensor Fusion David Zhang david.chao.zhang@gmail.com Version 4.1 1 My Background Ph.D. of Physics - Penn State Univ. Research scientist at SRI
More informationImage Rectification (Stereo) (New book: 7.2.1, old book: 11.1)
Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1) Guido Gerig CS 6320 Spring 2013 Credits: Prof. Mubarak Shah, Course notes modified from: http://www.cs.ucf.edu/courses/cap6411/cap5415/, Lecture
More informationSimultaneous Vanishing Point Detection and Camera Calibration from Single Images
Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationStructure from Motion. Prof. Marco Marcon
Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)
More informationarxiv: v1 [cs.cv] 18 Sep 2017
Direct Pose Estimation with a Monocular Camera Darius Burschka and Elmar Mair arxiv:1709.05815v1 [cs.cv] 18 Sep 2017 Department of Informatics Technische Universität München, Germany {burschka elmar.mair}@mytum.de
More informationA Switching Approach to Visual Servo Control
A Switching Approach to Visual Servo Control Nicholas R. Gans and Seth A. Hutchinson ngans@uiuc.edu, seth@uiuc.edu Dept. of Electrical and Computer Engineering The Beckman Institute for Advanced Science
More informationSTRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION *
STRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION * Tanuja Joshi Narendra Ahuja Jean Ponce Beckman Institute, University of Illinois, Urbana, Illinois 61801 Abstract:
More informationRobotic Perception and Action: Vehicle SLAM Assignment
Robotic Perception and Action: Vehicle SLAM Assignment Mariolino De Cecco Mariolino De Cecco, Mattia Tavernini 1 CONTENTS Vehicle SLAM Assignment Contents Assignment Scenario 3 Odometry Localization...........................................
More informationFactorization with Missing and Noisy Data
Factorization with Missing and Noisy Data Carme Julià, Angel Sappa, Felipe Lumbreras, Joan Serrat, and Antonio López Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona,
More informationAuto-epipolar Visual Servoing
Auto-epipolar Visual Servoing Jacopo Piazzi Johns Hopkins University Baltimore, Maryland USA Domenico Prattichizzo Univeristá degli Studi di Siena Siena, Italy Noah J. Cowan Johns Hopkins University Baltimore,
More informationRectification and Distortion Correction
Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification
More informationCamera Calibration for a Robust Omni-directional Photogrammetry System
Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,
More informationMOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE
Head-Eye Coordination: A Closed-Form Solution M. Xie School of Mechanical & Production Engineering Nanyang Technological University, Singapore 639798 Email: mmxie@ntuix.ntu.ac.sg ABSTRACT In this paper,
More informationDYNAMIC POSITIONING OF A MOBILE ROBOT USING A LASER-BASED GONIOMETER. Joaquim A. Batlle*, Josep Maria Font*, Josep Escoda**
DYNAMIC POSITIONING OF A MOBILE ROBOT USING A LASER-BASED GONIOMETER Joaquim A. Batlle*, Josep Maria Font*, Josep Escoda** * Department of Mechanical Engineering Technical University of Catalonia (UPC)
More informationRealtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments
Proc. 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems pp. 31-36, Maui, Hawaii, Oct./Nov. 2001. Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Hiroshi
More informationShort on camera geometry and camera calibration
Short on camera geometry and camera calibration Maria Magnusson, maria.magnusson@liu.se Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden Report No: LiTH-ISY-R-3070
More informationTwo-view geometry Computer Vision Spring 2018, Lecture 10
Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of
More informationSelf-calibration of a pair of stereo cameras in general position
Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible
More informationLecture 9: Epipolar Geometry
Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2
More informationOn the Efficient Second Order Minimization and Image-Based Visual Servoing
2008 IEEE International Conference on Robotics and Automation Pasadena, CA, USA, May 19-23, 2008 On the Efficient Second Order Minimization and Image-Based Visual Servoing Omar Tahri and Youcef Mezouar
More informationUnit 3 Multiple View Geometry
Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Henrik Malm, Anders Heyden Centre for Mathematical Sciences, Lund University Box 118, SE-221 00 Lund, Sweden email: henrik,heyden@maths.lth.se Abstract. In this
More informationUniversity of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract
Mirror Symmetry 2-View Stereo Geometry Alexandre R.J. François +, Gérard G. Medioni + and Roman Waupotitsch * + Institute for Robotics and Intelligent Systems * Geometrix Inc. University of Southern California,
More informationA simple method for interactive 3D reconstruction and camera calibration from a single view
A simple method for interactive 3D reconstruction and camera calibration from a single view Akash M Kushal Vikas Bansal Subhashis Banerjee Department of Computer Science and Engineering Indian Institute
More informationIndoor Localization Algorithms for a Human-Operated Backpack System
Indoor Localization Algorithms for a Human-Operated Backpack System George Chen, John Kua, Stephen Shum, Nikhil Naikal, Matthew Carlberg, Avideh Zakhor Video and Image Processing Lab, University of California,
More informationAn Overview of Matchmoving using Structure from Motion Methods
An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu
More informationUnit 2: Locomotion Kinematics of Wheeled Robots: Part 3
Unit 2: Locomotion Kinematics of Wheeled Robots: Part 3 Computer Science 4766/6778 Department of Computer Science Memorial University of Newfoundland January 28, 2014 COMP 4766/6778 (MUN) Kinematics of
More informationGeometric transformations in 3D and coordinate frames. Computer Graphics CSE 167 Lecture 3
Geometric transformations in 3D and coordinate frames Computer Graphics CSE 167 Lecture 3 CSE 167: Computer Graphics 3D points as vectors Geometric transformations in 3D Coordinate frames CSE 167, Winter
More informationDIMENSIONAL SYNTHESIS OF SPATIAL RR ROBOTS
DIMENSIONAL SYNTHESIS OF SPATIAL RR ROBOTS ALBA PEREZ Robotics and Automation Laboratory University of California, Irvine Irvine, CA 9697 email: maperez@uci.edu AND J. MICHAEL MCCARTHY Department of Mechanical
More information6-dof Eye-vergence visual servoing by 1-step GA pose tracking
International Journal of Applied Electromagnetics and Mechanics 52 (216) 867 873 867 DOI 1.3233/JAE-16225 IOS Press 6-dof Eye-vergence visual servoing by 1-step GA pose tracking Yu Cui, Kenta Nishimura,
More informationCIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM
CIS 580, Machine Perception, Spring 2015 Homework 1 Due: 2015.02.09. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Camera Model, Focal Length and
More information(1) and s k ωk. p k vk q
Sensing and Perception: Localization and positioning Isaac Sog Project Assignment: GNSS aided INS In this project assignment you will wor with a type of navigation system referred to as a global navigation
More informationMEM380 Applied Autonomous Robots Winter Robot Kinematics
MEM38 Applied Autonomous obots Winter obot Kinematics Coordinate Transformations Motivation Ultimatel, we are interested in the motion of the robot with respect to a global or inertial navigation frame
More informationPerception and Action using Multilinear Forms
Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract
More informationEgomotion Estimation by Point-Cloud Back-Mapping
Egomotion Estimation by Point-Cloud Back-Mapping Haokun Geng, Radu Nicolescu, and Reinhard Klette Department of Computer Science, University of Auckland, New Zealand hgen001@aucklanduni.ac.nz Abstract.
More informationA dynamic background subtraction method for detecting walkers using mobile stereo-camera
A dynamic ackground sutraction method for detecting walkers using moile stereo-camera Masaki Kasahara 1 and Hiroshi Hanaizumi 1 1 Hosei University Graduate School of Computer and Information Sciences Tokyo
More informationVisual Odometry for Non-Overlapping Views Using Second-Order Cone Programming
Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Jae-Hak Kim 1, Richard Hartley 1, Jan-Michael Frahm 2 and Marc Pollefeys 2 1 Research School of Information Sciences and Engineering
More informationMobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS
Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2
More informationAbsolute Scale Structure from Motion Using a Refractive Plate
Absolute Scale Structure from Motion Using a Refractive Plate Akira Shibata, Hiromitsu Fujii, Atsushi Yamashita and Hajime Asama Abstract Three-dimensional (3D) measurement methods are becoming more and
More informationWe are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors
We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,500 108,000 1.7 M Open access books available International authors and editors Downloads Our
More informationPlanar pattern for automatic camera calibration
Planar pattern for automatic camera calibration Beiwei Zhang Y. F. Li City University of Hong Kong Department of Manufacturing Engineering and Engineering Management Kowloon, Hong Kong Fu-Chao Wu Institute
More informationINCREMENTAL DISPLACEMENT ESTIMATION METHOD FOR VISUALLY SERVOED PARIED STRUCTURED LIGHT SYSTEM (ViSP)
Blucher Mechanical Engineering Proceedings May 2014, vol. 1, num. 1 www.proceedings.blucher.com.br/evento/10wccm INCREMENAL DISPLACEMEN ESIMAION MEHOD FOR VISUALLY SERVOED PARIED SRUCURED LIGH SYSEM (ViSP)
More informationEfficient decoupled pose estimation from a set of points
Efficient decoupled pose estimation from a set of points Omar Tahri and Helder Araujo and Youcef Mezouar and François Chaumette Abstract This paper deals with pose estimation using an iterative scheme.
More information1. What is cos(20 ) csc(70 )? This is a review of the complementary angle theorem that you learned about last time.
Math 121 (Lesieutre); 8.1, cont.; Novemer 10, 2017 1. What is cos(20 ) csc(70 )? This is a review of the complementary angle theorem that you learned aout last time. cos(20 ) csc(70 ) = cos(20 ) sec(90
More informationImage Augmented Laser Scan Matching for Indoor Dead Reckoning
Image Augmented Laser Scan Matching for Indoor Dead Reckoning Nikhil Naikal, John Kua, George Chen, and Avideh Zakhor Abstract Most existing approaches to indoor localization focus on using either cameras
More informationVisual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching
Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching Hauke Strasdat, Cyrill Stachniss, Maren Bennewitz, and Wolfram Burgard Computer Science Institute, University of
More informationSUBDIVISION ALGORITHMS FOR MOTION DESIGN BASED ON HOMOLOGOUS POINTS
SUBDIVISION ALGORITHMS FOR MOTION DESIGN BASED ON HOMOLOGOUS POINTS M. Hofer and H. Pottmann Institute of Geometry Vienna University of Technology, Vienna, Austria hofer@geometrie.tuwien.ac.at, pottmann@geometrie.tuwien.ac.at
More informationA General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras
A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan
More information