3D Accuracy Improvement from an Image Evaluation and Viewpoint Dependency. Keisuke Kinoshita ATR Human Information Science Laboratories
3D Accuracy Improvement from an Image Evaluation and Viewpoint Dependency

Keisuke Kinoshita
ATR Human Information Science Laboratories
Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan

Abstract

In this paper, we focus on a simple but important problem: given the coordinates of 3D points and their uncertainties, how much can we improve them when a new set of corresponding image points becomes available? The problem is difficult because the 3D points, the image points and the camera parameters, as well as their uncertainties, are closely coupled. We fully model and update these uncertainties by exploiting the constraints imposed by the projectivity between the 3D world and the 2D image plane. The updated, improved uncertainties, represented as covariance matrices, can serve as a goodness measure for reconstructed 3D points. Any system that recovers 3D scene information can profit from this approach. In particular, we use this measure in an active vision system that automatically moves the camera to the best viewpoint for reconstructing an object shape. Experimental results on synthesized and real data are shown.

1 Introduction

Camera calibration and 3D reconstruction have long been central interests of computer vision researchers. This paper combines the two problems in a single framework and details how to estimate the camera parameters and improve the 3D reconstruction together with their accuracy measures. As shown in Fig. 1, assume a set of 3D points and their uncertainties as well as a set of corresponding 2D points and their uncertainties. We update the coordinates and the uncertainties of the 3D points in order to obtain more accurate estimates. The importance of accuracy evaluation in computer vision was introduced by Kanatani [1], who proposed statistically optimal estimation methods and accuracy bounds for many problems, including 3D reconstruction.
However, he did not consider the correlation between the estimated 3D points and the camera parameters. Although [2] and [3] discuss the error analysis of 3D projective reconstruction, they do not discuss the relationship between two reconstructed 3D points. Regarding 3D reconstruction from image sequences, a single cycle of the Kalman filtering methods [4][5] is comparable to the approach in this paper; however, these authors also assume that the camera parameters and the estimated 3D points are statistically independent. The assumption of uncorrelated input data makes the problem easier to analyze and faster to compute, but none of these authors consider the existing stochastic relations, which we regard as essential for precise 3D point re-estimation and error description. Extending the theory of Kanatani, we model and utilize all stochastic relations to re-estimate the 3D points and their error description.

Figure 1: The objective of this paper is to find out how much the uncertainties of 3D point data can be improved when an uncalibrated image is provided. Initial 3D point coordinates and image point coordinates, as well as their uncertainties (covariance matrices), are given.

The three entities of the problem, i.e. the 3D points, the corresponding 2D points, and the camera parameters, are not independent of each other. The relation that constrains them is the projectivity between the 3D world and the image plane through a camera. We fully utilize this constraint to improve the accuracies.¹ The re-estimation is optimal, and the optimality follows from Kanatani [1]. Our aim is to use this approach in the optimal camera pose prediction component of an active vision system. To verify the advantage of the proposed method, we estimate the camera pose best suited to the reconstruction of the object shape, together with its improved 3D point estimates. In our experiments we found that the accuracy differs by more than a factor of two between the best and the worst viewpoints.

In section 2, after formulating the problem in a standard manner, we present the simultaneous estimation of 3D points and camera parameters under strong correlation. To maintain legibility, most of the equations are given in the appendix. Section 3 shows experiments on the method, and section 4 shows an application to an active vision system that seeks the best viewpoint for the 3D reconstruction task. Section 5 concludes the paper.

2 Problem Formulation

Let N points in 3D space be X_α, α = 1, ..., N. A perspective camera projects a point X_α onto an image point x_α. Let the homogeneous coordinates of X_α and x_α be X_α = (X_α, Y_α, Z_α, 1)^T and x_α = (x_α, y_α, 1)^T, respectively. The coordinates X_α and x_α are related by the 3×4 camera projection matrix P = (p_ij):

    x_α ≃ P X_α.   (1)

Reformulating (1), we obtain the simple form

    (a^(i), u) = 0,   i = 1, ..., 2N,   (2)

where, for α = 1, ..., N,

    a^(2α−1) = (X_α^T, 0^T, −x_α X_α^T)^T,
    a^(2α)   = (0^T, X_α^T, −y_α X_α^T)^T,
    u = (p_11, ..., p_34)^T,

and (·, ·) denotes the inner product of two vectors. All subsequent calculations are derived from this basic constraint. Let the true coordinates of X_α and x_α be X̄_α and x̄_α, respectively. The observed data X_α and x_α, which are corrupted with noise, are given as

    X_α = X̄_α + ΔX_α,   x_α = x̄_α + Δx_α,

where ΔX_α and Δx_α are noise terms.
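As a concrete illustration of the constraint (2), the following sketch (Python, with arbitrary illustrative camera and point values) assembles the two 12-vectors for one correspondence, with u taken as the row-major vectorization of P:

```python
import numpy as np

def constraint_vectors(X, x):
    """Build the two 12-vectors a^(2a-1), a^(2a) for one correspondence.

    X : homogeneous 3D point (4,), x : homogeneous image point (3,).
    With u = vec(P) row-major, x ~ P X is equivalent to
    (a1, u) = 0 and (a2, u) = 0.
    """
    z = np.zeros(4)
    a1 = np.concatenate([X, z, -x[0] * X])   # row 1 minus x times row 3 of P
    a2 = np.concatenate([z, X, -x[1] * X])   # row 2 minus y times row 3 of P
    return a1, a2

# Quick check with a synthetic camera: the constraints vanish exactly.
P = np.array([[800., 0., 320., 10.],
              [0., 800., 240., 20.],
              [0., 0., 1., 2.]])
u = P.ravel()                       # u = (p11, ..., p34)^T
X = np.array([0.3, -0.2, 1.5, 1.0])
xh = P @ X
xh = xh / xh[2]                     # normalized image point
a1, a2 = constraint_vectors(X, xh)
print(abs(a1 @ u), abs(a2 @ u))     # both ~ 0
```

For noise-free data both inner products are zero up to rounding, which is exactly the statement of (2).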
The problem solved in this paper is stated as follows: given observed data X_α and covariance matrices V[X_α, X_β], for α, β = 1, ..., N, as well as independently estimated image points x_α, optimally re-estimate X̂_α and their covariance matrices V[X̂_α, X̂_β]; furthermore, estimate the optimal camera parameters û (see Fig. 2). It is important to note that the noise terms ΔX_α and ΔX_β are correlated even when α ≠ β. This is particularly true if our method is applied iteratively. Even in the case V[X_α, X_β] = O (α ≠ β), the updated estimates X̂_α and X̂_β become correlated once the uncertainties of the estimated camera parameters are taken into account. On the other hand, the image noise Δx_α is assumed to be uncorrelated: because methods that find feature points are usually based on local operators, correlations among the image points are negligible. In general, the noise contained in x_α is not known. However, since the noise level of x_α can easily be estimated as a by-product of other estimations, such as the estimation of the fundamental matrix, we assume that the variance V[x_α] is known.

The problem stated above can be solved by extending Kanatani's optimal estimation theory [1] and applying it to (2). For the rest of the section, we analyze (2) instead of directly handling X_α or x_α. The vectors a^(i) and their covariance matrices V[a^(i), a^(j)], i, j = 1, ..., 2N, which can be derived from X_α, x_α and their covariance matrices, are the input data for (2). Let the true camera parameters be ū; ā^(·) and ū must satisfy (2). The problem can then be restated as follows: given the estimates a^(·) and their covariance matrices V[a^(i), a^(j)], find â^(·) and û that strictly satisfy (2) as a pair, and estimate their covariance matrices V[â^(i), â^(j)] and V[û]. The estimation of the covariance matrices V[â^(i), â^(j)], which earlier work neglected, is novel to this paper. The problem can be interpreted as an optimization of the solution to (2).
However, to make the structure of the problem clearer, we tentatively divide it into two parts: estimation of the camera parameters and update of the 3D point positions. First, we estimate the camera parameters û and their covariance matrix V[û] so as to satisfy (2). It is known that minimizing

    J[u] = Σ_{k,l} W^(kl)(u) (a^(k), u)(a^(l), u),   (3)

gives an optimal estimate.

¹More theoretical and general analysis is found in [6].
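In the special case where all the weights W^(kl)(u) reduce to unit weights, minimizing (3) subject to |u| = 1 reduces to the classical direct linear transform (DLT) least-squares problem, solved by the smallest right singular vector of the stacked constraint matrix. A minimal sketch with synthetic, noise-free data (illustrative values, not the paper's weighted estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 20.],
                   [0., 0., 1., 2.]])

# Synthesize correspondences and stack the constraint vectors a^(i).
A = []
for _ in range(32):
    X = np.append(rng.uniform(-1.0, 1.0, 3) + np.array([0.0, 0.0, 3.0]), 1.0)
    xh = P_true @ X
    xh = xh / xh[2]
    z = np.zeros(4)
    A.append(np.concatenate([X, z, -xh[0] * X]))
    A.append(np.concatenate([z, X, -xh[1] * X]))
A = np.asarray(A)

# Unit weights: minimizing J[u] with |u| = 1 is the smallest right
# singular vector of A (the DLT solution).
u_hat = np.linalg.svd(A)[2][-1]
P_hat = u_hat.reshape(3, 4)
P_hat = P_hat / P_hat[2, 2]        # fix the unknown scale (P_true[2,2] = 1)
print(np.max(np.abs(P_hat - P_true)) < 1e-6)   # True
```

With noisy data the weights W^(kl)(u) depend on u, so the full criterion is minimized iteratively, re-solving the eigenproblem with updated weights.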
Figure 2: Camera parameters u are estimated from the observed 3D point data X_α and V[X_α, X_β] and from the image points and their covariance matrices. Simultaneously, the positions of the 3D data are updated so that the projection relation holds exactly where more accuracy is expected.

Figure 3: The observed a^(i), which is the sum of the true ā^(i) and the noise term Δa^(i), is corrected to an updated â^(i) so that it satisfies the projection relation. The covariance matrix V[â^(i)] of the updated â^(i) is also calculated.

Here W^(kl)(u) is defined by

    (W^(kl)(u)) = ((u, V[a^(k), a^(l)] u))⁻¹.

Now we impose the constraints (2) on the input data a^(·); that is, a^(·) must be corrected so as to exactly satisfy (2) together with û, as shown in Fig. 3. Let the correction be Δa^(i) and the updated data be â^(i). Then

    â^(i) = a^(i) − Δa^(i),   (4)
    Δa^(i) = Σ_{k,l} W^(kl) (a^(l), û) V[a^(i), a^(k)] û.   (5)

The pair â^(·) and û satisfies (2). After a long derivation, the covariance matrix of û is obtained as

    V[û] = M⁻,   M = Σ_{k,l} W^(kl) a^(k) a^(l)T,

where M⁻ denotes the generalized inverse of M. In practice, we substitute W^(kl) and a^(·) with their optimal estimates W^(kl) = W^(kl)(û) and a^(·) = â^(·). The final step is to estimate the covariance matrices V[â^(i), â^(j)]; the details of the derivation are found in the appendix. Note that even when V[a^(i), a^(j)] = O (i ≠ j), V[â^(i), â^(j)] becomes nonzero. This justifies our claim that the correlation among the input data should not be omitted.

3 Experiments

3.1 Synthesized data

Thirty-two points positioned in grid form are used as input 3D points. Image point data are synthesized from these 3D point coordinates and predefined camera parameters. Uncorrelated noise with a standard deviation of 1 pixel is added to each image point. The two images used in the experiments are shown in Fig. 4(a) and (b). For each X_α, the covariance matrix V[X_α] is set to a common preset matrix V[X]. (6) The covariance matrices V[X_α, X_β] between distinct 3D points are preset to zero.
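Returning for a moment to the camera-parameter covariance of section 2: the matrix M is rank deficient because u is determined only up to scale, which is why V[û] is a generalized inverse rather than an ordinary inverse. A numerical sketch with unit weights and noise-free illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[800., 0., 320., 10.],
              [0., 800., 240., 20.],
              [0., 0., 1., 2.]])

A = []
for _ in range(16):
    X = np.append(rng.uniform(-1.0, 1.0, 3) + np.array([0.0, 0.0, 3.0]), 1.0)
    xh = P @ X
    xh = xh / xh[2]
    z = np.zeros(4)
    A.append(np.concatenate([X, z, -xh[0] * X]))
    A.append(np.concatenate([z, X, -xh[1] * X]))
A = np.asarray(A)

M = A.T @ A                            # M = sum_{k} a^(k) a^(k)T (unit weights)
s = np.linalg.svd(M, compute_uv=False)
rank = int((s > s[0] * 1e-12).sum())   # u is fixed only up to scale
V_u = np.linalg.pinv(M)                # generalized inverse M^-
print(rank)                            # 11
```

The pseudo-inverse discards the scale direction, so V[û] only describes uncertainty transverse to the gauge freedom of u.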
The upper-left 3×3 submatrix V[X̂_α]_{3×3} of the covariance matrix corresponds to the uncertainty of the 3D point, namely an error ellipsoid. The performance of the re-estimation can be measured by comparing the volumes of the two error ellipsoids that correspond to V[X_α]_{3×3} and V[X̂_α]_{3×3}. Re-estimation of V[X_α] for α = 1, for example, with the help of the image shown in Fig. 4(a) gives V[X̂_1]_{3×3}. (7) The volume of the error ellipsoid corresponding to V[X̂_1]_{3×3} is only 21% of that of V[X_1]_{3×3}. For each point α = 1, ..., 32, the volume ratios of the two ellipsoids are plotted in Fig. 5; they lie in the range 0.1 to 0.3. The covariance matrices between two points, which were uncorrelated before, now become correlated. The correlation between points 1 and 2,
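The volume of an n-sigma error ellipsoid is proportional to the square root of the determinant of the 3×3 covariance, so the volume ratio used as the performance measure reduces to a determinant ratio. A small sketch with hypothetical covariance values:

```python
import numpy as np

def ellipsoid_volume(cov3, nsigma=3.0):
    """Volume of the n-sigma error ellipsoid of a 3x3 covariance matrix."""
    return (4.0 / 3.0) * np.pi * nsigma**3 * np.sqrt(np.linalg.det(cov3))

V_before = np.diag([4.0, 4.0, 9.0])      # hypothetical prior covariance
V_after = np.diag([1.0, 1.0, 2.25])      # hypothetical updated covariance
ratio = ellipsoid_volume(V_after) / ellipsoid_volume(V_before)
print(round(ratio, 3))                    # 0.125 = sqrt(det ratio)
```

The sigma level cancels in the ratio, so the comparison is independent of whether 1-sigma or 3-sigma ellipsoids are drawn.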
Figure 4: Images of the 32 points used in the simulation ((a) and (b)).

for example, is given by V[X̂_1, X̂_2]_{3×3}. (8) Next, to this correlated point data we added the second image, depicted in Fig. 4(b), for a further update. The covariance matrix of point 1, for example, now becomes V[X̂_1]_{3×3}. (9) A comparison of (6), (7) and (9) shows that the error ellipsoid became 0.21 times smaller in the first update and then 0.61 times smaller in the second update, i.e. 0.13 times smaller in total. Fig. 5 also shows the error ratios of this second update. To verify that the estimated covariance matrix is correct, 5 trials changing the noise ΔX_1 and Δx_1 according to their preset covariance matrices are plotted in Fig. 6. The figure reveals that all of the plotted points lie inside the 3σ error ellipsoid; therefore, the covariance matrix is estimated correctly. Note that the distribution of the updated 3D points is statistically unbiased with respect to the noise. Similar results are obtained for every other point X_α. These results empirically show that our estimation method is statistically optimal and unbiased.

3.2 Real data

Data obtained from a vision system [7] are used. A box-shaped object with 31 points on its surface is used in the experiment. The 3D positions and their error ellipsoids are shown in Fig. 7(a); for simplicity, the covariance matrices between the points are not shown in the figure. Fig. 7(b) shows the set of corresponding image points used in the experiment.

Figure 5: Volume ratios of the error ellipsoids that correspond to the covariance matrices of the 3D estimates. The ratios are 0.1 to 0.3 in the first update stage and 0.3 to 0.6 in the second update stage.

Figure 6: Given the true point and noise, the updated 3D point X̂_1 is estimated 5 times. The results are superimposed on the 3σ error ellipsoid of the estimated covariance matrix V[X̂_1], which is centered at the true position X̄_1. Note that almost all trials of X̂_1 fall inside the ellipsoid.

Fig. 7(c) shows the updated 3D points together with their error ellipsoids, as well as the optical axis of the camera, shown with an arrow. As can be seen, the error ellipsoids shrink remarkably after the update procedure. Intuitively, the error ellipsoids should have larger errors along the optical axis of the camera, along which depth information degenerates. However, the results show that for this experiment this assumption does not hold, because the 3D points contribute differently to the estimation of the camera parameters as well as to their own updates.
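The 3σ consistency check above can be reproduced in principle by sampling noise from the estimated covariance and counting how many samples have squared Mahalanobis distance within 3σ (for a 3D Gaussian, about 97% are expected inside). A sketch with a hypothetical covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
V = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 0.5]])          # hypothetical estimated covariance
Vinv = np.linalg.inv(V)
L = np.linalg.cholesky(V)

inside = 0
trials = 5000
for _ in range(trials):
    d = L @ rng.standard_normal(3)       # sample consistent with V
    if d @ Vinv @ d <= 9.0:              # squared Mahalanobis distance vs 3-sigma
        inside += 1
print(inside / trials)                   # close to 0.97 for a 3D Gaussian
```

If the empirical fraction inside the ellipsoid were substantially below this value, the estimated covariance would be too optimistic.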
Figure 7: (a) 3D object points and their covariance matrices. (b) An image taken with an uncalibrated camera; this image is used to update the 3D data and to estimate the camera parameters. (c) Updated 3D points shown with their covariance matrices. The volumes of the error ellipsoids are notably decreased. The optical axis of the camera is indicated by the arrow.

4 Application to an Active Vision System

Our method can be applied to a system that seeks the best viewpoint for 3D reconstruction tasks. Iteratively moving the camera to the best pose should improve the 3D positions of the object points step by step. Since it is not realistic to find the camera pose that minimizes the covariance matrices analytically, we compute the covariance matrices for all candidate poses and take their minimum. We place a virtual camera at positions distributed on a sphere around the object, with the radius set to the distance between the camera and the object, and compute the covariance matrices of the point accuracy for each pose. The criterion to be evaluated is the total volume of the error ellipsoids. The candidate camera poses are parameterized in spherical coordinates θ and φ, with ranges −180 < θ < 180 and −90 < φ < 90, divided into 10 steps. We used two objects, a cube with 16 points and an L-shaped object with 16 points. Both object shapes have apparent symmetries, and this property may reveal the importance of viewpoint selection for the task.

Fig. 9 and Fig. 10 show the viewpoint dependency of the accuracy of the updated point positions. The small points in Fig. 9(a) and Fig. 10(a) show the object points with their uncertainties. Fig. 9(b) and Fig. 10(b) show the error-volume reduction ratio with respect to the viewpoint; the ratios vary between 0.2 and 0.28. The larger surface in Fig. 9(a) is the projection of Fig. 9(b) with respect to the viewpoints.
This shows that when the viewpoints are at the corners, the error is smaller; when the viewpoints are perpendicular to the facets, the ratios are larger, which means such a pose is not a good camera placement. The same explanation applies to Fig. 10(a) with the L-shaped object: it is preferable to place the camera toward the corners rather than in a pose parallel to the plane in which the points lie. Fig. 9(c) and Fig. 10(c) show how much the error was reduced after updates from four consecutive best camera poses, the camera being placed at the best pose each time it observes the object. The total error ratios dropped from 0.2 to 0.1. In fact, the differences among the viewpoints are smaller than expected. To show the difference more distinctly, however, the errors for the best pose and the worst pose are compared. Fig. 11(a) and (b) show the viewpoint dependencies for the three consecutive best camera poses and the three consecutive worst camera poses. The error ellipsoids grow two to three times in volume, as depicted in Fig. 9(d) and Fig. 10(d). This demonstrates the importance of choosing a good viewpoint for object reconstruction.
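A much-simplified sketch of this viewpoint search is given below. It treats the points as independent and updates each covariance in information form, V_new⁻¹ = V_old⁻¹ + JᵀJ/σ², where J is the Jacobian of the projection with respect to the 3D point; the paper's full method additionally propagates camera-parameter uncertainty and inter-point correlations, which this sketch omits. All helper names (look_at, updated_logvol) and the numbers (focal length, radius, prior covariance) are illustrative assumptions.

```python
import numpy as np

def look_at(center, target=np.zeros(3)):
    """Rotation whose third row (optical axis) points from center to target."""
    z = target - center
    z = z / np.linalg.norm(z)
    up = np.array([0.0, 0.0, 1.0])
    if abs(z @ up) > 0.99:
        up = np.array([0.0, 1.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.vstack([x, y, z])

def updated_logvol(points, covs, cam_center, f=800.0, pix_var=1.0):
    """Sum of log-det of per-point covariances after a linearized update."""
    R = look_at(cam_center)
    total = 0.0
    for Xw, V in zip(points, covs):
        x, y, z = R @ (Xw - cam_center)
        # Jacobian of (f*x/z, f*y/z) w.r.t. camera coords, then world coords.
        Jc = (f / z) * np.array([[1.0, 0.0, -x / z],
                                 [0.0, 1.0, -y / z]])
        J = Jc @ R
        Vnew = np.linalg.inv(np.linalg.inv(V) + J.T @ J / pix_var)
        total += np.linalg.slogdet(Vnew)[1]
    return total

# 8 cube corners with an isotropic prior uncertainty.
pts = np.array([[sx, sy, sz] for sx in (-1, 1)
                for sy in (-1, 1) for sz in (-1, 1)], float)
covs = [np.eye(3) * 0.01 for _ in pts]

best, best_pose = np.inf, None
radius = 5.0
for th in np.deg2rad(np.arange(-180, 180, 10)):
    for ph in np.deg2rad(np.arange(-80, 90, 10)):
        c = radius * np.array([np.cos(ph) * np.cos(th),
                               np.cos(ph) * np.sin(th),
                               np.sin(ph)])
        v = updated_logvol(pts, covs, c)
        if v < best:
            best, best_pose = v, (np.rad2deg(th), np.rad2deg(ph))
print(best_pose)
```

Because the cube is symmetric, several viewpoints tie for the minimum and the grid search simply returns the first one found; the qualitative conclusion (corner views beat face-on views) matches the behavior described above.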
Figure 8: A virtual camera is placed on a sphere parameterized by θ and φ. For each camera pose, an image of the object is synthesized and used to update the 3D point accuracy.

5 Conclusions

In this paper, a method to update a set of 3D points by using a set of corresponding 2D points was proposed. The stochastic relations are fully modeled in our method: the correlation between the 3D points, the error propagation from the 3D points to the camera parameters, and that from the camera parameters back to the 3D points. The constraint of projectivity through the camera was utilized to obtain an optimal re-estimation of the 3D points. This optimality was empirically confirmed in the experiments on synthesized data. The error ellipsoids can be applied in a vision system that requires intelligent camera motion: for example, the camera can be guided to the pose where the volume of the error ellipsoids is minimal. This saves time and improves the accuracy of the 3D reconstruction.

Acknowledgements

The author thanks Dr. Martin Tonko, who also worked at ATR Human Information Processing Research Labs., for fruitful discussions and Mr. Keiichi Sumita for helping with the experiments. The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled "Research on Human Communication".

Figure 9: Cube experiments. (a) 16 points on a cube and a plot of the viewpoint dependency of the 3D point accuracy. (b) Viewpoint dependency of the error. (c) Plots of the error updated four times. (d) Error-volume plot updated from the three consecutive best camera poses.
Figure 11: Comparison of the error-volume plots after the three consecutive best camera poses (upper plot) and after the three consecutive worst camera poses (lower plot). (a) Cube object. (b) L-shaped object. The accuracies differ by more than a factor of two in both cases.

Figure 10: L-shaped object. Figures (a) to (d) correspond to those of Fig. 9.

References

[1] Kenichi Kanatani. Statistical Optimization for Geometric Computation: Theory and Practice. Elsevier Science.
[2] Gabriella Csurka, Cyril Zeller, Zhengyou Zhang, and O. D. Faugeras. Characterizing the uncertainty of the fundamental matrix. Computer Vision and Image Understanding, 68(1):18–37.
[3] N. Georgis, M. Petrou, and J. Kittler. Error guided design of a 3D vision system. IEEE Trans. Pattern Analysis and Machine Intelligence, 20(4).
[4] P. A. Beardsley, A. Zisserman, and D. W. Murray. Sequential updating of projective and affine structure from motion. International Journal of Computer Vision, 23(3).
[5] Q. T. Luong and O. D. Faugeras. Self-calibration of a moving camera from point correspondences and fundamental matrices. International Journal of Computer Vision, 22(3).
[6] Kenichi Kanatani and Keisuke Kinoshita. On covariance update by geometric constraints (in Japanese). In IPSJ SIG Notes, Vol. CVIM, 2002.
[7] Martin Tonko and Keisuke Kinoshita. On the integration of point feature acquisition, tracking and uncalibrated metric reconstruction. In SPIE Conference on Intelligent Robots and Computer Vision.
A Details of V[a^(i), a^(j)]

V[a^(i), a^(j)] can take eight forms depending on i and j. Here we show V[a^(2α−1), a^(2β−1)] as one example. If α ≠ β,

    V[a^(2α−1), a^(2β−1)] =
      ( V[X_α, X_β]         O₄    −x̄_β V[X_α, X_β]     )
      ( O₄                  O₄    O₄                    )
      ( −x̄_α V[X_α, X_β]   O₄    x̄_α x̄_β V[X_α, X_β] ),

and if α = β,

    V[a^(2α−1), a^(2α−1)] =
      ( V[X_α]        O₄    −x̄_α V[X_α]                                    )
      ( O₄            O₄    O₄                                              )
      ( −x̄_α V[X_α]  O₄    x̄_α² V[X_α] + V[x_α](X̄_α X̄_α^T + V[X_α])    ),

where each block is 4×4 and O₄ denotes the 4×4 zero matrix. The true values X̄_α, x̄_α are substituted with good estimates of X_α and x_α. The other combinations of i and j, namely V[a^(2α−1), a^(2β)], V[a^(2α), a^(2β−1)] and V[a^(2α), a^(2β)], can be calculated similarly.

B Details of V[â^(i), â^(j)]

The covariance matrices after the update, denoted V[â^(i), â^(j)], can be rewritten as

    V[â^(i), â^(j)] = E[(Δa^(i) − δa^(i))(Δa^(j) − δa^(j))^T]
                    = E[Δa^(i) Δa^(j)T] − E[Δa^(i) δa^(j)T] − E[δa^(i) Δa^(j)T] + E[δa^(i) δa^(j)T],   (10)

where Δa^(i) is the noise term and δa^(i) = a^(i) − â^(i) is the correction of (5). To compute V[â^(i), â^(j)], the covariance matrices of Δa^(·) and δa^(·) have to be determined. As a first step, we rewrite δa^(·) using Δa^(·), whose covariance matrix is already known:

    δa^(i) = Σ_{k,l} W^(kl) (a^(l), û) V[a^(i), a^(k)] û.   (11)

However, a^(·), δa^(·) and û have strong correlations among them. We substitute

    a^(i) = ā^(i) + Δa^(i),   (12)
    û = ū + Δu   (13)

into (11) and compute the first-order approximation with respect to Δa^(i) and Δu. Using the relation (ā^(·), ū) = 0, we get

    δa^(i) ≈ Σ_{k,l=1}^{2N} W^(kl) V[a^(i), a^(k)] {(a^(l), ū) I₁₂ + ū a^(l)T} Δu
           + Σ_{k,l=1}^{2N} W^(kl) V[a^(i), a^(k)] ū ū^T Δa^(l).

Now δa^(i) is described using Δu and Δa^(·), which are known. Furthermore, from [1] (7.26),

    Δu = −M⁻ Σ_{k,l} W^(kl) a^(k) ū^T Δa^(l) = −V[û] Σ_{k,l} W^(kl) a^(k) ū^T Δa^(l).

Finally, Δu is rewritten using Δa^(·). Let

    G₁^(i) = Σ_{k,l=1}^{2N} W^(kl) V[a^(i), a^(k)] {(a^(l), û) I₁₂ + û a^(l)T},
    G₂^(i,l) = { Σ_k W^(kl) ( −G₁^(i) V[û] a^(k) + V[a^(i), a^(k)] û ) } û^T;

then δa^(i) can be written in terms of Δa^(l) as

    δa^(i) = Σ_l G₂^(i,l) Δa^(l).
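The α = β block matrix above can be assembled numerically. A sketch (the helper name cov_a_odd and all numerical values are illustrative; image noise is taken as a single scalar variance for simplicity):

```python
import numpy as np

def cov_a_odd(Xbar, xbar, VX, vx):
    """12x12 covariance V[a^(2a-1), a^(2a-1)] for one point (alpha = beta).

    Xbar: estimated homogeneous 3D point (4,), xbar: image x-coordinate,
    VX: 4x4 covariance of the 3D point, vx: scalar variance of image noise.
    """
    O = np.zeros((4, 4))
    br = xbar**2 * VX + vx * (np.outer(Xbar, Xbar) + VX)   # bottom-right block
    return np.block([[VX,          O, -xbar * VX],
                     [O,           O, O],
                     [-xbar * VX,  O, br]])

Xbar = np.array([0.3, -0.2, 1.5, 1.0])
VX = np.diag([1e-4, 1e-4, 1e-4, 0.0])    # homogeneous component is noise-free
C = cov_a_odd(Xbar, 0.25, VX, 1.0)
print(C.shape, bool(np.allclose(C, C.T)))   # (12, 12) True
```

The zero row and column reflect the zero middle block of a^(2α−1); the even-indexed vectors a^(2α) yield the mirrored form with y_α in place of x_α.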
    δa^(i) = Σ_l G₂^(i,l) Δa^(l).   (14)

Substituting (14) into (10),

    V[â^(i), â^(j)] = V[a^(i), a^(j)]
                    − Σ_l G₂^(i,l) V[a^(l), a^(j)]
                    − ( Σ_n G₂^(j,n) V[a^(n), a^(i)] )^T
                    + Σ_{l,n} G₂^(i,l) V[a^(l), a^(n)] G₂^(j,n)T,

where we use V[Δa^(l), Δa^(j)] = V[a^(l), a^(j)], V[Δa^(l), Δa^(n)] = V[a^(l), a^(n)], and so on. To summarize, V[â^(i), â^(j)] is computed from V[a^(·), a^(·)], V[û], ā^(·) and ū; ā^(·) and ū are substituted by their optimal estimates â^(·) and û.
A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India Keshav Mahavidyalaya, University of Delhi, Delhi, India Abstract
More informationSilhouette Coherence for Camera Calibration under Circular Motion
Silhouette Coherence for Camera Calibration under Circular Motion Carlos Hernández, Francis Schmitt and Roberto Cipolla Appendix I 2 I. ERROR ANALYSIS OF THE SILHOUETTE COHERENCE AS A FUNCTION OF SILHOUETTE
More informationA simple method for interactive 3D reconstruction and camera calibration from a single view
A simple method for interactive 3D reconstruction and camera calibration from a single view Akash M Kushal Vikas Bansal Subhashis Banerjee Department of Computer Science and Engineering Indian Institute
More informationRoom Reconstruction from a Single Spherical Image by Higher-order Energy Minimization
Room Reconstruction from a Single Spherical Image by Higher-order Energy Minimization Kosuke Fukano, Yoshihiko Mochizuki, Satoshi Iizuka, Edgar Simo-Serra, Akihiro Sugimoto, and Hiroshi Ishikawa Waseda
More information3D Corner Detection from Room Environment Using the Handy Video Camera
3D Corner Detection from Room Environment Using the Handy Video Camera Ryo HIROSE, Hideo SAITO and Masaaki MOCHIMARU : Graduated School of Science and Technology, Keio University, Japan {ryo, saito}@ozawa.ics.keio.ac.jp
More informationMachine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy
1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:
More informationPerspective Projection in Homogeneous Coordinates
Perspective Projection in Homogeneous Coordinates Carlo Tomasi If standard Cartesian coordinates are used, a rigid transformation takes the form X = R(X t) and the equations of perspective projection are
More informationExpanding gait identification methods from straight to curved trajectories
Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods
More informationEpipolar geometry contd.
Epipolar geometry contd. Estimating F 8-point algorithm The fundamental matrix F is defined by x' T Fx = 0 for any pair of matches x and x in two images. Let x=(u,v,1) T and x =(u,v,1) T, each match gives
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationStructure from Motion. Introduction to Computer Vision CSE 152 Lecture 10
Structure from Motion CSE 152 Lecture 10 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 8: Structure from Motion Optional: Multiple View Geometry in Computer Vision, 2nd edition, Hartley
More information3D Modeling using multiple images Exam January 2008
3D Modeling using multiple images Exam January 2008 All documents are allowed. Answers should be justified. The different sections below are independant. 1 3D Reconstruction A Robust Approche Consider
More informationError Analysis of Feature Based Disparity Estimation
Error Analysis of Feature Based Disparity Estimation Patrick A. Mikulastik, Hellward Broszio, Thorsten Thormählen, and Onay Urfalioglu Information Technology Laboratory, University of Hannover, Germany
More informationApplying Synthetic Images to Learning Grasping Orientation from Single Monocular Images
Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic
More informationMulti-Camera Calibration with One-Dimensional Object under General Motions
Multi-Camera Calibration with One-Dimensional Obect under General Motions L. Wang, F. C. Wu and Z. Y. Hu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences,
More informationA Calibration Algorithm for POX-Slits Camera
A Calibration Algorithm for POX-Slits Camera N. Martins 1 and H. Araújo 2 1 DEIS, ISEC, Polytechnic Institute of Coimbra, Portugal 2 ISR/DEEC, University of Coimbra, Portugal Abstract Recent developments
More informationSingle View Metrology
International Journal of Computer Vision 40(2), 123 148, 2000 c 2000 Kluwer Academic Publishers. Manufactured in The Netherlands. Single View Metrology A. CRIMINISI, I. REID AND A. ZISSERMAN Department
More informationMoving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation
IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial
More informationWide Baseline Matching using Triplet Vector Descriptor
1 Wide Baseline Matching using Triplet Vector Descriptor Yasushi Kanazawa Koki Uemura Department of Knowledge-based Information Engineering Toyohashi University of Technology, Toyohashi 441-8580, JAPAN
More informationCS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003
CS 664 Slides #9 Multi-Camera Geometry Prof. Dan Huttenlocher Fall 2003 Pinhole Camera Geometric model of camera projection Image plane I, which rays intersect Camera center C, through which all rays pass
More informationEECS 442: Final Project
EECS 442: Final Project Structure From Motion Kevin Choi Robotics Ismail El Houcheimi Robotics Yih-Jye Jeffrey Hsu Robotics Abstract In this paper, we summarize the method, and results of our projective
More informationPlanar homographies. Can we reconstruct another view from one image? vgg/projects/singleview/
Planar homographies Goal: Introducing 2D Homographies Motivation: What is the relation between a plane in the world and a perspective image of it? Can we reconstruct another view from one image? Readings:
More informationDETC APPROXIMATE MOTION SYNTHESIS OF SPHERICAL KINEMATIC CHAINS
Proceedings of the ASME 2007 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2007 September 4-7, 2007, Las Vegas, Nevada, USA DETC2007-34372
More informationQuasiconvex Optimization for Robust Geometric Reconstruction
Quasiconvex Optimization for Robust Geometric Reconstruction Qifa Ke and Takeo Kanade, Computer Science Department, Carnegie Mellon University {Qifa.Ke,tk}@cs.cmu.edu Abstract Geometric reconstruction
More informationTime-to-Contact from Image Intensity
Time-to-Contact from Image Intensity Yukitoshi Watanabe Fumihiko Sakaue Jun Sato Nagoya Institute of Technology Gokiso, Showa, Nagoya, 466-8555, Japan {yukitoshi@cv.,sakaue@,junsato@}nitech.ac.jp Abstract
More informationTHE TRIFOCAL TENSOR AND ITS APPLICATIONS IN AUGMENTED REALITY
THE TRIFOCAL TENSOR AND ITS APPLICATIONS IN AUGMENTED REALITY Jia Li A Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment of the requirements for the degree of
More informationA Novel Stereo Camera System by a Biprism
528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel
More informationTracking of Human Body using Multiple Predictors
Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,
More informationAccurate and Dense Wide-Baseline Stereo Matching Using SW-POC
Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp
More informationIntroduction à la vision artificielle X
Introduction à la vision artificielle X Jean Ponce Email: ponce@di.ens.fr Web: http://www.di.ens.fr/~ponce Planches après les cours sur : http://www.di.ens.fr/~ponce/introvis/lect10.pptx http://www.di.ens.fr/~ponce/introvis/lect10.pdf
More information/10/$ IEEE 4048
21 IEEE International onference on Robotics and Automation Anchorage onvention District May 3-8, 21, Anchorage, Alaska, USA 978-1-4244-54-4/1/$26. 21 IEEE 448 Fig. 2: Example keyframes of the teabox object.
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a
More informationFactorization Method Using Interpolated Feature Tracking via Projective Geometry
Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,
More informationSynchronized Ego-Motion Recovery of Two Face-to-Face Cameras
Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn
More informationCamera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences
Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Jean-François Lalonde, Srinivasa G. Narasimhan and Alexei A. Efros {jlalonde,srinivas,efros}@cs.cmu.edu CMU-RI-TR-8-32 July
More informationMotion Tracking and Event Understanding in Video Sequences
Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Nuno Gonçalves and Helder Araújo Institute of Systems and Robotics - Coimbra University of Coimbra Polo II - Pinhal de
More informationOcclusion Detection of Real Objects using Contour Based Stereo Matching
Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,
More informationPlanar pattern for automatic camera calibration
Planar pattern for automatic camera calibration Beiwei Zhang Y. F. Li City University of Hong Kong Department of Manufacturing Engineering and Engineering Management Kowloon, Hong Kong Fu-Chao Wu Institute
More informationModel Fitting. Introduction to Computer Vision CSE 152 Lecture 11
Model Fitting CSE 152 Lecture 11 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 10: Grouping and Model Fitting What to do with edges? Segment linked edge chains into curve features (e.g.,
More informationLecture 9: Epipolar Geometry
Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2
More informationarxiv: v1 [cs.cv] 18 Sep 2017
Direct Pose Estimation with a Monocular Camera Darius Burschka and Elmar Mair arxiv:1709.05815v1 [cs.cv] 18 Sep 2017 Department of Informatics Technische Universität München, Germany {burschka elmar.mair}@mytum.de
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More informationFACILITATING INFRARED SEEKER PERFORMANCE TRADE STUDIES USING DESIGN SHEET
FACILITATING INFRARED SEEKER PERFORMANCE TRADE STUDIES USING DESIGN SHEET Sudhakar Y. Reddy and Kenneth W. Fertig Rockwell Science Center, Palo Alto Laboratory Palo Alto, California and Anne Hemingway
More informationC / 35. C18 Computer Vision. David Murray. dwm/courses/4cv.
C18 2015 1 / 35 C18 Computer Vision David Murray david.murray@eng.ox.ac.uk www.robots.ox.ac.uk/ dwm/courses/4cv Michaelmas 2015 C18 2015 2 / 35 Computer Vision: This time... 1. Introduction; imaging geometry;
More informationLecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.
Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject
More informationThis paper describes an analytical approach to the parametric analysis of target/decoy
Parametric analysis of target/decoy performance1 John P. Kerekes Lincoln Laboratory, Massachusetts Institute of Technology 244 Wood Street Lexington, Massachusetts 02173 ABSTRACT As infrared sensing technology
More informationVisual Recognition: Image Formation
Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know
More informationShape from Shadows. A Hilbert Space Setting. Michael Hatzitheodorou 1. INTRODUCTION
JOURNAL OF COMPLEXITY 14, 63 84 (1998) ARTICLE NO. CM97448 Shape from Shadows A Hilbert Space Setting Michael Hatzitheodorou Department of Computer Information Systems, American College of Greece, 6 Gravias
More informationUniversity of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract
Mirror Symmetry 2-View Stereo Geometry Alexandre R.J. François +, Gérard G. Medioni + and Roman Waupotitsch * + Institute for Robotics and Intelligent Systems * Geometrix Inc. University of Southern California,
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationAssignment 2 : Projection and Homography
TECHNISCHE UNIVERSITÄT DRESDEN EINFÜHRUNGSPRAKTIKUM COMPUTER VISION Assignment 2 : Projection and Homography Hassan Abu Alhaija November 7,204 INTRODUCTION In this exercise session we will get a hands-on
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric
More informationGeometric Hand-Eye Calibration for an Endoscopic Neurosurgery System
2008 IEEE International Conference on Robotics and Automation Pasadena, CA, USA, May 19-23, 2008 Geometric Hand-Eye Calibration for an Endoscopic Neurosurgery System Jorge Rivera-Rovelo Silena Herold-Garcia
More informationUncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images
Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images Gian Luca Mariottini and Domenico Prattichizzo Dipartimento di Ingegneria dell Informazione Università di Siena Via Roma 56,
More informationPerception and Action using Multilinear Forms
Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract
More information