PRELIMINARY RESULTS ON REAL-TIME 3D FEATURE-BASED TRACKER
Tak-keung CHENG (derek@cs.mu.oz.au)
Leslie KITCHEN (ljk@cs.mu.oz.au)
Computer Vision and Pattern Recognition Laboratory, Department of Computer Science,
University of Melbourne, Parkville, Victoria, Australia 3052

(This research was supported by the Australian Research Council, Small Grants Scheme.)

ABSTRACT

We present some preliminary results on a system for tracking 3D motion using input from a calibrated stereo pair of video cameras, and on a new stereo camera calibration technique. The system is organized as a collection of intercommunicating "agents", which can run concurrently, using a cycle of prediction and verification of 3D motion. This system architecture is designed for real-time operation, but as yet only parts of the system are running in real time.

1 INTRODUCTION

Real-time 3D motion tracking is one of the important problems in computer vision research. In general, our approach to 3D motion tracking can be decomposed into four components: motion parameter estimation, feature extraction, prediction-verification, and segmentation. The definition of real time can vary. For us, it means that the system is able to obtain the information it needs in time for that information still to be useful.

[Figure 1: 3D Feature-Based Tracker. The agents are Camera Calibration, Feature Matching, Stereo Feature, 3D Motion Cluster, and Display.]

Cooper and Kitchen [1, 2] have developed a 2D motion tracking system which can run in real time. In their system, they subdivide the problem into several subproblems and distribute these subproblems to different agents. These agents run as concurrent processes in a distributed architecture so as to exploit
parallelism to minimize the time consumed. The system presented here is a further development of this work, to handle 3D motion. We will give a brief review of the 2D motion tracking system in Section 2.

The overall system design of the proposed system (3DFBT, for "3D Feature-Based Tracker") is shown in Figure 1. This set of agents can be subdivided into two major parts. The first includes the initialization processes: the Calibration and Matching agents. The rest of the system (only partially implemented as yet) is intended to perform the continuous tracking and segmentation processes. The detailed system design for 3DFBT is described in Section 3.

Our system uses stereo views of interest-point features for the tracking. A stereo calibration technique solves for the mapping from 2D to 3D, and vice versa. After this relationship has been established, we can use the 3D point positions directly to do motion prediction and motion parameter estimation. In the rest of this paper, we present the details of the two agents (Calibration, Stereo Feature) which have been implemented, and some preliminary results based on using vector-velocity prediction.

2 REVIEW OF 2D FEATURE-BASED TRACKER

The major components of the 2D Feature-Based Tracker (2DFBT) are three separate agents. Figure 2 shows the overall structure of 2DFBT.

[Figure 2: 2D Feature-Based Tracker. The agents are FEATURE, CLUSTER, and DISPLAY.]

Each agent runs on a separate machine and they communicate via Ethernet. At the moment, FEATURE runs on an IBM-compatible with an Intel 80486/33 CPU, CLUSTER runs on a Silicon Graphics IRIS Indigo, and DISPLAY runs on a Silicon Graphics Personal IRIS. The functions of each agent are as follows:

Feature: This agent extracts interest points (feature points) and uses the "Early Jump-Out Method" [1, 2] to do the feature matching between two images. This agent only understands the world through feature points, in terms of statements such as "feature 99 has position (x, y) and velocity (u, v)".
In addition, it can use the current estimate of the feature's image velocity to predict the position of the feature in the new data (the next frame). In each frame time, the current state of the known features is sent to CLUSTER for further processing.

Cluster: This agent groups the feature points reported by FEATURE into clusters such that the motion of each feature is consistent with rigid 2D motion of the group to which it belongs. It is able to guess the locations
and velocities of features that have been lost by FEATURE. Finally, it sends the information to DISPLAY.

Display: The major purpose of this agent is to give a graphical representation of the model being maintained by CLUSTER. Each frame time, CLUSTER sends to DISPLAY a summary of the scene as recorded in CLUSTER's model. It acts as a graphic display of the features, their velocities, and their group memberships.

The system can track about 70 features at 5 frames per second.

3 OVERVIEW OF THE 3D FEATURE-BASED TRACKER

The major components of 3DFBT are shown in Figure 1. This set of agents can be split into two parts: the first comprises the initialization processes, and the second comprises the continuous tracking and segmentation processes. As with 2DFBT, all the agents except the initialization processes run as concurrent processes in a distributed architecture. All the agents communicate via WORLD (a communication program developed by the University of Melbourne Computer Vision and Pattern Recognition Laboratory). Stereo Feature runs on an IBM-compatible with an Intel 80486/33 CPU under OS/2; all the other agents run on SGI Personal IRIS or SGI IRIS Indigo machines. The major functions of each agent are as follows:

Camera Calibration Agent: As its name implies, this agent tries to establish the relationship between the 2D images and the 3D world. It is the first agent to run in the system. A more detailed description of this agent can be found in Section 4.

Feature Matching Agent: One of the initialization processes, this agent extracts the images from the left and right stereo cameras and detects possible feature points in each image. After the feature points in each image are found, it tries to find all the pairs of corresponding feature points between the left and right images. At the moment, the matching of corresponding feature points is done manually, since the initial correspondence problem is not the main focus of our work.
Stereo Feature Agent: One of the continuous tracking processes, this agent controls the two cameras to extract images of the scene, detects the stereo feature points, and calculates the vector velocity of each feature point. A more detailed description of this agent is given in Section 5.

3D Motion Cluster Agent: One of the tracking and segmentation processes, this agent maintains all information about the known objects found by the system. It receives the 3D vector velocity and 3D position of each feature point from the Stereo Feature Agent in each stereo frame time. Initially, the agent assumes all the feature points belong to the same object (i.e. a stationary object). After it detects movement in a set of feature points,
then it tries to use a general motion estimation method to determine the motion parameters. After the motion parameters are found, it tries to find which feature points move consistently with this set of parameters. This agent also uses a prediction/verification cycle to predict the position and motion of the known objects.

Display Agent: The function of this agent is exactly the same as that of Display in 2DFBT. It gives a graphical representation of the model being maintained by the 3D Motion Cluster Agent.

In the continuous tracking process, we try to run the system at about 5 frames per second. Because of the overhead of the 2D and 3D information exchange, we plan to track only about features at the 5 frames per second rate.

4 CALIBRATION

The major purpose of the Calibration agent is to establish the relationship between 3D world coordinates and their corresponding 2D image coordinates. More specifically, it takes as input two sets of corresponding control points from the two cameras and computes two sets of camera parameters. Because we need to calibrate two cameras frequently, we have developed a new calibration technique which calibrates two cameras at the same time. We simply call it Simultaneous Camera Calibration. This section describes the idea of this calibration method and some related experimental results. Currently this agent runs as a separate off-line process.

4.1 Pinhole Camera Model

Camera calibration is a process of representing a camera in terms of some mathematical model. The model used in this report is the pinhole camera model. (X, Y, Z) is an object point in a real-world coordinate system, and (x, y) is the corresponding image point projected onto the image plane. In general, we can use the two equations below in the calibration process:

    x = C_x + f M_x [m_11(X - T_x) + m_12(Y - T_y) + m_13(Z - T_z)] / [m_31(X - T_x) + m_32(Y - T_y) + m_33(Z - T_z)]

    y = C_y + f M_y [m_21(X - T_x) + m_22(Y - T_y) + m_23(Z - T_z)] / [m_31(X - T_x) + m_32(Y - T_y) + m_33(Z - T_z)]

where (T_x, T_y, T_z) are the translation parameters, (m_ij) is the rotation matrix for the camera, (M_x, M_y, f) are the scale factors and the focal length of the camera, and (C_x, C_y) are the coordinates of the image center. These two equations are the so-called collinearity equations. It is necessary to do a full-scale search to solve these nonlinear equations to find the 11 independent camera parameters. Therefore the method that establishes the calibration using the above two equations is called the Nonlinear Camera Calibration Method [3, 5, 4].
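As a concrete illustration, the collinearity equations above can be evaluated directly. The following is a minimal sketch (the function and argument names are our own, not part of the system described here):

```python
import numpy as np

def project(P, R, T, f, Mx, My, Cx, Cy):
    """Pinhole projection of a 3D world point P = (X, Y, Z).

    R is the 3x3 rotation matrix (m_ij), T = (Tx, Ty, Tz) the translation,
    f the focal length, (Mx, My) the scale factors, and (Cx, Cy) the image
    centre, as in the collinearity equations above.
    """
    # Rotate the point into the camera frame: rows of R give the numerators
    # (rows 1 and 2) and the common denominator (row 3) of the equations.
    p = R @ (np.asarray(P, dtype=float) - np.asarray(T, dtype=float))
    x = Cx + f * Mx * p[0] / p[2]
    y = Cy + f * My * p[1] / p[2]
    return x, y
```

For example, with R the identity, T at the origin, f = Mx = My = 1, and the image centre at (0, 0), the point (1, 2, 2) projects to (0.5, 1.0).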
4.2 Simultaneous Camera Calibration (SCC)

The idea of this method comes directly from the nonlinear camera calibration method. In the case of two stereo cameras, four equations are involved, so 22 camera parameters need to be established (11 for each camera). We could use the Newton-Raphson method to solve this problem. In our system, we simply combine the residuals of the four equations, giving an objective function to be minimized which looks like:

    E = (C_x1 + f_1 M_x1 [m_11(X - T_x1) + m_12(Y - T_y1) + m_13(Z - T_z1)] / [m_31(X - T_x1) + m_32(Y - T_y1) + m_33(Z - T_z1)] - x_1)^2
      + (C_y1 + f_1 M_y1 [m_21(X - T_x1) + m_22(Y - T_y1) + m_23(Z - T_z1)] / [m_31(X - T_x1) + m_32(Y - T_y1) + m_33(Z - T_z1)] - y_1)^2
      + (C_x2 + f_2 M_x2 [n_11(X - T_x2) + n_12(Y - T_y2) + n_13(Z - T_z2)] / [n_31(X - T_x2) + n_32(Y - T_y2) + n_33(Z - T_z2)] - x_2)^2
      + (C_y2 + f_2 M_y2 [n_21(X - T_x2) + n_22(Y - T_y2) + n_23(Z - T_z2)] / [n_31(X - T_x2) + n_32(Y - T_y2) + n_33(Z - T_z2)] - y_2)^2

where (x_i, y_i) are the image coordinates of the control point's projection in camera i, (T_xi, T_yi, T_zi) are the translation parameters for camera i, (m_ij) is the rotation matrix for camera 1, (n_ij) is the rotation matrix for camera 2, (C_xi, C_yi) is the image center for camera i, and (M_xi, M_yi, f_i) are the scale factors and focal length for camera i.

The intended advantage of this function is that we can use the relationship between the 3D coordinates and the two sets of 2D feature coordinates to improve our calibration accuracy. With the normal calibration method, we can only calibrate one camera at a time; in the case of a stereo system, we need to calibrate the two cameras in two separate processes. This is a time-consuming task. More importantly, when the two cameras are close together, and hence the baseline is short, the ambiguity in 3D recovery becomes large. This source of error can reduce our experimental accuracy.
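In code, the SCC objective is simply the sum of squared entries of a residual vector stacked over all control points and both cameras; any standard nonlinear least-squares routine (Newton-Raphson, as in the text, or Levenberg-Marquardt) can then minimize it. The sketch below builds and evaluates that residual vector only; the 11-parameter-per-camera layout and the Euler-angle parameterization of the rotations are our own assumptions:

```python
import numpy as np

def rotation(a, b, c):
    """Rotation matrix from three Euler angles (Z-Y-X order; our choice)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
    return Rz @ Ry @ Rx

def scc_residuals(params, points3d, obs):
    """Stacked SCC residual vector.

    params:   22 values, 11 per camera: 3 Euler angles, Tx, Ty, Tz,
              f, Mx, My, Cx, Cy (our layout).
    points3d: (N, 3) array of control points.
    obs:      pair of (N, 2) arrays of observed image coordinates,
              one per camera.
    """
    res = []
    for cam in (0, 1):
        a, b, c, Tx, Ty, Tz, f, Mx, My, Cx, Cy = params[11 * cam:11 * cam + 11]
        R, T = rotation(a, b, c), np.array([Tx, Ty, Tz])
        for P, (x, y) in zip(points3d, obs[cam]):
            q = R @ (P - T)  # point in the camera frame
            res.append(Cx + f * Mx * q[0] / q[2] - x)
            res.append(Cy + f * My * q[1] / q[2] - y)
    return np.array(res)  # objective E = np.sum(scc_residuals(...) ** 2)
```

At the true parameters the residual vector is identically zero on noise-free data, which gives a quick sanity check before handing the function to a solver.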
With the SCC method, we can reduce the dependency between parameters, and the ambiguity in their solution. Even if the two cameras are close together, we can still use their dependency in the above function to obtain better parameters and hence a better approximation. Other advantages are that the process start-up time is reduced (one process for two cameras) and that the number of control points required for solving two cameras remains the same as for a single one. The major disadvantage of this method is that it greatly increases the number of unknowns to be solved for simultaneously. In the following section we present some experimental results and compare them with results obtained from the normal nonlinear calibration method.

4.3 Experimental Results

In our experiment, two images are extracted from the left and right stereo cameras. For normal nonlinear calibration, 32 control points are used to calibrate each camera. For SCC, 32 stereo control points are used. The measurement error for the control points is about 0.1 cm. Table 1 shows the two sets of parameters found by SCC, and Table 2 shows the two sets of parameters found by the normal calibration method.

[Table 1: Camera parameters (M_x, M_y, C_x, C_y, T_x, T_y, T_z, rotation) for the Right and Left cameras, found by SCC. The distance between the camera and the object is fixed at 110.5 cm.]

[Table 2: Camera parameters for the Right and Left cameras, found by the normal calibration method. The distance between the camera and the object is fixed at 110.5 cm.]

Having these two sets of parameters, we randomly chose some known 3D points and pairs of 2D points to test both sets of parameters. The RMS error for image projection of the 3D points using the parameters found by the normal calibration method is about pixels, and the RMS error for the parameters found by our method is about pixels. The RMS error for 3D position calculated by back-projection of the 2D features using the parameters found by the normal calibration method is about cm, and the average error for the parameters found by our method is about cm.

We have also done considerable analysis of the performance of SCC. The results showed that the improvement is not significant. This contradicts our original expectation that using 3D coordinates and simultaneous 2D feature coordinates from two cameras would improve our calibration accuracy. At the moment, we are still working on this problem, and therefore we do not have any final conclusion about this unexpected behaviour. However, we think it may be caused by the dimensionality of the objective function being too high (20 parameters). So this problem may be solved by reducing the dimensionality of the function, or by breaking the function parameters into two parts: one part could be solved by a linear method, and the rest of the parameters by a nonlinear method [6].

5 STEREO FEATURE

The basic structure of this agent is similar to the Feature agent in [1, 2]. We use the same feature matching technique (the Early Jump-Out Method) for matching features between two frame times.
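The cited papers give the details of the Early Jump-Out Method; the essential idea is early termination of a correlation comparison. A minimal sketch of that idea, using a sum-of-absolute-differences score (our choice of score, not necessarily the one used in the system):

```python
import numpy as np

def sad_early_jump_out(template, candidate, best_so_far):
    """Sum-of-absolute-differences match score with early termination.

    Accumulates the SAD between two equal-sized patches and "jumps out"
    as soon as the partial sum exceeds the best score seen so far, since
    this candidate can then no longer be the best match.
    Returns the SAD, or None if the comparison was abandoned.
    """
    total = 0.0
    for a, b in zip(template.ravel(), candidate.ravel()):
        total += abs(float(a) - float(b))
        if total > best_so_far:
            return None  # jump out early
    return total
```

In a search loop over candidate positions, each completed comparison lowers `best_so_far`, so later candidates are abandoned earlier and the average cost per candidate drops well below a full patch comparison.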
At the moment, correspondence of image features between the two stereo views is done manually in an initialization process. Therefore, for now, we do not handle new feature points entering the view once the tracking process has started. After the initial stage, we have a set of corresponding 2D feature positions. Then we can use the camera parameters which come from the Calibration agent to construct a set of 3D feature point positions. When tracking features, the Stereo Feature agent uses the current estimate of the 3D feature velocity (assumed zero
initially) to predict the future 3D position of the feature. After we have this information, we can convert the 3D position into the two 2D image positions in the stereo views. Correlation values are then computed in a small region near each predicted location and thresholded to produce a set of possible match points. The mean position of the possible match points is used as the new 2D feature position. From the 2D feature positions in the two new frames, we can compute the 3D position and then update the 3D feature velocity. The updated 3D velocity can then be used to make further predictions, in an ongoing prediction/verification cycle.

5.1 Experimental Results

At this stage, we have run the 3DFBT on a number of simple examples, of which the one presented here is typical. The 3DFBT is not yet running in real time, because of some technical difficulties with networking. So currently, it completes processing on each stereo pair before requesting the next stereo pair in the sequence. The motion in this experiment was generated by moving a target object manually 1 mm to the right per stereo frame time for 20 frames. The camera set-up had been previously calibrated using the SCC method on a separate test object marked with 32 control points. Figure 3 shows a stereo image pair taken during the experiment. The corresponding feature point positions are shown in Table 3. We used about 20 feature points for this experiment, but show only 6 in Table 3.

[Figure 3: A stereo pair taken at frame number 10 of the sequence. Features detected are superimposed as small black squares in the top pair of images. The bottom pair of images have superimposed on them the feature positions predicted from frame number 9.]
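One pass of the prediction/verification cycle in Section 5 can be sketched as follows. For brevity this sketch uses homogeneous 3x4 projection matrices in place of the explicit parameter form of Section 4; the function names and the linear triangulation step are our own simplification, not the system's exact method:

```python
import numpy as np

def predict(position, velocity, dt=1.0):
    """Constant-velocity prediction of the next 3D feature position
    (the velocity is assumed zero at start-up)."""
    return position + velocity * dt

def triangulate(xy_left, xy_right, P_left, P_right):
    """Linear least-squares triangulation of a 3D point from its two
    image positions, given 3x4 projection matrices for the two views."""
    A = np.array([
        xy_left[0]  * P_left[2]  - P_left[0],
        xy_left[1]  * P_left[2]  - P_left[1],
        xy_right[0] * P_right[2] - P_right[0],
        xy_right[1] * P_right[2] - P_right[1],
    ])
    X = np.linalg.svd(A)[2][-1]  # null vector of A (homogeneous point)
    return X[:3] / X[3]          # dehomogenize

def update_velocity(old_position, new_position, dt=1.0):
    """Re-estimate the 3D vector velocity from successive 3D positions."""
    return (new_position - old_position) / dt
```

Each stereo frame time, the agent would call `predict`, project the prediction into both views, verify it by correlation search, then `triangulate` the verified 2D matches and `update_velocity` for the next cycle.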
    Feature Number   Predicted Position       Actual Position
    1                (2.13, -3.57, 21.95)     (2.13, -3.56, 21.95)
    2                (-3.64, 3.55, 21.94)     (-3.63, 3.54, 21.93)
    3                (-2.16, 0.73, 18.92)     (-2.16, 0.73, 18.91)
    4                (0.62, -0.36, 15.54)     (0.62, -0.35, 15.54)
    5                (-3.75, 3.29, 12.56)     (-3.74, 3.29, 12.55)
    6                (2.27, -3.49, 12.50)     (2.26, -3.48, 12.49)

Table 3: Comparison of the 3D predicted positions and the actual positions found by the system for tracked features in frame number 10 of the example sequence.

During the experiment, the system may give a wrong prediction for some feature points, or there may be errors in the image input. This may cause the system to fail to detect a feature point in that frame. If the system fails to find the feature in one image (left or right) but not both, it tries to use the found one as a reference to determine the missing one's position and velocity. If the system cannot find the feature point in either image, we assume the prediction is correct, use it as the feature's new position, and update its velocity. If we cannot find a feature in either image for 5 consecutive stereo frames, the system declares that feature lost. Using this strategy, we can prevent the loss of features due to input errors or occlusion.

6 REFERENCES

[1] J. Cooper and L. Kitchen. Multi-agent motion segmentation for real-time task-directed vision. In Australian Joint Conference on A.I., Perth, Australia, November.
[2] J. Cooper and L. Kitchen. A region based object tracker. In Third National Conference on Robotics, Melbourne, Australia, June.
[3] W. Faig. Calibration of close-range photogrammetry systems: Mathematical formulation. Photogrammetric Eng. Remote Sensing, 41:1479-1486.
[4] D. B. Gennery. Stereo-camera calibration. In Proc. Image Understanding Workshop, pages 101-108.
[5] I. Sobel. On calibrating computer controlled cameras for perceiving 3-D scenes. Artificial Intelligence, 5:185-188.
[6] Roger Y. Tsai.
A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4):323-344, August 1987.
More information3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1. A Real Time System for Detecting and Tracking People
3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1 W 4 : Who? When? Where? What? A Real Time System for Detecting and Tracking People Ismail Haritaoglu, David
More informationVision-based Mobile Robot Localization and Mapping using Scale-Invariant Features
Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features Stephen Se, David Lowe, Jim Little Department of Computer Science University of British Columbia Presented by Adam Bickett
More informationTransactions on Information and Communications Technologies vol 19, 1997 WIT Press, ISSN
Hopeld Network for Stereo Correspondence Using Block-Matching Techniques Dimitrios Tzovaras and Michael G. Strintzis Information Processing Laboratory, Electrical and Computer Engineering Department, Aristotle
More informationSelf-calibration of a pair of stereo cameras in general position
Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible
More informationLocal qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet:
Local qualitative shape from stereo without detailed correspondence Extended Abstract Shimon Edelman Center for Biological Information Processing MIT E25-201, Cambridge MA 02139 Internet: edelman@ai.mit.edu
More informationCourse 23: Multiple-View Geometry For Image-Based Modeling
Course 23: Multiple-View Geometry For Image-Based Modeling Jana Kosecka (CS, GMU) Yi Ma (ECE, UIUC) Stefano Soatto (CS, UCLA) Rene Vidal (Berkeley, John Hopkins) PRIMARY REFERENCE 1 Multiple-View Geometry
More informationCV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more
CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured
More informationCS201: Computer Vision Introduction to Tracking
CS201: Computer Vision Introduction to Tracking John Magee 18 November 2014 Slides courtesy of: Diane H. Theriault Question of the Day How can we represent and use motion in images? 1 What is Motion? Change
More informationAn Overview of Matchmoving using Structure from Motion Methods
An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu
More informationMixture Models and EM
Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering
More informationVisual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing
Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing Minoru Asada, Takamaro Tanaka, and Koh Hosoda Adaptive Machine Systems Graduate School of Engineering Osaka University, Suita,
More informationwith respect to some 3D object that the CAD model describes, for the case in which some (inexact) estimate of the camera pose is available. The method
Error propagation for 2D{to{3D matching with application to underwater navigation W.J. Christmas, J. Kittler and M. Petrou Vision, Speech and Signal Processing Group Department of Electronic and Electrical
More informationUsing Optical Flow for Stabilizing Image Sequences. Peter O Donovan
Using Optical Flow for Stabilizing Image Sequences Peter O Donovan 502425 Cmpt 400 Supervisor: Dr. Mark Eramian April 6,2005 1 Introduction In the summer of 1999, the small independent film The Blair Witch
More informationProc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp , Kobe, Japan, September 1992
Proc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp.957-962, Kobe, Japan, September 1992 Tracking a Moving Object by an Active Vision System: PANTHER-VZ Jun Miura, Hideharu Kawarabayashi,
More informationFast Natural Feature Tracking for Mobile Augmented Reality Applications
Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationSimuntaneous Localisation and Mapping with a Single Camera. Abhishek Aneja and Zhichao Chen
Simuntaneous Localisation and Mapping with a Single Camera Abhishek Aneja and Zhichao Chen 3 December, Simuntaneous Localisation and Mapping with asinglecamera 1 Abstract Image reconstruction is common
More informationof human activities. Our research is motivated by considerations of a ground-based mobile surveillance system that monitors an extended area for
To Appear in ACCV-98, Mumbai-India, Material Subject to ACCV Copy-Rights Visual Surveillance of Human Activity Larry Davis 1 Sandor Fejes 1 David Harwood 1 Yaser Yacoob 1 Ismail Hariatoglu 1 Michael J.
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationInternational Journal of Advance Engineering and Research Development
Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Comparative
More informationBinocular Stereo Vision. System 6 Introduction Is there a Wedge in this 3D scene?
System 6 Introduction Is there a Wedge in this 3D scene? Binocular Stereo Vision Data a stereo pair of images! Given two 2D images of an object, how can we reconstruct 3D awareness of it? AV: 3D recognition
More informationQuaternion-Based Tracking of Multiple Objects in Synchronized Videos
Quaternion-Based Tracking of Multiple Objects in Synchronized Videos Quming Zhou 1, Jihun Park 2, and J.K. Aggarwal 1 1 Department of Electrical and Computer Engineering The University of Texas at Austin
More informationStability Study of Camera Calibration Methods. J. Isern González, J. Cabrera Gámez, C. Guerra Artal, A.M. Naranjo Cabrera
Stability Study of Camera Calibration Methods J. Isern González, J. Cabrera Gámez, C. Guerra Artal, A.M. Naranjo Cabrera Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería
More informationFAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES
FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing
More information1st frame Figure 1: Ball Trajectory, shadow trajectory and a reference player 48th frame the points S and E is a straight line and the plane formed by
Physics-based 3D Position Analysis of a Soccer Ball from Monocular Image Sequences Taeone Kim, Yongduek Seo, Ki-Sang Hong Dept. of EE, POSTECH San 31 Hyoja Dong, Pohang, 790-784, Republic of Korea Abstract
More informationOutline. ETN-FPI Training School on Plenoptic Sensing
Outline Introduction Part I: Basics of Mathematical Optimization Linear Least Squares Nonlinear Optimization Part II: Basics of Computer Vision Camera Model Multi-Camera Model Multi-Camera Calibration
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationCenter for Automation Research, University of Maryland. The independence measure is the residual normal
Independent Motion: The Importance of History Robert Pless, Tomas Brodsky, and Yiannis Aloimonos Center for Automation Research, University of Maryland College Park, MD, 74-375 Abstract We consider a problem
More informationUsing temporal seeding to constrain the disparity search range in stereo matching
Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department
More informationECE 470: Homework 5. Due Tuesday, October 27 in Seth Hutchinson. Luke A. Wendt
ECE 47: Homework 5 Due Tuesday, October 7 in class @:3pm Seth Hutchinson Luke A Wendt ECE 47 : Homework 5 Consider a camera with focal length λ = Suppose the optical axis of the camera is aligned with
More informationHorus: Object Orientation and Id without Additional Markers
Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky
More informationEECS 4330/7330 Introduction to Mechatronics and Robotic Vision, Fall Lab 1. Camera Calibration
1 Lab 1 Camera Calibration Objective In this experiment, students will use stereo cameras, an image acquisition program and camera calibration algorithms to achieve the following goals: 1. Develop a procedure
More information2 Algorithm Description Active contours are initialized using the output of the SUSAN edge detector [10]. Edge runs that contain a reasonable number (
Motion-Based Object Segmentation Using Active Contours Ben Galvin, Kevin Novins, and Brendan McCane Computer Science Department, University of Otago, Dunedin, New Zealand. Abstract: The segmentation of
More informationIntegration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement
Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement Ta Yuan and Murali Subbarao tyuan@sbee.sunysb.edu and murali@sbee.sunysb.edu Department of
More informationVision-Motion Planning with Uncertainty
Vision-Motion Planning with Uncertainty Jun MIURA Yoshiaki SHIRAI Dept. of Mech. Eng. for Computer-Controlled Machinery, Osaka University, Suita, Osaka 565, Japan jun@ccm.osaka-u.ac.jp Abstract This paper
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationA COMPREHENSIVE SIMULATION SOFTWARE FOR TEACHING CAMERA CALIBRATION
XIX IMEKO World Congress Fundamental and Applied Metrology September 6 11, 2009, Lisbon, Portugal A COMPREHENSIVE SIMULATION SOFTWARE FOR TEACHING CAMERA CALIBRATION David Samper 1, Jorge Santolaria 1,
More informationAn Event-based Optical Flow Algorithm for Dynamic Vision Sensors
An Event-based Optical Flow Algorithm for Dynamic Vision Sensors Iffatur Ridwan and Howard Cheng Department of Mathematics and Computer Science University of Lethbridge, Canada iffatur.ridwan@uleth.ca,howard.cheng@uleth.ca
More informationLast update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1
Last update: May 4, 200 Vision CMSC 42: Chapter 24 CMSC 42: Chapter 24 Outline Perception generally Image formation Early vision 2D D Object recognition CMSC 42: Chapter 24 2 Perception generally Stimulus
More informationMULTIPLE-SENSOR INTEGRATION FOR EFFICIENT REVERSE ENGINEERING OF GEOMETRY
Proceedings of the 11 th International Conference on Manufacturing Research (ICMR2013) MULTIPLE-SENSOR INTEGRATION FOR EFFICIENT REVERSE ENGINEERING OF GEOMETRY Feng Li, Andrew Longstaff, Simon Fletcher,
More informationCamera calibration. Robotic vision. Ville Kyrki
Camera calibration Robotic vision 19.1.2017 Where are we? Images, imaging Image enhancement Feature extraction and matching Image-based tracking Camera models and calibration Pose estimation Motion analysis
More informationOn-line and Off-line 3D Reconstruction for Crisis Management Applications
On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be
More informationGround Plane Motion Parameter Estimation For Non Circular Paths
Ground Plane Motion Parameter Estimation For Non Circular Paths G.J.Ellwood Y.Zheng S.A.Billings Department of Automatic Control and Systems Engineering University of Sheffield, Sheffield, UK J.E.W.Mayhew
More informationTrajectory Fusion for Multiple Camera Tracking
Trajectory Fusion for Multiple Camera Tracking Ariel Amato 1, Murad Al Haj 1, Mikhail Mozerov 1,andJordiGonzàlez 2 1 Computer Vision Center and Department d Informàtica. Universitat Autònoma de Barcelona,
More informationCAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM
SCIENTIFIC RESEARCH AND EDUCATION IN THE AIR FORCE-AFASES 2016 CAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM Titus CIOCOIU, Florin MOLDOVEANU, Caius SULIMAN Transilvania University, Braşov, Romania (ciocoiutitus@yahoo.com,
More informationNonlinear State Estimation for Robotics and Computer Vision Applications: An Overview
Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Arun Das 05/09/2017 Arun Das Waterloo Autonomous Vehicles Lab Introduction What s in a name? Arun Das Waterloo Autonomous
More informationMultiple Motion Scene Reconstruction from Uncalibrated Views
Multiple Motion Scene Reconstruction from Uncalibrated Views Mei Han C & C Research Laboratories NEC USA, Inc. meihan@ccrl.sj.nec.com Takeo Kanade Robotics Institute Carnegie Mellon University tk@cs.cmu.edu
More informationA High Speed Face Measurement System
A High Speed Face Measurement System Kazuhide HASEGAWA, Kazuyuki HATTORI and Yukio SATO Department of Electrical and Computer Engineering, Nagoya Institute of Technology Gokiso, Showa, Nagoya, Japan, 466-8555
More information