
Hand Pose Recovery with a Single Video Camera

Kihwan Kwon, Hong Zhang, and Fadi Dornaika
Dept. of Computing Science, University of Alberta, Edmonton, Alberta, Canada
{kihwan, zhang, dornaika}@cs.ualberta.ca

Keywords: hand pose, object pose, real-time tracking, glove, hand animation

Abstract

In this paper, we present a system that determines human hand pose from a single monochrome image in real time. We formulate the hand pose problem as determining the position and orientation of the palm as well as the joint angles of the fingers. The 3D pose of the palm and each joint angle are computed simultaneously using feature points on the human hand. We exploit an existing fast object pose algorithm developed by DeMenthon and Davis [1] together with a heuristic minimization of a 1D function. We use a special glove to remove noise caused by the deformation of the hand and to help establish correspondence between 3D object points and their images. We demonstrate our system using a mock-up cardboard hand with one finger. Experimental results with a real human hand wearing the special glove are also presented.

1 Introduction

Hand pose recovery has applications in teleoperation of robot hands, human-computer interfaces, hand animation in computer graphics, and virtual reality. One popular approach for these applications is to use gloves capable of sensing their joint positions. A drawback of such gloves is their invasiveness, since the user has to wear a glove wired to a machine. Computer vision based approaches, in contrast, have been attempted but with limited success. One of the problems with computer vision is that it is slow due to complex algorithms and costly image processing. For a computer vision based hand tracking system to be successful and widely available, it should satisfy four essential requirements: it must be inexpensive, fast, robust, and accurate. To be inexpensive, the system should be simple; rather than using several cameras, one is preferred.
The system should be fast, since for most hand tracking applications (e.g., gesture recognition) real-time performance is desirable. It should be robust in dealing with occlusion and image noise. And of course the computed hand pose should be accurate. Dorner [3] presented a hand tracking system to serve as an interface for American Sign Language signers. The system used non-linear optimization techniques to recover 26 hand pose parameters by minimizing the difference between the projected model and features in the image; it did not work in real time. Rehg [4] implemented a system called DigitEyes, which tracked an unadorned hand using line and point features. He used residual functions to fit the projected model to image features, and required special hardware for real-time performance. Stereo was employed to track full hand articulation. Segen and Kumar [5] implemented a system to drive an articulated hand model with stereo images. It worked in real time, although their system controlled only the wrist, index finger, and thumb, with some joints disabled.

Our system relates the hand pose problem to a linear Perspective-n-Point pose algorithm. Joint angles and the palm pose are determined by a Perspective-n-Point pose algorithm together with a heuristic minimization of a 1D function. Our system has several advantages. First, it uses an object pose algorithm based on linear techniques, allowing the system to be fast. Second, we find the joint angles in order by applying the same simple technique repeatedly, rather than recovering all defined hand pose parameters with one non-linear minimization. The same technique is applied to a different subset of the feature points depending on which joint angle we compute: the subset is composed of five points for the computation of a bottom joint angle and six for the computation of a middle joint angle.
Third, we use a single camera, and the estimated joint angles and 3D palm pose are accurate as long as the feature points can be seen. The remainder of this paper is organized as follows.

In section 2, the assumptions of our system are given. In section 3, we elaborate on our hand pose algorithm. In section 4, the object pose algorithm we used is briefly discussed. In section 5, we describe some of our experiments, and we conclude with future research directions in section 6.

(Figure: a finger with top, middle, and bottom joints; the top joint angle is coupled to the middle joint angle by θ3 = (2/3) θ2.)

2 Assumptions

Several assumptions are made for the simplicity of our algorithm and to help real-time execution. We assume the palm is rigid, without considering its deformation. Note that palm deformation has not been studied in the hand tracking literature, although it has been treated as noise. In our case, we force the palm to be rigid by using a wood board, so that our feature points cannot deform or move independently due to palm deformation. Each finger (the thumb is not considered in this paper) has three links. The bottom joint, which connects to the palm, has one degree of freedom (DOF) instead of two: abduction and adduction are not considered in our system. The top joint angle of each finger is dependent on the middle joint angle, and their coupling is assumed linear. Rijpkema and Girard [2] suggest that the relationship between the middle joint angle and the top joint angle is almost linear. This helps increase the speed of the system by obtaining the top joint angles directly from the middle joint angles: we take the top joint angle to be two thirds of the middle joint angle. Note that each finger then has two DOFs, since the bottom and middle joints have one DOF each and the top joint is dependent on the middle one. We assume that all feature points are visible at all times and that they are correctly labeled.

3 Hand pose using only 12 markers

We decided to use simple points for the hand model (Figure 1) for at least two reasons. First, experiments on human motion perception [6] showed that humans can recognize biological motion in 3D space by observing just light markers attached to the subject.
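The two-thirds coupling between the middle and top joint angles can be sketched in a few lines (a hypothetical helper; the paper only states the ratio):

```python
def top_joint_angle(middle_angle_deg: float) -> float:
    """Approximate the top joint angle as two thirds of the middle joint
    angle, following the near-linear coupling noted by Rijpkema and
    Girard [2]."""
    return (2.0 / 3.0) * middle_angle_deg

# A middle joint bent 30 degrees implies a top joint of about 20 degrees.
angle = top_joint_angle(30.0)
```

Because of this coupling, no marker is needed on the top finger segment, which is what reduces the marker count to 12.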
This implies that simple points carry enough information to convey the motion of an articulated object. Second, the 3D pose of a rigid object can be determined using at least three model points [8]. Since the hand is composed of rigid segments, it should be possible to determine hand pose using points.

Figure 1: Our hand model: 12 points are used to describe all defined hand parameters.

Figure 2: Problem definition of hand pose (one finger is shown for simplicity): camera frame C; palm frame P with pose (R, t); finger frame F; bottom joint angle θ1; middle joint angle θ2 about the finger axis.

We attach four coplanar points to the palm; four is the minimum number of points required by the object pose algorithm we use [1]. We use only one feature point for each segment of a finger. Moreover, since we assume the top joints are linearly related to the middle joints, computation of the top joint angles does not require any marker. So the 14 DOFs of the human hand (6 for the palm and 8 for four fingers) are determined by only 12 points. Figure 1 shows the configuration of feature points on a right hand.

3.1 Hand pose

Our problem can be stated as follows. Given one captured image featuring all markers, and knowledge of the palm model and the fingers' kinematics, we estimate the palm pose as well as the joint angles of each finger (see Figure 2). We could, of course, simultaneously recover the palm pose and the joint angles using the perspective equations associated with each image point. However, this leads to non-linear systems, which is not our goal. Thus, we have developed the following heuristic, which combines a real-time object pose algorithm with the minimization of a 1D function.

First consider the problem of finding the 3D object pose of the model shown in Figure 3 using five points labeled 1 through 5. To find the object pose, we need to know the 3D coordinates of the five points with respect to the palm frame, and the actual image coordinates corresponding to those five points. The problem is that we do not know the coordinates of feature 5 with respect to the palm frame, since we do not know the joint angle of the finger (note that the finger moves in a plane perpendicular to the palm plane, according to our assumption). To resolve this, the angle is set to an initial value that will be updated using a specific criterion. Using this initial estimate (although it may be far from the actual angle), we can compute the coordinates of feature 5. Now we can determine the 3D pose of the hand model using the five points and their corresponding image points. Once the 3D pose transformation of the hand model is found, we can back-project the five points onto the image plane to verify the transformation: it is easy to see whether the transformation is accurate by comparing the projected image points with the actual image points. We repeat the process with another angle value until an accurate transformation is found. Once an accurate transformation is found, the guessed angle is the actual angle of the finger, and the transformation itself represents the palm pose. We use the following notation: (u_i, v_i) are the image coordinates of the object points (x_i, y_i, z_i), i in {1, ..., 5}; T_FP is the transformation from the finger frame to the palm frame; M is the camera projection matrix.
L is the distance of feature 5 from the origin of the finger frame F. Note that M and T_FP are known. Our algorithm can be summarized as follows:

1. Set the joint angle θ to a lower bound.
2. Compute the object coordinates of feature 5 with respect to the palm frame:
   (x5, y5, z5, 1)^T = T_FP (L cos θ, L sin θ, 0, 1)^T
3. Using an object pose algorithm, compute T_PC, the transformation from the palm frame to the camera frame.
4. Back-project the object points using the transformation found and the camera projection matrix:
   (u'_i, v'_i) = M T_PC (x_i, y_i, z_i, 1)^T
5. Compute the residual error
   e = (1/N) Σ_{i=1}^{N} E_i, with N = 5,
   where E_i is the Euclidean distance between the projected image point (u'_i, v'_i) and the actual image point (u_i, v_i).
6. Save the (angle, residual error) pair.
7. If the angle is greater than an upper bound, stop; otherwise increment the angle by the step size and go to step 2.
8. Choose the angle that corresponds to the least residual error. That is the estimated joint angle.

However, there are several concerns with this algorithm. First, it takes time, since the object pose algorithm must be run many times for the computation of each joint angle, making the system slow. This can be overcome by using a fast object pose algorithm, such as the one developed by DeMenthon and Davis [1]; section 4 describes how we used it. Second, even if the projected image points and actual image points coincide, the joint angle is not always correct: Figure 4 shows a degenerate case where two solutions are possible. This kind of problem can be overcome by using a small search range. In our system, except for the initial run, the search range can be reduced significantly: if we know the initial angle, then, assuming the finger joint has not rotated much between frames at a fast frame rate, the next angle is close to the previous one and a situation of multiple solutions can be avoided.
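The search loop can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the pinhole intrinsics (f, cx, cy) are invented, and `pose_solver` is a stand-in callback for the DeMenthon-Davis algorithm, assumed to return a rotation R and translation t mapping palm coordinates to camera coordinates.

```python
import numpy as np

def project(points_cam, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3xN camera-frame points (assumed intrinsics)."""
    u = f * points_cam[0] / points_cam[2] + cx
    v = f * points_cam[1] / points_cam[2] + cy
    return np.vstack([u, v])

def estimate_joint_angle(palm_points, T_FP, L, image_points, pose_solver,
                         lo=0.0, hi=90.0, step=1.0):
    """Scan the joint angle over [lo, hi] degrees (steps 1, 7, 8).
    For each guess, place feature 5 (step 2), solve the palm pose with
    pose_solver (step 3), back-project all five model points (step 4),
    and keep the angle with the smallest mean reprojection error (step 5).
    palm_points is 3x4 (palm-frame markers), image_points is 2x5."""
    best_angle, best_err = None, np.inf
    for angle in np.arange(lo, hi + step, step):
        a = np.radians(angle)
        # Step 2: feature 5 in the finger frame, mapped to the palm frame.
        p5_palm = T_FP @ np.array([L * np.cos(a), L * np.sin(a), 0.0, 1.0])
        model = np.hstack([palm_points, p5_palm[:3, None]])      # 3x5
        R, t = pose_solver(model, image_points)                  # step 3
        err = np.mean(np.linalg.norm(                            # steps 4-5
            project(R @ model + t[:, None]) - image_points, axis=0))
        if err < best_err:                                       # step 6
            best_angle, best_err = angle, err
    return best_angle, best_err                                  # step 8
```

In tracking mode the paper narrows `lo` and `hi` to ±5 degrees around the previous frame's angle with a 1-degree step, which also avoids the degenerate multiple-solution case.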
Also, finger physiology can be considered to exclude unreasonable angles. By repeating the above algorithm, all other joint angles can be computed in the same way, although the computation of a middle joint angle requires six feature points. Note that the palm pose can be derived either from the above algorithm or from the four coplanar points attached to the palm.

3.2 The glove

To support our assumptions (the finger moves in a plane and the palm is rigid) on a real hand, we developed a special glove (Figure 5).

Figure 3: A simple example describing our algorithm: points 1-4 lie on the palm (frame P), and feature 5 lies at distance L from the origin of the finger frame F, at joint angle θ. All five points must be expressed in the palm frame.

Figure 4: A degenerate case, where the plane of the finger passes through the center of projection. In this case the trajectory of the finger tip (a circle) projects onto a straight line, so there are two possible angles (A and B) associated with each line of sight.

For the fingers, thin wood board segments are joined together using small hinges, which force each finger to move in a plane and disable adduction and abduction. These joined wood board segments are attached to a wood board palm, making a very rigid palm and removing the noise caused by the non-rigidity of the human palm. This hand-like structure is attached to a black glove. Feature points are printed and glued on the palm and on the finger segments. Note that the feature points are positioned between finger joints, not on the joints, to prevent deformation of the feature points when the fingers bend. We tried to reduce the invasiveness of the glove by using a wool glove and thin wood board segments. The advantage of using such a supporting structure is twofold. First, it helps acquire more accurate feature positions, leading to more accurate joint angles. Second, the glove fits almost all adult subjects, so no per-subject calibration of the system is required, which would be necessary with a bare hand since each person's hand has a different size.

Figure 5: The special glove worn by the subject.

4 Object pose

Determining object pose from images of model points is a well-known problem in photogrammetry and computer vision. This so-called Perspective-n-Point problem [8] can be solved with closed-form or numerical solutions. Closed-form solutions have been applied only to a limited number of points ([8], [9]).
Numerical solutions ([10], [11]) work for an arbitrary number of points; however, they require good initial guesses to converge and are computationally expensive. Recently, a linear n-point algorithm was suggested by [12], making a closed-form solution available with four or more points.

We use the iterative method developed by DeMenthon and Davis [1]. Their algorithm exploits the scaled orthographic projection (SOP): object points are first projected onto a plane parallel to the image plane and passing through the origin of the object frame, and then projected with true perspective onto the image plane (Figure 6). The algorithm has several advantages compared to non-linear methods. First, it does not require initial guesses. Second, it is fast, so it suits real-time applications. Like non-linear methods, it can be applied to an arbitrary number of points. The algorithm deals with two distinct cases: the coplanar case (all object points lie on a plane) and the non-coplanar case. In our system, however, the configuration of points can be coplanar or non-coplanar depending on the joint angles of the moving fingers: in Figure 7, if the finger joint angle is zero, all five points lie on a plane. Experiments showed that when the joint angle is small (less than approximately 20 degrees), neither the coplanar nor the non-coplanar algorithm works well, since the configuration is neither coplanar nor quite non-coplanar.

Figure 6: Perspective projection (p_i) and scaled orthographic projection (w_i) of an object point P_i. The reference point P_0 is projected onto p_0 in both projections.

Figure 7: Four virtual feature points are added to the palm model in order to make the 3D model of the whole hand non-coplanar regardless of the joint angle values.

To make the configuration always non-coplanar, so that we do not have to worry about the case when a joint angle is zero or small, we use four imaginary points (see Figure 7). Since we can compute the transformation from the camera to the palm from the four actual coplanar points on the palm using the coplanar algorithm, we can back-project the imaginary object points using this transformation to obtain the corresponding imaginary image coordinates required by the non-coplanar algorithm. With this one extra step, we can always use the non-coplanar algorithm with nine points. We also impose the orthogonality constraint, which is not enforced by this object pose algorithm, using the method described in [7]; this yields a more accurate object pose.

5 Experiments

To recognize markers easily, we use a black background and a black glove with white markers. We use circular dots of different sizes for the palm pose. Once the palm pose is known, we can use the associated palm plane-to-image plane mapping to project each finger marker onto the image. The associated angle value can be set to a fixed value (e.g., the average value) or to the value computed from the previous image. This provides a predicted 2D location for each finger marker. Distinguishing between fingers can then be carried out by comparing the predicted 2D locations with the actual 2D locations.
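The extra imaginary-points step can be sketched as follows: once the coplanar algorithm has returned the palm pose (R, t) from the four real markers, the four imaginary off-plane model points are simply projected into the image. The pinhole intrinsics here are assumptions, and the pose itself is taken as given.

```python
import numpy as np

def synthesize_imaginary_points(R, t, imaginary_pts, f=500.0, cx=320.0, cy=240.0):
    """Project 3xN imaginary palm-model points into the image using the
    palm pose (R, t) recovered from the four real coplanar markers, so
    that the non-coplanar algorithm can always be run on nine points."""
    cam = R @ imaginary_pts + t[:, None]     # palm frame -> camera frame
    u = f * cam[0] / cam[2] + cx
    v = f * cam[1] / cam[2] + cy
    return np.vstack([u, v])                 # 2xN synthetic image coords
```

The synthetic coordinates are exact by construction, so they add no measurement noise of their own; they only make the nine-point configuration safely non-coplanar.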
Note that the use of circular dots as markers considerably reduces the image processing cost and allows very accurate localization of image points. A gray-scale NEC T1-23A CCD camera was set up about one meter away from the hand; the hand size was about 15 cm. Since we are using a weak-perspective object pose algorithm, the hand should be at some distance from the camera for better performance: for the weak-perspective pose algorithm to converge, an object should lie in the neighborhood of the optical axis and beyond some distance from the camera (for details, see [7] and [1]). For easy verification of our algorithm, we developed an animated hand with OpenGL, so that the animated hand can follow the motion of the real hand wearing the special glove. The machine we use is a Pentium 500 MHz PC running Linux.

We did two types of experiments. First, we made a hand out of thick cardboard, with one finger having only one joint (Figure 8). This allows us to verify our algorithm, since the cardboard finger angle can be measured accurately. Second, we tested with a real hand wearing the developed glove.

5.1 Experiment with a cardboard hand

To test the accuracy of the proposed algorithm, we used the cardboard hand described above. Table 1 shows the measured and computed angles for several configurations; these configurations are shown in Figure 9. Each configuration has a different joint angle, except for B and C. As can be seen in Table 1, errors are within 5 degrees for arbitrary configurations in 3D space.
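The finger-distinguishing step described at the start of this section can be sketched as a nearest-neighbour assignment between predicted and detected marker locations. This is a simplification: the paper states only that predicted and actual 2D locations are compared, not the exact matching rule.

```python
import numpy as np

def label_markers(predicted_2d, detected_2d):
    """Greedy sketch of the finger-labeling step: assign each detected
    marker (2xM) to the finger whose predicted 2D location (2xN, obtained
    from the palm pose and a previous or average joint angle) is nearest."""
    labels = []
    for d in detected_2d.T:
        dists = np.linalg.norm(predicted_2d.T - d, axis=1)
        labels.append(int(np.argmin(dists)))
    return labels
```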

Figure 8: The model hand made of cardboard, with only one finger segment (configurations A, B, C).

Table 1: Test results associated with six typical configurations (A-F): measured versus computed joint angles.

5.2 Experiments with a real hand wearing the glove

With the subject wearing the glove described previously, the animated hand model followed the real hand successfully in real time; the hand pose algorithm runs at 8 Hz. It is difficult to obtain ground truth for the joint angles of moving fingers. Although ground truth for the fingers' motion is not available, one can evaluate performance by comparing the real motion with that of the synthetic model. We also plan to measure accuracy using a robot hand fitted with feature points. Figure 10 shows images of the real hand moving in 3D space wearing the glove, the animated hand model following the motion of the real hand, and the tracked real hand. We found that the joint-angle search range can be limited to ±5 degrees around the previous angle, with an increment step of 1 degree.

6 Summary and future research

We have described how to track a human hand using a single camera. The tracked features consist of 12 simple markers. The algorithm is simple and accurate; moreover, it works in real time. We used a special glove made of wood board segments; one of its advantages is that it fits almost all subjects, so no system calibration is necessary for different subjects.

Figure 9: Typical configurations used for testing (configurations D, E, F).

Our future research will focus on the occlusion problem. We assumed that all feature circles can be seen at any given time; in reality this is not the case, and feature points are easily occluded. We are considering feature tracking algorithms to alleviate the occlusion problem. Different features, such as the rings used by Dorner [3], are also being considered, since a ring is visible from almost any direction.

References

[1] D.F. DeMenthon and L.S. Davis. Model-based object pose in 25 lines of code. International Journal of Computer Vision, 15(1/2).
[2] H. Rijpkema and M. Girard. Computer animation of knowledge-based human grasping. Computer Graphics, 25(4), July 1991.
[3] B. Dorner. Hand shape identification and tracking for sign language interpretation. Looking at People Workshop, Chambery, France.
[4] J.M. Rehg and T. Kanade. Visual tracking of high DOF articulated structures: an application to human hand tracking. In Proc. 3rd ECCV, Stockholm, Sweden, volume II, pages 35-45.
[5] J. Segen and S. Kumar. Driving a 3D articulated hand model in real-time. IMDSP'98, Alpbach, Austria, July 1998.

Figure 10: Left: real hand in action (images were taken using another camera from different directions). Middle: the animated hand model following the 3D motion of the real hand (palm and fingers). Right: tracked real hand.

[6] G. Johansson. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2).
[7] R. Horaud, F. Dornaika, B. Lamiroy, and S. Christy. Object pose: the link between weak perspective, paraperspective, and full perspective. International Journal of Computer Vision, 22(2), March 1997.
[8] M.A. Fischler and R.C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm. ACM, vol. 24.
[9] R. Horaud, B. Conio, O. Leboulleux, and B. Lacolle. An analytic solution for the perspective 4-point problem. Computer Vision, Graphics, and Image Processing, 47(1):33-44, July 1989.
[10] J. S.-C. Yuan. A general photogrammetric method for determining object position and orientation. IEEE Transactions on Robotics and Automation, 5(2), April.
[11] R.Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, vol. 3.
[12] L. Quan and Z. Lan. Linear n-point camera pose determination. IEEE Transactions on PAMI, 21(7), July.


More information

Camera Pose Measurement from 2D-3D Correspondences of Three Z Shaped Lines

Camera Pose Measurement from 2D-3D Correspondences of Three Z Shaped Lines International Journal of Intelligent Engineering & Systems http://www.inass.org/ Camera Pose Measurement from 2D-3D Correspondences of Three Z Shaped Lines Chang Liu 1,2,3,4, Feng Zhu 1,4, Jinjun Ou 1,4,

More information

Augmenting Reality, Naturally:

Augmenting Reality, Naturally: Augmenting Reality, Naturally: Scene Modelling, Recognition and Tracking with Invariant Image Features by Iryna Gordon in collaboration with David G. Lowe Laboratory for Computational Intelligence Department

More information

Factorization Method Using Interpolated Feature Tracking via Projective Geometry

Factorization Method Using Interpolated Feature Tracking via Projective Geometry Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,

More information

Virtual Interaction System Based on Optical Capture

Virtual Interaction System Based on Optical Capture Sensors & Transducers 203 by IFSA http://www.sensorsportal.com Virtual Interaction System Based on Optical Capture Peng CHEN, 2 Xiaoyang ZHOU, 3 Jianguang LI, Peijun WANG School of Mechanical Engineering,

More information

Data-driven Approaches to Simulation (Motion Capture)

Data-driven Approaches to Simulation (Motion Capture) 1 Data-driven Approaches to Simulation (Motion Capture) Ting-Chun Sun tingchun.sun@usc.edu Preface The lecture slides [1] are made by Jessica Hodgins [2], who is a professor in Computer Science Department

More information

Planar pattern for automatic camera calibration

Planar pattern for automatic camera calibration Planar pattern for automatic camera calibration Beiwei Zhang Y. F. Li City University of Hong Kong Department of Manufacturing Engineering and Engineering Management Kowloon, Hong Kong Fu-Chao Wu Institute

More information

Homework #1. Displays, Image Processing, Affine Transformations, Hierarchical Modeling

Homework #1. Displays, Image Processing, Affine Transformations, Hierarchical Modeling Computer Graphics Instructor: Brian Curless CSE 457 Spring 215 Homework #1 Displays, Image Processing, Affine Transformations, Hierarchical Modeling Assigned: Thursday, April 9 th Due: Thursday, April

More information

Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles

Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Yihong Wu, Haijiang Zhu, Zhanyi Hu, and Fuchao Wu National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

Camera Calibration with a Simulated Three Dimensional Calibration Object

Camera Calibration with a Simulated Three Dimensional Calibration Object Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek

More information

Intelligent Robotics

Intelligent Robotics 64-424 Intelligent Robotics 64-424 Intelligent Robotics http://tams.informatik.uni-hamburg.de/ lectures/2013ws/vorlesung/ir Jianwei Zhang / Eugen Richter University of Hamburg Faculty of Mathematics, Informatics

More information

Low Cost Motion Capture

Low Cost Motion Capture Low Cost Motion Capture R. Budiman M. Bennamoun D.Q. Huynh School of Computer Science and Software Engineering The University of Western Australia Crawley WA 6009 AUSTRALIA Email: budimr01@tartarus.uwa.edu.au,

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Multiple Motion Scene Reconstruction from Uncalibrated Views

Multiple Motion Scene Reconstruction from Uncalibrated Views Multiple Motion Scene Reconstruction from Uncalibrated Views Mei Han C & C Research Laboratories NEC USA, Inc. meihan@ccrl.sj.nec.com Takeo Kanade Robotics Institute Carnegie Mellon University tk@cs.cmu.edu

More information

Visual Tracking of Human Body with Deforming Motion and Shape Average

Visual Tracking of Human Body with Deforming Motion and Shape Average Visual Tracking of Human Body with Deforming Motion and Shape Average Alessandro Bissacco UCLA Computer Science Los Angeles, CA 90095 bissacco@cs.ucla.edu UCLA CSD-TR # 020046 Abstract In this work we

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

A deformable model driven method for handling clothes

A deformable model driven method for handling clothes A deformable model driven method for handling clothes Yasuyo Kita Fuminori Saito Nobuyuki Kita Intelligent Systems Institute, National Institute of Advanced Industrial Science and Technology (AIST) AIST

More information

A Two-stage Scheme for Dynamic Hand Gesture Recognition

A Two-stage Scheme for Dynamic Hand Gesture Recognition A Two-stage Scheme for Dynamic Hand Gesture Recognition James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal (james,sc,tush)@ee.iitb.ac.in Department of Electrical Engg. Indian Institute of Technology,

More information

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion 007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,

More information

Vehicle Occupant Posture Analysis Using Voxel Data

Vehicle Occupant Posture Analysis Using Voxel Data Ninth World Congress on Intelligent Transport Systems, Chicago, Illinois, October Vehicle Occupant Posture Analysis Using Voxel Data Ivana Mikic, Mohan Trivedi Computer Vision and Robotics Research Laboratory

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

Inverse Kinematics. Given a desired position (p) & orientation (R) of the end-effector

Inverse Kinematics. Given a desired position (p) & orientation (R) of the end-effector Inverse Kinematics Given a desired position (p) & orientation (R) of the end-effector q ( q, q, q ) 1 2 n Find the joint variables which can bring the robot the desired configuration z y x 1 The Inverse

More information

POME A mobile camera system for accurate indoor pose

POME A mobile camera system for accurate indoor pose POME A mobile camera system for accurate indoor pose Paul Montgomery & Andreas Winter November 2 2016 2010. All rights reserved. 1 ICT Intelligent Construction Tools A 50-50 joint venture between Trimble

More information

CH2605-4/88/0000/0082$ IEEE DETERMINATION OF CAMERA LOCATION FROM 2D TO 3D LINE AND POINT CORRESPONDENCES

CH2605-4/88/0000/0082$ IEEE DETERMINATION OF CAMERA LOCATION FROM 2D TO 3D LINE AND POINT CORRESPONDENCES DETERMINATION OF CAMERA LOCATION FROM 2D TO 3D LINE AND POINT CORRESPONDENCES Yuncai Liu Thomas S. Huang and 0. D. Faugeras Coordinated Science Laboratory University of Illinois at Urbana-Champaign 1101

More information

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan

More information

Articulated Structure from Motion through Ellipsoid Fitting

Articulated Structure from Motion through Ellipsoid Fitting Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 179 Articulated Structure from Motion through Ellipsoid Fitting Peter Boyi Zhang, and Yeung Sam Hung Department of Electrical and Electronic

More information

Visualization 2D-to-3D Photo Rendering for 3D Displays

Visualization 2D-to-3D Photo Rendering for 3D Displays Visualization 2D-to-3D Photo Rendering for 3D Displays Sumit K Chauhan 1, Divyesh R Bajpai 2, Vatsal H Shah 3 1 Information Technology, Birla Vishvakarma mahavidhyalaya,sumitskc51@gmail.com 2 Information

More information

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

Dynamic Model Of Anthropomorphic Robotics Finger Mechanisms

Dynamic Model Of Anthropomorphic Robotics Finger Mechanisms Vol.3, Issue.2, March-April. 2013 pp-1061-1065 ISSN: 2249-6645 Dynamic Model Of Anthropomorphic Robotics Finger Mechanisms Abdul Haseeb Zaidy, 1 Mohd. Rehan, 2 Abdul Quadir, 3 Mohd. Parvez 4 1234 Mechanical

More information

CS 664 Structure and Motion. Daniel Huttenlocher

CS 664 Structure and Motion. Daniel Huttenlocher CS 664 Structure and Motion Daniel Huttenlocher Determining 3D Structure Consider set of 3D points X j seen by set of cameras with projection matrices P i Given only image coordinates x ij of each point

More information

ActivityRepresentationUsing3DShapeModels

ActivityRepresentationUsing3DShapeModels ActivityRepresentationUsing3DShapeModels AmitK.Roy-Chowdhury RamaChellappa UmutAkdemir University of California University of Maryland University of Maryland Riverside, CA 9252 College Park, MD 274 College

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

Triangulation: A new algorithm for Inverse Kinematics

Triangulation: A new algorithm for Inverse Kinematics Triangulation: A new algorithm for Inverse Kinematics R. Müller-Cajar 1, R. Mukundan 1, 1 University of Canterbury, Dept. Computer Science & Software Engineering. Email: rdc32@student.canterbury.ac.nz

More information

3D Face and Hand Tracking for American Sign Language Recognition

3D Face and Hand Tracking for American Sign Language Recognition 3D Face and Hand Tracking for American Sign Language Recognition NSF-ITR (2004-2008) D. Metaxas, A. Elgammal, V. Pavlovic (Rutgers Univ.) C. Neidle (Boston Univ.) C. Vogler (Gallaudet) The need for automated

More information

Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming

Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Jae-Hak Kim 1, Richard Hartley 1, Jan-Michael Frahm 2 and Marc Pollefeys 2 1 Research School of Information Sciences and Engineering

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

To appear in ECCV-94, Stockholm, Sweden, May 2-6, 1994.

To appear in ECCV-94, Stockholm, Sweden, May 2-6, 1994. To appear in ECCV-94, Stockholm, Sweden, May 2-6, 994. Recognizing Hand Gestures? James Davis and Mubarak Shah?? Computer Vision Laboratory, University of Central Florida, Orlando FL 3286, USA Abstract.

More information

Calibration of a Multi-Camera Rig From Non-Overlapping Views

Calibration of a Multi-Camera Rig From Non-Overlapping Views Calibration of a Multi-Camera Rig From Non-Overlapping Views Sandro Esquivel, Felix Woelk, and Reinhard Koch Christian-Albrechts-University, 48 Kiel, Germany Abstract. A simple, stable and generic approach

More information

Gesture Recognition Technique:A Review

Gesture Recognition Technique:A Review Gesture Recognition Technique:A Review Nishi Shah 1, Jignesh Patel 2 1 Student, Indus University, Ahmedabad 2 Assistant Professor,Indus University,Ahmadabad Abstract Gesture Recognition means identification

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric

More information

Occlusion Detection of Real Objects using Contour Based Stereo Matching

Occlusion Detection of Real Objects using Contour Based Stereo Matching Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,

More information

Fingertips Tracking based on Gradient Vector

Fingertips Tracking based on Gradient Vector Int. J. Advance Soft Compu. Appl, Vol. 7, No. 3, November 2015 ISSN 2074-8523 Fingertips Tracking based on Gradient Vector Ahmad Yahya Dawod 1, Md Jan Nordin 1, and Junaidi Abdullah 2 1 Pattern Recognition

More information

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Oliver Cardwell, Ramakrishnan Mukundan Department of Computer Science and Software Engineering University of Canterbury

More information

Ground Plane Motion Parameter Estimation For Non Circular Paths

Ground Plane Motion Parameter Estimation For Non Circular Paths Ground Plane Motion Parameter Estimation For Non Circular Paths G.J.Ellwood Y.Zheng S.A.Billings Department of Automatic Control and Systems Engineering University of Sheffield, Sheffield, UK J.E.W.Mayhew

More information

Applying Neural Network Architecture for Inverse Kinematics Problem in Robotics

Applying Neural Network Architecture for Inverse Kinematics Problem in Robotics J. Software Engineering & Applications, 2010, 3: 230-239 doi:10.4236/jsea.2010.33028 Published Online March 2010 (http://www.scirp.org/journal/jsea) Applying Neural Network Architecture for Inverse Kinematics

More information

Epipolar Geometry in Stereo, Motion and Object Recognition

Epipolar Geometry in Stereo, Motion and Object Recognition Epipolar Geometry in Stereo, Motion and Object Recognition A Unified Approach by GangXu Department of Computer Science, Ritsumeikan University, Kusatsu, Japan and Zhengyou Zhang INRIA Sophia-Antipolis,

More information

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Pathum Rathnayaka, Seung-Hae Baek and Soon-Yong Park School of Computer Science and Engineering, Kyungpook

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them.

All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them. All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them. - Aristotle University of Texas at Arlington Introduction

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

Automatic Kinematic Chain Building from Feature Trajectories of Articulated Objects

Automatic Kinematic Chain Building from Feature Trajectories of Articulated Objects Automatic Kinematic Chain Building from Feature Trajectories of Articulated Objects Jingyu Yan and Marc Pollefeys Department of Computer Science The University of North Carolina at Chapel Hill Chapel Hill,

More information

Inverse Kinematics Analysis for Manipulator Robot With Wrist Offset Based On the Closed-Form Algorithm

Inverse Kinematics Analysis for Manipulator Robot With Wrist Offset Based On the Closed-Form Algorithm Inverse Kinematics Analysis for Manipulator Robot With Wrist Offset Based On the Closed-Form Algorithm Mohammed Z. Al-Faiz,MIEEE Computer Engineering Dept. Nahrain University Baghdad, Iraq Mohammed S.Saleh

More information

Pose Estimation from Circle or Parallel Lines in a Single Image

Pose Estimation from Circle or Parallel Lines in a Single Image Pose Estimation from Circle or Parallel Lines in a Single Image Guanghui Wang 1,2, Q.M. Jonathan Wu 1,andZhengqiaoJi 1 1 Department of Electrical and Computer Engineering, The University of Windsor, 41

More information

Camera Calibration Utility Description

Camera Calibration Utility Description Camera Calibration Utility Description Robert Bryll, Xinfeng Ma, Francis Quek Vision Interfaces and Systems Laboratory The university of Illinois at Chicago April 6, 1999 1 Introduction To calibrate our

More information

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors Keith Forbes 1 Anthon Voigt 2 Ndimi Bodika 2 1 Digital Image Processing Group 2 Automation and Informatics Group Department of Electrical

More information

EEE 187: Robotics Summary 2

EEE 187: Robotics Summary 2 1 EEE 187: Robotics Summary 2 09/05/2017 Robotic system components A robotic system has three major components: Actuators: the muscles of the robot Sensors: provide information about the environment and

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

Combining Appearance and Topology for Wide

Combining Appearance and Topology for Wide Combining Appearance and Topology for Wide Baseline Matching Dennis Tell and Stefan Carlsson Presented by: Josh Wills Image Point Correspondences Critical foundation for many vision applications 3-D reconstruction,

More information

A Novel Stereo Camera System by a Biprism

A Novel Stereo Camera System by a Biprism 528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel

More information

Model-Based Human Motion Capture from Monocular Video Sequences

Model-Based Human Motion Capture from Monocular Video Sequences Model-Based Human Motion Capture from Monocular Video Sequences Jihun Park 1, Sangho Park 2, and J.K. Aggarwal 2 1 Department of Computer Engineering Hongik University Seoul, Korea jhpark@hongik.ac.kr

More information

Fast Natural Feature Tracking for Mobile Augmented Reality Applications

Fast Natural Feature Tracking for Mobile Augmented Reality Applications Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai

More information

A Stratified Approach for Camera Calibration Using Spheres

A Stratified Approach for Camera Calibration Using Spheres IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. Y, MONTH YEAR 1 A Stratified Approach for Camera Calibration Using Spheres Kwan-Yee K. Wong, Member, IEEE, Guoqiang Zhang, Student-Member, IEEE and Zhihu

More information

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS This chapter presents a computational model for perceptual organization. A figure-ground segregation network is proposed based on a novel boundary

More information

A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models

A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models Emanuele Ruffaldi Lorenzo Peppoloni Alessandro Filippeschi Carlo Alberto Avizzano 2014 IEEE International

More information

Animation. Keyframe animation. CS4620/5620: Lecture 30. Rigid motion: the simplest deformation. Controlling shape for animation

Animation. Keyframe animation. CS4620/5620: Lecture 30. Rigid motion: the simplest deformation. Controlling shape for animation Keyframe animation CS4620/5620: Lecture 30 Animation Keyframing is the technique used for pose-to-pose animation User creates key poses just enough to indicate what the motion is supposed to be Interpolate

More information

Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera

Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera Wolfgang Niem, Jochen Wingbermühle Universität Hannover Institut für Theoretische Nachrichtentechnik und Informationsverarbeitung

More information

Stereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Stereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes

More information

Fitting (LMedS, RANSAC)

Fitting (LMedS, RANSAC) Fitting (LMedS, RANSAC) Thursday, 23/03/2017 Antonis Argyros e-mail: argyros@csd.uoc.gr LMedS and RANSAC What if we have very many outliers? 2 1 Least Median of Squares ri : Residuals Least Squares n 2

More information

Perspective Projection Describes Image Formation Berthold K.P. Horn

Perspective Projection Describes Image Formation Berthold K.P. Horn Perspective Projection Describes Image Formation Berthold K.P. Horn Wheel Alignment: Camber, Caster, Toe-In, SAI, Camber: angle between axle and horizontal plane. Toe: angle between projection of axle

More information

Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information