3D Model-Based Head Tracking

Antonio Colmenarez, Ricardo Lopez, Thomas S. Huang
University of Illinois at Urbana-Champaign
405 N. Mathews Ave., Urbana, IL

ABSTRACT

This paper introduces a new approach to feature-based head tracking and pose estimation. Head tracking and pose estimation find their most important applications in motion analysis for model-based video coding. The proposed algorithm employs an underlying 3D head model, feature-based pose estimation, and texture mapping to produce accurate templates for the feature tracking. In this way, the set of templates used for the matching is constantly updated with the pose changes, allowing the algorithm to track the features over a large range of head motion without loss of precision or error accumulation. Given a rough estimate of the head scale, the initial feature identification is performed automatically and the tracking is successful over a large number of video frames. Computational complexity is also considered, with the aim of creating a real-time end-to-end model-based video coding system.

Keywords: head tracking, feature tracking, model-based coding, feature extraction

1 INTRODUCTION

Object tracking and motion analysis on video sequences are two aspects of computer vision that relate to a broad range of applications, from video annotation to video coding to human-computer interfaces. In most scenarios, objects are not rigid: their motion with respect to the scene is mixed with deformations and changes in lighting conditions. A framework for the automatic detection and tracking of moving objects in such complex conditions is very important in computer vision, and mandatory in model-based video coding, in which the bandwidth is reduced by sending only motion information. Much of the work in automatic object tracking has sought to relax the constraints under which such systems will produce accurate results.
In early work, detection and tracking of moving objects was carried out in limited scenarios, and only a superficial understanding of the scene was achieved [1-4]. Low-level modeling such as snakes, active contours, and deformable templates introduced some improvements [5-7]. However, to obtain a robust and complete analysis of the objects and their motion, high-level modeling is required. Note that in approaches such as analysis-by-synthesis, the scene understanding is limited by the model's capability to represent the scene. In this paper we present a scheme for model-based facial feature tracking and head pose estimation that is robust, accurate, and can be implemented in real time. The system consists of three modules acting in a feedback loop (see Fig. 1): (i) 3D Object Modeling, (ii) 2D-3D Pose Estimation, and (iii) Synthesis-based Template
Matching. Because the algorithm employs analysis-by-synthesis using a 3D model, the system overcomes many common problems such as error accumulation over long sequences, changing lighting conditions, and occlusions.

[Figure 1: Head tracking system overview.]

2 SYSTEM OVERVIEW

As indicated in the block diagram in Fig. 1, the system consists of three modules: (i) 3D head modeling, (ii) feature detection, and (iii) pose estimation. A more detailed block diagram of the main loop is shown in Fig. 2. Given a predicted head pose, the 3D head model provides a synthetic view from which templates are made. Features are detected via template matching with these templates. Finally, the 2D feature positions and their corresponding 3D positions in the head model are used to estimate the new head pose. Kalman filters are used to predict the head pose and the 2D feature locations from the previous frames. Note that if this procedure is applied repeatedly on the same frame, it becomes an iterative approach to refine the current head pose estimate. The system is initialized assuming a frontal view in the first frame and a rough location of the facial features; the visual pattern recognition technique in [13] provides these initial eye corner locations.

[Figure 2: Block diagram of the main tracking loop.]
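For concreteness, the predict-render-match-estimate loop of Fig. 2 can be sketched as follows. This is a minimal sketch, not the authors' implementation: the renderer, template matcher, and pose solver are passed in as stand-in callables (all names are hypothetical), and a toy constant-velocity predictor stands in for the paper's Kalman filters.

```python
class ConstantVelocityPredictor:
    """Toy constant-velocity predictor standing in for the Kalman filters."""
    def __init__(self, x0):
        self.x = list(x0)
        self.v = [0.0] * len(self.x)

    def predict(self):
        # Prediction from the previous state and velocity.
        return [xi + vi for xi, vi in zip(self.x, self.v)]

    def update(self, z):
        # Re-estimate velocity from the new measurement, then store it.
        self.v = [zi - xi for zi, xi in zip(z, self.x)]
        self.x = list(z)


def track_sequence(frames, feats3d, pose0, feats0,
                   render, match_templates, estimate_pose):
    """One pass of the main loop: predict -> render -> match -> re-estimate."""
    pose_kf = ConstantVelocityPredictor(pose0)
    feat_kf = ConstantVelocityPredictor(feats0)
    poses = []
    for frame in frames:
        pred_pose = pose_kf.predict()            # predicted head pose
        pred_feats = feat_kf.predict()           # predicted 2D feature positions
        templates = render(pred_pose)            # synthetic view -> templates
        feats2d = match_templates(frame, templates, pred_feats)
        pose = estimate_pose(feats3d, feats2d)   # 2D-3D pose estimation
        pose_kf.update(pose)
        feat_kf.update(feats2d)
        poses.append(pose)
    return poses
```

The structure (two predictors, synthesis-driven templates, 2D-3D pose update) mirrors the block diagram; real components would be substituted for the stubs.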
3 HEAD POSE ESTIMATION

One of the main steps in the proposed system is the update of the head pose from the corresponding 2D-3D facial features. A large set of 3D features is extracted manually from the initial head scan, and a smaller subset
of 2D features is obtained from the feature tracking module. Using these correspondences, we can compute an alignment transform that maps the 3D features to their 2D locations [14]. This transform is then applied to the entire head model to create a new image from which the template database is obtained. One difficulty is that, since only 3 feature points are used, the pose estimation results in 2 mathematically equivalent transforms, only one of which is correct for our purposes. We limit ourselves to 3 features (2 eye corners and the nose tip) because of rigidity constraints and also to reduce complexity. However, a 4th feature point (the mouth center) can be roughly approximated and used to resolve the ambiguity between the transforms. Since we know the 3D location of this 4th feature, the two resulting transforms can be applied to this point and compared to the estimated 2D position. The correct transform will result in a significantly smaller mapping error.

The problem setup is as follows. We are given three pairs of points in correspondence: $(a_m, a_i)$, $(b_m, b_i)$, $(c_m, c_i)$. The model points are measured with respect to the 3D model coordinate system, and the image points are measured with respect to the 2D sensor coordinates (image plane). We can describe the algorithm with the following steps:

Step 1. Rotate and translate the model points so that the new $a_m$ is at the origin $(0,0,0)$ and the new $b_m$ and $c_m$ are in the x-y plane. First we translate the model points so that $a_m$ is at the origin:

$$a'_m = 0, \quad b'_m = b_m - a_m, \quad c'_m = c_m - a_m. \tag{1}$$

Next, we rotate about the X axis by an angle $\theta$ until $b'_m$ is in the XY plane:

$$a''_m = 0, \quad b''_m = R_\theta\, b'_m, \quad c''_m = R_\theta\, c'_m, \tag{2}$$

where

$$\tan\theta = \frac{b'_z}{b'_y}, \quad \sin\theta = \frac{b'_z}{\|b'_m\|}, \quad \cos\theta = \frac{b'_y}{\|b'_m\|}.$$

Finally, we rotate the model points about $b''_m$ by an angle $\phi$ so that $c''_m$ is in the XY plane:

$$a'''_m = 0, \quad b'''_m = b''_m, \quad c'''_m = R_\phi\, c''_m, \tag{3}$$

where

$$\tan\phi = -\frac{c''_z}{c_{b\perp}}, \quad \text{and} \quad c_{b\perp} = \frac{(c''_x b''_y) - (c''_y b''_x)}{\|b''_m\|}.$$

Step 2. Translate the image points so that $a_i$ is at the origin: $b_i$ is at $b_i - a_i$ and $c_i$ is at $c_i - a_i$.

Step 3. Solve for the 2x2 transformation matrix mapping the model points to the image points, using $L b_m = b_i$ and $L c_m = c_i$:

$$L = \begin{pmatrix} l_{11} & l_{12} \\ l_{21} & l_{22} \end{pmatrix} \;\Rightarrow\; L = [\,b_i \,|\, c_i\,]\,[\,b_m \,|\, c_m\,]^{-1}. \tag{4}$$

Step 4. Solve for the remaining elements of the 3x3 affine matrix and for the scale to bring the points into alignment [16]. Call this matrix $sR$, where $s$ is the scale transformation:

$$sR = \begin{pmatrix} l_{11} & l_{12} & (c_2 l_{21} - c_1 l_{22})/s \\ l_{21} & l_{22} & (c_1 l_{12} - c_2 l_{11})/s \\ c_1 & c_2 & (l_{11} l_{22} - l_{21} l_{12})/s \end{pmatrix}$$

where $s = \sqrt{l_{11}^2 + l_{21}^2 + c_1^2}$, and we solve for $c_1$ and $c_2$ using:

$$c_1 = \sqrt{\tfrac{1}{2}\left(w + \sqrt{w^2 + 4q^2}\right)}, \quad \text{and} \quad c_2 = \frac{-q}{c_1}, \tag{5}$$

where

$$w = l_{12}^2 + l_{22}^2 - (l_{11}^2 + l_{21}^2), \quad \text{and} \quad q = l_{11} l_{12} + l_{21} l_{22}.$$

Step 5. Combine the transform from Step 1 with the resulting transformation (from Steps 2 and 4) to obtain the final rotation and translation needed to align the 3D model points with the image points:

$$x'_m = (R_\phi R_\theta T_m)\, x_m, \quad \text{and} \quad x_i = O(sR\, x'_m) + a_i, \tag{6}$$

where $O$ indicates taking the $(x,y)$ coordinates of the resulting 3D vector.

4 SYNTHESIS-BASED FEATURE TRACKING

A typical feature-based motion estimation system relies on the precise detection of the features. Small errors in the feature locations can produce large errors in the estimated homogeneous transformation matrix representing the motion. Classical template-based feature tracking accumulates error over long sequences, and full-search feature detection at every frame is too computationally expensive. With these issues in mind, our algorithm uses a combination of template-based detection and feature tracking that overcomes the error accumulation problem and can be efficiently implemented. In addition, the system can also be used as an iterative approach to estimate and refine the global head pose transformation for a single frame. The system tracks the features at each frame using templates obtained from a synthetic image of the previous frame, and predicts the search areas using the previous feature positions. Shown in Fig. 3 are the actual image regions with the corresponding synthesized templates used in the correlation stage. One major advantage of our approach is that, because the facial feature locations on the synthetic images are known from the range data, no error is accumulated over the sequence during the tracking. Additionally, good spatial localization is achieved by using weighted-template matching, where the match error is weighted with a Gaussian function centered at the position of the features.
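As a concrete illustration of the 2D-3D alignment of Section 3, here is a minimal sketch in Python with NumPy. It is not the authors' code: Step 1's two elementary rotations are replaced by a single equivalent rotation built from the triangle's plane normal (Rodrigues' formula), and only the $+c_1$ branch of the two ambiguous solutions is returned (the paper disambiguates the pair with a fourth feature point).

```python
import numpy as np

def canonical_frame(a_m, b_m, c_m):
    """Step 1: a rotation taking the model triangle (with a_m at the origin)
    into the XY plane -- equivalent to the paper's two elementary rotations."""
    b = np.asarray(b_m, float) - a_m
    c = np.asarray(c_m, float) - a_m
    n = np.cross(b, c)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    s, co = np.linalg.norm(v), float(np.dot(n, z))
    if s < 1e-12:                     # normal already (anti-)parallel to z
        return np.diag([1.0, co, co]) if co < 0 else np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - co) / s**2)   # Rodrigues formula

def align_3pt(model_pts, image_pts):
    """Steps 2-5: recover sR and the map x_i = O(sR (R (x_m - a_m))) + a_i."""
    a_m, b_m, c_m = [np.asarray(p, float) for p in model_pts]
    a_i, b_i, c_i = [np.asarray(p, float) for p in image_pts]
    R = canonical_frame(a_m, b_m, c_m)
    bm = (R @ (b_m - a_m))[:2]
    cm = (R @ (c_m - a_m))[:2]
    # Step 3: 2x2 matrix L with L bm = b_i - a_i and L cm = c_i - a_i.
    L = np.column_stack([b_i - a_i, c_i - a_i]) @ np.linalg.inv(
        np.column_stack([bm, cm]))
    (l11, l12), (l21, l22) = L
    # Step 4: third row (c1, c2) and scale s; +c1 branch only -- the paper
    # resolves the +/- ambiguity using a fourth (mouth-center) feature.
    w = l12**2 + l22**2 - (l11**2 + l21**2)
    q = l11 * l12 + l21 * l22
    c1 = np.sqrt((w + np.sqrt(w**2 + 4 * q**2)) / 2)
    c2 = -q / c1
    s = np.sqrt(l11**2 + l21**2 + c1**2)
    sR = np.array([[l11, l12, (c2 * l21 - c1 * l22) / s],
                   [l21, l22, (c1 * l12 - c2 * l11) / s],
                   [c1,  c2,  (l11 * l22 - l21 * l12) / s]])
    # Step 5: the combined model-to-image map.
    def project(x_m):
        return (sR @ (R @ (np.asarray(x_m, float) - a_m)))[:2] + a_i
    return sR, project
```

Both ambiguous solutions reproduce the three given correspondences exactly; they differ only for points off the triangle's plane, which is why an out-of-plane fourth point resolves the sign.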
Problems with lighting conditions are dealt with by using the texture obtained from the first frame of the sequence (or, in general, any previous near-frontal-view frame).

[Figure 3: Templates used in the matching stage. (a) Actual image regions, (b) synthesized templates using the recovered pose and the 3D model.]

As mentioned earlier, the proposed system is intended for real-time analysis of video sequences. To achieve such performance we implement a weighted correlation using a set of look-up tables with pre-computed distance functions. If $I_2(k,l)$ is the template image and $I_1(k,l)$ is the frame image, the weighted error at position $(n,m)$ is:

$$d(n,m) = \sum_{(k,l)\in W_r} w(k,l)\, f\big(I_1(k-n, l-m) - I_2(k,l)\big)$$

where $W_r$ defines the extent of the template, $w(k,l)$ is the weight map, and $f(e)$ is the distance function. In classical template matching, every position is equally weighted and the squared error is used; i.e., $w(k,l) = 1$ and $f(e) = e^2$.
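The weighted-error matching above can be sketched directly. This is a brute-force illustration, not the paper's optimized look-up-table implementation; it uses $f(e) = e^2$, a Gaussian weight map centered on the template (as in the Gaussian-weighted matching described earlier), and the common convention that $(n,m)$ offsets the search window.

```python
import numpy as np

def gaussian_weights(h, w, sigma):
    """Weight map w(k, l): a Gaussian centered on the template patch."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy)**2 + (xs - cx)**2) / (2.0 * sigma**2))

def weighted_match(frame, template, sigma=2.0):
    """Exhaustive weighted-SSD search: returns the (row, col) placement of
    the template that minimizes d(n, m) = sum w(k,l) * err(k,l)**2."""
    H, W = frame.shape
    h, w = template.shape
    wmap = gaussian_weights(h, w, sigma)
    best, best_pos = np.inf, (0, 0)
    for n in range(H - h + 1):
        for m in range(W - w + 1):
            err = frame[n:n + h, m:m + w] - template
            d = np.sum(wmap * err**2)          # weighted error d(n, m)
            if d < best:
                best, best_pos = d, (n, m)
    return best_pos
```

In practice the search would be restricted to the Kalman-predicted search area rather than the whole frame, and the quantized look-up-table form described next replaces the per-pixel multiply.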
By quantizing the weights to N levels and the gray-level image to M levels, the weighted error can be implemented with a 2M x N-entry look-up table, and its computation reduces to a number of fixed-point additions and indexations:

$$d(n,m) = \sum_{(k,l)\in W_r} h\big[w(k,l)\big]\big[I_1(k-n, l-m) - I_2(k,l)\big]$$

where $h$ is an N x 2M matrix.

5 EXPERIMENTAL RESULTS

Several sequences were tested with the proposed system, three of which are presented here. The first two were composed of greyscale 200x200 images captured at 15 fps, while the third was 320x240 greyscale at 30 fps. The head models for each subject were created with the range scanner, and the necessary 3D features were extracted. Results of the automatic feature initialization and tracking are shown in Fig. 4, Fig. 5, and Fig. 6. The top row of the first two figures shows the results of the automatic feature tracking. The bottom row shows the synthesized images created by applying the head pose estimates to the texture-mapped 3D head model. The initial frontal image for each sequence has been used to provide the texture map. While the main focus of this paper has been on accurate and automatic feature tracking, these images suggest the possibility of creating extremely low bit-rate video streams using an analysis-synthesis approach. The results of tracking both rigid and non-rigid points are shown in Fig. 6. Finally, the images in Fig. 7 show the results of wireframe tracking: the 3D wireframe at the computed pose is overlaid on the original image.

[Figure 4: Tracking results: (top) the original sequence with tracked features; (bottom) the synthesized sequence using recovered head motion.]
[Figure 5: Tracking results: (top) the original sequence with tracked features; (bottom) the synthesized sequence using recovered head motion.]

6 CONCLUSIONS AND FUTURE WORK

In this paper we have presented a robust and novel approach to real-time feature tracking using a 3D model-based framework. A small set of facial features was tracked successfully over a large range of head motion. The combination of the 3D model, head pose estimation, and texture mapping avoids the error accumulation problem and allows better localization of the features. Future work includes using generic head models fitted to the person automatically, instead of subject-specific head scans. Also, for use in video coding, local motion estimation as well as periodic texture updates need to be implemented. Finally, the speed of the matching algorithm can be significantly improved using hierarchical methods.

7 ACKNOWLEDGMENTS

This work was supported in part by Joint Services Electronics Program ONR N , the Army Research Laboratory under Cooperative Agreement No. DAAL , a grant from Rockwell International, and an AT&T Bell Laboratories Fellowship.

8 PRINCIPAL AUTHOR BIO

Antonio Colmenarez received the B.S. and M.S. degrees from the Simon Bolivar University, Caracas, Venezuela, in 1991 and 1993, respectively. He is currently with the Beckman Institute at the University of Illinois at Urbana-
Champaign, pursuing a Ph.D. degree in computer vision and image processing. His current research interests include model-based video coding and pattern recognition.

9 REFERENCES

[1] I. K. Sethi and R. Jain, Finding Trajectories of Feature Points in a Monocular Image Sequence, PAMI, pages 56-73, Jan.
[2] D. Huttenlocher, J. Noh, and W. Rucklidge, Tracking Non-rigid Objects in Complex Scenes, ICCV.
[3] A Framework for Real-Time Window-Based Tracking Using Off-The-Shelf Hardware, Technical Report for Version 0.95 Alpha, August 25.
[4] Y. Yao and R. Chellappa, Dynamic Feature Point Tracking in an Image Sequence, IEEE Int. Conf. Pattern Recognition, Oct.
[5] F. Leymarie and M. D. Levine, Tracking Deformable Objects in the Plane Using an Active Contour Model, PAMI, Jun.
[6] C. Kervrann and F. Heitz, Robust Tracking of Stochastic Deformable Models in Long Image Sequences, IEEE Int. Conf. Mach. Intel., Jun.
[7] F. G. Meyer and P. Bouthemy, Region-Based Tracking Using Affine Motion Models in Long Image Sequences, CVGIP: Image Understanding, Sep.
[8] K. Aizawa and T. S. Huang, Model-Based Image Coding: Advanced Video Coding Techniques for Very Low Bit-Rate Applications, Proceedings of the IEEE, Vol. 83, pages 259-271, Feb.
[9] Y. Altunbasak, A. M. Tekalp, and G. Bozdagi, Two-Dimensional Object-Based Coding Using a Content-Based Mesh and Affine Motion Parameterization, Proc. IEEE Int. Conf. on Image Processing, Washington, DC.
[10] Y. Wang and O. Lee, Active Mesh: A Feature Seeking and Tracking Image Sequence Representation Scheme, IEEE Trans. Image Processing, pages 610-624, Sep.
[11] I. A. Essa and A. Pentland, A Vision System for Observing and Extracting Facial Action Parameters, CVPR.
[12] D. Stork and M. Hennecke, Speechreading: An Overview of Image Processing, Feature Extraction, Sensory Integration and Pattern Recognition Techniques, Int. Conf. Automatic Face and Gesture Recognition.
[13] A. Colmenarez and T. S. Huang, Maximum Likelihood Face Detection, Int. Conf. Automatic Face and Gesture Recognition, pages 307-309, October.
[14] R. Lopez and T. S. Huang, Head Pose Computation for Very Low Bit-Rate Video Coding, in V. Hlavac and R. Sara, editors, Computer Analysis of Images and Patterns, pages 440-447, Prague, Czech Republic, September, Springer.
[15] R. Lopez and T. S. Huang, 3D Head Pose Computation from 2D Images: Templates versus Features, IEEE Int. Conf. on Image Processing, pages 220-224, Washington, DC, October, IEEE Press.
[16] S. Ullman and D. P. Huttenlocher, Recognizing Solid Objects by Alignment with an Image, International Journal of Computer Vision, 5(2):195-212, 1990.
[Figure 6: Tracking results for rigid and non-rigid points.]
[Figure 7: Wireframe tracking results.]
More informationMOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE
Head-Eye Coordination: A Closed-Form Solution M. Xie School of Mechanical & Production Engineering Nanyang Technological University, Singapore 639798 Email: mmxie@ntuix.ntu.ac.sg ABSTRACT In this paper,
More informationDepth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences
Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3 1 Pattern Recognition Lab, Friedrich-Alexander-Universität
More informationObject Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision
Object Recognition Using Pictorial Structures Daniel Huttenlocher Computer Science Department Joint work with Pedro Felzenszwalb, MIT AI Lab In This Talk Object recognition in computer vision Brief definition
More informationSilhouette-based Multiple-View Camera Calibration
Silhouette-based Multiple-View Camera Calibration Prashant Ramanathan, Eckehard Steinbach, and Bernd Girod Information Systems Laboratory, Electrical Engineering Department, Stanford University Stanford,
More informationEvaluation of Expression Recognition Techniques
Evaluation of Expression Recognition Techniques Ira Cohen 1, Nicu Sebe 2,3, Yafei Sun 3, Michael S. Lew 3, Thomas S. Huang 1 1 Beckman Institute, University of Illinois at Urbana-Champaign, USA 2 Faculty
More informationAppearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization
Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization Jung H. Oh, Gyuho Eoh, and Beom H. Lee Electrical and Computer Engineering, Seoul National University,
More informationMoving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation
IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial
More informationRapid 3D Face Modeling using a Frontal Face and a Profile Face for Accurate 2D Pose Synthesis
Rapid 3D Face Modeling using a Frontal Face and a Profile Face for Accurate 2D Pose Synthesis Jingu Heo and Marios Savvides CyLab Biometrics Center Carnegie Mellon University Pittsburgh, PA 15213 jheo@cmu.edu,
More informationReal-time 3-D Hand Posture Estimation based on 2-D Appearance Retrieval Using Monocular Camera
Real-time 3-D Hand Posture Estimation based on 2-D Appearance Retrieval Using Monocular Camera Nobutaka Shimada, Kousuke Kimura and Yoshiaki Shirai Dept. of Computer-Controlled Mechanical Systems, Osaka
More informationIntensity-Depth Face Alignment Using Cascade Shape Regression
Intensity-Depth Face Alignment Using Cascade Shape Regression Yang Cao 1 and Bao-Liang Lu 1,2 1 Center for Brain-like Computing and Machine Intelligence Department of Computer Science and Engineering Shanghai
More informationImage-Based Face Recognition using Global Features
Image-Based Face Recognition using Global Features Xiaoyin xu Research Centre for Integrated Microsystems Electrical and Computer Engineering University of Windsor Supervisors: Dr. Ahmadi May 13, 2005
More informationTask analysis based on observing hands and objects by vision
Task analysis based on observing hands and objects by vision Yoshihiro SATO Keni Bernardin Hiroshi KIMURA Katsushi IKEUCHI Univ. of Electro-Communications Univ. of Karlsruhe Univ. of Tokyo Abstract In
More informationFACIAL ANIMATION FROM SEVERAL IMAGES
International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 FACIAL ANIMATION FROM SEVERAL IMAGES Yasuhiro MUKAIGAWAt Yuichi NAKAMURA+ Yuichi OHTA+ t Department of Information
More informationTransactions on Information and Communications Technologies vol 19, 1997 WIT Press, ISSN
Hopeld Network for Stereo Correspondence Using Block-Matching Techniques Dimitrios Tzovaras and Michael G. Strintzis Information Processing Laboratory, Electrical and Computer Engineering Department, Aristotle
More informationA Hierarchical Face Identification System Based on Facial Components
A Hierarchical Face Identification System Based on Facial Components Mehrtash T. Harandi, Majid Nili Ahmadabadi, and Babak N. Araabi Control and Intelligent Processing Center of Excellence Department of
More informationLearning based face hallucination techniques: A survey
Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)
More informationEfficient Model-based Linear Head Motion Recovery from Movies
Efficient Model-based Linear Head Motion Recovery from Movies Jian Yao Wai-Kuen Cham Department of Electronic Engineering The Chinese University of Hong Kong, Shatin, N.T., Hong Kong E-mail: {ianyao,wkcham}@ee.cuhk.edu.hk
More informationP ^ 2π 3 2π 3. 2π 3 P 2 P 1. a. b. c.
Workshop on Fundamental Structural Properties in Image and Pattern Analysis - FSPIPA-99, Budapest, Hungary, Sept 1999. Quantitative Analysis of Continuous Symmetry in Shapes and Objects Hagit Hel-Or and
More informationA REAL-TIME FACIAL FEATURE BASED HEAD TRACKER
A REAL-TIME FACIAL FEATURE BASED HEAD TRACKER Jari Hannuksela, Janne Heikkilä and Matti Pietikäinen {jari.hannuksela, jth, mkp}@ee.oulu.fi Machine Vision Group, Infotech Oulu P.O. Box 45, FIN-914 University
More informationPose 1 (Frontal) Pose 4 Pose 8. Pose 6. Pose 5 Pose 9. Subset 1 Subset 2. Subset 3 Subset 4
From Few to Many: Generative Models for Recognition Under Variable Pose and Illumination Athinodoros S. Georghiades Peter N. Belhumeur David J. Kriegman Departments of Electrical Engineering Beckman Institute
More informationFace Recognition using Principle Component Analysis, Eigenface and Neural Network
Face Recognition using Principle Component Analysis, Eigenface and Neural Network Mayank Agarwal Student Member IEEE Noida,India mayank.agarwal@ieee.org Nikunj Jain Student Noida,India nikunj262@gmail.com
More informationPose Normalization for Robust Face Recognition Based on Statistical Affine Transformation
Pose Normalization for Robust Face Recognition Based on Statistical Affine Transformation Xiujuan Chai 1, 2, Shiguang Shan 2, Wen Gao 1, 2 1 Vilab, Computer College, Harbin Institute of Technology, Harbin,
More informationDynamic Time Warping for Binocular Hand Tracking and Reconstruction
Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,
More informationFACIAL FEATURE EXTRACTION BASED ON THE SMALLEST UNIVALUE SEGMENT ASSIMILATING NUCLEUS (SUSAN) ALGORITHM. Mauricio Hess 1 Geovanni Martinez 2
FACIAL FEATURE EXTRACTION BASED ON THE SMALLEST UNIVALUE SEGMENT ASSIMILATING NUCLEUS (SUSAN) ALGORITHM Mauricio Hess 1 Geovanni Martinez 2 Image Processing and Computer Vision Research Lab (IPCV-LAB)
More informationFacial Expression Recognition using Principal Component Analysis with Singular Value Decomposition
ISSN: 2321-7782 (Online) Volume 1, Issue 6, November 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Facial
More informationRENDERING AND ANALYSIS OF FACES USING MULTIPLE IMAGES WITH 3D GEOMETRY. Peter Eisert and Jürgen Rurainsky
RENDERING AND ANALYSIS OF FACES USING MULTIPLE IMAGES WITH 3D GEOMETRY Peter Eisert and Jürgen Rurainsky Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institute Image Processing Department
More informationFeature Selection Using Principal Feature Analysis
Feature Selection Using Principal Feature Analysis Ira Cohen Qi Tian Xiang Sean Zhou Thomas S. Huang Beckman Institute for Advanced Science and Technology University of Illinois at Urbana-Champaign Urbana,
More informationFace Recognition Using Adjacent Pixel Intensity Difference Quantization Histogram
IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.8, August 2009 147 Face Recognition Using Adjacent Pixel Intensity Difference Quantization Histogram Feifei Lee, Koji Kotani,
More informationFACE ANALYSIS AND SYNTHESIS FOR INTERACTIVE ENTERTAINMENT
FACE ANALYSIS AND SYNTHESIS FOR INTERACTIVE ENTERTAINMENT Shoichiro IWASAWA*I, Tatsuo YOTSUKURA*2, Shigeo MORISHIMA*2 */ Telecommunication Advancement Organization *2Facu!ty of Engineering, Seikei University
More informationAnimated Talking Head With Personalized 3D Head Model
Animated Talking Head With Personalized 3D Head Model L.S.Chen, T.S.Huang - Beckman Institute & CSL University of Illinois, Urbana, IL 61801, USA; lchen@ifp.uiuc.edu Jörn Ostermann, AT&T Labs-Research,
More informationDocument Image Restoration Using Binary Morphological Filters. Jisheng Liang, Robert M. Haralick. Seattle, Washington Ihsin T.
Document Image Restoration Using Binary Morphological Filters Jisheng Liang, Robert M. Haralick University of Washington, Department of Electrical Engineering Seattle, Washington 98195 Ihsin T. Phillips
More informationFace Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm
Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,
More informationImage Inpainting by Hyperbolic Selection of Pixels for Two Dimensional Bicubic Interpolations
Image Inpainting by Hyperbolic Selection of Pixels for Two Dimensional Bicubic Interpolations Mehran Motmaen motmaen73@gmail.com Majid Mohrekesh mmohrekesh@yahoo.com Mojtaba Akbari mojtaba.akbari@ec.iut.ac.ir
More informationGeneric Face Alignment Using an Improved Active Shape Model
Generic Face Alignment Using an Improved Active Shape Model Liting Wang, Xiaoqing Ding, Chi Fang Electronic Engineering Department, Tsinghua University, Beijing, China {wanglt, dxq, fangchi} @ocrserv.ee.tsinghua.edu.cn
More informationFACE RECOGNITION USING INDEPENDENT COMPONENT
Chapter 5 FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS OF GABORJET (GABORJET-ICA) 5.1 INTRODUCTION PCA is probably the most widely used subspace projection technique for face recognition. A major
More informationRange Image Registration with Edge Detection in Spherical Coordinates
Range Image Registration with Edge Detection in Spherical Coordinates Olcay Sertel 1 and Cem Ünsalan2 Computer Vision Research Laboratory 1 Department of Computer Engineering 2 Department of Electrical
More informationLOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM
LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany
More informationActive Contour-Based Visual Tracking by Integrating Colors, Shapes, and Motions Using Level Sets
Active Contour-Based Visual Tracking by Integrating Colors, Shapes, and Motions Using Level Sets Suvarna D. Chendke ME Computer Student Department of Computer Engineering JSCOE PUNE Pune, India chendkesuvarna@gmail.com
More informationRobust Face Detection Based on Convolutional Neural Networks
Robust Face Detection Based on Convolutional Neural Networks M. Delakis and C. Garcia Department of Computer Science, University of Crete P.O. Box 2208, 71409 Heraklion, Greece {delakis, cgarcia}@csd.uoc.gr
More informationLocalized Principal Component Analysis Learning for Face Feature Extraction and Recognition
Irwin King and Lei Xu. Localized principal component analysis learning for face feature extraction and recognition. In Proceedings to the Workshop on 3D Computer Vision 97, pages 124 128, Shatin, Hong
More informationBackground Subtraction based on Cooccurrence of Image Variations
Background Subtraction based on Cooccurrence of Image Variations Makito Seki Toshikazu Wada Hideto Fujiwara Kazuhiko Sumi Advanced Technology R&D Center Faculty of Systems Engineering Mitsubishi Electric
More information