The International Workshop on Automatic Face and Gesture Recognition, Zurich, June 26-28, 1995

Face Recognition from Frontal and Profile Views

Gaile G. Gordon
TASC, 55 Walkers Brook Drive, Reading, MA 01867

Abstract

This paper presents a unique face recognition system which considers information from both frontal and profile view images. This system represents the first step toward the development of a face recognition solution for the intensity image domain based on a 3D context. In the current system we construct a 3D face centered model from the two independent images. Geometric information is used for view normalization, and at the lowest level the comparison is based on general pattern matching techniques. We also discuss the use of geometric information to index the reference database to quickly eliminate impossible matches from further consideration. The system has been tested using 200 subjects from the FERET program database and has shown excellent results. For example, we consider the problem of identifying the 5% of the database which is most similar to the target. The correct match is included in this list 85% of the time in the system's fully automated mode and 98% of the time in the manually assisted mode.

1 Introduction

This work focuses on face recognition application domains which have not been successfully addressed in previous research: uncooperative or unaware subjects matched against large subject databases. These issues add important new dimensions to an already challenging problem. The major problem introduced by uncooperative subjects is that, unlike face recognition in controlled environments, there can be no assurance of view consistency. Previous systems have approached face matching as a 2D problem which, because faces are not flat, requires good consistency in view from one image of the subject to another. If there is not good view consistency, recognition errors are introduced by confusing similarity in view with similarity in identity.
These facts dictate that a 3D representation is required [1]. Our approach is based on the computation of a face centered 3D model which is independent of view. The computation of a 3D model requires multiple views of the subject. We begin by investigating recognition from two images of each subject: frontal and profile. It is interesting to note that although many researchers have investigated recognition from the frontal view, and to a smaller degree, recognition from the profile view, no automated system has combined these information sources. Our future work will expand these ideas for video sequence input.

(This work was supported by the FERET program of the Army Research Lab under contract DAAL1-93-C-118.)

Large subject databases introduce two key problems. The first problem is combinatorial. As database sizes grow, it becomes impractical to compare the target with each entry in the database. Our approach involves precomputing database indexing information based on face geometry. With this index information, only a fraction of the total database must be compared one-to-one. The second problem in large databases is that, independent of the similarity metric used, as the number of subjects increases, the distance between "neighboring" subjects will decrease. Thus, more accurate subject descriptions are required to maintain the same recognition performance. By reducing the sources of error due to variations in view, our approach also addresses this issue.

2 Algorithm Overview

The TASC system processes pairs of independently acquired frontal and profile images. These image pairs fit into two categories: subjects to identify, and subjects who will be used for reference in the identification. The first set will be referred to as probe or query subjects, and the second as the gallery or reference subjects. The overall organization of the algorithm is summarized in Figure 1.
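The geometric indexing step in this pipeline can be sketched as a sorted-key range query. This is a minimal sketch; the single scalar measure and the tolerance value are illustrative assumptions, not the system's actual 12-feature index:

```python
import bisect

def build_index(gallery):
    """Sort gallery entries by one geometric measure so that range
    queries run in O(log n).  `gallery` is a list of
    (subject_id, measure) pairs."""
    entries = sorted(gallery, key=lambda e: e[1])
    keys = [m for _, m in entries]
    return entries, keys

def prune(index, query_measure, tolerance):
    """Return only the subjects whose measure lies within `tolerance`
    of the query; the rest are eliminated before any template
    comparison takes place."""
    entries, keys = index
    lo = bisect.bisect_left(keys, query_measure - tolerance)
    hi = bisect.bisect_right(keys, query_measure + tolerance)
    return [sid for sid, _ in entries[lo:hi]]
```

Only the subjects surviving this pruning step are passed on to the one-to-one pattern comparison.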
The recognition process begins by computing a 3D representation, or model, for the gallery images and the probe images. The models constructed from the gallery images form the reference database. The contents and construction of the models, including feature extraction, are discussed in more detail in Section 3. An index is created for the reference database which groups the models based on basic geometric measures of the face. This index allows quick identification of those subjects in the database who fall within a given range in geometry. The potential of this process to reduce search time is examined in Section 5.3. The final step is to compare each remaining reference model with the query model. The bases for this comparison are normalized image regions extracted from the original frontal and profile images. Any robust pattern matching algorithm could be used at this point; our system is based on normalized cross correlation. This matching process is addressed in Section 4. Each comparison results in a similarity score. The output of the process is a ranked list of reference subjects, ordered by similarity to the query subject. Section 5 presents the results of algorithm evaluation, and a discussion of the potential benefits of video stream input on algorithm performance. Section 6 provides a final summary of results and conclusions.

Figure 1: System block diagram

3 Contents of the Model

The 3D model contains the location of key points on the face in a 3D coordinate system and a set of normalized image chips. Figure 2 shows a schematic example of a face model. Feature points are extracted using morphological operators, pattern matching, and other low level image analysis algorithms. The image chips are subimages taken from normalized versions of the original frontal and profile images. The first normalization step adjusts the intensity range in the head region to the full dynamic range available, 0 to 255. The second step adjusts the scale and rotation for each view. This process is based on the location of two key features within each image. For the frontal view the pupil centers are used, and for the profile view the tip of the nose and the tip of the chin are used. The feature extraction for the frontal view is discussed in detail in Section 3.2 and for the profile view in Section 3.3. The normalization process itself is discussed in Section 3.4.

Figure 2: Components of the model

Frontal and profile view planes provide the most complete 3D information available from two independent views. However, because the views are independent, scale or pose information computed from one image cannot be applied to the other. E.g., it is impossible to compare distances measured in the XY plane (frontal view plane) with distances measured in the YZ plane (profile view plane). With two independent images we can only normalize for one degree of freedom in rotation in each image, which is about the optical axis of the view.
This corresponds to the rotation about the Z axis for frontal view images, and about the X axis for profile views. Future work will focus on video sequence input, because the constraints which exist on the relative pose between frames should allow a more complete specification of the model information.

3.1 Identifying the Head Bounds

The head bounding box is used both in the normalization of intensity and in the feature detection process which follows. The first step in estimating the head bounding box is the segmentation of the subject from the background. In the FERET database the background is relatively homogeneous. The background is identified based on detection of locally connected areas of low gradient magnitude in the vicinity of the image boundary. Once the subject silhouette has been identified, the next task is to identify the subregion associated with the head. This process is straightforward except for the influence of different hair styles. For the frontal view images, we consider horizontal cross sectional area to identify the top and bottom of the head region, with the bottom of the head distinguished by the increase in area approaching the shoulders. Vertical cross sectional area is used to identify the sides of the head; an increase in area between shoulder and head indicates a left or right bound of the head region. When considering profile images, the process is similar. However, because hair style and posture will have a strong effect on this portion of the image, we divide the silhouette vertically and discard the half containing the back of the head. Long hair or large scale images, in which the shoulders are not completely visible, complicate the process of head bound estimation. In these two cases, which are easily identified, general guidelines based on the area of the foreground silhouette and the location of the top of the head are used to set bounds.
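The cross-sectional-area heuristic above can be sketched as follows. This is a minimal sketch assuming a binary silhouette image; the shoulder-jump factor is an illustrative assumption, not the system's actual threshold:

```python
import numpy as np

def head_bounds(silhouette, jump=1.5):
    """Estimate the head bounding box from a binary foreground
    silhouette (rows x cols, 1 = subject).  Scanning from the top,
    the bottom of the head is taken just before the horizontal
    cross-sectional area grows by more than `jump`x between
    neighbouring rows (the shoulders); side bounds come from the
    columns occupied above that row."""
    rows = silhouette.sum(axis=1)
    top = int(np.argmax(rows > 0))          # first non-empty row
    bottom = silhouette.shape[0] - 1
    for r in range(top + 1, silhouette.shape[0]):
        if rows[r - 1] > 0 and rows[r] > jump * rows[r - 1]:
            bottom = r - 1                  # shoulders start at row r
            break
    cols = silhouette[top:bottom + 1].sum(axis=0)
    occupied = np.nonzero(cols > 0)[0]
    left, right = int(occupied[0]), int(occupied[-1])
    return top, bottom, left, right
```

For profile images the same idea would be applied after discarding the half of the silhouette containing the back of the head.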
See Figure 3 for typical examples of the detection of the foreground, the location of the head region, and, in the last two images, the adjusted intensity mapping.

3.2 Location of the Eyes

The current normalization process for scale and view in the frontal view depends on the location of the pupil centers. The eye detection module finds likely candidates for individual eyes based both on similarity to a small sample of eye templates (this method has also been used by [2] and later [3]), as well as on an eye pupil detector based on circular image valleys (dark intensity regions) of the expected scale (this method has been discussed to a limited extent in [4]). A rough idea of the expected scale of the eyes, based on the width of the head region, is used to guide this process. Both sets of candidates are combined and evaluated to identify the most likely eye pairs (sets of two eye candidates). This process includes the following detailed steps:

- Identify individual eye candidates using template matching with general eye templates
- Add candidates identified via valley based pupil detection
- Select likely eye pairs from the combined set using geometric constraints on separation (given the scale range) and angle
- Normalize the image view for each candidate pair selected
- Eliminate candidates with no evidence for nose and mouth features
- Select from the remaining candidates (if any) based on bilateral symmetry, relative position within the head region, eye candidate scores, and scale

An example of eye location in the frontal view is given in Figure 3. The upper left image shows the detection of the foreground and the head extents, and the lower left image shows the final normalized image with the eye centers and template regions marked.

Figure 3: Processing of frontal and profile views.

3.3 Processing the Profile

The description of profile data and its use in face recognition has been discussed since Sir Francis Galton's article in Nature in 1888 [5], although the work of Harmon in 1977 [6] is probably better known. Despite these encouraging recognition results, face recognition from frontal view images has dominated the attention of the research community recently, and little attention has been paid to the combined use of frontal and profile data. Our processing of the profile view is similar to the processing of the frontal view. The current normalization process for scale and view in the profile view depends on the location of the tip of the nose and the tip of the chin. We chose these features because they can be identified a high percentage of the time. Also, because we locate positions to the nearest pixel, the relatively wide separation of these points reduces the errors due to spatial resolution.
The steps to extract these points are as follows:

- Refine the foreground border
- Extract the profile line
- Extract the approximate location of feature points from the profile contour line using tangency constraints and a priori knowledge of head structure
- Refine the position of the feature points using bitangency constraints

Each step is discussed below. An example of the processing of the profile image is given in Figure 3. The upper right image shows the detection of the foreground, head extents, and refined profile line, and the lower right image shows the final normalized image with the nose tip, chin tip, and template regions marked.

The original contour of the subject identified during foreground/background segmentation is rough because its edge is based on a specific threshold of the gradient magnitude. Generally, this contour will fall outside of the "ideal" edge. The refinement of the contour is performed using a hierarchical watershed algorithm [7]. This process guarantees a single completely connected outline and makes efficient use of local and global information. A traditional watershed operation has also been used independently by researchers at Thomson-CSF [8] for extracting profile lines. An example of the refined profile contour is shown in the second image of Figure 3, juxtaposed with the original rough foreground segmentation. Contour following is used to extract the resulting outline as a connected list of points. Typical approaches used to extract the nose and chin tip include: computation of local curvature extrema, which has not been satisfactorily stable [8, 9], and use of the rightmost point in the subject's silhouette [6], which has problems with many hair styles and is not invariant to nodding of the head. Most previous work used a more limited subject database than the current work, e.g. all male Caucasian subjects with manually extracted silhouettes [6], or subjects of only Asian descent who were all asked to "comb their hair back" [9].
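Refinement by bitangency can be sketched as a local search over the contour for a pair of points whose connecting line touches the contour at both points without crossing it. This is a minimal sketch; the neighborhood radius, tolerance, and fallback behavior are illustrative assumptions, not the system's exact procedure:

```python
def bitangent_refine(contour, i0, j0, radius=5):
    """Refine initial nose (i0) and chin (j0) contour indices by
    searching a local neighbourhood for a pair of points whose
    connecting line is bitangent: contour points near both indices
    lie on one side of the line.  `contour` is a connected list of
    (x, y) points, as produced by contour following."""
    def side(p, q, r):
        # Signed area: positive if r lies to the left of line p->q.
        return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

    def tangent_at(p, q, k):
        # Contour points near index k must not cross to the left side.
        window = contour[max(0, k - radius):k + radius + 1]
        return all(side(p, q, r) <= 1e-9 for r in window)

    for i in range(max(0, i0 - radius), min(len(contour), i0 + radius + 1)):
        for j in range(max(0, j0 - radius), min(len(contour), j0 + radius + 1)):
            p, q = contour[i], contour[j]
            if p != q and tangent_at(p, q, i) and tangent_at(p, q, j):
                return i, j
    return i0, j0    # fall back to the initial estimates
```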
Our approach is based on geometric tangency constraints and has been shown empirically to be quite robust with respect to head position and contour noise. We first identify estimates of the chin and nose location, which are refined by moving the points in a local neighborhood on the contour until a bitangent line can be identified. Bitangents are used in the recognition of planar shapes and surfaces of revolution because they are perspective invariants [10]. Even though the 3D shape of the face doesn't fit these criteria in general, bitangents do provide stability over small left-right changes in face pose, and are invariant to motion in the profile plane. The last image in Figure 3 shows the normalized view of the head, and identifies the location of the tip of the nose and chin. In the normalized position of the profile, the line connecting the nose and chin tips has a 2 degree offset with respect to the vertical axis. This position is close to the original pose in this example.

3.4 Normalizing for scale and rotation

After the location of the eye pupil centers and the tips of the nose and chin, it is then possible to normalize the images and other feature points to conform to standardized positions. The frontal view is defined such that the optical axis is parallel to the Z axis, and the viewing distance places the eye centers at a specific fixed separation. The profile view is defined such that the optical axis is parallel to the X axis (looking toward the positive direction, so the nose faces right), and the viewing distance places the tips of the nose and chin at a specific fixed separation. These standardized views are computed by transforming the original frontal and profile images, including a rotation about the optical axis, a general scale factor, and a translation. In the case of the profile images, an additional reflection about the Y axis is added for left profiles.

4 Template processing

The final comparison step is based on the five image chips contained in each model. These regions are: left eye, right eye, nose, mouth, and profile. At a basic level, this process is similar to the systems developed in [2, 3] in that it uses cross correlation. Key differences include the introduction of profile image data, and the fact that the complete face is not used as one of the pattern inputs, only the smaller image chips corresponding to designated feature regions. Limited positioning errors in the selection of the template region are allowed by shifting the two patterns over a small neighborhood during comparison.
The image is locally normalized to a zero mean intensity to minimize the effects of general lighting differences. In most cases, a weighted sum of the 5 individual scores is used to produce a final similarity score. The mouth region is weighted at less than half that of the eye and nose regions because expression causes this area to be less reliable in identification. The region from the profile plane is weighted higher than the eyes and nose because it contains more information than the smaller individual templates from the frontal view. An additional aspect of the scoring involves compensation for errors in extraction of the features used for normalization. In cases where normalization features were not well located in one of the views, it may be more effective to base the similarity measure on only one view. In order to assist with this decision, we compute 3 similarity scores for each comparison: the weighted score from frontal view templates only, from profile view templates only, and the normal score including both views.

5 Results

5.1 Current System Performance

The FERET program database was used for algorithm development and evaluation. It includes subjects photographed in four different time periods. There is very little overlap of subjects between the four sets. Three out of the four data sets included two frontal and two profile (left and right) images of each subject, enabling recognition testing. For each of these subjects, one of the two frontal view images and the left profile image was used to build a reference database model. The second frontal view image, paired with the right profile image, was used to build a query model. There is a wide variety of image scale and lighting and good representation of age, ethnic background, and gender in the database. Many of the subjects have glasses; several have moustaches and/or beards. The set of images available for testing included 236 subjects.
A small number presented poor imaging conditions which our algorithms were not designed to accommodate. These conditions included eye glasses with glare spots covering the pupil, dark sunglasses, or profile images in which the profile line (nose to chin) was obscured by clothing, shoulder, or hair. Some of these problems will be more easily addressed from video data, but currently these cases were not included in the test group. The final test pool contained 194 possible subjects. Results reported in the following section use a full 194 subject gallery and 194 subject probe set. It is expected in many face recognition applications that several of the top candidates, not just the top candidate, are presented for further human consideration. In the presentation of results we use the concept of a rank threshold, R, which is the percentage of the best database entries to be presented as the result of the comparison. If the "ideal" match is contained within the first R candidates in the ranking list, we consider this a successful recognition result. We report system performance by giving the error rate as a function of rank threshold. This provides error rates for all choices of R. Figure 4 shows this result for the 194 subject test, which we call Test 1.

Figure 4: Test 1: Error rates vs. rank threshold.

To test the hypothesis that the major source of error was the extraction of normalization features, we used two data sets:

- a 202 subject gallery vs. 202 subject probe set, with models based on manual feature extraction
- Test 1: results from the subjects with good automatic normalization, and from the total dataset

The 202 subject database was constructed by entering the location of the normalization features (pupil centers, nose and chin tips) using a mouse. The 202 subjects included all of the 194 used in Test 1 plus a few of the subjects omitted from Test 1 because of poor imaging conditions for the automatic algorithm. The remainder of the model construction and comparison operation was exactly the same as in Test 1.

Figure 5: Performance with and without error in location of the 2 normalization features.

It is clear from Figure 5 that the recognition results in cases where normalization feature detection was correct are very high. In Figure 5(a), where the normalization points are detected manually, the error rates are below 2% at R = 5%. The dotted line in Figure 5(b) shows that when the points are detected correctly by the automatic system, error rates are below 5% at R = 5%. Feature points were considered to be correctly located if the automatically identified position was within 4 pixels of the manually identified location. Normalization error in the automatic feature detection increased system recognition errors by approx. % across most values of R. The residual errors in the system (2% to 5% at R = 5%) were due to other sources of error, primarily large expression variations and view variation.

5.2 Evaluation of the Role of 3D

A key point of our approach is that the addition of 3D data should improve system performance by disambiguating effects of view from identity. The second view provides data about the third dimension, that is, the relief or depth of the face, which is not available from the single frontal view. In the future we hope to enrich the model further through the use of video sequences. We tested this thesis by comparing recognition performance when only frontal images were used as input to the performance when both frontal and profile image pairs were used. The test was performed with two different data sets: 1) Test Scenario 1 (Figure 4) and 2) similar models built with manual feature extraction (Figure 5(a)). Figure 6 shows this comparison graphically for the first scenario. Our results support our thesis, showing that the addition of the profile information substantially reduces error rates, in these cases by between 45% (first scenario) and 33% (second scenario) at R = 5%.

Figure 6: Comparison with frontal only system.

5.3 Database Indexing

Template based comparison systems must compare every probe subject with every gallery subject. Precomputed geometric features enable us to sort quickly, reducing the full gallery to the fraction of the database which is geometrically compatible with the query before the final comparison process begins. We investigated the potential of this process to decrease search times using a subject dataset which was built with manually detected feature points. A vector of 12 geometric features consisting of distances and angles in the normalized frontal and profile images was computed from points identified in the image. The first evaluation investigated the value of each feature individually in recognition, as indicated by Fisher's linear discriminant criterion: (Between Subject Variance) / (Within Subject Variance). This criterion formalizes the intuition that a measure will be useful in discriminating between subjects if it varies very little over different instances of the same subject, but greatly between subjects. To compute the value of this criterion, we assembled a dataset of 30 subjects with 4-5 frontal/profile pairs per subject.
These images contained some examples from the FERET database and some examples (14 subjects) which were taken at TASC over a series of 5 days. The different features are shown in Figure 7, ordered from most discriminating to least discriminating. It is interesting to note that in Kanade's work [11] this ratio was typically 2 or 3. Brunelli reports similar scores for his automated feature detectors [3]. These values indicate good potential for use in identification. The second evaluation performed was the use of a simple feature based classifier which judged similarity on the basis of normalized distance in feature space.
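A classifier of this kind can be sketched as follows. Scaling each feature by its standard deviation is one plausible reading of "normalized distance in feature space", not necessarily the system's exact metric:

```python
import math

def normalized_distance(q, g, stdevs):
    """Distance between query and gallery feature vectors, with each
    geometric feature scaled by its standard deviation so that
    features measured in different units contribute comparably."""
    return math.sqrt(sum(((a - b) / s) ** 2
                         for a, b, s in zip(q, g, stdevs)))

def rank_gallery(query, gallery, stdevs):
    """Rank gallery subjects by increasing normalized distance;
    `gallery` maps subject ids to feature vectors."""
    scored = [(sid, normalized_distance(query, vec, stdevs))
              for sid, vec in gallery.items()]
    return [sid for sid, _ in sorted(scored, key=lambda t: t[1])]
```

Because each comparison is a handful of arithmetic operations rather than a template correlation, ranking the whole gallery this way is fast enough to serve as a pruning front end.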

Figure 7: Feature discrimination scores. Features shown, from most to least discriminating: nose depth, nose height, nose ridge length, chin height, septum length, septum to nose ridge angle, nose length, face width, nose width, lower lip thickness, philtrum height, upper lip thickness.

The features used were: nose depth, nose height, nose ridge length, chin height, septum length, septum to nose ridge angle, face width, and nose width. This type of classifier considers much less information than the template based system and therefore runs much faster (more than 20 times faster in our tests). The classifier was tested with a database of subjects (1 pair of images per subject in the reference database, a different pair used as query). The error rates are shown in Figure 8, and indicate a 100% recognition rate at R = 38%. These results indicate that we can use geometric features to reduce the database to 38% of its size without introducing any further recognition errors. Results for classification using only those features derived from the frontal view are also shown, to point out the positive value of the profile features for this classifier as well.

Figure 8: Error rates: feature based classification.

In summary, the combination of feature based pruning with traditional template based comparison has the potential to dramatically reduce search times without affecting recognition error rates. We recommend the use of geometric features for coarse, conservative pruning, and localized pattern comparison for detailed differentiation.

6 Conclusions

We have discussed the design and implementation of a face recognition system which processes frontal/profile image pairs. We reported system performance with respect to a rank threshold, R, which is the number of most similar reference database subjects considered when judging correct identification. At R = 5% the automated system performed at a recognition rate of 85% for a test involving 194 subjects.
The largest source of error was found to be the detection of the two image feature points used during the normalization process. If we change the recognition process only in the sense that these points are identified exactly (e.g. via manual intervention), the performance of the system is 98% for the same test. We showed that the addition of 3D information, in the form of the profile view in this case, substantially reduces error rates. In our tests, the use of the profile reduced errors by between 45% and 33%. It is likely that the benefits of considering the profile view would be on a similar scale for any existing face recognition system which is based on frontal view images only. We also showed that the use of geometric features has the potential to substantially reduce search times. A feature based classifier was shown to have the potential to reduce search times by 60% without affecting recognition rates. Again, this pruning process could be applied as a front end to any existing face recognition system. Our future work will address subject modeling from motion sequences. Motion sequences will enable improved performance through a more robust 3D model.

References

[1] G. G. Gordon, Face Recognition from Depth and Curvature. PhD thesis, Harvard University, Division of Applied Sciences, 1991.

[2] R. Baron, "Mechanisms of human facial recognition," International Journal of Man Machine Studies, vol. 15, pp. 137-178, 1981.

[3] R. Brunelli and T. Poggio, "Face recognition: Features versus templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1042-1052, October 1993.

[4] A. L. Yuille, D. S. Cohen, and P. W. Hallinan, "Feature extraction from faces using deformable templates," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, pp. 104-109, June 1989.

[5] F. Galton, "Personal identification and description I," Nature, pp. 173-177, June 1888.

[6] L. D. Harmon and W. F. Hunt, "Automatic recognition of human face profiles," Computer Graphics and Image Processing, vol. 6, pp. 135-156, 1977.

[7] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, pp. 583-598, June 1991.

[8] L. Najman, R. Vaillant, and E. Pernot, "From face sideviews to identification," in Image Processing: Theory and Applications (G. Vernazza, ed.), pp. 299-302, Elsevier, June 1993.

[9] C. Wu and J. Huang, "Human face profile recognition by computer," Pattern Recognition, vol. 23, pp. 255-259, 1990.

[10] A. Zisserman, D. A. Forsyth, J. L. Mundy, and C. A. Rothwell, "Recognizing general curved objects efficiently," in Geometric Invariance in Computer Vision (J. Mundy and A. Zisserman, eds.), pp. 228-251, Cambridge, MA: MIT Press, 1992.

[11] T. Kanade, Picture Processing System by Computer Complex and Recognition of Human Faces. PhD thesis, Kyoto University, Department of Information Science, 1973.


More information

Template Matching Rigid Motion

Template Matching Rigid Motion Template Matching Rigid Motion Find transformation to align two images. Focus on geometric features (not so much interesting with intensity images) Emphasis on tricks to make this efficient. Problem Definition

More information

Eyes extraction from facial images using edge density

Eyes extraction from facial images using edge density Loughborough University Institutional Repository Eyes extraction from facial images using edge density This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation:

More information

Information technology Biometric data interchange formats Part 5: Face image data

Information technology Biometric data interchange formats Part 5: Face image data INTERNATIONAL STANDARD ISO/IEC 19794-5:2005 TECHNICAL CORRIGENDUM 2 Published 2008-07-01 INTERNATIONAL ORGANIZATION FOR STANDARDIZATION МЕЖДУНАРОДНАЯ ОРГАНИЗАЦИЯ ПО СТАНДАРТИЗАЦИИ ORGANISATION INTERNATIONALE

More information

Object Detection System

Object Detection System A Trainable View-Based Object Detection System Thesis Proposal Henry A. Rowley Thesis Committee: Takeo Kanade, Chair Shumeet Baluja Dean Pomerleau Manuela Veloso Tomaso Poggio, MIT Motivation Object detection

More information

Mingle Face Detection using Adaptive Thresholding and Hybrid Median Filter

Mingle Face Detection using Adaptive Thresholding and Hybrid Median Filter Mingle Face Detection using Adaptive Thresholding and Hybrid Median Filter Amandeep Kaur Department of Computer Science and Engg Guru Nanak Dev University Amritsar, India-143005 ABSTRACT Face detection

More information

Attentive Face Detection and Recognition? Volker Kruger, Udo Mahlmeister, and Gerald Sommer. University of Kiel, Germany,

Attentive Face Detection and Recognition? Volker Kruger, Udo Mahlmeister, and Gerald Sommer. University of Kiel, Germany, Attentive Face Detection and Recognition? Volker Kruger, Udo Mahlmeister, and Gerald Sommer University of Kiel, Germany, Preuerstr. 1-9, 24105 Kiel, Germany Tel: ++49-431-560496 FAX: ++49-431-560481 vok@informatik.uni-kiel.de,

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Ulrik Söderström 16 Feb Image Processing. Segmentation

Ulrik Söderström 16 Feb Image Processing. Segmentation Ulrik Söderström ulrik.soderstrom@tfe.umu.se 16 Feb 2011 Image Processing Segmentation What is Image Segmentation? To be able to extract information from an image it is common to subdivide it into background

More information

Model-Based Face Computation

Model-Based Face Computation Model-Based Face Computation 1. Research Team Project Leader: Post Doc(s): Graduate Students: Prof. Ulrich Neumann, IMSC and Computer Science John P. Lewis Hea-juen Hwang, Zhenyao Mo, Gordon Thomas 2.

More information

Object and Action Detection from a Single Example

Object and Action Detection from a Single Example Object and Action Detection from a Single Example Peyman Milanfar* EE Department University of California, Santa Cruz *Joint work with Hae Jong Seo AFOSR Program Review, June 4-5, 29 Take a look at this:

More information

Principal Component Analysis and Neural Network Based Face Recognition

Principal Component Analysis and Neural Network Based Face Recognition Principal Component Analysis and Neural Network Based Face Recognition Qing Jiang Mailbox Abstract People in computer vision and pattern recognition have been working on automatic recognition of human

More information

Categorization by Learning and Combining Object Parts

Categorization by Learning and Combining Object Parts Categorization by Learning and Combining Object Parts Bernd Heisele yz Thomas Serre y Massimiliano Pontil x Thomas Vetter Λ Tomaso Poggio y y Center for Biological and Computational Learning, M.I.T., Cambridge,

More information

Document Image Restoration Using Binary Morphological Filters. Jisheng Liang, Robert M. Haralick. Seattle, Washington Ihsin T.

Document Image Restoration Using Binary Morphological Filters. Jisheng Liang, Robert M. Haralick. Seattle, Washington Ihsin T. Document Image Restoration Using Binary Morphological Filters Jisheng Liang, Robert M. Haralick University of Washington, Department of Electrical Engineering Seattle, Washington 98195 Ihsin T. Phillips

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

Decision Making. final results. Input. Update Utility

Decision Making. final results. Input. Update Utility Active Handwritten Word Recognition Jaehwa Park and Venu Govindaraju Center of Excellence for Document Analysis and Recognition Department of Computer Science and Engineering State University of New York

More information

Template Matching Rigid Motion. Find transformation to align two images. Focus on geometric features

Template Matching Rigid Motion. Find transformation to align two images. Focus on geometric features Template Matching Rigid Motion Find transformation to align two images. Focus on geometric features (not so much interesting with intensity images) Emphasis on tricks to make this efficient. Problem Definition

More information

A Summary of Projective Geometry

A Summary of Projective Geometry A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]

More information

Recognition. Clark F. Olson. Cornell University. work on separate feature sets can be performed in

Recognition. Clark F. Olson. Cornell University. work on separate feature sets can be performed in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 907-912, 1996. Connectionist Networks for Feature Indexing and Object Recognition Clark F. Olson Department of Computer

More information

Subject-Oriented Image Classification based on Face Detection and Recognition

Subject-Oriented Image Classification based on Face Detection and Recognition 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition

Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition Berk Gökberk, M.O. İrfanoğlu, Lale Akarun, and Ethem Alpaydın Boğaziçi University, Department of Computer

More information

Component-based Face Recognition with 3D Morphable Models

Component-based Face Recognition with 3D Morphable Models Component-based Face Recognition with 3D Morphable Models B. Weyrauch J. Huang benjamin.weyrauch@vitronic.com jenniferhuang@alum.mit.edu Center for Biological and Center for Biological and Computational

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

CRF Based Point Cloud Segmentation Jonathan Nation

CRF Based Point Cloud Segmentation Jonathan Nation CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to

More information

An Object Detection System using Image Reconstruction with PCA

An Object Detection System using Image Reconstruction with PCA An Object Detection System using Image Reconstruction with PCA Luis Malagón-Borja and Olac Fuentes Instituto Nacional de Astrofísica Óptica y Electrónica, Puebla, 72840 Mexico jmb@ccc.inaoep.mx, fuentes@inaoep.mx

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Introduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale.

Introduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale. Distinctive Image Features from Scale-Invariant Keypoints David G. Lowe presented by, Sudheendra Invariance Intensity Scale Rotation Affine View point Introduction Introduction SIFT (Scale Invariant Feature

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

A face recognition system based on local feature analysis

A face recognition system based on local feature analysis A face recognition system based on local feature analysis Stefano Arca, Paola Campadelli, Raffaella Lanzarotti Dipartimento di Scienze dell Informazione Università degli Studi di Milano Via Comelico, 39/41

More information

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking Feature descriptors Alain Pagani Prof. Didier Stricker Computer Vision: Object and People Tracking 1 Overview Previous lectures: Feature extraction Today: Gradiant/edge Points (Kanade-Tomasi + Harris)

More information

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE) Features Points Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Finding Corners Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Object Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision

Object Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision Object Recognition Using Pictorial Structures Daniel Huttenlocher Computer Science Department Joint work with Pedro Felzenszwalb, MIT AI Lab In This Talk Object recognition in computer vision Brief definition

More information

An Evaluation of Information Retrieval Accuracy. with Simulated OCR Output. K. Taghva z, and J. Borsack z. University of Massachusetts, Amherst

An Evaluation of Information Retrieval Accuracy. with Simulated OCR Output. K. Taghva z, and J. Borsack z. University of Massachusetts, Amherst An Evaluation of Information Retrieval Accuracy with Simulated OCR Output W.B. Croft y, S.M. Harding y, K. Taghva z, and J. Borsack z y Computer Science Department University of Massachusetts, Amherst

More information

Illumination invariant face detection

Illumination invariant face detection University of Wollongong Research Online University of Wollongong Thesis Collection 1954-2016 University of Wollongong Thesis Collections 2009 Illumination invariant face detection Alister Cordiner University

More information

Gabor wavelets. convolution result jet. original image

Gabor wavelets. convolution result jet. original image Proceedings of the Intern. Workshop on Automatic Face- and Gesture-Recognition, 1995, Zurich, pages 92-97 Face Recognition and Gender Determination Laurenz Wiskott y, Jean-Marc Fellous z, Norbert Kruger

More information

A Hierarchical Face Identification System Based on Facial Components

A Hierarchical Face Identification System Based on Facial Components A Hierarchical Face Identification System Based on Facial Components Mehrtash T. Harandi, Majid Nili Ahmadabadi, and Babak N. Araabi Control and Intelligent Processing Center of Excellence Department of

More information

MINIMUM SET OF GEOMETRIC FEATURES IN FACE RECOGNITION

MINIMUM SET OF GEOMETRIC FEATURES IN FACE RECOGNITION MINIMUM SET OF GEOMETRIC FEATURES IN FACE RECOGNITION Ivana Atanasova European University - RM Skopje, RM Biljana Perchinkova European University RM Skopje, RM ABSTRACT Biometric technology is often used

More information

Boosting Sex Identification Performance

Boosting Sex Identification Performance Boosting Sex Identification Performance Shumeet Baluja, 2 Henry Rowley shumeet@google.com har@google.com Google, Inc. 2 Carnegie Mellon University, Computer Science Department Abstract This paper presents

More information

BMVC 1996 doi: /c.10.41

BMVC 1996 doi: /c.10.41 On the use of the 1D Boolean model for the description of binary textures M Petrou, M Arrigo and J A Vons Dept. of Electronic and Electrical Engineering, University of Surrey, Guildford GU2 5XH, United

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

Face Detection and Recognition in an Image Sequence using Eigenedginess

Face Detection and Recognition in an Image Sequence using Eigenedginess Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras

More information

Real time eye detection using edge detection and euclidean distance

Real time eye detection using edge detection and euclidean distance Vol. 6(20), Apr. 206, PP. 2849-2855 Real time eye detection using edge detection and euclidean distance Alireza Rahmani Azar and Farhad Khalilzadeh (BİDEB) 2 Department of Computer Engineering, Faculty

More information

On Modeling Variations for Face Authentication

On Modeling Variations for Face Authentication On Modeling Variations for Face Authentication Xiaoming Liu Tsuhan Chen B.V.K. Vijaya Kumar Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 xiaoming@andrew.cmu.edu

More information

ometry is not easily deformed by dierent facial expression or dierent person identity. Facial features are detected and characterized using statistica

ometry is not easily deformed by dierent facial expression or dierent person identity. Facial features are detected and characterized using statistica A FEATURE-BASED FACE DETECTOR USING WAVELET FRAMES C. Garcia, G. Simandiris and G. Tziritas Department of Computer Science, University of Crete P.O. Box 2208, 71409 Heraklion, Greece E-mails: fcgarcia,simg,tziritasg@csd.uoc.gr

More information

Sherlock 7 Technical Resource. Search Geometric

Sherlock 7 Technical Resource. Search Geometric Sherlock 7 Technical Resource DALSA Corp., Industrial Products (IPD) www.goipd.com 978.670.2002 (U.S.A.) Document Revision: September 24, 2007 Search Geometric Search utilities A common task in machine

More information

Coregistering 3D Models, Range, and Optical Imagery Using Least-Median Squares Fitting

Coregistering 3D Models, Range, and Optical Imagery Using Least-Median Squares Fitting Computer Science Technical Report Coregistering 3D Models, Range, and Optical Imagery Using Least-Median Squares Fitting Anthony N. A. Schwickerath J. Ross Beveridge Colorado State University schwicke/ross@cs.colostate.edu

More information

Edge and corner detection

Edge and corner detection Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements

More information

CAP 5415 Computer Vision Fall 2012

CAP 5415 Computer Vision Fall 2012 CAP 5415 Computer Vision Fall 01 Dr. Mubarak Shah Univ. of Central Florida Office 47-F HEC Lecture-5 SIFT: David Lowe, UBC SIFT - Key Point Extraction Stands for scale invariant feature transform Patented

More information

Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants

Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Yan Lin 1,2 Bin Kong 2 Fei Zheng 2 1 Center for Biomimetic Sensing and Control Research, Institute of Intelligent Machines,

More information

Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc. Abstract. Direct Volume Rendering (DVR) is a powerful technique for

Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc. Abstract. Direct Volume Rendering (DVR) is a powerful technique for Comparison of Two Image-Space Subdivision Algorithms for Direct Volume Rendering on Distributed-Memory Multicomputers Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc Dept. of Computer Eng. and

More information

Feature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1

Feature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1 Feature Detection Raul Queiroz Feitosa 3/30/2017 Feature Detection 1 Objetive This chapter discusses the correspondence problem and presents approaches to solve it. 3/30/2017 Feature Detection 2 Outline

More information

Improving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries

Improving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,

More information

Computers and Mathematics with Applications. An embedded system for real-time facial expression recognition based on the extension theory

Computers and Mathematics with Applications. An embedded system for real-time facial expression recognition based on the extension theory Computers and Mathematics with Applications 61 (2011) 2101 2106 Contents lists available at ScienceDirect Computers and Mathematics with Applications journal homepage: www.elsevier.com/locate/camwa An

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Face Alignment Under Various Poses and Expressions

Face Alignment Under Various Poses and Expressions Face Alignment Under Various Poses and Expressions Shengjun Xin and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn Abstract.

More information

HAND-GESTURE BASED FILM RESTORATION

HAND-GESTURE BASED FILM RESTORATION HAND-GESTURE BASED FILM RESTORATION Attila Licsár University of Veszprém, Department of Image Processing and Neurocomputing,H-8200 Veszprém, Egyetem u. 0, Hungary Email: licsara@freemail.hu Tamás Szirányi

More information

Global fitting of a facial model to facial features for model based video coding

Global fitting of a facial model to facial features for model based video coding Global fitting of a facial model to facial features for model based video coding P M Hillman J M Hannah P M Grant University of Edinburgh School of Engineering and Electronics Sanderson Building, King

More information

Local qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet:

Local qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet: Local qualitative shape from stereo without detailed correspondence Extended Abstract Shimon Edelman Center for Biological Information Processing MIT E25-201, Cambridge MA 02139 Internet: edelman@ai.mit.edu

More information

Comment on Numerical shape from shading and occluding boundaries

Comment on Numerical shape from shading and occluding boundaries Artificial Intelligence 59 (1993) 89-94 Elsevier 89 ARTINT 1001 Comment on Numerical shape from shading and occluding boundaries K. Ikeuchi School of Compurer Science. Carnegie Mellon dniversity. Pirrsburgh.

More information

Verslag Project beeldverwerking A study of the 2D SIFT algorithm

Verslag Project beeldverwerking A study of the 2D SIFT algorithm Faculteit Ingenieurswetenschappen 27 januari 2008 Verslag Project beeldverwerking 2007-2008 A study of the 2D SIFT algorithm Dimitri Van Cauwelaert Prof. dr. ir. W. Philips dr. ir. A. Pizurica 2 Content

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

Progress in Image Analysis and Processing III, pp , World Scientic, Singapore, AUTOMATIC INTERPRETATION OF FLOOR PLANS USING

Progress in Image Analysis and Processing III, pp , World Scientic, Singapore, AUTOMATIC INTERPRETATION OF FLOOR PLANS USING Progress in Image Analysis and Processing III, pp. 233-240, World Scientic, Singapore, 1994. 1 AUTOMATIC INTERPRETATION OF FLOOR PLANS USING SPATIAL INDEXING HANAN SAMET AYA SOFFER Computer Science Department

More information

Short Paper Boosting Sex Identification Performance

Short Paper Boosting Sex Identification Performance International Journal of Computer Vision 71(1), 111 119, 2007 c 2006 Springer Science + Business Media, LLC. Manufactured in the United States. DOI: 10.1007/s11263-006-8910-9 Short Paper Boosting Sex Identification

More information

Tracking facial features using low resolution and low fps cameras under variable light conditions

Tracking facial features using low resolution and low fps cameras under variable light conditions Tracking facial features using low resolution and low fps cameras under variable light conditions Peter Kubíni * Department of Computer Graphics Comenius University Bratislava / Slovakia Abstract We are

More information

Local Features: Detection, Description & Matching

Local Features: Detection, Description & Matching Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British

More information

Refine boundary at resolution r. r+1 r. Update context information CI(r) based on CI(r-1) Classify at resolution r, based on CI(r), update CI(r)

Refine boundary at resolution r. r+1 r. Update context information CI(r) based on CI(r-1) Classify at resolution r, based on CI(r), update CI(r) Context Based Multiscale Classication of Images Jia Li Robert M. Gray EE Department EE Department Stanford Univ., CA 94305 Stanford Univ., CA 94305 jiali@isl.stanford.edu rmgray@stanford.edu Abstract This

More information

The Processing of Form Documents

The Processing of Form Documents The Processing of Form Documents David S. Doermann and Azriel Rosenfeld Document Processing Group, Center for Automation Research University of Maryland, College Park 20742 email: doermann@cfar.umd.edu,

More information

MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES

MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada

More information

Estimation of eye and mouth corner point positions in a knowledge based coding system

Estimation of eye and mouth corner point positions in a knowledge based coding system Estimation of eye and mouth corner point positions in a knowledge based coding system Liang Institut für Theoretische Nachrichtentechnik und Informationsverarbeitung Universität Hannover, Appelstraße 9A,

More information

Line Segment Based Watershed Segmentation

Line Segment Based Watershed Segmentation Line Segment Based Watershed Segmentation Johan De Bock 1 and Wilfried Philips Dep. TELIN/TW07, Ghent University Sint-Pietersnieuwstraat 41, B-9000 Ghent, Belgium jdebock@telin.ugent.be Abstract. In this

More information

UW Document Image Databases. Document Analysis Module. Ground-Truthed Information DAFS. Generated Information DAFS. Performance Evaluation

UW Document Image Databases. Document Analysis Module. Ground-Truthed Information DAFS. Generated Information DAFS. Performance Evaluation Performance evaluation of document layout analysis algorithms on the UW data set Jisheng Liang, Ihsin T. Phillips y, and Robert M. Haralick Department of Electrical Engineering, University of Washington,

More information

Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image

Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Tsuyoshi Moriyama Keio University moriyama@ozawa.ics.keio.ac.jp Jing Xiao Carnegie Mellon University jxiao@cs.cmu.edu Takeo

More information

A Real Time System for Detecting and Tracking People. Ismail Haritaoglu, David Harwood and Larry S. Davis. University of Maryland

A Real Time System for Detecting and Tracking People. Ismail Haritaoglu, David Harwood and Larry S. Davis. University of Maryland W 4 : Who? When? Where? What? A Real Time System for Detecting and Tracking People Ismail Haritaoglu, David Harwood and Larry S. Davis Computer Vision Laboratory University of Maryland College Park, MD

More information

Chapter 11 Arc Extraction and Segmentation

Chapter 11 Arc Extraction and Segmentation Chapter 11 Arc Extraction and Segmentation 11.1 Introduction edge detection: labels each pixel as edge or no edge additional properties of edge: direction, gradient magnitude, contrast edge grouping: edge

More information

Design Specication. Group 3

Design Specication. Group 3 Design Specication Group 3 September 20, 2012 Project Identity Group 3, 2012/HT, "The Robot Dog" Linköping University, ISY Name Responsibility Phone number E-mail Martin Danelljan Design 072-372 6364 marda097@student.liu.se

More information

Multi-Modal Human- Computer Interaction

Multi-Modal Human- Computer Interaction Multi-Modal Human- Computer Interaction Attila Fazekas University of Debrecen, Hungary Road Map Multi-modal interactions and systems (main categories, examples, benefits) Face detection, facial gestures

More information

Automatic Detecting Neutral Face for Face Authentication and Facial Expression Analysis

Automatic Detecting Neutral Face for Face Authentication and Facial Expression Analysis From: AAAI Technical Report SS-03-08. Compilation copyright 2003, AAAI (www.aaai.org). All rights reserved. Automatic Detecting Neutral Face for Face Authentication and Facial Expression Analysis Ying-li

More information

2D Image Processing Feature Descriptors

2D Image Processing Feature Descriptors 2D Image Processing Feature Descriptors Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Overview

More information

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features

More information

Flow Estimation. Min Bai. February 8, University of Toronto. Min Bai (UofT) Flow Estimation February 8, / 47

Flow Estimation. Min Bai. February 8, University of Toronto. Min Bai (UofT) Flow Estimation February 8, / 47 Flow Estimation Min Bai University of Toronto February 8, 2016 Min Bai (UofT) Flow Estimation February 8, 2016 1 / 47 Outline Optical Flow - Continued Min Bai (UofT) Flow Estimation February 8, 2016 2

More information

Sea Turtle Identification by Matching Their Scale Patterns

Sea Turtle Identification by Matching Their Scale Patterns Sea Turtle Identification by Matching Their Scale Patterns Technical Report Rajmadhan Ekambaram and Rangachar Kasturi Department of Computer Science and Engineering, University of South Florida Abstract

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

BCC Sphere Transition

BCC Sphere Transition BCC Sphere Transition The Sphere Transition shape models the source image onto a sphere. Unlike the Sphere filter, the Sphere Transition filter allows you to animate Perspective, which is useful in creating

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington A^ ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Advances in Face Recognition Research

Advances in Face Recognition Research The face recognition company Advances in Face Recognition Research Presentation for the 2 nd End User Group Meeting Juergen Richter Cognitec Systems GmbH For legal reasons some pictures shown on the presentation

More information