CHAPTER 3 FACE DETECTION AND PRE-PROCESSING


3.1 INTRODUCTION

Detecting human faces automatically is becoming a very important task in many applications, such as security access control systems or content-based video indexing and retrieval systems. The automatic detection of human faces provides users with powerful indexing capabilities for video material. Although face detection is closely related to face recognition as a required preliminary step, face recognition algorithms have received most of the attention in the academic literature compared to face detection algorithms. Considerable progress has been made on the problem of face recognition, especially under stable conditions such as small variations in lighting, facial expression and pose. The illumination problem is basically the variability of an object's appearance from one image to another under slight changes in lighting conditions. Psychophysical experiments show that the human visual system can identify faces of the same person from novel images despite considerable variations in illumination. Lighting variation is a challenging problem in face recognition research and is regarded as one of the most critical factors for robust face recognition. The same person, with the same facial expression and view, appears very different under varying lighting conditions, as in Figure 3.1. Changes in lighting conditions produce a considerable decrease in the performance of a recognition system.

Figure 3.1 Face image with variation in pose, expression and illumination

The general block diagram of the proposed face detection system is shown in Figure 3.2.

Figure 3.2 Block diagram for face detection

3.2 COLOR MODELS FOR SKIN COLOR CLASSIFICATION

Skin color classification has gained increasing attention in recent years due to the active research in content-based image representation. For instance, the ability to locate an image object such as a face can be exploited for image coding, editing, indexing or other user-interactivity purposes. Moreover, face localization also provides a good stepping stone for facial expression studies. It would be fair to say that the most popular approach to face localization is the use of color information, whereby estimating areas with skin color is often the first vital step of such a strategy.

Hence, skin color classification has become an important task. Much of the research in skin color based face localization and detection is based on the RGB, YCbCr and HSI color spaces. These color spaces are described in this section.

3.2.1 RGB Color Space

The RGB color space consists of the three additive primaries: red, green and blue. Spectral components of these colors combine additively to produce a resultant color. The RGB model is represented by a 3-dimensional cube with red, green and blue along its axes (Figure 3.3). Black is at the origin and white is at the opposite corner of the cube; the gray scale follows the line from black to white. In a 24-bit color graphics system with 8 bits per color channel, red is (255, 0, 0); on the unit color cube it is (1, 0, 0).

Figure 3.3 RGB color cube

The RGB model simplifies the design of computer graphics systems but is not ideal for all applications. The red, green and blue color components are highly correlated, which makes it difficult to execute some image processing algorithms. Many processing techniques, such as histogram equalization, work on the intensity component of an image only.

3.2.2 YCbCr Color Space

In response to increasing demands for digital algorithms in handling video information, the YCbCr color space has been defined and has since become a widely used model in digital video. It belongs to the family of television transmission color spaces, which also includes models such as YUV and YIQ. YCbCr is a digital color system, while YUV and YIQ are analog spaces for the PAL and NTSC systems respectively. These color spaces separate RGB (red-green-blue) into luminance and chrominance information and are useful in compression applications, but the specification of colors is somewhat unintuitive. Recommendation 601 specifies 8-bit (i.e. 0 to 255) coding of YCbCr, whereby the luminance component Y has an excursion of 219 and an offset of +16. This coding places black at code 16 and white at code 235, reserving the extremes of the range for signal processing footroom and headroom. The chrominance components Cb and Cr have excursions of ±112 and an offset of +128, producing a range from 16 to 240 inclusive.

3.2.3 HSI Color Space

Hue, saturation and intensity are the three properties used to describe color, so it is logical to use the corresponding color model, HSI. In the HSI color space, the percentages of different primaries need not be known to produce a color: adjusting the hue gives the color required, a dark red is changed into pink by adjusting the saturation, and in the same way the intensity is altered to bring a change in contrast. Many applications use the HSI color model. Machine vision uses the HSI color space to identify the color of different objects. Image processing applications such as histogram operations, intensity transformations and convolutions operate only on an intensity image, and these operations are performed with much ease on an image in the HSI color space.

The hue (H) is represented as an angle varying from 0° to 360°. Saturation (S) corresponds to the radius, varying from 0 to 1. Intensity (I) varies along the z axis, with 0 being black and 1 being white. When S = 0, the color is a gray value of intensity I. When S = 1, the color lies on the boundary of the top cone base, as shown in Figure 3.4. The greater the saturation, the farther the color is from white/gray/black (depending on the intensity). Adjusting the hue varies the color from red at 0°, through green at 120° and blue at 240°, and back to red at 360°. When I = 0, the color is black and H is therefore undefined. When S = 0, the color is grayscale and H is again undefined. By adjusting I, a color can be made darker or lighter; by maintaining S = 1 and adjusting I, shades of that color are created.

Figure 3.4 Double cone model of HSI color space
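The conversion from RGB to HSI is not written out in this section; the following is a minimal sketch using the standard geometric formulation, assuming an RGB image stored as a floating-point array in [0, 1] (the function name is illustrative).

import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (float array in [0, 1], shape H x W x 3) to HSI.
    Standard geometric formulation: H in degrees [0, 360), S and I in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    # Hue is the angle measured from the red axis
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)
    # H is undefined when S = 0 (grayscale); set it to 0 by convention
    hue = np.where(saturation < eps, 0.0, hue)
    return np.stack([hue, saturation, intensity], axis=-1)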

3.3 FACE DETECTION

A feature-based three-stage scheme is proposed for face detection. The first stage is used to obtain skin regions. Face candidates are detected in the second stage using the statistical SLBM features. Finally, faces are detected from the face candidates in the third stage using an SVM classifier.

3.3.1 Skin-region Detection

In order to detect faces under varying illumination, reference white is used for lighting compensation. In this method, the top 5% of luminance values in the image are regarded as the reference white if the number of these pixels is sufficiently large (>100). The RGB components of the original image are then adjusted so that the average gray value of the reference-white pixels is linearly scaled to 255. Let i in [lu, 255] be the top 5% of gray levels and fi be the number of pixels of gray level i in the image; the modified RGB components can then be estimated.

3.3.1.1 Skin color based face detection in RGB color space

Crowley et al (2000) observed that perceived human skin color varies as a function of the relative direction to the illumination. The pixels of a skin region can be detected using a normalized color histogram, and can be further normalized for changes in intensity by dividing by luminance. An [R, G, B] vector is thus converted into an [r, g] vector of normalized color, which provides a fast means of skin detection. This gives the skin color region that localizes the face; the output is a face image detected from the skin region. This algorithm fails when other skin regions, such as legs and arms, are also present.
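A minimal sketch of the two steps above, assuming the image is held as a NumPy array with channels in RGB order; the function names and the percentile-based selection of the reference-white pixels are illustrative choices.

import numpy as np

def lighting_compensation(rgb, top_fraction=0.05, min_pixels=100):
    """Reference-white compensation: scale the RGB components so the mean
    gray value of the brightest top_fraction of pixels maps to 255.
    If too few bright pixels exist, the image is left unchanged."""
    gray = rgb.mean(axis=-1)                          # simple luminance proxy
    threshold = np.percentile(gray, 100.0 * (1.0 - top_fraction))
    reference = gray >= threshold
    if reference.sum() <= min_pixels:
        return rgb
    scale = 255.0 / gray[reference].mean()
    return np.clip(rgb * scale, 0, 255)

def normalized_rg(rgb):
    """Convert each [R, G, B] pixel to the normalized [r, g] chromaticity
    vector used for fast histogram-based skin detection."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-8
    return (rgb / s)[..., :2]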

3.3.1.2 Skin color based face detection in YCbCr color space

The skin color classification algorithm with color statistics gathered from the YCbCr color space is now detailed. Pixels belonging to a skin region exhibit similar Cb and Cr values. Furthermore, it has been shown that a skin color model based on the Cb and Cr values can provide good coverage of different human races. Let the thresholds be chosen as [Cr1, Cr2] and [Cb1, Cb2]; a pixel is classified as having skin tone if its values [Cr, Cb] fall within these thresholds. The skin color distribution gives the face portion of the color image. This algorithm also has the constraint that the face should be the only skin region in the image. The RGB components are converted into YCbCr components using Equation (3.1) to remove the effect of luminosity during processing:

Y  =  0.299 R + 0.587 G + 0.114 B
Cb = -0.169 R - 0.331 G + 0.500 B        (3.1)
Cr =  0.500 R - 0.419 G - 0.081 B

In the YCbCr representation, the luminance (brightness) information is contained in the Y component, and the chrominance information is contained in the Cb (blue) and Cr (red) components. The Cb and Cr components are independent of luminosity and give a good indication of whether a pixel is part of skin or not. Background and faces can be distinguished by applying maximum and minimum threshold values to both the Cb and Cr components. Thresholding is done with the condition in Equation (3.2): if the Cb and Cr values fall in the range shown below, the pixel is considered to represent the skin region,

Cb1 <= Cb <= Cb2  and  Cr1 <= Cr <= 165        (3.2)

In other words, pixels whose values fall in the above range are marked as 1s and the other pixels are marked as 0s, creating a binary image in which all skin color is represented by a 1.
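A sketch of this classification, assuming 8-bit RGB input; the +128 chrominance offset follows the usual 8-bit Recommendation 601 convention, and the Cb/Cr bounds shown are illustrative values to be tuned against Equation (3.2).

import numpy as np

def rgb_to_ycbcr(rgb):
    """Equation (3.1)-style conversion; the +128 offset on Cb and Cr is the
    usual 8-bit convention and is assumed here."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return y, cb, cr

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 165)):
    """Binary skin mask as in Equation (3.2): 1 where both Cb and Cr fall
    inside their thresholds, 0 elsewhere. The bounds are illustrative."""
    _, cb, cr = rgb_to_ycbcr(rgb.astype(np.float64))
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    return mask.astype(np.uint8)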

However, other parts of the body, such as exposed arms and legs, and other skin-colored background objects can also be captured. A post-processing step (e.g. morphological enhancement) is employed to extract all the skin objects separately for further classification.

3.3.1.3 Skin color based face detection in HSI color space

Skin color classification in the HSI color space is performed in the same way as in the YCbCr color space, but here the relevant values are hue (H) and saturation (S). Similar to the YCbCr model, the thresholds are chosen as [H1, H2] and [S1, S2], and a pixel is classified as having skin tone if its values [H, S] fall within the thresholds; this distribution gives the localized face image. Like the two algorithms above, this algorithm has the same constraint.

3.3.2 Face Detection using Skin Color

The skin region detected using the RGB, YCbCr and HSI models includes noise, and the removal of this noise is important to detect the exact face region. Morphological operations are therefore used to improve the face detection technique. A series of morphological operations is used as a post-processing step to extract all the skin objects separately for further classification. Figure 3.5 summarizes all the components in the process.

Figure 3.5 Morphological operations

The various morphological operations are explained below.

Flood Fill Operation

A flood-fill operation is performed on the binary image to fill the holes inside the face region. These holes arise from the eyes and lips, as shown in Figure 3.6, or are created at the boundary after applying the dilation operation. They tend to separate objects from each other and should therefore be removed.

Dilation Operation

The dilation operation is applied to the binary images with holes, using a disk structuring element of radius 1. This operation fine-tunes the boundary of the regions by connecting small breaks and slightly enlarges the shapes for further processing.

Remove Small Regions

Regions whose area is less than 200 pixels are removed, since they are too small to be a face region.

Opening Operation

Opening and closing operations are performed to remove noise. An opening operation is an erosion followed by a dilation. Here an opening operation is applied using a disk structuring element of size 3 to remove connections between closely connected regions, if there are any.

Connected Component Labeling

The connected component labeling algorithm is applied to obtain a separate component for each object. Ideally, each connected component should contain at most one face, but it is possible for a connected component to contain two or more faces. The eyes, ears and mouth are then extracted from the binary image by thresholding regions, such as the mouth, that are darker than a given threshold. Figures 3.7 and 3.8 show the detected skin region after the removal of noise and the mouth map.

Figure 3.6 Skin detection
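The post-processing chain above can be sketched with SciPy's ndimage module; the order of operations follows the description in this section, while the helper names and the exact structuring elements are illustrative.

import numpy as np
from scipy import ndimage

def disk(radius):
    """Disk-shaped structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def clean_skin_mask(mask, min_area=200):
    """Fill holes, dilate with a radius-1 disk, drop regions smaller than
    min_area pixels, open with a radius-3 disk, then label the remaining
    connected components (one candidate region per label)."""
    mask = ndimage.binary_fill_holes(mask.astype(bool))
    mask = ndimage.binary_dilation(mask, structure=disk(1))
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
    keep = ndimage.binary_opening(keep, structure=disk(3))
    labels, n = ndimage.label(keep)
    return labels, n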

Figure 3.7 Skin with noise removal

Figure 3.8 Mouth map

3.3.3 Feature Extraction Using Simplified Local Binary Mean (SLBM)

The simplified local binary mean (Priya & Banu 2012) overcomes a disadvantage of the LBP algorithm. LBP uses the center pixel of a block as the threshold and allocates a binary value to each neighbor by comparing it with the center pixel value. This produces 256 LBP bins, which take more space and slow down processing. In a 3 x 3 pixel block there are 8 neighbors around the center pixel, so a total of 2^8 = 256 different labels can be produced by the relation between the value of the center pixel and those of its neighbors. SLBM, on the other hand, uses the mean of the 9 pixels for thresholding. The method involves three steps: subdividing, thresholding and weighting. First, a 3 x 3 sub-image is cropped.

The pixel values are denoted Ip. Thresholding is done using the mean Im of the 9 elements of the 3 x 3 sub-image, based on the condition in Equation (3.3):

f(Ip - Im) = 1, if Ip >= Im
f(Ip - Im) = 0, otherwise        (3.3)

The SLBM operator thus assigns a binary value to each pixel of the 3 x 3 block under consideration. Weights are then assigned to the thresholded values and summed as specified in Equation (3.4), and the SLBM feature thus obtained is used for classification:

SLBM = sum over p = 0 to 7 of f(Ip - Im) * 2^p        (3.4)

Many images of different types can have similar histograms; histograms provide only a coarse characterization of an image, which is the main disadvantage of using them directly. Therefore, statistical features, namely the mean and standard deviation of the SLBM values over the whole image, are calculated. These statistical features are used for face detection with the SVM. Figure 3.9 shows a detected occluded face region.

Figure 3.9 Detected occluded face
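A minimal sketch of the SLBM computation over a grayscale image; the neighbor ordering used for the powers of two and the function names are illustrative choices.

import numpy as np

def slbm_block(block):
    """SLBM code for one 3 x 3 block: threshold the 8 neighbors against the
    mean of all 9 pixels (Equation 3.3) and weight the resulting bits by
    powers of two (Equation 3.4)."""
    mean = block.mean()
    neighbours = np.delete(block.flatten(), 4)       # drop the center pixel
    bits = (neighbours >= mean).astype(np.uint8)
    return int(np.dot(bits, 2 ** np.arange(8)))

def slbm_features(image):
    """Statistical SLBM features of a grayscale image: the mean and standard
    deviation of the SLBM codes computed over all 3 x 3 blocks."""
    h, w = image.shape
    codes = np.array([slbm_block(image[i:i + 3, j:j + 3].astype(np.float64))
                      for i in range(h - 2) for j in range(w - 2)])
    return codes.mean(), codes.std()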

3.4 FACE DETECTION USING SUPPORT VECTOR MACHINE

Support vector machines are a set of related supervised statistical learning methods that analyze data and recognize patterns. They are typically used for classification and regression analysis. Vapnik (1995) invented the original SVM algorithm, and the current standard incarnation (soft margin) was proposed by Cortes & Vapnik (1995). The basic SVM is a non-probabilistic binary linear classifier. It uses a set of training samples, each marked as belonging to one of two categories, to build a model that predicts into which category a new sample falls. Intuitively, an SVM model finds a gap that is as wide as possible separating the training samples of the two categories; new samples are then assigned to a category based on which side of the gap they fall on. More formally, an SVM constructs a hyperplane or set of hyperplanes in a high-dimensional space for classification, regression or other tasks. A good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the functional margin), since in general the larger the margin, the lower the generalization error of the classifier. Figure 3.10 shows the support vectors for linearly separable samples. The support vectors are the three samples lying on the dashed lines, and the solid line represents the hyperplane of the decision function.

Figure 3.10 SVM for the linearly separable samples
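The sketch below shows how a soft-margin SVM could be trained on the SLBM statistical features (mean, standard deviation) of candidate regions using scikit-learn; the feature vectors and labels here are purely hypothetical.

import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: each row is (SLBM mean, SLBM standard deviation)
# of a candidate region; labels are 1 = face, 0 = non-face.
X_train = np.array([[52.3, 18.1], [47.9, 16.4], [12.7, 40.2], [15.3, 37.8]])
y_train = np.array([1, 1, 0, 0])

clf = SVC(kernel="linear", C=1.0)        # soft-margin, linear decision boundary
clf.fit(X_train, y_train)

candidate = np.array([[50.1, 17.0]])     # SLBM statistics of a new candidate
print("face" if clf.predict(candidate)[0] == 1 else "non-face")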

3.5 PREPROCESSING

A face recognition system based on computing the distance between unprocessed gray-level images fails to recognize all the faces in the database and will confuse faces. Approaches to this problem can be classified into three main categories:

Illumination normalization: face images are preprocessed to normalize the illumination. Gamma correction, logarithmic transforms and histogram equalization are a few of the methods used here.

Invariant feature extraction: this approach attempts to extract facial features that are invariant to illumination variations. Edge maps, derivatives of the gray level, Gabor-like filters and Fisherfaces are a few of the methods applicable here.

Face modeling: illumination variations are mainly due to the 3D shape of human faces lit from different directions. Researchers have tried to construct generative 3D face models that can be used to render face images with different poses and under varying lighting conditions.

Figure 3.11 Varied illumination conditions

This research focuses only on the first category. The process, called illumination normalization, attempts to transform an image with an arbitrary lighting condition into an image with a standard lighting condition. The Gamma Intensity Correction method, the Logarithm Transform method, the Discrete Cosine Transform method and the Histogram Equalization method (global and local) are therefore analyzed as ways of processing face images to normalize lighting variations.

3.5.1 Use of Gamma Intensity Correction (GIC)

Gamma intensity correction is a technique commonly used in computer graphics. It controls the overall brightness of an image by changing the gamma parameter, and it can be used to correct lighting variations in a face image. Gamma correction is a pixel transform in which the output image is a power of the input image:

G(x, y) = I(x, y)^(1/gamma)        (3.5)

Depending on whether the transformed image should be darker or brighter, and considering a canonically illuminated face image Ic(x, y), the gamma value is chosen to minimize the difference between the transformed image and the predefined normal face image:

gamma* = arg min over gamma of sum over (x, y) of [I(x, y)^(1/gamma) - Ic(x, y)]^2        (3.6)

The intuitive effect of this transform is that the overall brightness of all processed images is adjusted to the same level as the normal face image Ic(x, y).
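A minimal sketch of Equations (3.5) and (3.6), assuming images normalized to [0, 1]; the grid of candidate gamma values is an illustrative choice.

import numpy as np

def gamma_correct(image, gamma):
    """Equation (3.5): pixel-wise power transform I(x, y)^(1 / gamma)
    applied to an image normalized to [0, 1]."""
    return np.power(image, 1.0 / gamma)

def best_gamma(image, canonical, candidates=np.linspace(0.2, 3.0, 29)):
    """Equation (3.6) as a simple grid search: pick the gamma whose corrected
    output is closest (in sum of squared differences) to the canonically
    illuminated reference image Ic."""
    errors = [np.sum((gamma_correct(image, g) - canonical) ** 2)
              for g in candidates]
    return candidates[int(np.argmin(errors))]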

Figures 3.12(a), 3.12(d) and 3.12(g) show the original images; the images in 3.12(b), 3.12(e) and 3.12(h) are transformed with gamma correction using gamma = 0.5, and the images in 3.12(c), 3.12(f) and 3.12(i) are transformed with gamma = 0.7.

Figure 3.12 Gamma intensity correction (panels a to i)

3.5.2 Logarithm Transform and its application

It can be assumed that the image gray level f(x, y) is proportional to the product of the reflectance r(x, y) and the illumination e(x, y):

f(x, y) = r(x, y) e(x, y)        (3.7)

Taking the logarithm,

log f(x, y) = log r(x, y) + log e(x, y)        (3.8)

If the incident illumination e(x, y) and the desired uniform illumination e' are known, a new gray-scale image under the desired uniform illumination can be formed:

log f'(x, y) = log r(x, y) + log e'
             = log r(x, y) + log e(x, y) - epsilon(x, y)
             = log f(x, y) - epsilon(x, y)        (3.9)

where epsilon(x, y) = log e(x, y) - log e'. The normalized face image can thus be obtained from the original image and an additive compensation term epsilon(x, y).

Figure 3.13 Pixel intensity mapping with the logarithmic transform

Figure 3.13 shows the mapping of pixel intensities ranging from 0 to 255. The result of applying the logarithm transform to a badly illuminated face image is an intensity enhancement of the shadowed regions. Figure 3.14 shows some original images and the corresponding log-transformed images.

Figure 3.14 Image results of the logarithmic transformation
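A minimal sketch of the intensity mapping of Figure 3.13 for 8-bit images; the full compensation of Equation (3.9) additionally requires an estimate of the incident illumination e(x, y), which is not modeled here.

import numpy as np

def log_transform(image):
    """Map 8-bit intensities through c * log(1 + I), with c chosen so the
    output still spans [0, 255]; dark (shadowed) regions are stretched most."""
    img = image.astype(np.float64)
    c = 255.0 / np.log(256.0)              # keeps log(1 + 255) mapped to 255
    return np.clip(c * np.log1p(img), 0, 255).astype(np.uint8)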

3.5.3 Discrete Cosine Transform (DCT) and its application

The discrete cosine transform provides an illumination normalization approach for face recognition under varying lighting conditions that keeps the facial features intact. The key idea of this method is that illumination variation can be significantly reduced by truncating low-frequency DCT coefficients in the logarithm DCT domain. The 2D M x N discrete cosine transform (DCT) is defined as

C(u, v) = alpha(u) alpha(v) sum over x = 0..M-1, y = 0..N-1 of f(x, y) cos[(2x + 1) u pi / 2M] cos[(2y + 1) v pi / 2N]        (3.10)

and the inverse transform is

f(x, y) = sum over u = 0..M-1, v = 0..N-1 of alpha(u) alpha(v) C(u, v) cos[(2x + 1) u pi / 2M] cos[(2y + 1) v pi / 2N]        (3.11)

where alpha(u) = sqrt(1/M) for u = 0 and sqrt(2/M) for u = 1, 2, ..., M-1, and alpha(v) = sqrt(1/N) for v = 0 and sqrt(2/N) for v = 1, 2, ..., N-1.

Illumination variation usually lies in the low-frequency band, so it can be reduced by removing low-frequency components, which is achieved by setting them to zero. The operation behaves like an ideal high-pass filter. For example, if the (p, q)th DCT coefficient is set to zero, then from Equation (3.10)

F(x, y) = F'(x, y) + E(x, y)        (3.12)

that is, F'(x, y) = F(x, y) - E(x, y), where

E(x, y) = alpha(p) alpha(q) C(p, q) cos[(2x + 1) p pi / 2M] cos[(2y + 1) q pi / 2N]

The desired normalized face image in the logarithm domain, F'(x, y), is basically the difference between the original image gray level F(x, y) and the illumination compensation E(x, y) in the logarithm domain. The first DCT coefficient determines the overall illumination of a face image, so it can be set to the same value for all images to obtain the desired uniform illumination, for example C(0, 0) = log mu, where mu is normally chosen near the middle gray level of the original image. Discarding DCT coefficients of the original image only adjusts the brightness, whereas discarding DCT coefficients of the logarithm image adjusts the illumination and recovers the reflectance characteristic of the face. The logarithm images could be used directly for recognition, skipping the inverse logarithm transform step after the inverse DCT. Some shadows may lie in the same frequency band as facial features rather than in the low-frequency band; as a consequence, illumination variation in small areas may not be correctly compensated, and if the logarithm image is restored to the original domain such incorrect adjustments can make it worse. This illumination compensation method therefore takes a badly illuminated face image, transforms it into the logarithm domain, computes the 2D discrete cosine transform and sets the low-frequency DCT coefficients to zero. The first DCT coefficient is set to the mean illumination level of the image to determine the overall illumination.

Then the inverse 2D discrete cosine transform is computed and the image is normalized. A half-lighted face image is highly correlated with the (0, 1)th basis image; therefore, the illumination compensation is obtained by discarding the (0, 1)th DCT coefficient. Figure 3.15 shows four illuminated human face images and the corresponding DCT-normalized images.

Figure 3.15 Image results of the discrete cosine transform
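A minimal sketch of the whole chain using SciPy's orthonormal DCT; which low-frequency coefficients are discarded and the target mean level are illustrative parameters.

import numpy as np
from scipy.fft import dctn, idctn

def dct_normalize(image, discard=((0, 1), (1, 0)), mean_level=128.0):
    """Log-domain DCT illumination normalization: zero selected low-frequency
    coefficients, fix C(0, 0) so the mean log intensity equals a uniform
    level, then invert the DCT and the logarithm."""
    log_img = np.log1p(image.astype(np.float64))
    coeffs = dctn(log_img, norm="ortho")
    for p, q in discard:                    # remove low-frequency variation
        coeffs[p, q] = 0.0
    m, n = image.shape                      # with norm="ortho", mean = C(0,0) / sqrt(M*N)
    coeffs[0, 0] = np.log1p(mean_level) * np.sqrt(m * n)
    return np.expm1(idctn(coeffs, norm="ortho"))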

3.5.4 Application of Histogram Equalization (HE)

An image histogram is a graphical representation of the tonal distribution in a digital image: it plots the number of pixels at each tonal value. The horizontal axis of the graph represents the tonal values, while the vertical axis represents the number of pixels with that particular tone. In particular, the histogram of a gray-scale image has the 256 brightness levels on the horizontal axis and the number of times each level appears in the image on the vertical axis. In histogram equalization, global contrast enhancement is obtained by using the cumulative distribution function of the image as a transfer function.

Figure 3.16 Image histogram before equalization

The result is a histogram that is approximately constant over all gray values. While local histogram equalization can enhance details of the image, global histogram equalization enhances the contrast of the whole image. The problem with this approach is that the output is not always realistic; in this case, however, the representation of the face image must be invariant to lighting variations rather than realistic.

Figure 3.16 shows a badly illuminated frontal face image and the corresponding histogram. As can be seen in the figure, the pixels are not balanced: there are more dark pixels than light ones. To equalize the image histogram, the cumulative distribution function (cdf) is computed. The cdf of each gray level is the sum of its own count and the counts of all previous gray levels in the image. The histogram equalization formula is then

h(v) = round( (cdf(v) - cdfmin) / (M N - cdfmin) x (L - 1) )        (3.13)

where cdfmin is the minimum value of the cumulative distribution function, M and N are the width and height of the image, and L is the number of gray levels used. The result is an equalized and normalized image. Figure 3.16 shows the face image before histogram equalization and Figure 3.17 shows the face image after histogram equalization.

Figure 3.17 Image histogram after equalization
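A minimal sketch of Equation (3.13) for an 8-bit grayscale image; the function name is illustrative.

import numpy as np

def histogram_equalize(image, levels=256):
    """Global histogram equalization following Equation (3.13): remap each
    gray level v to round((cdf(v) - cdfmin) / (M*N - cdfmin) * (L - 1))."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0][0]                    # first non-zero cdf value
    lut = np.round((cdf - cdf_min) / (image.size - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[image]                            # apply the mapping per pixel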

3.5.5 Local Histogram Equalization

In this case the pixels are balanced over all 256 gray levels locally. The local histogram equalization method is the same as the global one, but the image is divided into two equal parts by a vertical split, assuming that only half of the image is badly illuminated. Figure 3.18 shows the two local histograms. In the first histogram, corresponding to the left part of the image (the right side of the face), there are more light pixels, whereas in the second histogram, corresponding to the right part of the image (the left side of the face), there are more dark pixels. Figure 3.19 shows the image and its histogram after local histogram equalization.

Figure 3.18 Image and histogram before equalization
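A sketch of this local variant, reusing the histogram_equalize() function from the previous sketch: each vertical half of the image is equalized independently.

import numpy as np

def local_histogram_equalize(image):
    """Split the image into left and right halves and equalize each half
    independently with histogram_equalize() defined in the previous sketch."""
    half = image.shape[1] // 2
    left = histogram_equalize(image[:, :half])
    right = histogram_equalize(image[:, half:])
    return np.hstack([left, right])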

Figure 3.19 Image and histogram after local histogram equalization (LHE)

Figure 3.20 illustrates a badly illuminated frontal face image and the images normalized by each of the methods: GIC, log transform, DCT, histogram equalization and local histogram equalization.

Figure 3.20 Illumination normalization (panels: original, GIC, log transform, HE, LHE)

3.6 SUMMARY

In this chapter, skin color based face detection with five preprocessing methods for illumination normalization of face images has been proposed. A comparison has been made for detecting faces using skin color detection in the RGB, YCbCr and HSI color spaces, and it is found that the YCbCr and HSI color spaces are more efficient than RGB for classifying the skin region, although neither gives very good results on its own. Based on these results, the three algorithms are combined with the SLBM statistical features, and a binary SVM classifier is used to detect the face region in the image. The combined approach is robust and efficient in classifying the skin color region and the face region; the accuracy of the proposed algorithm is 98.18%. Five illumination normalization methods were analyzed: the Gamma Intensity Correction method (GIC), the Logarithm Transform method, the Discrete Cosine Transform method (DCT), the global Histogram Equalization method (HE) and the Local Histogram Equalization method (LHE). The experimental results on the GTAV face database show that a simple illumination normalization method, such as GIC, the Logarithm Transform, DCT or HE, can generally improve the image, and consequently the facial feature detection performance, compared with a non-preprocessed image. A more complex method, such as LHE, also gives good results in a feature detection system, although the resulting face image is not realistic.


Chapter 9 Morphological Image Processing

Chapter 9 Morphological Image Processing Morphological Image Processing Question What is Mathematical Morphology? An (imprecise) Mathematical Answer A mathematical tool for investigating geometric structure in binary and grayscale images. Shape

More information

Auto-Digitizer for Fast Graph-to-Data Conversion

Auto-Digitizer for Fast Graph-to-Data Conversion Auto-Digitizer for Fast Graph-to-Data Conversion EE 368 Final Project Report, Winter 2018 Deepti Sanjay Mahajan dmahaj@stanford.edu Sarah Pao Radzihovsky sradzi13@stanford.edu Ching-Hua (Fiona) Wang chwang9@stanford.edu

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

COLOR AND SHAPE BASED IMAGE RETRIEVAL

COLOR AND SHAPE BASED IMAGE RETRIEVAL International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR) ISSN 2249-6831 Vol.2, Issue 4, Dec 2012 39-44 TJPRC Pvt. Ltd. COLOR AND SHAPE BASED IMAGE RETRIEVAL

More information

Robbery Detection Camera

Robbery Detection Camera Robbery Detection Camera Vincenzo Caglioti Simone Gasparini Giacomo Boracchi Pierluigi Taddei Alessandro Giusti Camera and DSP 2 Camera used VGA camera (640x480) [Y, Cb, Cr] color coding, chroma interlaced

More information

Linear Discriminant Analysis for 3D Face Recognition System

Linear Discriminant Analysis for 3D Face Recognition System Linear Discriminant Analysis for 3D Face Recognition System 3.1 Introduction Face recognition and verification have been at the top of the research agenda of the computer vision community in recent times.

More information

Robot Learning. There are generally three types of robot learning: Learning from data. Learning by demonstration. Reinforcement learning

Robot Learning. There are generally three types of robot learning: Learning from data. Learning by demonstration. Reinforcement learning Robot Learning 1 General Pipeline 1. Data acquisition (e.g., from 3D sensors) 2. Feature extraction and representation construction 3. Robot learning: e.g., classification (recognition) or clustering (knowledge

More information

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1 Last update: May 4, 200 Vision CMSC 42: Chapter 24 CMSC 42: Chapter 24 Outline Perception generally Image formation Early vision 2D D Object recognition CMSC 42: Chapter 24 2 Perception generally Stimulus

More information

Digital Image Processing. Prof. P.K. Biswas. Department of Electronics & Electrical Communication Engineering

Digital Image Processing. Prof. P.K. Biswas. Department of Electronics & Electrical Communication Engineering Digital Image Processing Prof. P.K. Biswas Department of Electronics & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Image Segmentation - III Lecture - 31 Hello, welcome

More information

Critique: Efficient Iris Recognition by Characterizing Key Local Variations

Critique: Efficient Iris Recognition by Characterizing Key Local Variations Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher

More information