Conversion of 2D Image into 3D and Face Recognition Based Attendance System

Warsha Kandlikar, Toradmal Savita Laxman, Deshmukh Sonali Jagannath

Scientist C, Electronics Design and Technology, NIELIT Aurangabad, Aurangabad, India
M-Tech, Electronics Design and Technology, NIELIT Aurangabad, Aurangabad, India
M-Tech, Electronics Design and Technology, NIELIT Aurangabad, Aurangabad, India

ABSTRACT: The world is moving towards 3D, and 3D imaging has several advantages over 2D. Attendance systems based on fingerprint recognition already exist, but they have drawbacks: the sensor surface must be dust free, and because the finger must make contact with the sensor it has to be placed correctly. Attendance systems based on 2D face recognition also exist. To overcome the problems of 2D recognition, namely lighting and expression variations, 3D face recognition is used. We propose a system that takes student attendance by converting 2D images into a 3D image and then performing face detection and recognition. Two 2D images taken from two different cameras are converted into a 3D image using the binocular disparity technique, and this image is then used for face recognition. Three-dimensional face recognition (3D face recognition) is a modality of facial recognition in which the three-dimensional geometry of the human face is used. It can identify a face from a range of viewing angles, including a profile view, and three-dimensional data points from a face substantially improve the precision of facial recognition.

KEYWORDS: 3D conversion, stereo imaging, binocular disparity, face recognition

I. INTRODUCTION

We are developing an attendance system using a 3D face recognition technique. Two web cameras, calibrated using a stereo camera calibration technique, are used to capture the facial image. The camera images are used to analyse facial characteristics such as the distances between the eyes, mouth and nose. These measurements are stored in a database and compared with those of a subject standing before the cameras. In this system, reference images of the students in the class are stored in the database together with each student's information, such as roll number, class and year. Images are captured from two cameras that are placed parallel to each other and calibrated one with respect to the other. The captured image pair is converted into a 3D image. The face is detected in the captured image, recognised, and compared with the reference images stored in the database, and the student's attendance is then marked.

Fig. 1: Block diagram of the system
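The paper does not list the capture stage explicitly; as a minimal sketch of the two-web-camera capture described above, the following Python/OpenCV snippet grabs one roughly synchronised image pair. The device indices 0 and 1 and the output file names are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch: grab one image pair from two USB web cameras with OpenCV.
# Device indices 0 and 1 and the output file names are assumptions.
import cv2

def capture_stereo_pair(left_index=0, right_index=1):
    """Open both cameras, grab a frame from each, and return the pair."""
    cap_left = cv2.VideoCapture(left_index)
    cap_right = cv2.VideoCapture(right_index)
    if not (cap_left.isOpened() and cap_right.isOpened()):
        raise RuntimeError("Could not open both cameras")
    # Grab first, then retrieve, to keep the two exposures close in time.
    cap_left.grab()
    cap_right.grab()
    ok_l, left = cap_left.retrieve()
    ok_r, right = cap_right.retrieve()
    cap_left.release()
    cap_right.release()
    if not (ok_l and ok_r):
        raise RuntimeError("Frame capture failed")
    return left, right

if __name__ == "__main__":
    left_img, right_img = capture_stereo_pair()
    cv2.imwrite("left.png", left_img)
    cv2.imwrite("right.png", right_img)
```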

For the conversion of 2D images into a 3D image, the binocular disparity algorithm is used, in which two or more 2D images serve as input. First, a disparity image is obtained from the two images, and this disparity image is then converted into a 3D image. Haar cascades are used for face detection, and the eigenfaces technique is used for recognition.

II. SURVEY

2D-to-3D conversion is the process of transforming 2D ("flat") material into 3D form; 2D-to-stereo-3D conversion is also called stereo conversion. The vast majority of face recognition systems use typical intensity images of the face, which we refer to as 2D images. In contrast, a 3D image of the face is one that represents three-dimensional shape. The world of 3D incorporates the third dimension of depth, which can be perceived by human vision in the form of binocular disparity. An important step in any 3D system is 3D content generation. Several special cameras have been designed to generate 3D models directly; for example, a stereoscopic dual-camera uses a co-planar configuration of two separate monoscopic cameras, each capturing one eye's view, and depth information is computed using binocular disparity. 2D-to-3D conversion methods recover the depth information by analysing and processing the 2D image structures. Depending on the number of input images, existing conversion algorithms can be categorised into two groups: algorithms based on two or more images, and algorithms based on a single still image. Here we use the former approach, i.e. conversion based on two images.

Depth-based conversion: Most semi-automatic methods of stereo conversion use depth maps and depth-image-based rendering. The depth map is a separate greyscale image with the same dimensions as the original 2D image, in which the shade of grey indicates the depth of every part of the frame.

Binocular disparity: With two images of the same scene captured from slightly different viewpoints, binocular disparity can be used to recover the depth of an object; this is the main mechanism of depth perception. First, a set of corresponding points in the image pair is found. Then, by means of triangulation, the depth information can be retrieved with a high degree of accuracy when all the parameters of the stereo system are known. When only the intrinsic camera parameters are available, the depth can be recovered correctly up to a scale factor; when no camera parameters are known, the resulting depth is correct only up to a projective transformation. Assume p_l and p_r are the projections of the 3D point P on the left and right images, and O_l and O_r are the origins of the camera coordinate systems of the left and right cameras. From the relationship between the similar triangles (P, p_l, p_r) and (P, O_l, O_r) shown in Fig. 2, the depth value Z of the point P can be obtained as:

Fig. 2: Disparity

Z = \frac{fT}{d}, \qquad d = x_l - x_r,

where f is the focal length, T is the baseline between the two cameras, and the disparity d measures the difference in retinal position between corresponding image points. For example, with f = 800 pixels, T = 60 mm and d = 12 pixels, the formula gives Z = 800 x 60 / 12 = 4000 mm. The disparity value of a point is thus interpreted as the inverse of the distance to the observed object, so finding the disparity map is essential for constructing the depth map. Epipolar geometry and camera calibration are the two most frequently used constraints; with these two constraints, image pairs can be rectified. Another widely accepted assumption is the photometric constraint, which states that the intensities of corresponding pixels are similar to each other. The ordering constraint states that the order of points in the image pair is usually the same.

III. RELATED WORK

System development consists of three major steps: 2D-to-3D conversion, face recognition, and the attendance system.

A. Conversion of 2D Images into 3D

Binocular stereo uses only two images, taken with cameras separated by a horizontal distance known as the baseline. Calibrating the stereo camera system allows us to compute the 3D world points in actual units, such as millimetres, relative to the cameras. The process consists of the following steps (a minimal code sketch of the pipeline is given after the list).

1. Calibrate the stereo camera system: calibration involves estimating the intrinsic parameters of each camera, as well as the translation and rotation of the second camera relative to the first. These parameters can be used to rectify a stereo pair of images so that the two image planes appear parallel. The rectified images are then used to compute the disparity map required to reconstruct the 3D scene. A typical calibration pattern is an asymmetric checkerboard.

2. Rectify the pair of stereo images: rectification is an important pre-processing step for computing disparity, because it reduces the 2D stereo correspondence problem to a 1D problem. Rectified images appear as though the two image planes are parallel and row-aligned, which means that for each point in image 1 the corresponding point in image 2 lies along the same row.

3. Compute disparity: the distance in pixels between corresponding points in the rectified images is called disparity. The disparity is used for 3D reconstruction, because it is proportional to the camera baseline and inversely proportional to the distance to the 3D world point.

4. 3D reconstruction: reconstruct the 3D world coordinates of the point corresponding to each pixel from the disparity map.
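The sketch below illustrates steps 2 to 4 using OpenCV's Python API, under the assumption that the calibration of step 1 has already been done (for example with cv2.stereoCalibrate on checkerboard pairs) and has produced the intrinsics K1, D1, K2, D2 and the extrinsics R, T. The parameter values and file names are illustrative, not the authors' actual settings.

```python
# Hedged sketch of steps 2-4 (rectify, disparity, 3D reconstruction) with OpenCV.
import cv2
import numpy as np

def reconstruct_3d(left_path, right_path, K1, D1, K2, D2, R, T):
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    size = (left.shape[1], left.shape[0])  # (width, height)

    # Step 2: rectification - compute row-aligned projections for both cameras.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)

    # Step 3: disparity by block matching along the same row (d = x_l - x_r).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0

    # Step 4: reproject every pixel to 3D; Q encodes the Z = fT/d relation.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    return disparity, points_3d
```

The returned points_3d array can be saved or displayed in a 3D viewer, which corresponds to the reconstructed image shown in Section IV.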

Fig. 3: Flow chart of 2D-to-3D conversion

B. Face Recognition

Face recognition is the process of putting a label to a known face. Just as humans learn to recognise family, friends and celebrities simply by seeing their faces, there are many techniques by which a computer can learn to recognise a known face. It involves four main steps (a minimal sketch of the detection and recognition stages follows this list):

1. Face detection: the process of locating a face region in an image.
2. Face pre-processing: the process of adjusting the detected face image, typically a small greyscale crop, so that it looks cleaner and more consistent with the other faces.
3. Collect and learn faces: the process of saving many pre-processed faces and then learning how to recognise them.
4. Face recognition: the process of checking which of the collected people is most similar to the face in front of the camera.
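As a hedged illustration of these four steps, the sketch below combines the Haar cascade detector and the eigenfaces recogniser named earlier, using OpenCV's Python API. It assumes opencv-contrib-python is installed (for the cv2.face module); the gallery file names, the label numbering and the 100x100 face size are hypothetical choices, not the authors' actual settings.

```python
# Hedged sketch: Haar-cascade face detection plus eigenfaces recognition.
import cv2
import numpy as np

FACE_SIZE = (100, 100)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_preprocess(image_path):
    """Steps 1 and 2: detect the largest face and return a normalised crop."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    face = cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE)
    return cv2.equalizeHist(face)  # simple lighting normalisation

# Step 3: collect and learn faces (one label per student roll number).
gallery = {1: "student1.png", 2: "student2.png"}   # hypothetical database images
samples, labels = [], []
for roll_no, path in gallery.items():
    face = detect_and_preprocess(path)
    if face is not None:
        samples.append(face)
        labels.append(roll_no)
recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(samples, np.array(labels))

# Step 4: recognise the face in a new capture and mark attendance for that label.
probe = detect_and_preprocess("classroom_capture.png")
if probe is not None:
    roll_no, distance = recognizer.predict(probe)
    print("Recognised roll number:", roll_no, "distance:", distance)
```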

IV. RESULT AND DISCUSSION

For calibration we have taken fourteen pairs of images, from which the intrinsic and extrinsic camera parameters were obtained. These parameters are then used for finding the disparity map.

Fig. 4: (a) left image, (b) right image, (c) disparity image

Standard images have been used for the conversion of the 2D image pair into 3D. As shown in the figure above, (a) is the left image and (b) is the right image, while (c) shows the disparity image computed from the left and right images. This disparity image is then used to reconstruct the 3D image; the following figure shows the reconstructed result displayed in a 3D viewer.

Fig.: 3D reconstructed image in a 3D viewer

V. CONCLUSION

We have carried out the conversion of 2D images into 3D. Recognition and analysis of a 3D face image pose additional challenges and are difficult to implement in a 3D face recognition based attendance system, so we have used the 2D image for recognition and attendance estimation.

VI. FUTURE WORK

We are developing the system for the attendance of 10 to 15 students, and it can be extended to a full class with a larger database. Furthermore, using more than two images for the reconstruction of the 3D image would give a more accurate disparity image, which could then be used for face recognition.

ACKNOWLEDGMENT

The authors wish to acknowledge all the individuals who helped, directly and indirectly, with this research. The guidance and support received from all the members who contributed, and who are contributing, to this project was vital to its success. We also thank the several individuals and organisations that were of enormous help in the development of our work.
