Research Paper · Volume 2, Issue 8, April 2015
International Journal of Informative & Futuristic Research · ISSN (Online): 2347-1697

Detecting Digital Image Forgeries By Multi-Illuminant Estimators

Paper ID: IJIFR/V2/E8/078 · Page No.: 2742-2748 · Subject Area: Computer Science & Engineering
Key Words: Illuminant Map, Image Forensics, Machine Learning, Spliced Image Detection, Texture And Edge Descriptors

Anu Merin Mathew (1), M.Tech. Scholar, Department of Computer Science & Engineering, Musaliar College of Engineering and Technology, Pathanamthitta, Kerala
Ajith John Varghese (2), Assistant Professor, Department of Computer Science & Engineering, Musaliar College of Engineering and Technology, Pathanamthitta, Kerala

Abstract
Digital images are now ubiquitous in daily life. With the growth of the Internet, the availability of low-priced digital cameras, and powerful image-editing software (such as Adobe Photoshop, Adobe Illustrator, and GNU GIMP), it is easy to manipulate digital images and difficult to distinguish an original from a tampered one. The field of image forensics has therefore attracted many researchers, and a variety of techniques have been proposed to verify the authenticity of an image. This paper analyses one of the most common forms of photographic manipulation, known as image composition or splicing. The proposed forgery detection method identifies inconsistencies in the color of the illumination across an image. The approach is machine-learning based and requires minimal user interaction; it applies to images containing two or more persons and needs no expert interaction to reach a tampering decision. To this end, both physics-based and statistics-based illuminant estimators are applied to image regions of comparable material. From the resulting illuminant estimates, texture- and edge-based features are extracted and fed to a machine-learning stage for automatic decision-making.
Classification performance using a naïve Bayes classifier is promising.
www.ijifr.com · Copyright IJIFR 2015
1. Introduction
Images, especially digital images, are among the most effective tools for communication. Nowadays, however, with the spread of computer editing software and the availability of low-cost cameras, tampering with images has become very easy, which undermines their authenticity. Image forgery is not a new concept; cases have been recorded as early as 1940. The only difference from conventional photography is that digital forgeries manipulate digital images instead of photographs. Digital image forensics is therefore a rising research area: it aims to authenticate images by detecting artifacts in them, and it is typically used in criminal investigations. Image forgery detection approaches can be classified into two categories: active and passive (blind). Active approaches focus on data hiding (e.g., watermarking, steganography) and digital signatures, and rely on prior information about the image. Passive approaches, by contrast, require no prior information about the image under investigation; they exploit the fact that editing image content tends to leave an uneven distribution of image features (e.g., statistical changes). Numerous techniques are available for reaching a tampering decision. There are also various image manipulation techniques, such as copy-move, splicing, retouching, and steganography. Among these, image splicing, or image composition, is the most common. It merges two or more images to create a new, fake image. Splicing appreciably modifies the original image(s) and combines content from more than one image into a single tampered image.
When two images with different lighting conditions or backgrounds are spliced together, it is comparatively difficult to make the boundaries imperceptible. Figure 1 shows a sample spliced image in which the man on the left has been inserted.
Figure 1: Example of a spliced image
In this work, an illuminant-based tampering decision takes an important step towards minimizing user interaction. The proposed method is automatic and significantly more reliable than earlier approaches. It builds on the idea of multi-illuminant analysis: illuminant color and texture features are extracted and passed to an objective classification stage based on a naïve Bayes classifier, which distinguishes this system from prior work.
2. Existing System
The existing system uses a forgery detection method that detects inconsistencies in the color of the illumination of images. The method is machine-learning based and needs minimal user interaction. Forgery detection techniques based on illuminant estimation can be divided into geometry-based and color-based methods. Geometry-based methods focus on inconsistencies in lighting, whereas color-based methods describe how the chromaticity of an object varies with different intensities of light. The process applies to images containing two or more individuals and requires no specialist interaction for the tampering conclusion. For this purpose, both physics-based and statistics-based illuminant estimators are evaluated on image regions of related materials. From these illuminant estimates, texture- and edge-based features are extracted and supplied to a machine-learning method for automatic decision-making. An SVM classifier is used to predict the decision.
2.1 Statistical-Based Techniques
Statistical illuminant estimators exploit statistical regularities in natural images to find various kinds of image manipulation. In particular, a statistical model is used to detect everything from basic image manipulations such as resizing and filtering, to discriminating photographic from computer-generated images, to detecting hidden messages (steganography).
2.2 Physics-Based Techniques
Physics-based approaches identify deviations in the three-dimensional interaction between the light, the camera, and the physical objects in the scene.
2.3 Issues In Exploiting Illuminant Maps
To illustrate the challenges in exploiting illuminant estimates, briefly consider the illuminant maps created by the technique of Riess and Angelopoulou.
In this technique, an image is subdivided into regions of similar color (superpixels). The illuminant color is estimated locally within each superpixel using all of its pixels. Recoloring every superpixel with its local illuminant color then produces a so-called illuminant map. A human expert can compare the input image with its illuminant map to detect inconsistencies.
3. Proposed System
The proposed forgery detection method classifies the illumination for each pair of faces in an image as consistent or inconsistent. It takes an important step towards minimizing user interaction in illuminant-based tampering decisions: it is a novel automatic detection method that is also significantly more reliable than earlier approaches. The method applies both physics-based and statistics-based multi-illuminant estimators to image regions of similar material, since local illuminant estimates are most discriminating when comparing objects of the same (or similar) material. From these illuminant estimates, texture- and edge-based features are extracted and passed to a machine-learning stage for automatic decision-making. Classification performance using a naïve Bayes classifier is promising.
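The illuminant-map construction described above can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: fixed square blocks stand in for the superpixels of Riess and Angelopoulou, and a gray-world estimate supplies each region's illuminant color.

```python
import numpy as np

def gray_world_illuminant(region):
    """Gray-world estimate: the mean RGB of the region, reduced to a
    unit-length chromaticity direction."""
    est = region.reshape(-1, 3).mean(axis=0)
    return est / (np.linalg.norm(est) + 1e-8)

def illuminant_map(image, block=16):
    """Recolor each block with its local illuminant estimate.
    Square blocks are a stand-in for the superpixels used in the paper."""
    h, w, _ = image.shape
    im = np.zeros((h, w, 3))
    for y in range(0, h, block):
        for x in range(0, w, block):
            im[y:y+block, x:x+block] = gray_world_illuminant(
                image[y:y+block, x:x+block])
    return im
```

Regions lit by the same source should receive similar colors in the resulting map; a spliced face lit differently shows up as an inconsistent patch.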
The proposed forgery detection method is organized into five main components; Figure 2 gives an overview.
3.1 Dense (Regional) Local Illuminant Evaluation
3.2 Face Extraction
3.3 Estimation Of Illuminant Features
3.4 Paired Face Features
3.5 Classification
Figure 2: Method overview. Training stage: original and composite images → dense local illuminant estimation → face extraction → extraction of SASI and HOGedge features → paired feature extraction → database of training feature vectors. Test stage: input image to classify → dense local illuminant estimation → face extraction → paired face features → naïve Bayes classifier → forgery detection.
3.1 Dense (Regional) Local Illuminant Evaluation
The input image is divided into homogeneous regions. For each illuminant estimator, a new image is generated in which every region is colored with its extracted illuminant color. The resulting intermediate representation is called an illuminant map (IM). To compute the illuminant color estimates, two separate illuminant estimators (IE) are employed:
the gray-world estimator, and a physics-based estimator known as the inverse intensity-chromaticity space. The two resulting illuminant maps are then examined separately.
(a) Gray-World Estimator
The gray-world assumption [4] states that the average color of the image scene is gray. Any deviation of the average intensity from gray is therefore attributed to the illuminant.
(b) Inverse Intensity-Chromaticity Estimator
The other illuminant estimator is the so-called inverse intensity-chromaticity (IIC) space. It assumes that image intensities are a mixture of diffuse and specular reflectance, and that pure specularities carry only the illuminant color. The RGB color space is first converted to the YCbCr color space, since it separates the intensity and chromaticity components.
3.2 Face Extraction
In face extraction, each face in the input image is extracted, either automatically or semi-automatically. In this work, an operator sets a bounding box around each face in the image under investigation, and these bounding boxes are then cropped out of each illuminant map. A human operator is preferred over automatic detection for two main reasons: first, it avoids false detections and missed faces; second, the operator can judge the lighting conditions in the scene context, which is important.
3.3 Estimation Of Illuminant Features
For every face region, both texture-based and gradient-based features are computed on the values of the illuminant map. Texture features are calculated using the Statistical Analysis of Structural Information (SASI) descriptor, and edge features using HOGedge, which builds on Histograms of Oriented Gradients (HOG) [5].
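The IIC intuition can be sketched as follows. This is a hedged simplification, not the paper's exact formulation (which operates in YCbCr): per-channel chromaticity is plotted against inverse intensity, and under the dichromatic reflection model the fitted line's intercept approximates the illuminant chromaticity.

```python
import numpy as np

def iic_illuminant_chromaticity(pixels):
    """Estimate illuminant chromaticity from an (n, 3) array of RGB pixels
    via inverse intensity-chromaticity space: for each channel, fit a line
    of chromaticity vs. 1/(R+G+B); the y-intercept (the limit of infinite
    specularity) approximates the illuminant chromaticity."""
    s = pixels.sum(axis=1) + 1e-8            # per-pixel intensity
    inv = 1.0 / s                            # inverse-intensity axis
    est = []
    for ch in range(3):
        chroma = pixels[:, ch] / s           # per-channel chromaticity
        slope, intercept = np.polyfit(inv, chroma, 1)  # line fit in IIC space
        est.append(intercept)                # intercept ≈ illuminant chromaticity
    return np.array(est)
```

On synthetic dichromatic pixels (a fixed diffuse term plus a varying specular term with known illuminant chromaticity), the intercept recovers the illuminant exactly; real images need pixels with sufficient specular variation for the fit to be meaningful.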
(a) Texture Feature Descriptor: Statistical Analysis Of Structural Information (SASI)
SASI measures the structural properties of textures. It is based on the autocorrelation of horizontal, vertical, and diagonal pixel lines over the image at different scales. The mean and standard deviation of these autocorrelation values are computed, and the resulting feature vector is normalized by subtracting its mean and dividing by its standard deviation.
Extraction Of Edge Points
The Canny edge detector is used to extract edge points. When an image is spliced, the statistics of these edges differ between original and doctored images. These edge discontinuities are characterized by a feature descriptor called HOGedge.
(b) Edge Descriptor: HOGedge (Histograms Of Oriented Gradients)
First, equally distributed edge points are obtained, and a HOG descriptor is computed for each edge point. The computed HOG descriptors are summarized in a visual dictionary (codebook). The appearance and shape of objects in an image are characterized by the distribution of edge directions: the image is divided into small regions called cells, a local 1-D histogram is calculated for each cell, and the feature vector is built by combining and contrast-normalizing the histograms of cells within a spatially larger block.
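A SASI-inspired texture feature can be sketched as follows. This is a loose approximation for illustration only: lag-based autocorrelation products along three line directions stand in for SASI's clique windows, followed by the mean/standard-deviation pooling and z-normalization described above.

```python
import numpy as np

def sasi_like_features(gray, lags=(1, 2, 4)):
    """Simplified SASI-style texture vector for a 2-D grayscale array:
    autocorrelation products of horizontal, vertical and diagonal pixel
    lines at several lags, pooled by mean and standard deviation, then
    z-normalized (subtract mean, divide by standard deviation)."""
    g = gray - gray.mean()
    feats = []
    for k in lags:
        products = [
            g[:, :-k] * g[:, k:],      # horizontal lag-k products
            g[:-k, :] * g[k:, :],      # vertical
            g[:-k, :-k] * g[k:, k:],   # main diagonal
        ]
        for p in products:
            feats.extend([p.mean(), p.std()])
    feats = np.array(feats)
    return (feats - feats.mean()) / (feats.std() + 1e-8)
```

With three lags and three directions this yields an 18-dimensional vector; the real SASI descriptor uses more window configurations and scales.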
3.4 Paired Face Features
In this step, all face pairs in the image are identified, and the feature vector of each face is concatenated with that of every other face. The idea is that the concatenated features of two faces look different when one face is original and the other is spliced. If an image contains n_f faces (n_f ≥ 2), there are n_f(n_f - 1)/2 possible face pairs.
3.5 Classification
An automatic machine-learning approach is used to classify the paired feature vectors. A naïve Bayes classifier labels each pair of faces as either consistently or inconsistently illuminated. The classifier stores all the training data.
4. Experiment And Results
The experiment is performed on a dataset containing both original and spliced images; the spliced images were created using Adobe Photoshop. The illuminant estimates are computed using the gray-world estimator and the inverse intensity-chromaticity space. The forgeries were created by inserting one or more individuals into a source image that already contained one or more persons. Classification performance using the naïve Bayes classifier is promising; a main advantage of naïve Bayes is that it requires only a small amount of training data to estimate the parameters necessary for classification. The method yields a detection rate of 89% on a new benchmark dataset of 100 images. The experiments show a large difference between the illuminant estimates of original and spliced images. This is because, while editing an image, the manipulator applies various operations to make it look more like an original, which alters the edge features of the image.
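The pairing and classification steps can be sketched as follows. The names `paired_face_features` and `GaussianNB` are illustrative, and the hand-rolled Gaussian naïve Bayes (uniform class priors) is a minimal stand-in for the classifier used in the paper.

```python
import numpy as np
from itertools import combinations

def paired_face_features(face_feats):
    """Concatenate the feature vectors of every face pair:
    n_f faces yield n_f*(n_f - 1)/2 pairs."""
    return [np.concatenate([face_feats[i], face_feats[j]])
            for i, j in combinations(range(len(face_feats)), 2)]

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means and variances,
    prediction by maximum log-likelihood (uniform class priors assumed)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) for c in self.classes_]) + 1e-9
        return self

    def predict(self, X):
        # Per-class sum over features of log N(x; mu, var); pick the best class.
        diff = X[:, None, :] - self.mu_[None, :, :]
        ll = -0.5 * (diff ** 2 / self.var_ + np.log(2 * np.pi * self.var_)).sum(axis=2)
        return self.classes_[ll.argmax(axis=1)]
```

In the paper's setting, each concatenated pair vector would be labeled "consistently" or "inconsistently" illuminated, and the classifier would be trained on the database of labeled pairs.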
Overall, the proposed technique provides better performance than the available illuminant-based classification methods, using a naïve Bayes classifier to predict whether the illumination of an image is consistent or not.
Figure 3: (a) Input image; (b) illuminant map created with the IIC-based illuminant estimator; (c) result of automatic face detection; (d) result of the Canny edge detector applied to this IM.
5. Conclusion
This paper presents a novel method for detecting forgery in digital images using illuminant color estimation. The illuminant color is estimated with the gray-world illuminant estimator, and automatic face detection is used. Texture information is extracted with the SASI algorithm, and edge-point information is obtained with the HOGedge algorithm. Classification of face pairs is then performed with a naïve Bayes classifier. The method requires only minimal user interaction and provides a better assessment of the authenticity of an image.
References
[1] T. J. de Carvalho, C. Riess, E. Angelopoulou, H. Pedrini, and A. de Rezende Rocha, "Exposing digital image forgeries by illumination color classification," IEEE Transactions on Information Forensics and Security, vol. 8, no. 7, Jul. 2013.
[2] A. Rocha, W. Scheirer, T. E. Boult, and S. Goldenstein, "Vision of the unseen: Current trends and challenges in digital image and video forensics," ACM Computing Surveys, vol. 43, pp. 1-42, 2011.
[3] H. Farid and M. J. Bravo, "Image forensic analyses that elude the human visual system," in Proceedings of the Symposium on Electronic Imaging (SPIE), 2010, pp. 1-10.
[4] S. Gholap and P. K. Bora, "Illuminant colour based image forensics," in Proceedings of the IEEE Region 10 Conference, 2008, pp. 1-5.
[5] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.