Video Inter-frame Forgery Identification Based on Optical Flow Consistency


Sensors & Transducers, © 2014 by IFSA Publishing, S. L. (http://www.sensorsportal.com)

Qi Wang, Zhaohong Li, Zhenzhen Zhang, Qinglong Ma
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
Tel.: +86--568439, fax: +86--568439
E-mail: {323, zhhli2, 53, qlma}@bjtu.edu.cn
Article number P_932

Received: 3 December 2013 / Accepted: 28 February 2014 / Published: 3 March 2014

Abstract: Identifying inter-frame forgery is a hot topic in video forensics. In this paper, we propose a method based on the assumption that the optical flows of an original video are consistent, while in forgeries this consistency is destroyed. We first extract the optical flow from the frames of a video, and then calculate the optical flow consistency after normalization and quantization as a distinguishing feature to identify inter-frame forgeries. We train a Support Vector Machine to classify original videos and video forgeries using the optical flow consistency features of sample videos, and test the classification accuracy on a large database. Experimental results show that the proposed method is efficient in classifying original videos and forgeries. Furthermore, it also performs well in classifying frame insertion and frame deletion forgeries. Copyright © 2014 IFSA Publishing, S. L.

Keywords: Inter-frame forgeries, Frame insertion, Frame deletion, Optical flow consistency, Video forensics, Classification.

1. Introduction

Nowadays, with the ongoing development of video editing techniques, it has become increasingly easy to modify digital videos. How to identify the authenticity of videos has therefore become an important problem in information security. Video forensics aims to find features that can distinguish video forgeries from original videos, so that the authenticity of a given video can be established.
A family of distinguishing methods based on video content, comprising copy-move detection and inter-frame tampering detection, has become a hot topic in video forensics. Chih-Chung et al. [1] used the correlation of noise residue to locate forged regions in a video. Leida et al. [2] presented a method to detect removed objects in video by employing the magnitude and orientation of motion vectors. Subramanyam et al. [3] detected spatial and temporal copy-paste tampering based on Histogram of Oriented Gradients (HOG) feature matching and video compression properties. For inter-frame forgeries, such as frame insertion and frame deletion, some effective methods have been proposed [4, 5]. Juan et al. [4] presented a detection scheme based on optical flow consistency, exploiting the fact that inter-frame forgery disturbs this consistency. Tianqiang et al. [5] proposed a method that uses gray values to represent the video content and detects tampered videos including frame deletion, frame insertion and frame substitution. In this paper, we propose a simple yet efficient method based on optical flow consistency to distinguish inter-frame forgeries from original

videos. The basic idea of our method is that the optical flows of original sequences are consistent, while the optical flows of inter-frame forgeries have abnormal points. We first calculate the optical flow values between sequential frames of a video, and then use a Support Vector Machine (SVM) [6] to classify original videos and inter-frame forgeries. We test our method on a large database, and the results show that it achieves high detection accuracy.

The rest of the paper is organized as follows. Section 2 reviews optical flow generation. Section 3 introduces how inter-frame forgery affects the optical flow and how the optical flow features are extracted. Experimental results and discussions are presented in Section 4, and Section 5 concludes the paper.

2. Review of Optical Flow

The Lucas-Kanade optical flow was proposed by B. D. Lucas and T. Kanade. It has been widely used in layered motion, mosaic construction, and face coding. Recently it has been used in video inter-frame forgery detection [4], based on the assumption that the optical flows of adjacent frames in original videos are consistent, and that inter-frame forgery disturbs this consistency. The framework of the method [4] is shown in Fig. 1.

Fig. 1. The framework to extract the Lucas-Kanade optical flow.

The main steps to extract the Lucas-Kanade optical flow are as follows:

1. For each image, take its odd rows to form an odd image and its even rows to form an even image. This spatial sampling is conducted to reduce the computational complexity. Fig. 2 shows an original image together with the resulting odd image and even image.

Fig. 2. Spatial sampling of the original image.

2. Build a pyramid for each image. In Fig. 3, the picture at the bottom is the original odd image, and the four pictures above it form the pyramid built on it. The picture directly above the bottom image is sampled every 2 rows and 2 columns from the original odd image.

Fig. 3. 5-layer pyramid of the odd image.
Accordingly, the other upper images are respectively sampled every 3/4/5 rows and the same numbers of columns from the original odd image.

3. Estimate the motion vectors in both the X and Y directions of each layer in Fig. 3, from top to bottom, between the two images.

4. Expand the motion vector in the top layer twice and add it to the layer below it in both the X and Y directions, then smooth the sum of these two layers. Repeat this for each layer down to the bottom layer. Finally, we obtain the optical flow between the two odd images. The optical flows of the odd image and the even image are almost the same, while that of the original image is the sum of the two. Calculating the optical flows of the odd images only reduces the computational complexity while keeping the optical flow feature.

5. For two frames P and Q, two optical flow figures OFX_(P,Q) and OFY_(P,Q), which hold the optical flow vectors in 2D space, are computed. The absolute values of the optical flow at each pixel are summed with equation (1):

    S_{(P,Q)}(x) = \sum_{i=1}^{I} \sum_{j=1}^{J} |OFX_{(P,Q)}(i, j)|,    (1)

where S_{(P,Q)}(x) is the sum of the optical flow values between frame P and frame Q in the X direction, and

X can be replaced with Y to calculate S_{(P,Q)}(y), the sum of the optical flow values between frame P and frame Q in the Y direction; I and J are the numbers of pixels in each row and each column of the optical flow figure.

3. Proposed Method

As noted above, [4] supposed that the variability of the optical flow in videos without tampering should be consistent, and that this consistency is destroyed by deleted or inserted frames. They therefore used a window-based rough detection method and a binary searching scheme to find where the tampered frames are. In this paper, we propose a method that uses the optical flow consistency feature to distinguish original videos from inter-frame video forgeries. The difference between our method and the previous work [4] is that the method in [4] was used to detect where a video was tampered. Although the authors mentioned that videos without abnormal points were considered original, they did not test this idea or give any experimental results [4]. Our method, in contrast, aims to identify whether a video was tampered and to distinguish video forgeries from original videos. Since their method cannot distinguish original videos and forgeries directly, we improve it by using normalization and quantization to obtain a distinguishing feature of fixed length, and then use an SVM to classify original videos and inter-frame forgeries.

3.1. Analysis of Optical Flow Consistency for Original Videos and Inter-frame Forged Videos

Optical flow is efficient at depicting the continuity of frames. For a frame-tampered video, the optical flow changes greatly at the tampered point. In the original video database, every frame is untampered. Three continuous frames from an original video are shown in Fig. 4.

Fig. 4. Three continuous frames from an original video.

These three frames are continuous in a video. We can see that the motion of the people is small and the background shows almost no difference.
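A minimal NumPy sketch of the Lucas-Kanade machinery reviewed in Section 2 — the odd-row sampling (step 1), the pyramid construction (step 2), and the per-direction summation of equation (1) — assuming plain arrays in place of real frames and flow fields:

```python
import numpy as np

def odd_image(img):
    """Step 1: keep every other row of the frame (the 'odd image')."""
    return img[0::2, :]

def build_pyramid(img, levels=5):
    """Step 2: build the 5-layer pyramid of Fig. 3 by repeatedly
    keeping every 2nd row and every 2nd column of the layer below."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])
    return pyr

def flow_sum(of):
    """Equation (1): sum of the absolute optical-flow values of all
    pixels in one direction (X or Y) between two frames."""
    return float(np.abs(of).sum())

frame = np.zeros((64, 64))
print([p.shape for p in build_pyramid(odd_image(frame))])
# [(32, 64), (16, 32), (8, 16), (4, 8), (2, 4)]
print(flow_sum(np.full((4, 4), -0.5)))  # 8.0
```

The actual motion-vector estimation between pyramid layers (steps 3-4) is omitted; in practice a library implementation of pyramidal Lucas-Kanade would supply the per-pixel flow passed to `flow_sum`.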
In fact, a whole video is composed of hundreds of frames. The fluctuation of the optical flows between adjacent frames is shown in Fig. 5: Fig. 5(a) shows the fluctuation in the X direction and Fig. 5(b) the fluctuation in the Y direction. We can see that the optical flow values of sequential frames fluctuate within a small range, and that the fluctuations in the X and Y directions are very similar. That is to say, the optical flows of original frames are consistent in both the X and Y directions.

Fig. 5. The optical flow of an original video.

Each video in the 25-frame-inserted and 100-frame-inserted video databases has 25 or 100 frames inserted somewhere. Fig. 6 shows four frames taken from a 100-frame-inserted video. The first and fourth images were adjacent frames before the insertion; we then inserted frames between them. The inserted frames are continuous: in Fig. 6, the second and third images are the first and last frames of the sequence that was inserted into the original video. For a whole 100-frame-inserted video, the fluctuation of the optical flows between adjacent frames is shown in Fig. 7: Fig. 7(a) shows the fluctuation in the X direction and Fig. 7(b) the fluctuation in the Y direction. In Fig. 7 we can see two peaks, which represent the two

tampered points. The front peak denotes the start point of the insertion, while the latter peak represents the end point. The fluctuations at these two points are much larger than those at the other points.

Fig. 6. Four continuous frames from a 100-frame-inserted video.

Fig. 7. The optical flow of a 100-frame-inserted video.

Similarly, every video in the 25-frame-deleted and 100-frame-deleted video databases has 25 or 100 frames deleted somewhere. Fig. 8 shows three frames taken from a 100-frame-deleted video. The first two images are the frames before the deletion point, and the third image is the frame following it after frames were deleted from the original video; in fact, we cannot see much difference between the last two images. For a whole 100-frame-deleted video, the fluctuation of the optical flows between adjacent frames is shown in Fig. 9, where we can see one peak. The peak denotes the deletion point, where the fluctuation of the optical flows is much larger than that at the other points.

Fig. 8. Three continuous frames from a 100-frame-deleted video.

Fig. 9. The optical flow of a 100-frame-deleted video.

3.2. Framework of the Inter-frame Forgery Identification Scheme

In this paper, an optical flow consistency based inter-frame forgery identification scheme is proposed. The framework of our method is shown in Fig. 10.
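The peaks in Fig. 7 and Fig. 9 suggest a simple way to flag abnormal points. The sketch below is purely illustrative (the paper's scheme classifies feature histograms with an SVM rather than thresholding): it marks frame gaps whose optical-flow sum deviates from the sequence median by more than an assumed multiple of the median absolute deviation.

```python
import numpy as np

def abnormal_points(flow_sums, k=5.0):
    """Flag indices whose optical-flow sum deviates from the median by
    more than k median-absolute-deviations -- a rough stand-in for the
    peaks of Fig. 7 (insertion) and Fig. 9 (deletion). The threshold
    rule and k=5.0 are illustrative assumptions, not the paper's."""
    s = np.asarray(flow_sums, dtype=float)
    med = np.median(s)
    mad = np.median(np.abs(s - med)) + 1e-12  # avoid division-free zero MAD
    return np.flatnonzero(np.abs(s - med) > k * mad)

# A consistent sequence with one large jump at index 7 (a tampering point).
sums = [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 0.9, 9.0, 1.1, 1.0]
print(abnormal_points(sums))  # [7]
```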

Fig.. The framework of video inter-frame forgery identification. The detailed scheme of our method is as follow. Since the method of extracting optical flow consistency feature in Y direction is the same as in X direction, we just take the feature extraction in X direction for example.. Calculate the optical flows between adjacent frames. The method is based on the extraction of Lucas Kanade optical flow. Equation () is used to calculate optical flows in X direction between two frames. X can be replaced with Y to calculate optical flow in Y direction. For a video with M frames, get a (M-) 2 matrix to store the optical flows of the video, where the st column stores the optical flows in X direction and the 2nd column stores them in Y direction. 2. Normalize and quantize the optical flows. For the st column of the matrix, find the maximum value m and divide every element in the column by m. Then quantify all the elements into D quantization levels. As all of them are distributed from to, the quantization interval is /D. 3. Count the statistical distribution of the quantified optical flows. After the normalization and quantization steps, the M- elements are all discrete values from /D to (/D, 2/D,, ). Count the distribution of the discrete values into a D vector. For a video database with N videos, build an array contains N vectors of the statistical distribution to represent the optical flow consistency feature of all the videos in the database. 4. Distinguish original videos and video forgeries with SVM. Put the optical flow consistency feature of the original videos and video forgeries to the SVM and distinguish the different kinds of videos. Repeat 2 times and get the average classification accuracy. 4. Experimental Results In experiments, we use five video databases. One is original video database. The other four are tampered video databases, of which each separately contains the videos inserted 25 frames, inserted frames, deleted 25 frames and deleted frames. 
There are 598 videos, each with a still background and a little camera shake, in each video database (N = 598).

4.1. Experimental Setting

In our experiments, we use an SVM with a polynomial kernel [6] as the classifier. To train the SVM classifier, about 4/5 of the videos are randomly selected as the training set (480 originals and 480 forgeries). The remaining 1/5 form the testing set (118 originals and 118 forgeries). The experiments are repeated 20 times to secure reliable classification results.

4.2. Results and Discussion

We extract the optical flow consistency features of the five video databases, and classify the original videos against each of the four kinds of tampered videos separately. The average classification accuracies are shown in Table 1.

Table 1. Classification accuracy between original videos and forgeries.

Forgery Database       X direction   Y direction
25-frame-insertion     98.4 %        98.6 %
100-frame-insertion    98.2 %        98.54 %
25-frame-deletion      86.82 %       86.2 %
100-frame-deletion     92.6 %        88.56 %

Obviously, the proposed method is efficient at identifying whether a video was tampered. As expected, the classification accuracy between originals and frame-inserted forgeries is higher than that for frame-deleted forgeries. But even the accuracy between original videos and 25-frame-deleted forgeries, the lowest of the four experiments, is 86.82 % in the X direction and 86.2 % in the Y direction. We then mix the four kinds of forgeries and classify them against the originals. The result is also good: the accuracy is 9.37 % in the X direction and 88.44 % in the Y direction. Inspired by this, we further try to classify frame-inserted forgeries against frame-deleted forgeries. With the same method, we classify the 25-frame-inserted videos against the 25-frame-deleted videos, as well as the 100-frame-inserted videos against the 100-frame-deleted videos. The classification results are shown in Table 2.

Table 2. Classification accuracy of two kinds of forgeries.

Forgery Database                              X direction   Y direction
25-frame-insertion vs. 25-frame-deletion      9.72 %        9. %
100-frame-insertion vs. 100-frame-deletion    89.83 %       92.63 %

We can see that the classification accuracy stays high: the proposed method is also effective at distinguishing the two kinds of forgeries from each other. Then we mix the 25-frame-inserted and 100-frame-inserted videos, as well as the 25-frame-deleted and 100-frame-deleted videos, and classify the two kinds of mixed forgeries. The accuracy is 97.75 % in the X direction and 98.67 % in the Y direction. To sum up, our method is efficient in classifying original videos and forgeries, as well as different kinds of forgeries.

5. Conclusions

In this paper, we extract the optical flow consistency as a distinguishing feature to classify original videos and forgeries. The high classification accuracies in all of the experiments indicate that the proposed method is efficient in classifying original videos and forgeries, as well as different kinds of video forgeries. In future work, we will combine the optical flow consistency features in the X and Y directions. Moreover, since the videos used in our experiments have still backgrounds, we will also build a new video database with moving backgrounds.

References

[1]. C. C. Hsu, T. Y. Hung, C. W. Lin, et al., Video forgery detection using correlation of noise residue, in Proceedings of the 10th IEEE Workshop on Multimedia Signal Processing, October 2008, pp. 7-74.
[2]. L. Li, X. Wang, W. Zhang, et al., Detecting removed object from video with stationary background, Digital Forensics and Watermarking, Lecture Notes in Computer Science, Vol. 7809, Springer, Berlin Heidelberg, 2013, pp. 242-252.
[3]. A. V. Subramanyam, S. Emmanuel, Video forgery detection using HOG features and compression properties, in Proceedings of the 14th IEEE International Workshop on Multimedia Signal Processing (MMSP 2012), 2012, pp. 89-94.
[4]. J. Chao, X. Jiang, T. Sun, A novel video inter-frame forgery model detection scheme based on optical flow consistency, Digital Forensics and Watermarking, Lecture Notes in Computer Science, Vol. 7809, Springer, Berlin Heidelberg, 2013, pp. 267-281.
[5]. T. Q. Huang, Z. W. Chen, L. C. Su, Z. Zheng, X. J. Yuan, Digital video forgeries detection based on content continuity, Journal of Nanjing University (Natural Science), Vol. 47, No. 5, 2011, pp. 493-503.
[6]. C. C. Chang, C. J. Lin, LIBSVM: a library for support vector machines, ACM Transactions on Intelligent Systems and Technology (TIST), Vol. 2, Issue 3, 2011, Article 27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm