Detection of Blue Screen Special Effects in Videos


Available online at www.sciencedirect.com

Physics Procedia 33 (2012) 1316-1322

2012 International Conference on Medical Physics and Biomedical Engineering

Junyu Xu, Yanru Yu, Yuting Su, Bo Dong, Xingang You
School of Electronic Information Engineering, Tianjin University, Tianjin, China
ytsu@tju.edu.cn

Abstract

A new approach to detect digital special effects of blue screen compositing is proposed in this paper. Based on the quality differences between the foreground and background, statistical features of the quantized discrete cosine transform (DCT) coefficients of composited videos are extracted to expose the compositing operation. The experimental results show that our proposed algorithm can successfully detect blue screen compositing in videos.

© 2012 Published by Elsevier B.V. Selection and/or peer-review under responsibility of the ICMPBE International Committee. Open access under CC BY-NC-ND license.

Keywords: blue screen; digital video forensics; video signal processing; composited videos

1. Introduction

With the wide spread of sophisticated, low-cost video cameras, digital videos play an increasingly important role in everyday life. Meanwhile, the accompanying development of video editing techniques and software has posed considerable challenges to ensuring the integrity and authenticity of videos, since doctored videos are appearing with growing speed and sophistication in major media outlets, political campaigns, and courtrooms. The creation of a digital forgery often involves combining objects or people from separate images or videos into a single composite. Several techniques have been developed to detect image/video compositing and region duplication, and the domain of digital image forensics has been developing rapidly [1, 2]. For digital video forensics, however, much work remains to be done.
Weihong Wang and Farid [3] designed a method for detecting frame duplication and region duplication across frames based on spatial and temporal correlation. Efforts have also been made to expose double MPEG compression [4] and double quantization [5]. The double-quantization detection technique is in principle also suitable for detecting blue screen special effects, but it works only when the video has been doubly compressed at a higher quality than the original and the quantization scale factor is fixed throughout the entire video sequence. That is to say, the method in [5] is not suitable for constant-bitrate videos.

doi:10.1016/j.phpro.2012.05.217

In this paper, we present a complementary algorithm, suitable for constant-bitrate videos, that detects digital special effects created with the blue screen technique based on statistical features in the DCT domain. The rest of the paper is organized as follows: Section 2 introduces the blue screen compositing technique; Section 3 presents the proposed method for detecting blue screen special effects in videos, followed by the experimental results in Section 4. Conclusions are drawn in Section 5.

2. Blue Screen Compositing Technique

Blue screen matting [6] and compositing are important operations in the production of special video effects. These techniques enable directors to embed actors in a world that exists only in imagination, or to revive creatures that have been extinct for millions of years. During matting, foreground elements are extracted from a video sequence shot against a background of a constant backing color (usually blue or green). During compositing, the extracted foreground elements are placed over novel background images, as shown in Fig. 1.

Figure 1. Snapshots of an example of blue screen compositing. (a) A person walking before a constant backing color. (b) The person composited into a new environment.

A common example of blue screen compositing can be seen during the weather forecast on TV. The weatherperson seems to be standing in front of a radar weather map, but that map is actually a blank wall. Editing software superimposes the image of the weatherperson over the image of the weather map, creating the illusion we see on TV. However, when this kind of compositing operation is used for other purposes, it is not necessarily legal or beneficial.

3. Our Proposed Method

When producing a composited video, people are likely to select a blue screen video of higher quality as the foreground, which is necessary to obtain a satisfying compositing effect.
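The matting-and-compositing operation described above follows the classic compositing equation C = αF + (1 − α)B, applied per pixel. A minimal per-pixel sketch in Python (function and variable names are illustrative, not from the paper):

```python
def composite_pixel(fg, bg, alpha):
    """Blend one RGB foreground pixel over a background pixel.

    fg, bg: (r, g, b) tuples with channel values in [0, 1]
    alpha:  matte value in [0, 1]; 1.0 means fully opaque foreground
    """
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, bg))

# A fully opaque foreground pixel replaces the background entirely:
print(composite_pixel((0.8, 0.8, 0.8), (0.0, 0.0, 0.0), 1.0))  # (0.8, 0.8, 0.8)
```

In practice the matte α is estimated from the constant backing color during matting, then reused when the foreground is placed over the new background.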
Therefore, foreground and background elements usually differ in quality, which leaves artifacts that allow us to detect the blue screen compositing. Furthermore, if the foreground and background elements are shot with different digital cameras (which usually differ in rate control scheme), there are additional traces that can be used to expose the compositing operation. In this method, we assume that the two source videos used for compositing have each been compressed at least once, at different bitrates, to serve the purpose of digital storage and to meet their respective visual demands. The flow chart of the method is shown in Fig. 2.

Figure 2. Flow chart of the proposed method: input a video; roughly segment it into foreground and background; extract the quantized DCT coefficients of each region; partition them into N subsets S_F(q) and S_B(q) according to the q-scale q; compute the histograms h_F(q, n) and h_B(q, n); calculate the pattern distances D(h_F, h_B); if any D(h_F, h_B) exceeds the threshold, the video is a composite, otherwise it is an original one.

3.1. Rough Segmentation of a Video

Inconsistencies in the statistical features of quantized DCT coefficients between foreground and background are exploited here to detect blue screen compositing, so the suspicious foreground and the remaining regions must be studied separately. In the first frame of the video, a suspicious object is selected manually as foreground, and the frame is segmented into three parts, marked as foreground, background and unknown respectively. The size of region marking is measured in macroblocks. In the subsequent frames, the forward projection algorithm [7] is used to track and locate the object with a rough boundary obtained from motion estimation, as shown in Fig. 3 (a) and (b).

Figure 3. Rough segmentation of videos. (a) A composited video. (b) An original video.

Although all frames are decoded to obtain the motion vectors used to track the boundary, the quantized DCT (1,2), (2,1) and (2,2) coefficients of the foreground and background regions are extracted only from intra-coded frames (I-frames). Unknown regions are set aside for two reasons: first, potential secondary processing of the edges may influence the statistical information; second, ambiguous macroblocks arise during segmentation, and these are all classified as unknown regions.

3.2. Statistical Model of Quantized DCT Coefficients

The quantization during video coding is determined jointly by the quantization step size and the quantization scale factor (q-scale for short). The step size is defined by the quantization matrix and does not change during coding, but the q-scale varies frequently according to the rate control scheme for adaptive quantization, and its variation pattern reflects, to some degree, features of the encoder. First, the extracted quantized DCT coefficients are divided into N subsets S_F(q) and S_B(q) according to the q-scale q, where q = 1, 2, ..., N, and F and B denote foreground and background respectively. Next, we calculate the histograms of the quantized DCT coefficients for every subset, each of which shares a single q-scale. The histograms are denoted h_F(q, n) and h_B(q, n), where n is the value of the quantized DCT coefficients. Thus, given a video, a group of histograms is calculated under the different q-scales, and for each q-scale there is a pair of histograms, one per region, available for comparison.
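The partition-and-histogram step of this section can be sketched as follows (the flat coefficient and q-scale lists are illustrative inputs; a real implementation would collect them from the decoded I-frames):

```python
from collections import defaultdict

def build_histograms(coeffs, qscales):
    """Group quantized DCT coefficients by the q-scale used to quantize them,
    then compute a normalized histogram h(q, n) for each subset S(q).

    coeffs:  list of quantized DCT coefficient values n
    qscales: list of the q-scale q used for each coefficient (same length)
    """
    subsets = defaultdict(list)
    for n, q in zip(coeffs, qscales):
        subsets[q].append(n)            # S(q): coefficients quantized with q

    histograms = {}
    for q, values in subsets.items():
        counts = defaultdict(int)
        for n in values:
            counts[n] += 1
        total = len(values)
        histograms[q] = {n: c / total for n, c in counts.items()}
    return histograms

hists = build_histograms([0, 1, 1, 2, 0, 0], [2, 2, 2, 2, 3, 3])
# hists[2] is the normalized histogram of coefficients quantized with q-scale 2
```

The same routine would be run once on the foreground coefficients and once on the background coefficients to obtain h_F(q, n) and h_B(q, n).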
The normalized histograms are depicted with a line-segment representation in order to show their changing trends. As shown in Fig. 4, for a composited video the statistical distributions of the foreground and background at q-scale = 2 differ markedly: one decreases approximately monotonically while the other fluctuates with apparent convex points. In contrast, the two statistical distribution curves of an original video have similar changing trends.

3.3. Difference Analysis Based on Pattern Distance

In order to analyze the consistency of the statistical distributions of foreground and background, the pattern distance [8] is introduced to measure the similarity of the changing trends of two histogram curves. A histogram can be represented as follows:

h(q, n) → (m_1, ..., m_i, ..., m_N)    (1)

Fig. 4. Histograms of DCT (1,2), (2,1), (2,2) coefficients for foreground and background. (a) Distribution for Fig. 3(a), q-scale = 2. (b) Distribution for Fig. 3(b), q-scale = 2.

where m_i ∈ {1, −1, 0} indicates the three statuses of the changing trend (uptrend, downtrend and continuation respectively), and i denotes the x-coordinate of the end of trend m_i. Given the histograms of foreground and background, the pattern distance between h_F(q, n) and h_B(q, n) is calculated as

D(h_F(q, n), h_B(q, n)) = (1/N) Σ_{i=1}^{N} |W_{F,i} m_{F,i} − W_{B,i} m_{B,i}|    (2)

where W_i is the weight of the ith segment, defined as

W_i = |h(q, i) − h(q, i − 1)|    (3)

and h(q, i) is the normalized count of DCT coefficient value i under q-scale q. There is a pattern distance between every pair of corresponding histograms under each q-scale. When at least one pattern distance is larger than a certain threshold T, we conclude that the foreground and background do not belong to the same source video and the video is therefore a composite. Two points should be noted, however. First, a given q-scale may not be used during quantization in one region or in both; in that case, the comparison of the pair of histograms under that q-scale is abandoned. Second, the number of quantized DCT coefficients under some q-scale may be too small to convey sufficient statistical features of the region, in which case the comparison is likewise given up.

In a composited video, the foreground and background elements come from different encoders or were encoded with different settings, and thus originally have distinct statistical distributions of quantized DCT coefficients. After the two videos are composited and re-compressed together, the differences in their distributions are weakened to some degree, but they can still be detected effectively by our algorithm.
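The trend encoding and the pattern-distance computation can be sketched in Python. Note the assumptions: a simple per-bin segmentation (one trend symbol per adjacent pair of histogram values) and the weight reading W_i = |h(q, i) − h(q, i−1)| are one plausible interpretation of the definitions, not taken verbatim from the paper:

```python
def trend_sequence(hist_values):
    """Encode a histogram curve as trend symbols m_i:
    1 = uptrend, -1 = downtrend, 0 = continuation."""
    trends = []
    for prev, cur in zip(hist_values, hist_values[1:]):
        trends.append(1 if cur > prev else (-1 if cur < prev else 0))
    return trends

def weights(hist_values):
    """Assumed per-segment weight: the magnitude of change over the segment."""
    return [abs(cur - prev) for prev, cur in zip(hist_values, hist_values[1:])]

def pattern_distance(h_f, h_b):
    """Pattern distance D(h_F, h_B) between two normalized histogram curves,
    assumed to be sampled over the same coefficient values n."""
    m_f, w_f = trend_sequence(h_f), weights(h_f)
    m_b, w_b = trend_sequence(h_b), weights(h_b)
    N = len(m_f)
    return sum(abs(wf * mf - wb * mb)
               for mf, wf, mb, wb in zip(m_f, w_f, m_b, w_b)) / N

# Curves with similar trends give a small distance; opposite trends a larger one.
d_similar = pattern_distance([0.5, 0.3, 0.2], [0.6, 0.3, 0.1])
d_opposite = pattern_distance([0.5, 0.3, 0.2], [0.1, 0.3, 0.6])
```

Because only the weighted trend symbols are compared, the distance is sensitive to differences in shape (rising vs. falling segments) rather than to absolute histogram counts.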
In this sense, as long as the two regions of a composited video differ to some extent in their original qualities, no matter which one is better, the compositing operation can be exposed.

4. Experimental Results

Two types of videos were prepared for the experiments. Twenty original video sequences, each about 300 frames long, were recorded with digital video cameras: ten with a Sony HDR-XR500E and ten with a Canon FS10E. The size of each frame is 720×576 pixels, and the MPEG-2 videos were captured at a constant bitrate of 6 Mbps. Five high-definition videos (1280×720) of persons recorded in front of a green screen [9] were selected and MPEG-compressed at a bitrate of 40 Mbps (they had to be compressed once, because their original bitrate was so high that it would have been too ideal a condition for tampering detection). Fifty composited videos were created by combining the five foreground videos with ten background videos, five recorded with the Canon camera and five with the Sony. The compositing was implemented with the software Adobe Premiere Pro 2.0 [10]. The background videos were compressed at a bitrate of 6 Mbps, and the fifty final compositions were compressed at bitrates of 5, 6, 7, 8 and 9 Mbps respectively, ten videos per bitrate.

Pattern distances under different q-scales for several 6 Mbps videos are listed in Table I to illustrate that compositing forgeries created with the blue screen technique can be exposed effectively with this method. Here the threshold T is set to 0.01 (determined from a number of experiments). If the pattern distances of a video under one or more q-scales are larger than T (highlighted in bold in Table I), we conclude that the video has been subjected to the compositing operation; otherwise the video is an original one. Data for q-scale = 1 are omitted from the table because q-scale = 1 is not used in quantization for foreground coding at a bitrate of 6 Mbps, as described at the end of Section 3.3. In Table I, videos 1-3 were captured directly with the Sony HDR-XR500E, and videos 4-6 were selected from the fifty composited videos; Fig. 3 (a) and (b) are snapshots of videos 1 and 4. The detection results are consistent with the true types of the videos.

Table II shows the detection accuracy for the blue screen compositing special effect. The fifty composited videos were tested for the compositing operation, with an average accuracy of 88%. For comparison, the twenty original videos taken directly from the cameras were also tested, and the detection accuracy was 80%. Table II also shows that when the bitrate of the final composited video is no higher than that of both the foreground and background elements (i.e., its quality is worse than the background's), the detection accuracy decreases gradually.
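The per-video decision rule used in the experiments (flag a video as composited as soon as any usable q-scale yields a pattern distance above T) can be sketched as:

```python
def is_composite(distances, threshold=0.01, usable=None):
    """Decide whether a video is a composite.

    distances: dict mapping q-scale -> pattern distance D(h_F, h_B)
    usable:    optional set of q-scales that have enough coefficients in both
               regions; other q-scales are skipped, as described in Section 3.3
    """
    for q, d in distances.items():
        if usable is not None and q not in usable:
            continue                    # comparison abandoned for this q-scale
        if d > threshold:
            return True                 # foreground/background disagree
    return False

# Video 4 from Table I: the distance 0.0768 at q-scale 2 exceeds T = 0.01.
print(is_composite({2: 0.0768, 3: 0.0038, 4: 0.0011}))  # True
```

With T = 0.01, applying this rule to the rows of Table I reproduces the reported labels: videos 1-3 stay below the threshold at every q-scale, while videos 4-6 each exceed it under at least one q-scale.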
5. Conclusion

A new method has been proposed to detect videos composited with the blue screen technique whose foreground and background differ in quality before re-compression. Inconsistencies in the statistical distributions of quantized DCT coefficients between foreground and background are exploited to expose the digital special effects. The experimental results show that our proposed algorithm performs effectively in detecting blue screen compositing. In future work we will focus on reducing the false positive rate and on exploiting more features, suitable for more video encoders, to expose digital forgeries.

TABLE I. PATTERN DISTANCES OF SEVERAL VIDEOS (T = 0.01)

q-scale | Video 1 | Video 2 | Video 3 | Video 4 | Video 5 | Video 6
2       | 0.0000  | 0.0038  | 0.0009  | 0.0768  | 0.0304  | 0.0425
3       | 0.0000  | 0.0000  | 0.0000  | 0.0038  | 0.0000  | 0.0015
4       | 0.0000  | 0.0000  | 0.0000  | 0.0011  | 0.0025  | 0.0014
5       | 0.0000  | 0.0017  | 0.0013  | 0.0036  | 0.0083  | 0.0038
6       | 0.0018  | 0.0063  | 0.0000  | 0.0030  | 0.0030  | 0.0000
7       | 0.0030  | 0.0000  | 0.0035  | 0.0283  | 0.0129  | 0.0084
8       | 0.0000  | 0.0023  | 0.0036  | 0.0113  | 0.0078  | 0.0013

(Videos 1-3 are original videos; videos 4-6 are composited videos.)

TABLE II. DETECTION ACCURACY OF THE COMPOSITED VIDEOS

Bitrate (Mbps) | 5   | 6   | 7   | 8    | 9
Accuracy       | 70% | 80% | 90% | 100% | 100%

(50 composited videos; overall accuracy = 88%.)

Acknowledgment

This work was supported by the National High Technology Research and Development Programme of China (No. 2006AA01Z407), the National Natural Science Foundation of China (Grant No. 61170239) and the Innovation Foundation of Tianjin University (60302019, 60304006, 60303003).

References

[1] G. Cao, Y. Zhao and R.-R. Ni, "Image composition detection using object-based color consistency," 9th Int. Conf. Signal Processing, 2008, pp. 1186-1189.
[2] M. K. Johnson and H. Farid, "Exposing digital forgeries in complex lighting environments," IEEE Trans. Information Forensics and Security, vol. 2, no. 3, 2007, pp. 450-461.
[3] W.-H. Wang and H. Farid, "Exposing digital forgeries in video by detecting duplication," Proc. Multimedia and Security Workshop, 2007, pp. 35-42.
[4] W.-H. Wang and H. Farid, "Exposing digital forgeries in video by detecting double MPEG compression," Proc. Multimedia and Security Workshop, 2006, pp. 37-47.
[5] W.-H. Wang and H. Farid, "Exposing digital forgeries in video by detecting double quantization," Proc. 11th ACM Workshop on Multimedia and Security, 2009, pp. 39-48.
[6] A. R. Smith and J. F. Blinn, "Blue screen matting," Int. Conf. Computer Graphics and Interactive Techniques, 1996, pp. 259-268.
[7] L. Zhi, Y. Yang and N. S. Peng, "Semi-automatic video object segmentation using seeded region merging and bidirectional projection," Pattern Recognition Letters, vol. 26, no. 5, 2005, pp. 653-662.
[8] J.-W. Yu, J. Yin, D.-N. Zhou and J. Zhang, "A pattern distance-based evolutionary approach to time series segmentation," Int. Conf. Intelligent Computing, 2006, pp. 797-802.
[9] Free Green Screen Footage. http://www.timelinegfx.com/freegreenscreen.html
[10] J. Rosenberg, Adobe Premiere Pro 2.0 Studio Techniques. Berkeley, CA: Peachpit Press, 2006, pp. 512-524.