Frame differencing-based segmentation for low bit rate video codec using H.264. S. Sowmyayani* and P. Arockia Jansi Rani

Int. J. Computational Vision and Robotics, Vol. 6, Nos. 1/2, 2016

S. Sowmyayani* and P. Arockia Jansi Rani
Department of Computer Science and Engineering, Manonmaniam Sundaranar University, Tirunelveli 627012, Tamilnadu, India
Email: sowmyayani@gmail.com
Email: jansi_msu@yahoo.co.in
*Corresponding author

Abstract: In video sequence coding, a combination of temporal and spatial coding techniques is used to remove predictable or redundant image content and encode only the unpredictable information. The objective of a video compression technique is to increase coding efficiency and data rate savings. A segmentation-based compression method is proposed to achieve this goal. In this paper, static portions are identified using a frame differencing method and segmented during the encoding process. The encoded information is passed to the synthesiser, which adds the information passed during the segmentation process to reconstruct the video. The method is integrated with the conventional H.264/AVC video codec. Experimental results show that the data rate is reduced by as much as 25%.

Keywords: digital video compression; H.264/AVC; MPEG-4 Part 10; VCEG; advanced video coding; AVC.

Reference to this paper should be made as follows: Sowmyayani, S. and Arockia Jansi Rani, P. (2016) 'Frame differencing-based segmentation for low bit rate video codec using H.264', Int. J. Computational Vision and Robotics, Vol. 6, Nos. 1/2, pp.41-53.

Biographical notes: S. Sowmyayani is pursuing her PhD in Computer Science at Manonmaniam Sundaranar University, Tirunelveli, India. She received her MSc in Computer Science from St. Xavier's College (Autonomous), Tirunelveli, India in 2011 and her MPhil in Computer Science from Manonmaniam Sundaranar University, Tirunelveli, India in 2013. Her research interests include video compression.

P. Arockia Jansi Rani is working as an Assistant Professor in the Department of Computer Science and Engineering, Manonmaniam Sundaranar University, Tirunelveli, India. She received her PhD in CSE from Manonmaniam Sundaranar University, Tirunelveli, India in 2012. She has been in the teaching profession for the last ten years. Her areas of interest include digital image processing, neural networks and data mining. She has presented her research papers at various national and international conferences.

Copyright 2016 Inderscience Enterprises Ltd.

1 Introduction

Video compression is needed to facilitate both storage and transmission in real time. Several compression procedures are developed and combined every day. A revolution has broken out in the media industry in the last decade, and research on audio and video material now occupies a fundamental position in technology. One essential area in which engineers are sparing no effort is video compression, which makes it possible to use, transmit and manipulate videos more easily and quickly. In MPEG-4 (Katsaggelos et al., 1998), shape coding was used to code the shapes in a frame after they have been segmented.

The main goal in video compression is to minimise the size of the files and maximise the quality of the reconstruction (Sikora, 2005). The principle for achieving efficient compression is to eliminate unnecessary data. Two main features can be exploited to identify negligible information: the redundancy of the data and the deficiencies of the human visual system. Redundancy of data refers to spatial and temporal redundancy. Analysing a video sequence reveals that a large amount of data appears repeatedly. Within a single frame, large areas of pixels are homogeneous and exhibit significant correlation. When analysing consecutive frames, a large amount of redundant data also exists between frames. Hence a portion of the information can be discarded. When television systems were designed, advantage was taken of the deficiencies of the human visual system to simplify some of the elements: on the basis of these deficiencies, the number of frames per second or lines per frame was adjusted, some colour corrections were skipped, and some spectrum overlapping was made possible.

ITU-T H.264/MPEG-4 (Part 10) advanced video coding (commonly referred to as H.264/AVC) is the newest entry in the series of international video coding standards (Sullivan and Wiegand, 2005). It is currently the most powerful and state-of-the-art standard, and was developed by a Joint Video Team (JVT) consisting of experts from ITU-T's Video Coding Experts Group (VCEG) and ISO/IEC's Moving Picture Experts Group (MPEG) (Ndjiki-Nya et al., 2004; ISO/IEC, 2002). As has been the case with past standards, its design provides the most current balance between coding efficiency, implementation complexity and cost.

Bosch et al. (2011) proposed a method that uses segmentation in video sequences based on texture and motion models. The two models are used separately, and the data rate is reduced by up to 15%. The method has a major drawback, however: texture segmentation is time consuming. In earlier methods, frame differencing was performed on consecutive frames, which limits the ability to detect slowly moving objects. The proposed method allows macroblocks to be skipped. The approach has been integrated into an H.264/AVC video codec (Keshaveni et al., 2010; Richardson, 2004).

The rest of the paper is organised as follows: Section 2 gives an overview of the proposed method. Section 3 explains the proposed frame differencing method. Section 4 describes how the system is integrated with the H.264/AVC video codec and addresses the synthesis of static regions. Section 5 presents the experimental results of FDSVC, followed by the conclusion in Section 6.

2 System overview

In the proposed method, frame differencing-based segmentation for video compression (FDSVC), video scenes are classified into static and non-static parts using frame differencing in order to improve coding efficiency. Segmentation identifies the static regions with no important subjective detail and generates coarse masks as well as side information for the synthesiser at the decoder side. The synthesiser replaces the static regions by filling in the skipped regions. The segmentation and synthesis stages are based on MPEG-7 descriptors (Ndjiki-Nya et al., 2004). A general scheme for video coding using segmentation and synthesis is illustrated in Figure 1. The goal is to encode the various non-static regions of the video, skip the static regions, and estimate the impact on the data rate. The segmentation identifies homogeneous regions in a frame and labels them as static. This step can be performed using normal frame differencing (Prabhakar et al., 2012). Information about the skipped macroblocks is sent to the synthesiser.

Figure 1 Frame differencing-based video coding: system overview

For compression, the H.264/AVC video codec is used. During synthesis, the skipped macroblocks are synthesised using the side information.

3 Segmentation using frame differencing

The video sequence is first divided into groups of frames (GoF). Each GoF consists of two key frames (the first and last frame of the GoF) and several middle frames to be modelled with frame differencing, as shown in Figure 2 and sketched below.
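To make the GoF structure concrete, the following minimal sketch (an illustration only; the function name and the GoF length of four frames are assumptions, not values taken from the paper) splits a sequence of frame indices into GoFs whose first and last frames serve as key frames:

```python
# Sketch: split a frame index range into GoFs. Each GoF has two key frames
# (its first and its last frame); the frames in between are the middle frames
# modelled with frame differencing. The GoF length is an assumed example value.
def split_into_gofs(num_frames, gof_len=4):
    """Return (first_key, middle_frames, last_key) index tuples."""
    gofs = []
    start = 0
    while start + gof_len - 1 < num_frames:
        end = start + gof_len - 1
        gofs.append((start, list(range(start + 1, end)), end))
        start = end  # the last key frame of one GoF is the first of the next
    return gofs

# Example: 10 frames with GoF length 4 -> key frames at indices 0, 3, 6 and 9
print(split_into_gofs(10))  # [(0, [1, 2], 3), (3, [4, 5], 6), (6, [7, 8], 9)]
```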

Frame differencing is a technique that checks for apparent changes in the pixel values of two video frames. If the pixels have changed, something was apparently moving in the frame. The frame differencing algorithm (Jain and Nagel, 1979; Haritaoglu et al., 2000) is used for this purpose; it gives the positions of the apparently changed pixels in the video frames as output. These positions are then used to extract a rectangular image template, whose size depends on the dimensions of the object, from that region of the frame. Identifying moving objects in a video sequence is a critical and fundamental task for a general object tracking system. For this moving object identification, the frame differencing technique (Jain and Nagel, 1979) is applied to consecutive frames, which identifies all the moving objects in those frames. Frame differencing is based on the luminance values of the frames. In a quarter common intermediate format (QCIF) video sequence, the luminance values represent the illumination. Only those macroblocks which have the same luminance value as in the reference frame are skipped, not the whole frame.

Figure 2 Illustration of GoF, (a) a sequence of GoFs, (b) two key frames within a GoF, (c) splitting of a video sequence into IBBPBBP format

This basic technique employs the image subtraction operator (Rafael and Richard, 2002), which takes two frames as input and produces a segmented output. The two frames are a reference frame (I or P frame) and an intermediate frame (B frame). The difference is calculated between a macroblock of the first reference frame and the B frame, and then between the second reference frame and the B frame. If the difference is zero for either reference frame, that reference is taken as the key frame for that particular macroblock. The output is a segmented frame produced by subtracting the intermediate frame pixel values from the key frame pixel values, as shown in Figure 3. The key frame used for macroblock skipping is signalled as side information with a control bit. This subtraction is performed in a single pass. The general operation performed by this frame differencing algorithm is given by:

DIFF(i, j) = I1(i, j) - I2(i, j)    (1)

where DIFF(i, j) represents the difference image of the reference frame I1 and the B frame I2.

After the frame differencing operation, a binary threshold operation is performed to convert the difference image into a binary image using some threshold value; the moving object is thus identified along with some irrelevant non-moving pixels caused by camera flicker. Some moving pixels corresponding to wind, dust, illusion, etc. are also present in the binary image. All these extra pixels should be removed in a pre-processing step.
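A minimal sketch of this per-macroblock differencing and key-frame selection is given below, assuming 16 x 16 luminance macroblocks and two candidate key frames per GoF; the helper names and the tolerance parameter are illustrative assumptions, not part of the reference implementation:

```python
import numpy as np

MB = 16  # macroblock size used by FDSVC

def mb_difference(ref, cur, r, c):
    """Mean absolute luminance difference over one 16x16 macroblock
    (an assumed block-level aggregation of eq. (1))."""
    a = ref[r:r + MB, c:c + MB].astype(np.int32)
    b = cur[r:r + MB, c:c + MB].astype(np.int32)
    return np.abs(a - b).mean()

def classify_macroblocks(key1, key2, b_frame, tol=0.0):
    """Decide, per macroblock of a B frame, whether it can be skipped.

    Returns a mask (0 = skip/static, 1 = code normally) and a control map
    selecting the key frame (0 = first, 1 = last) for the synthesiser.
    The paper tests the difference against zero; 'tol' is an assumed knob.
    """
    h, w = b_frame.shape
    rows, cols = h // MB, w // MB
    mask = np.ones((rows, cols), dtype=np.uint8)
    control = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            r, c = i * MB, j * MB
            d1 = mb_difference(key1, b_frame, r, c)
            d2 = mb_difference(key2, b_frame, r, c)
            if d1 <= tol:       # unchanged w.r.t. the first key frame
                mask[i, j], control[i, j] = 0, 0
            elif d2 <= tol:     # unchanged w.r.t. the last key frame
                mask[i, j], control[i, j] = 0, 1
    return mask, control
```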

Figure 3 Illustration of the segmentation process

This thresholding is applied to DIFF(i, j): if DIFF(i, j) is greater than a threshold T, the pixel belongs to a movable portion; otherwise it belongs to a static region. The threshold is not fixed and can vary according to perception; the role of T is simply to separate object pixels from the background. In the FDSVC method, frames are divided into 16 x 16 macroblocks and frame differencing is calculated for each macroblock. After segmentation, the video consists of all I frames and segmented P frames. The skipped macroblock information is sent to the synthesiser.

4 Integration and synthesis

4.1 Integration into the H.264/AVC video codec

The segmentation technique has been integrated into the H.264/AVC JM 11.0 reference software (JVT H.264 Software Reference Manual, 2009). The block diagram of the basic H.264/AVC encoder is shown in Figure 4. The operation of the H.264/AVC video codec is explained in Wiegand and Sullivan (2003), Marpe et al. (2003) and Sullivan and Wiegand (1998). In the proposed FDSVC method, the segmented video sequence is given to the encoder. Macroblocks marked as static in the segmentation process are skipped during encoding. The encoded video sequence is given to the decoder, which produces the segmented video sequence.

4.2 Synthesiser

The synthesiser is used to reconstruct the missing pixels in the decoded video sequence. For this purpose, the side information passed through the channel by the segmentation process is used. The insignificant portions of the video are described by the motion parameter set, and the reference frame is given by the control parameter. Using this information, the static regions can be reconstructed by warping the static regions from the key frame towards each synthesisable static region identified by the segmentation, as shown in Figure 5. The output of the synthesiser is the reconstructed decoded video sequence.
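A simplified sketch of the synthesis step is given below. It reuses the mask and control map produced in the segmentation sketch and simply copies each skipped macroblock from the signalled key frame; this plain copy stands in for the warping-based synthesis described above and is an assumption of the example, not the paper's exact procedure:

```python
MB = 16  # macroblock size, matching the segmentation sketch

def synthesise(decoded_b, key1, key2, mask, control):
    """Fill the skipped (static) macroblocks of a decoded B frame.

    decoded_b, key1 and key2 are 2D luminance arrays (e.g. numpy arrays);
    mask[i, j] == 0 marks a skipped macroblock and control[i, j] selects
    the key frame (0 = first, 1 = last) it is copied from.
    """
    out = decoded_b.copy()
    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] == 0:
                r, c = i * MB, j * MB
                src = key1 if control[i, j] == 0 else key2
                out[r:r + MB, c:c + MB] = src[r:r + MB, c:c + MB]
    return out
```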

Figure 4 Block diagram of the H.264/AVC encoder

Figure 5 Illustration of the synthesiser

5 Experimental results

The proposed video codec was tested using QCIF sequences such as Claire, Coastguard, Carphone, Akiyo, Foreman and Suzie. Figure 6 shows the segmentation results obtained for some of the sequences. The following parameters were used for the H.264/AVC video codec: the quantisation parameter (QP) is set to 24 and 30; one reference frame; three B-frames. The results show that the movable portions of each sequence are segmented correctly and the static regions are skipped. One of the original B frames of each sequence and the respective segmented frame are shown in Figure 6.

Figure 6 Results after segmentation for the (a) Claire, (b) Miss America and (c) Grandma sequences (see online version for colours)

The static region extracted by the segmentation algorithm is indicated as the black region in each frame. This process helps to identify the significant frame portions that should be encoded with high fidelity.

From Figure 7, it is clear that when the quantisation parameter is 24, minute details in the video are clearly displayed; when the quantisation parameter is 30, the video appears blurred.

Figure 7 Decoded and synthesised frames of the (a) Claire, (b) Mother-Daughter and (c) Miss America sequences with QP = 24 and QP = 30 (see online version for colours)

5.1 Estimating the data rate

The data rate (or bit rate) is the size of the video file per second of data, usually expressed in kilobits or megabits per second. The data rate savings for each test sequence is calculated by subtracting from the original data rate (coded with the H.264/AVC video codec) the data rate saved on the macroblocks that are not coded with the H.264/AVC video codec. The data rate used to carry the side information is then added to obtain the data rate of the segmented coded video. The side information, listed in Table 1, contains the segmentation mask (macroblock-accurate), eight motion parameters and one control flag indicating which key frame is used as the reference frame (the first or the last frame of the GoF). The data rate for the side information is 256 bits for the motion parameters and one bit for the control flag. The data rate for the segmentation mask depends on the size of the mask and is typically about 600 bits. Hence the side information is less than 1 kb per frame.

Table 1 List of side information to be sent to the synthesiser

Side information           Size (in bits)
Segmentation binary mask   About 600
Control flag               1
Motion parameters          256

The visual quality was comparable to the quality of the decoded sequences using the H.264/AVC video codec. When the quantisation parameter increases, the data rate savings improve. The data rate can be described by the following:

1   AVG_B = NB_B / (N_MB × NB_f)

2   DRS_SMB = AVG_B × N_SMB

3   DR_fd = DR_H264 - (DRS_SMB - SI) × 15 / N_f

where AVG_B is the average bit allocation per MB for a B-frame, NB_B is the total number of bits for B-frames (kb), NB_f is the number of B-frames, N_MB is the number of MBs per frame, DRS_SMB is the data rate saving for the total skipped MBs (kb), N_SMB is the number of skipped MBs, DR_fd is the data rate of the sequence using the frame differencing model at the given frame rate (kb/s), DR_H264 is the original data rate at the same frame rate (kb/s), SI is the data allocation for the total side information (kb), N_f is the total number of frames, and the frame rate is 15 frames per second.
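As a worked illustration of the three relations above (a rough sketch only; every input number below is invented for the example and is not taken from the paper's results):

```python
# Illustrative application of relations 1-3; all values here are made up.
NB_B    = 3000.0  # total bits spent on B-frames (kb)
N_MB    = 99      # macroblocks per QCIF frame (176 x 144 split into 16 x 16 blocks)
NB_f    = 60      # number of B-frames in the sequence
N_SMB   = 2000    # number of skipped macroblocks over the sequence
SI      = 50.0    # total side information (kb)
DR_H264 = 500.0   # original H.264/AVC data rate (kb/s)
N_f     = 90      # total number of frames, coded at 15 frames per second

AVG_B   = NB_B / (N_MB * NB_f)                  # relation 1: average bits per MB of a B-frame
DRS_SMB = AVG_B * N_SMB                         # relation 2: saving from skipped MBs (kb)
DR_fd   = DR_H264 - (DRS_SMB - SI) * 15 / N_f   # relation 3: FDSVC data rate (kb/s)

print(round(AVG_B, 3), round(DRS_SMB, 1), round(DR_fd, 2))
```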

The data rate savings obtained for some of the sequences are shown in Table 2. Results are calculated for FDSVC and the H.264/AVC video codec in order to find the data rate savings with quantisation parameters 24 and 30; the PSNR is between 35 dB and 40 dB. From Table 2, it is evident that the average data rate saving is approximately 25%.

Table 2 Data rate savings for all sequences

Sequence       Quantisation level   H.264 data rate [kb/s]   FDSVC data rate [kb/s]   Data rate savings [%]
Akiyo          24                   552.02                   523.01                   5.26
Akiyo          30                   9936.65                  9054.91                  8.87
Claire         24                   419.99                   359.02                   14.51
Claire         30                   9245.19                  7102.52                  23.18
Carphone       24                   936.65                   854.91                   8.73
Carphone       30                   6145.23                  5526.8                   10.06
Suzie          24                   483.53                   445.34                   7.90
Suzie          30                   9146.82                  8141.02                  11.01
Grandma        24                   616.43                   580.16                   5.88
Grandma        30                   7713.92                  7072.21                  8.32
Miss America   24                   365.14                   306.74                   15.99
Miss America   30                   9442.86                  7042.9                   25.42

Because of the way the video sequences are encoded and decoded, metrics such as PSNR are not useful tools for measuring visual quality when visual artefacts appear. Hence subjective evaluation is used to assess visual quality, and the data rate saving is computed as an objective measure. Results obtained after synthesis are also displayed in Figure 7. The data rates of some sequences using the H.264/AVC video codec and FDSVC are compared in the charts of Figure 8, and the data rate savings are compared against QP in Figure 9. For all sequences in Figure 9, the data rate savings increase as QP increases. If the quantisation parameter is increased above 30, the visual quality of the video sequence degrades; if QP is below 24, there is a decrease in bit rate. Compared with H.264/AVC video coding, the time taken by the proposed method is lower owing to the skipping of many macroblocks.

The FDSVC data rate savings are compared with texture-based video compression, in which two techniques are used: grey-level co-occurrence matrix (GLCM) + split-merge and Gabor + split-merge. The data rates and savings of those techniques and of FDSVC are shown in Table 3. The proposed method achieves higher data rate savings than the texture-based video compression method.

Figure 8 Bar charts as a function of compression size of all sequences using H.264/AVC and FDSVC (see online version for colours)

Figure 9 Bar charts as a function of quantisation parameter and data rate savings (see online version for colours)

Table 3 Comparative analysis of FDSVC with texture-based video compression (data rate in kb/s, savings in %)

Sequence   H.264 data rate   GLCM + split-merge    Gabor + split-merge   FDSVC
Akiyo      6,210.18          5,978.34 (3.73%)      6,054.21 (2.51%)      5,883.66 (5.26%)
Carphone   10,537.29         8,728.92 (17.16%)     9,406.98 (10.73%)     5,526.8 (10.06%)
Claire     5,966.46          4,069.44 (13.87%)     3,849.12 (18.53%)     4,447.41 (25.46%)
Suzie      5,439.78          4,915.35 (9.64%)      5,124.15 (5.80%)      4,920.03 (9.55%)
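As a quick consistency check, the savings percentages in Table 2 follow directly from the listed data rates; a minimal verification using two rows of Table 2:

```python
# Recompute two of the savings percentages reported in Table 2.
def savings_percent(h264_rate, fdsvc_rate):
    return round((h264_rate - fdsvc_rate) / h264_rate * 100, 2)

print(savings_percent(552.02, 523.01))   # Akiyo, QP 24 -> 5.26, as in Table 2
print(savings_percent(6145.23, 5526.8))  # Carphone, QP 30 -> 10.06, as in Table 2
```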

6 Conclusions

In the proposed method, the video sequence is segmented using a frame differencing method before video compression. The goal is to increase the coding efficiency for video sequences containing both static and non-static regions and to increase the data rate savings. The frame differencing method is used to segment the static regions of each frame at the encoder and to synthesise those static regions at the decoder. The method is incorporated into the conventional H.264/AVC video codec, where the regions modelled by frame differencing are not coded in the usual manner. The side information is sent to the decoder. Hence the size of the video sequence is greatly reduced. The method also mitigates the inability to track objects with more complex motion in the video sequences. The experimental results show that the data rate is reduced by as much as 25%. In future, FDSVC can be combined with texture and motion model segmentation for further data rate savings.

References

Bosch, M., Zhu, F. and Delp, J.E. (2011) 'Segmentation-based video compression using texture and motion models', IEEE Journal of Selected Topics in Signal Processing, Vol. 5, No. 7, pp.1366-1377.

Haritaoglu, I., Harwood, D. and Davis, I. (2000) 'Realtime surveillance of people and their activities', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, pp.809-830.

ISO/IEC (2002) JTC 1/SC29/WG11 N4668, MPEG-4 Overview.

Jain, R. and Nagel, H. (1979) 'On the analysis of accumulative difference pictures from image sequences of real world scenes', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 1, No. 2, pp.206-214.

JVT H.264 Software Reference Manual (2009) H.264/14496-10 AVC Reference Software Manual, Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6).

Joint Video Team of ITU-T and ISO/IEC JTC 1 (2009) Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 / ISO/IEC 14496-10 AVC), Document JVT-G050r1; Technical Corrigendum 1, documents JVT-K050r1 (non-integrated form) and JVT-K051r1 (integrated form), March 2004; and Fidelity Range Extensions, documents JVT-L047 (non-integrated form) and JVT-L050 (integrated form), 2004.

Katsaggelos, A., Kondi, L., Meier, L., Ostermann, J. and Schuster, G. (1998) 'MPEG-4 and rate-distortion-based shape-coding techniques', Proceedings of the IEEE, Vol. 86, No. 6, pp.1126-1154.

Keshaveni, N., Ramachandran, S. and Gurumurthy, K.S. (2010) 'Design and FPGA implementation of integer transform and quantization processor and their inverses for H.264 video encoder', International Journal of Computer Science & Communication, Vol. 1, No. 1, pp.43-50.

Marpe, D., Schwarz, H. and Wiegand, T. (2003) 'Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard', IEEE Transactions on Circuits and Systems for Video Technology.

Ndjiki-Nya, P., Novychny, O. and Wiegand, T. (2004) 'Video content analysis using MPEG-7 descriptors', Proceedings of the First European Conference on Visual Media Production, London, UK.

Prabhakar, N., Vaithiyanathan, V., Sharma, A.P., Singh, A. and Singhal, P. (2012) 'Object tracking using frame differencing and template matching', Research Journal of Applied Sciences, Engineering and Technology, Vol. 4, No. 24, pp.5497-5501.

Rafael, C.G. and Richard, E.W. (2002) Digital Image Processing, 2nd ed., Prentice Hall International, UK.

Richardson, I.E.G. (2004) H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia, John Wiley & Sons.

Sikora, T. (2005) 'Trends and perspectives in image and video coding', Proceedings of the IEEE, Vol. 93, No. 1, pp.6-17.

Sullivan, G. and Wiegand, T. (2005) 'Video compression: from concepts to the H.264/AVC standard', Proceedings of the IEEE, Vol. 93, pp.18-31.

Sullivan, G.J. and Wiegand, T. (1998) 'Rate-distortion optimization for video compression', IEEE Signal Processing Magazine, Vol. 15, No. 6, pp.74-90.

Wiegand, T. and Sullivan, G.J. (2003) 'Overview of the H.264/AVC video coding standard', IEEE Transactions on Circuits and Systems for Video Technology, pp.1-17.