Feature Point Video Synthesis For Tagged Vehicular Traffic


M. Adeel, G. M. Khan, Syed Mohsin Matloob Bokhari, S. A. Mahmud
University of Engineering and Technology, Peshawar, Pakistan
sahibzada.mahmud@nwfpuet.edu.pk

Abstract - A novel video compression algorithm called Feature Point Video Synthesis (FVS) is proposed in this paper for vehicular traffic along a lane. Using this algorithm, video data can be stored as just a few parameters, from which a synthesized replica of the original video is played back. This idea is significant because, unlike conventional CCTV recording, the proposed algorithm keeps only a few parameters as the record of the video data, saving a large amount of disk space and bandwidth. Across different simulation scenarios, FVS has been found to save more than 99% of the video bit-rate compared with normal video compression techniques, specifically for vehicular traffic. Beyond these savings, the idea adds intelligence to vehicular traffic data, so the video traffic can be utilized in an intelligent transportation system. Only those parts of the algorithm that deal with tagged vehicles are explained in this paper. The proposed system is significant because a high-quality video display at the receiving end is possible from the fetched variables.

Keywords - ITS; CCTV; HD; Sprite Coding

I. INTRODUCTION

The Video Object Encoding (Sprite Coding) technique, introduced in MPEG-4 [1,2], was devised to save the bandwidth of the transmission medium. A similar rate-distortion approach to video compression is discussed in [3,4]. Using these techniques, only the movable part of the scene is transmitted, provided that the background images are constant or are sent at a much lower sampling rate than the desired object.
Using this encoding technique, the stationary portion of the video is synthesized at the receiving end from prior information instead of repeatedly transmitting the same static scenes, so much of the required video bandwidth is saved [2]. Sprite Coding saves 75-93.75% of video transmission bandwidth compared with normal video transmission [6]. Relevant work is discussed in [9] and [10].

An advanced video synthesis technique called Feature Point (or parametric) Video Synthesis (FVS) is proposed in this paper. Our algorithm is based upon feature-point video transmission; it can save more than 99% of the video transmission bandwidth compared with normal video transmission. The algorithm is applied to vehicles passing along a lane, with a camera installed in a rooftop position. The camera captures the video footage and, based upon its contents, the controller fetches the important parameters and sends them to the intended video decoder, which plays them back on a display screen. The following statistics summarize the compression achieved:

- 99.99957% bandwidth saving compared with H.264 (4:2:0) Full HD (1080p) video compression.
- 99.99936% bandwidth saving compared with H.264 (4:2:0) Half HD (720p) video compression.
- 99.99872% bandwidth saving compared with H.264 (4:2:0) SD video compression.
- 99.9968% bandwidth saving compared with H.264 (4:2:0) CIF video compression.
- 99.9872-99.9931% bandwidth saving compared with Sprite Coding.

These percentages are formulated from the results given in Section V. Besides saving bandwidth, the proposed algorithm can aid certain application areas related to Intelligent Transportation Systems (ITS).
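The percentage savings above follow from comparing the FVS parameter rate (144 bps, stated in Section V) with a reference codec's bitrate. The sketch below reproduces that arithmetic; the H.264 reference bitrates are back-computed assumptions chosen so the quoted percentages come out, not data reported in the paper.

```python
# Sketch of the bandwidth-saving arithmetic behind the comparisons above.
# The FVS parameter rate (144 bps) is taken from Section V; the H.264
# reference bitrates below are illustrative assumptions.

FVS_RATE_BPS = 144  # 9 parameters x 16 bits, sent once per second

reference_rates_bps = {          # assumed H.264 (4:2:0) stream rates
    "Full HD (1080p)": 33_500_000,
    "Half HD (720p)":  22_500_000,
    "SD":              11_250_000,
    "CIF":              4_500_000,
}

def bandwidth_saving(ref_bps: float, fvs_bps: float = FVS_RATE_BPS) -> float:
    """Percentage of the reference bandwidth that FVS avoids sending."""
    return (1.0 - fvs_bps / ref_bps) * 100.0

for name, rate in reference_rates_bps.items():
    print(f"{name}: {bandwidth_saving(rate):.5f}% saved")
```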
For example, the fetched parameters can be exploited to alert the control room about traffic congestion using the variables associated with velocity and acceleration. However, the details of ITS applications with regard to the FVS algorithm will be explained in another research paper due to page limitations. The focus of this paper is limited to vehicular traffic video; however, the approach can be applied to other applications in the future. Tagged vehicles, in this context, refers to vehicles that provide information such as model/type, length and width when read by an RFID device.

II. SYSTEM MODEL

The main idea of FVS is that a video camera installed above the roadway captures video frames at a certain rate R_f and delivers them to Controller-A for any detected vehicle, as shown in the flow chart of Figure 1. Virtually, three vertical single-line pixel columns are fetched from the camera projection; these single-dimensional lines are represented as the Initial (i), Mid (m) and Final (f) pixel-column lines. Controller-A takes decisions based upon these fetched lines and calculates the parameters that Controller-B needs to play the video back (synthesize it) on the display. The working procedure of Controller-A and Controller-B is only briefly explained here due to page limitations and is to be detailed in a follow-up paper; however, one can readily see how the video is synthesized at the receiving end from the fetched parameters using Controller-B.

Figure 1. Flow chart representing the FVS overview.

Consider a lane spot with a camera installed at the top level of the road, its lens projecting down towards the road. Before moving to the mathematical model of the proposed algorithm, we define some important terminology:

w_C = width of the camera projection in meters.
h_C = height of the camera projection in meters.
l_C = length of the camera projection in meters.
D_w = width of the vehicle.
D_l = length of the vehicle.
x_i = initial position (front side) of the vehicle.
x_m = mid position (front side) of the vehicle.
x_f = final position (front side) of the vehicle.
y_i = initial position (back side) of the vehicle.
y_m = mid position (back side) of the vehicle.
y_f = final position (back side) of the vehicle.
R_f = frame rate of the camera in frames per second (fps).
t_im = time recorded between the detection of x_i and x_m, or between y_i and y_m; [t_im = t_m - t_i].
t_mf = time recorded between the detection of x_m and x_f, or between y_m and y_f; [t_mf = t_f - t_m].
t_D = display time, decided by Controller-B after receiving the whole information about a specific vehicle.
D_im = distance between points i and m.
D_mf = distance between points m and f.
v_im = velocity of the vehicle between points i and m.
v_mf = velocity of the vehicle between points m and f.
mu = the road color [mu_i, mu_m, mu_f].
sigma = the color detected from the vehicle [sigma_i, sigma_m, sigma_f].

III. VEHICLE DETECTION

Considering the front-end points of the vehicle, represented by x_i, x_m and x_f, the timing information associated with the front end of the vehicle is computed using the P-matrix. Consider (1), (2) and (3):

P_im = mu - sigma,   P_mf = mu - sigma   (1)

P_im(t_p) = { 0 for beta(t_p) = sigma(1);  -1 for beta(t_p) < sigma(1);  +1 for beta(t_p) > sigma(1) }   (2)

P_mf(t_p) = { 0 for beta(t_p) = sigma(1);  -2 for beta(t_p) < 2*sigma(1);  +2 for beta(t_p) > 2*sigma(1) }   (3)

It should be noted that P_mf(m) = 2 if the front end of the vehicle has passed through the Mid-point while the back end of the vehicle is still on or before the Initial-point of the projection. Similarly, only the maximum values of P_mf are considered in (3); its other values are +1 and -1 under the same conditions as described in (2). Moreover, the important point about the values of P_im and P_mf in this paper is the distinction between zero and non-zero values, without taking the magnitude of the non-zero values into account, since a single zero detection is enough for Controller-B to indicate the exact position of the vehicle, provided the type and dimensions of the vehicle can be fetched from the database. The sign and magnitude of the non-zero values are out of the scope of this paper; they help in estimating the dimensions in the case of untagged vehicles.

For simplicity, let the front end of the vehicle pass between points i and m. Suppose it is detected by the initial column (i) of the camera projection at time t_p = t_i, for which P_im = 0. From this time onwards the value remains non-zero until it becomes P_im = 0 again; at that instant the time t_p is recorded as t_p = t_m. The timing information t_im and t_mf can then be calculated from (4) and (5):

t_im = t_m - t_i   (4)
t_mf = t_f - t_m   (5)
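A minimal sketch of this column-based front-end detection, under simplifying assumptions: beta(t_p) is taken as the mean intensity of one virtual pixel column at frame t_p, a deviation from the road color mu plays the role of a non-zero P value, and the first deviation at a column is taken as the front-end arrival time. Function names, the toy traces and the threshold are illustrative, not from the paper.

```python
# Simplified reading of the Section III detection logic: per-column
# intensity traces beta, road color mu, arrival = first non-zero P value.

def arrival_time(column_means, mu, threshold=10.0):
    """First frame index at which the column stops matching the road color."""
    for t, beta in enumerate(column_means):
        if abs(beta - mu) > threshold:
            return t
    return None  # no vehicle seen at this column

def front_end_timing(beta_i, beta_m, beta_f, mu, frame_rate):
    """Compute t_im and t_mf (in seconds) from the three column traces."""
    t_i = arrival_time(beta_i, mu)
    t_m = arrival_time(beta_m, mu)
    t_f = arrival_time(beta_f, mu)
    t_im = (t_m - t_i) / frame_rate   # eq. (4), converted to seconds
    t_mf = (t_f - t_m) / frame_rate   # eq. (5)
    return t_im, t_mf

# Toy traces: road color 100, vehicle color 30, 25 fps camera.
road, car = 100.0, 30.0
beta_i = [road] * 2 + [car] * 10
beta_m = [road] * 6 + [car] * 10
beta_f = [road] * 12 + [car] * 10
print(front_end_timing(beta_i, beta_m, beta_f, mu=road, frame_rate=25.0))
# arrival frames: i at 2, m at 6, f at 12 -> t_im = 4/25 s, t_mf = 6/25 s
```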

Time t_im is calculated between the initial occurrences P_im(i) = 0 and P_im(m) = 0 for a certain vehicle. Similarly, time t_mf is calculated between the initial occurrences P_im(m) = 0 and P_im(f) = 0. Now considering y_i, y_m and y_f, the time parameters associated with the back end of the vehicle are computed using the back-end P-matrix, P'_im or P'_mf:

P'_im = mu - sigma,   P'_mf = mu - sigma   (6)

P'_im(t_q) = { 0 for beta(t_q) = mu(1);  -1 for beta(t_q) < mu(1);  +1 for beta(t_q) > mu(1) }   (7)

P'_mf(t_q) = { 0 for beta(t_q) = mu(1);  -2 for beta(t_q) < 2*mu(1);  +2 for beta(t_q) > 2*mu(1) }   (8)

It should be noted that the maximum value of P'_im(m) is +2 if the back end of the vehicle has not yet crossed the Initial-point after the front end of the vehicle crosses the Initial-point boundary. Similarly, only the maximum values of P'_mf are considered in (8); its other values are +1 and -1 under the same conditions as discussed in (7). As with P_im and P_mf, the point of interest for P'_im and P'_mf is whether the value is zero or non-zero, without considering the magnitude of a non-zero value, because once a zero is detected for a vehicle, its type and dimensions can be reconstructed from the local database. The timing information for the back end can then be calculated from (9) and (10):

t'_im = t_m - t_i   (9)
t'_mf = t_f - t_m   (10)

Time t'_im is calculated between the initial occurrences P'_im(i) = 0 and P'_im(m) = 0 for the back-end detection of a certain vehicle. Similarly, t'_mf is calculated between the initial occurrences P'_mf(m) = 0 and P'_mf(f) = 0. The values discussed in equations (2), (3), (7) and (8) can be used in a defined 2x2 T-matrix to detect the exact vehicle in the case of untagged vehicles; however, untagged-vehicle detection and reconstruction is out of the scope of this paper.

Figure 2 shows the projection of the camera on a specific lane of length l_C and width w_C. It must be noted that w_C > D_w, as the projection is assumed to have significant coverage along the width of the lane, hence covering the shadow throughout the lane width, with the added assumption that only one vehicle can pass through the lane at a certain time instant. The length of the vehicle (D_l) is considered independent of the length of the camera projection (l_C).

Figure 2. Video projection dimensions w.r.t. the vehicle dimensions.

IV. FVS ALGORITHM

Tagged vehicles are those identified by the roadside controller, which detects the type of the vehicle. The vehicle ID is then transmitted by Controller-A. Controller-A fetches the required parameters, namely x_i^a, x_m^a, x_f^a, v_im, v_mf, v_im^b, v_mf^b, the Vehicle-ID and acc_if, and sends them to Controller-B. Depending upon the received parameters and the parameters already held by Controller-B, the video is displayed. After fetching the type and color of the vehicle by searching for the Vehicle-ID in a global database, Controller-B computes its exact position and plays back the moving vehicle with the specified speed and acceleration on the screen. Equations (11) and (12) give the velocities assumed for the front end of each vehicle between the initial, mid and final points, termed v_im and v_mf respectively:

v_im = (x_i^a - x_m^a) / t_im   (11)
v_mf = (x_m^a - x_f^a) / t_mf   (12)

At the receiving end, Controller-B synthesizes the video by reconstructing the vehicle and showing it moving with the computed velocity and acceleration from the received parameters; the type of vehicle displayed is obtained from the Vehicle-ID. The acceleration of the front end of each vehicle is defined in (13) as

acc_if = (v_im - v_mf) / (t_im + t_mf)   (13)

The acceleration, combined with the traffic flow rate, can suggest projecting the vehicle motion for a non-shadowed camera projection.
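The front-end kinematics of equations (11)-(13) can be exercised directly. The sketch below uses illustrative positions (in meters) and timings (in seconds); the function name is my own, and the sign of acc_if simply follows eq. (13) as written.

```python
# Front-end kinematics from eqs. (11)-(13). Positions x_i, x_m, x_f are the
# front-end positions at the three pixel-column lines (meters); times in
# seconds. All numbers below are illustrative.

def fvs_kinematics(x_i, x_m, x_f, t_im, t_mf):
    v_im = (x_i - x_m) / t_im               # eq. (11)
    v_mf = (x_m - x_f) / t_mf               # eq. (12)
    acc_if = (v_im - v_mf) / (t_im + t_mf)  # eq. (13)
    return v_im, v_mf, acc_if

# A vehicle covering 5 m in 0.5 s, then 5 m in 0.4 s (i.e., speeding up):
v_im, v_mf, acc_if = fvs_kinematics(10.0, 5.0, 0.0, t_im=0.5, t_mf=0.4)
print(v_im, v_mf, acc_if)  # 10.0 m/s, 12.5 m/s, negative acc_if per eq. (13)
```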
The estimation of the traffic flow rate for the projected video is accurate to a great extent, depending upon the average vehicle flow rate along the lane.

However, it has not yet been given enough consideration by researchers and is left as a topic for future investigation. The acceleration or deceleration of the vehicle (acc_if) can also be verified by calculating v_im^b, v_mf^b and acc_if^b from the back end of each vehicle. The respective variables are shown in (14), (15) and (16):

v_im^b = (x_i^b - x_m^b) / t_im   (14)
v_mf^b = (x_m^b - x_f^b) / t_mf   (15)
acc_if^b = (v_im^b - v_mf^b) / (t_im + t_mf)   (16)

After all the important parameters are calculated, the video of the vehicle's exact position on the lane, along with its exact velocity and acceleration, is displayed by Controller-B from the received parameters. Time t_D is the time at which the video is displayed after being fetched from the local database. The video displays the vehicle's reconstructed image along with its correct motion details. The fetched parameters are sent only if a vehicle is detected under the camera shadow on the road; when no parameters are received, Controller-B shows only the road picture with no moving vehicle, i.e., the video is displayed at a constant rate t_D. Controller-B also possesses the parameters D_im, D_mf, mu and t_D. In conjunction with the received and already-saved parameters, Controller-B synthesizes the video by displaying the vehicle motion with reasonably accurate speed and acceleration.

V. NUMERICAL RESULTS

The FVS algorithm proposed in this paper is an application-specific video compression technique that fetches point information and, based upon the associated parameters, displays the video footage of vehicles passing through a lane. MATLAB was used to compare results between the H.264 formats, Sprite Coding and our proposed algorithm. Two sub-categories of numerical results are shown.

A. Comparison of Different Compression Techniques

The different video formats supported by H.264 are 4:4:4, 4:2:2 and 4:2:0. Figure 3 depicts the compression comparison of these formats with respect to video resolution for CIF, SD, Half-HD and Full-HD. The 4:2:0 format saves up to 50% of the video data compared with the original 4:4:4 format. From the literature [5,6] and the experiments conducted in this research, the Sprite Coding technique provides 75-93% compression compared with all of the above H.264 formats. The level of compression that our proposed FVS algorithm provides compared with Sprite Coding is discussed in the next sub-section.

Figure 3. H.264 video formats comparison.
Figure 4. Sprite Coding vs. FVS compression for traffic flow rate f_1.

B. Comparison of FVS w.r.t. Sprite Coding

Data were taken for three different vehicular traffic flow-rate scenarios. In the first scene the traffic flow rate was 95 vehicles per minute per lane (f_1); in the second, 60 vehicles per minute per lane (f_2); and in the third, 30 vehicles per minute per lane (f_3). Unlike the 4:4:4, 4:2:2 and 4:2:0 formats, techniques like Sprite Coding and FVS depend upon the traffic flow rate. Each fetched/calculated parameter is represented in a 16-bit memory, and hence a total data rate of 144 bps is assumed for transmission.
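With each of the nine fetched parameters held in 16 bits, one vehicle record occupies 144 bits. Below is a sketch of such a record and of how the data rate scales with the three flow rates under per-vehicle (triggered) transmission, which the paper discusses as an optimization; the field ordering and fixed-point scaling are my assumptions.

```python
import struct

# One FVS record: 9 parameters x 16 bits = 144 bits (18 bytes), as assumed
# in Section V. Field ordering and fixed-point scaling are illustrative.
def pack_record(x_i, x_m, x_f, v_im, v_mf, v_im_b, v_mf_b, vehicle_id, acc_if):
    scale = 100  # two decimal places of fixed-point precision (assumption)
    fields = [x_i, x_m, x_f, v_im, v_mf, v_im_b, v_mf_b]
    return struct.pack(">7hHh",
                       *(int(round(v * scale)) for v in fields),
                       vehicle_id, int(round(acc_if * scale)))

record = pack_record(10.0, 5.0, 0.0, 10.0, 12.5, 9.9, 12.4, 0x1A2B, -2.78)
print(len(record) * 8)  # -> 144 bits per vehicle

# Triggered transmission: one record per detected vehicle, so the data
# rate scales with the flow rate (vehicles per minute per lane).
for flow in (95, 60, 30):  # f_1, f_2, f_3 from Section V-B
    print(f"{flow} veh/min -> {flow * 144 / 60:.1f} bps")
```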

However, sending 144 data bits every second is not optimal; the transmission can be further optimized by having Controller-A send data only when the installed camera detects a vehicle (a triggered approach). In such a scenario, the amount of data to be sent depends upon the height of the camera above the roadway, the camera projection distance, and the speed of the vehicle passing under the camera projection. Figures 4, 5 and 6 show the compression for each respective resolution, i.e., SD, CIF, Half-HD and Full-HD. From these results it can be concluded that FVS provides more than 99% compression compared with the Sprite Coding technique.

Figure 5. Sprite Coding vs. FVS compression for traffic flow rate f_2.
Figure 6. Sprite Coding vs. FVS compression for traffic flow rate f_3.

VI. CONCLUSION AND FUTURE WORK

Different video compression techniques and Sprite Coding were compared with our proposed FVS algorithm, and it was concluded from the results that a substantial level of video compression with reasonably high video quality can be achieved. Sprite Coding is generally believed to provide the highest level of video compression compared with H.264 (4:4:4, 4:2:2 and 4:2:0); however, FVS proves to be considerably more efficient in terms of video compression than Sprite Coding. The video synthesized using the FVS algorithm can be played back as high-quality video. The position parameters (x_i^a, x_m^a, x_f^a, x_i^b, x_m^b, x_f^b, y_i^a, y_m^a, y_f^a, y_i^b, y_m^b and y_f^b) are used to display a single frame of the video, whereas the parameters (v_im, v_mf, v_im^b, v_mf^b, acc_if and acc_if^b) are used to display the multiple frames per second showing the vehicle's motion with exact position, velocity and acceleration. The Vehicle-ID is used to fetch the color and type of the vehicle. Hence, only a few parameters are enough to display the video with excellent HD quality without consuming extra bandwidth and disk space. The same video parameters can be saved and played back in the future without requiring nearly as much disk space as H.264-compressed video, or even Sprite-coded video.

The following points can be considered for future work:

- More intelligence can be added to vehicular traffic controllers using the parameters fetched by Controller-A.
- Non-shadowed video footage can be projected.
- More accuracy can be accomplished by minimizing the errors due to peak velocity and acceleration of detected vehicles.
- A modified version of the same algorithm for non-tagged vehicles is in progress.
- The FVS algorithm can be applied to applications other than Intelligent Transportation Systems.
- There is more work to do on triggered video parameter transmission.
- More bandwidth can be saved when there is less traffic flow along the lane.

REFERENCES

[1] Wu Feng, Gao Wen, Xiang YangZhao, Gao Peng and Chen DaTong, "Online sprite encoding with large global motion estimation," Proceedings of the Data Compression Conference (DCC '98), p. 546, 30 Mar.-1 Apr. 1998.
[2] A. Krutz, A. Glantz, M. Frater and T. Sikora, "Rate-distortion optimization for automatic sprite video coding using H.264/AVC," 16th IEEE International Conference on Image Processing (ICIP), pp. 2297-2300, 7-10 Nov. 2009.
[3] G. J. Sullivan and T. Wiegand, "Rate-distortion optimization for video compression," IEEE Signal Processing Magazine, vol. 15, no. 6, pp. 74-90, Nov. 1998, doi: 10.1109/79.733497.

[4] A. Ortega and K. Ramchandran, "Rate-distortion methods for image and video compression," IEEE Signal Processing Magazine, vol. 15, no. 6, pp. 23-50, Nov. 1998, doi: 10.1109/79.733495.
[5] G. J. Sullivan and T. Wiegand, "Video Compression - From Concepts to the H.264/AVC Standard," Proceedings of the IEEE, vol. 93, no. 1, pp. 18-31, Jan. 2005, doi: 10.1109/JPROC.2004.839617.
[6] T. Wiegand, G. J. Sullivan, G. Bjontegaard and A. Luthra, "Overview of the H.264/AVC video coding standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560-576, July 2003, doi: 10.1109/TCSVT.2003.815165.
[7] Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, Part 2: Video, ISO/IEC JTC 1, 1993.
[8] Yang Tao, Zhiming Liu and Yuxing Peng, "VCGE: Video Coding Grid Environment Using H.264 Video Compression Technologies," International Conference on Computational Intelligence and Security, vol. 2, pp. 1726-1729, 3-6 Nov. 2006.
[9] M. F. Tsai, N. K. Chilamkurti, S. Zeadally and A. Vinel, "Concurrent multipath transmission combining forward error correction and path interleaving for video streaming," Computer Communications, vol. 34, no. 9, pp. 1125-1136.
[10] A. Vinel, E. Belyaev, K. Egiazarian and Y. Koucheryavy, "An overtaking assistance system based on joint beaconing and real-time video transmission," IEEE Transactions on Vehicular Technology, vol. 61, no. 5, pp. 2319-2329.