A Quantized Transform-Domain Motion Estimation Technique for H.264 Secondary SP-frames


Ki-Kit Lai, Yui-Lam Chan, and Wan-Chi Siu
Centre for Signal Processing, Department of Electronic and Information Engineering
The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
{kikit.lai, enylchan, enwcsiu}@polyu.edu.hk

Abstract. The brand-new SP-frame in H.264 facilitates drift-free bitstream switching. Notwithstanding the guarantee of seamless switching, the cost is the bulky size of secondary SP-frames, which induces a significant amount of additional space or bandwidth for storage or transmission. For this reason, a new motion estimation and compensation technique, operated in the quantized transform (QDCT) domain, is designed in this paper for coding secondary SP-frames. So far, much investigation has been conducted into the trade-off between the relative sizes of primary and secondary SP-frames obtained by adjusting the quantization parameters. Our proposed work, in contrast, aims at keeping the secondary SP-frames as small as possible without affecting the size of primary SP-frames, by incorporating QDCT-domain motion estimation and compensation into secondary SP-frame coding. Simulation results demonstrate that the size of secondary SP-frames can be reduced remarkably.

Keywords: Video coding, SP-frame, H.264, QDCT-domain, motion estimation, motion compensation.

1 Introduction

H.264 is the latest video coding standard [1], jointly developed by the ISO Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG). It achieves gains in coding efficiency of up to 50% over a wide range of bit rates as compared with previous video coding standards [2]. In addition to achieving superior coding efficiency, this new standard includes a number of new features that provide more flexibility for applications in a wide variety of network environments. The new SP-frame is one of these features.
The motivation for introducing SP-frames is to facilitate error resilience, bitstream switching, splicing, random access, fast forward, and fast backward [1]. The SP-frame is now part of the Extended Profile of the H.264 standard and comes in two forms: primary and secondary SP-frames. Both exploit temporal redundancy with predictive coding, but use

different reference frames. Although different reference frames are used, identical reconstruction is still possible. This property enables drift-free switching between compressed bitstreams of different bit rates to accommodate bandwidth variation, as illustrated in Figure 1. The figure depicts a video sequence encoded into two bitstreams (B1 and B2) with different bit rates: B1 is encoded at a high bitrate while B2 is a low-bitrate bitstream. Within each bitstream, two primary SP-frames, SP1,t and SP2,t, are placed at frame t (the switching point). To allow seamless switching, a secondary SP-frame (SP12,t) is produced, which has the same reconstructed values as SP2,t even though different reference frames are used. When switching from B1 to B2 is needed at frame t, SP12,t is transmitted instead of SP2,t. After decoding SP12,t, the decoder obtains exactly the same reconstructed values as if SP2,t had been decoded normally at frame t, and it can therefore continue decoding B2 from frame t+1 seamlessly.

Nevertheless, there is a trade-off between the coding performance of primary SP-frames and the storage cost of secondary SP-frames [3]. For example, a primary SP-frame with high quality results in a significantly high storage requirement for the secondary SP-frame, and storing secondary SP-frames of such size is impractical. In this paper, we propose a novel coding arrangement to reduce the size of secondary SP-frames.

Fig. 1. Switching from bitstream B1 to B2 using SP-frames.

The rest of this paper is organized as follows. Section 2 gives a brief introduction to H.264 SP/SI-frame coding. Section 3 presents an in-depth study of the problem of applying the traditional pixel-domain motion estimation technique in the secondary SP-frame encoder; an analysis of using QDCT-domain motion estimation is also covered there. After the detailed analysis, a novel secondary SP-frame encoder is proposed.
In Section 4, we present experimental results showing the performance of the proposed scheme and compare it with the conventional secondary SP-frame encoder. Concluding remarks are provided in Section 5.
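The switching mechanism of Figure 1 can be sketched as a simple frame-selection rule on the server side. This is a minimal illustration with made-up frame labels and function names (not part of the H.264 standard or any codec API): the only substantive point is that the secondary SP-frame SP12,t is sent in place of the primary SP2,t at the switching point.

```python
# Sketch of a streaming server's frame-selection logic when switching from
# bitstream B1 to B2 at switching point t. All identifiers are illustrative;
# the paper's mechanism is simply: send the secondary SP-frame SP12,t in
# place of the primary SP2,t at the switch, then continue with B2.

def frame_to_send(i, t, b1_frames, b2_frames, secondary_sp):
    """Select the frame to transmit at index i for a B1-to-B2 switch at t."""
    if i < t:
        return b1_frames[i]      # before the switch: serve B1
    if i == t:
        return secondary_sp      # at the switch: SP12,t reconstructs as SP2,t
    return b2_frames[i]          # after the switch: serve B2 seamlessly

b1 = ["P1_0", "P1_1", "SP1_2", "P1_3"]
b2 = ["P2_0", "P2_1", "SP2_2", "P2_3"]
sent = [frame_to_send(i, 2, b1, b2, "SP12_2") for i in range(4)]
print(sent)  # ['P1_0', 'P1_1', 'SP12_2', 'P2_3']
```

Because SP12,t reconstructs to exactly the same values as SP2,t, the decoder sees no drift when it continues with B2's frames after the switch.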

2 Background of Coding SP-Frames

Primary SP-frames are encoded in a way similar to P-frames, except that additional quantization/dequantization steps with quantization level QS are applied to the transform coefficients of the primary SP-frame (SP2,t in Figure 1), as shown in Figure 2. Interested readers are referred to [4-6]. These extra steps ensure that the quantized transform coefficients of SP2,t can be quantized and de-quantized at QS without loss; these coefficients are used in the encoding process of the secondary SP-frame, SP12,t.

Fig. 2. Simplified encoding block diagram of primary and secondary SP-frames [5].

For coding SP12,t, the reconstructed frame P1,t-1 acts as the reference, and the target is to reconstruct SP2,t perfectly. Using P1,t-1 as the reference, the prediction is first transformed and quantized at QS before the residue against the quantized transform coefficients of SP2,t is generated. Both the prediction and the target are thus synchronized at QS, and there is no further quantization from this point, meaning that the decoder, with P1,t-1, QS, and the residue available, can perfectly reconstruct SP2,t.
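The lossless re-quantization property that secondary SP-frame coding relies on can be illustrated with a toy uniform scalar quantizer (a deliberate simplification; H.264 uses integer-transform quantization with rounding offsets, and the function names here are ours): once coefficients have been snapped onto the QS reconstruction lattice, a further quantize/dequantize round trip changes nothing.

```python
# Toy uniform scalar quantizer illustrating why the extra quantization step
# at level Qs makes the primary SP-frame coefficients losslessly
# re-quantizable (a hypothetical simplification of H.264's integer-transform
# quantization).

def quantize(coeff, step):
    """Map a coefficient to its quantization index (round to nearest)."""
    return round(coeff / step)

def dequantize(index, step):
    """Reconstruct a coefficient from its quantization index."""
    return index * step

Qs = 8  # hypothetical quantization step

# Arbitrary transform coefficients of the primary SP-frame.
coeffs = [13.0, -27.5, 4.2, 0.9]

# Extra step in primary SP-frame encoding: snap onto the Qs lattice.
snapped = [dequantize(quantize(c, Qs), Qs) for c in coeffs]

# Once on the lattice, another quantize/dequantize round trip is lossless,
# which is what secondary SP-frame coding exploits.
round_trip = [dequantize(quantize(c, Qs), Qs) for c in snapped]
assert round_trip == snapped
print(snapped)  # [16, -24, 8, 0]
```

The first pass is lossy (13.0 becomes 16), but every subsequent pass at the same QS is exact, so the secondary SP-frame residue can be added without introducing drift.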

3 Size Reduction of Secondary SP-Frames in the QDCT Domain

3.1 Motion-compensated prediction in secondary SP-frames

Producing secondary SP-frames involves motion estimation and motion compensation. H.264 supports motion estimation with different block sizes, namely 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4 [7]. To compute the coding modes and motion vectors for the secondary SP-frame, motion estimation is first performed for all modes and submodes independently by minimizing the Lagrangian cost function Jmotion:

    Jmotion(mv2, λmotion) = SAD(s, r) + λmotion · Rmotion(mv2 − pmv2)    (1)

where mv2 is the motion vector used for prediction, λmotion is the Lagrangian multiplier for motion estimation, Rmotion(mv2 − pmv2) is the estimated number of bits for coding mv2 against its prediction pmv2, and SAD is the sum of absolute differences between the original block s and its reference block r [7]. After motion estimation for each mode, a rate-distortion (RD) optimization technique is used to select the best mode; its general form is

    Jmode(s, c, mode2, λmode) = SSD(s, c, mode2) + λmode · Rmode(s, c, mode2)    (2)

where λmode is the Lagrangian multiplier for mode decision, mode2 is one of the candidate modes during motion estimation, SSD is the sum of squared differences between s and its reconstructed block c, and Rmode(s, c, mode2) represents the number of coding bits associated with the chosen mode. To compute Jmode, forward and inverse integer transforms and variable-length coding are performed.

In H.264 codec implementations such as the JM reference software [8], the motion estimation of the secondary SP-frame uses P1,t-1 and the original SP1,t as the reference and current frames, respectively. This arrangement allows the reuse of coding modes (mode1,t in Figure 1) and motion vectors (mv1,t in Figure 1) during secondary SP-frame encoding.
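The rate-constrained search of Eq. (1) can be sketched as follows. The block data and the bit-cost model are illustrative inventions of ours (the JM codec uses table-driven exp-Golomb costs); the point is only the structure of the Lagrangian trade-off between distortion and motion-vector rate.

```python
# Minimal sketch of the rate-constrained motion search of Eq. (1):
#   J_motion(mv) = SAD(s, r(mv)) + lambda_motion * R_motion(mv - pmv).
# Block data and the bit-cost model below are illustrative, not JM's.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def mv_bits(mv, pmv):
    """Crude stand-in for R_motion: cost grows with |mv - pmv|."""
    return sum(1 + 2 * abs(c - p) for c, p in zip(mv, pmv))

def best_motion_vector(current, ref_block_at, candidates, pmv, lam):
    """Return the candidate motion vector minimizing J_motion."""
    return min(
        candidates,
        key=lambda mv: sad(current, ref_block_at(mv)) + lam * mv_bits(mv, pmv),
    )

# Hypothetical 1-D "blocks" indexed by candidate motion vector.
current = [10, 20, 30, 40]
reference = {(0, 0): [12, 18, 33, 39], (1, 0): [10, 20, 30, 40]}
mv = best_motion_vector(current, reference.__getitem__, list(reference), (0, 0), 1.0)
print(mv)  # (1, 0): its zero SAD outweighs the extra motion-vector bits
```

With a larger λmotion, the cheaper-to-code (0, 0) candidate would eventually win; that is exactly the rate-distortion trade-off the multiplier controls.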
This reuse means that

    mv2,t = mv1,t    (3)

and

    mode2,t = mode1,t    (4)

However, reusing the coding modes and motion vectors reduces the coding efficiency of a secondary SP-frame, since the purpose of the secondary SP-frame is to reconstruct SP2,t rather than SP1,t. In [9], a secondary SP-frame is encoded to match the exact target frame (the reconstructed SP2,t) based on the exact reference (P1,t-1), as depicted in Figure 3. By using the correct target and reference frames, better compression performance of secondary SP-frames can be achieved. Note that the computational complexity evidently increases without reusing coding modes and

motion vectors. Nevertheless, secondary SP-frames are always generated offline for bitstream-switching applications, so complexity is not the major concern in coding them.

Fig. 3. Motion estimation and compensation of a secondary SP-frame encoder [9].

3.2 Motivation for QDCT-domain motion-compensated prediction

Nevertheless, the improvement in [9] is not very significant. In this section, we explain the deficiency of the conventional motion estimation and compensation processes, which operate in the pixel domain, for secondary SP-frames. Figure 4 illustrates the encoding of a block in a P-frame using pixel-domain motion estimation: most of the transform coefficients become zero after transformation and quantization, a property that benefits entropy coding. In Figure 3, however, the encoding of a secondary SP-frame involves transforming and quantizing the original SP2,t and P1,t-1 first. The quantized coefficients of the secondary SP-frame at t, [T(SP12,t)], are then obtained as

    [T(SP12,t)] = [T(SP2,t)] − [T(MC(P1,t-1))]    (5)

where MC(·) is the motion-compensation operator. Figure 5 uses the same example as Figure 4 to show the residue of a secondary SP-frame, in which a block is transformed and quantized before the residue is calculated. In this case, the quantized coefficients of the target and its prediction are only near, but not equal, to each other, resulting in many non-

zero residues, especially for a small QS. Since there is no further quantization from this point, these coefficients must be encoded completely, and in entropy coding even a single high-frequency coefficient demands a significant number of bits. The size of secondary SP-frames therefore becomes large, which explains why pixel-domain motion estimation is not suitable for coding secondary SP-frames. In this paper, we propose performing motion estimation and compensation in the quantized transform (QDCT) domain rather than the pixel domain to improve the coding efficiency of secondary SP-frames.

Fig. 4. Motion-compensated prediction using pixel-domain motion estimation in encoding a P-frame.

Fig. 5. Motion-compensated prediction using pixel-domain motion estimation in encoding a secondary SP-frame.

3.3 The proposed scheme for secondary SP-frame encoding

In this section, we propose a quantized transform-domain motion estimation (TME) technique that minimizes [T(SP2,t)] − [T(MC(P1,t-1))] (quantized transform domain) instead of SP2,t − MC(P1,t-1) (pixel domain). From (1), the SAD between the pixels of the original block s and its reference block r is used to compute the distortion term of Jmotion. The investigation above reveals that a pixel-domain distortion measure is not appropriate for coding secondary SP-frames. In the proposed TME, the Lagrangian cost function Jmotion in (1) is rewritten as

    J'motion(mv2, λmotion) = SATD(s, r) + λmotion · Rmotion(mv2 − pmv2)    (6)

where SATD(s, r) is now the sum of absolute differences between the quantized transform coefficients of the original block s and those of its reference block r, defined as

    SATD(s, r) = Σ |[T(s)] − [T(r)]|    (7)

For coding a secondary SP-frame, this distortion measure finds a better motion vector and mode for minimizing the residue [T(SP12,t)] in (5). Note that SATD is computationally intensive, since all candidate pixel blocks have to be transformed and quantized into the QDCT domain. However, complexity is not the major concern for secondary SP-frame encoding, since this frame type is always encoded offline for bitstream-switching applications. On the other hand, the more accurate distortion measure increases the coding efficiency of secondary SP-frames, which results in a significant reduction of the storage requirement in the video server.

Figure 6 shows the block diagram of the secondary SP-frame encoder with our new QDCT-domain motion estimation technique. The reference and target frames in the QDCT domain are the inputs of TME. After the motion vector for each block is obtained, a corresponding QDCT-domain motion compensation (TMC) is used to compute the motion-compensated frame, [T(MC(P1,t-1))]. With [T(MC(P1,t-1))] and [T(SP2,t)], as depicted in Figure 6, the residue [T(SP12,t)] can then be calculated.

Fig. 6. The proposed secondary SP-frame encoder in the QDCT domain.
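The distortion measure of Eq. (7) can be sketched as follows. The 4-point Hadamard transform below is a stand-in for H.264's 4×4 integer transform, chosen only to keep the example short, and all function names are ours. It shows the key effect: two blocks that differ in the pixel domain can coincide exactly after transformation and quantization, so a QDCT-domain search can find candidates that cost no residue bits at all.

```python
# Sketch of the QDCT-domain distortion of Eq. (7):
#   SATD(s, r) = sum |[T(s)] - [T(r)]|,
# where [T(.)] denotes quantized transform coefficients. The 4-point
# Hadamard transform is a stand-in for H.264's 4x4 integer transform.

def transform4(x):
    """Tiny 4-point Hadamard transform standing in for T(.)."""
    a, b, c, d = x
    return [a + b + c + d, a - b + c - d, a + b - c - d, a - b - c + d]

def quantized_transform(block, qs):
    """[T(.)]: transform, then quantize with step qs."""
    return [round(v / qs) for v in transform4(block)]

def satd_qdct(s, r, qs):
    """Eq. (7): SAD between the quantized transform coefficients of s and r."""
    return sum(abs(ts - tr)
               for ts, tr in zip(quantized_transform(s, qs),
                                 quantized_transform(r, qs)))

s = [100, 102, 98, 101]   # current block (illustrative values)
r = [99, 103, 97, 100]    # candidate reference block

pixel_sad = sum(abs(a - b) for a, b in zip(s, r))
print(pixel_sad)            # 4: the blocks differ in the pixel domain
print(satd_qdct(s, r, 16))  # 0: identical once quantized at step 16,
                            # so this candidate costs no residue bits
```

A pixel-domain search would still see a non-zero SAD for this candidate, while the QDCT-domain measure correctly reports zero residue, which is exactly the advantage the proposed TME exploits.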

4 Simulation Results

To evaluate the performance of the proposed scheme and the scheme in [9], three test sequences, Foreman (CIF), Salesman (CIF) and Table Tennis (SIF), were used in our experiments. The H.264 reference codec (JM [8]) was employed to encode primary and secondary SP-frames at a frame rate of 30 fps. All test sequences have a length of 200 frames. For simplicity but without loss of generality, we used two bitstreams encoded with two different sets of QP and QS, and only the switching from the low-bitrate bitstream to the high-bitrate bitstream is shown. For the low-bitrate bitstream, QP and QS were both fixed to 14, whereas both were set to 12 for the high-bitrate bitstream. To make the comparison comprehensive and impartial, every frame was encoded in turn as an SP-frame while non-switching frames were encoded as P-frames.

Figures 7(a), 7(b) and 7(c) show the frame-by-frame comparisons of the size reduction of secondary SP-frames. In these figures, positive values on the Y-axis denote the size reduction of a secondary SP-frame, in percentage difference, of our proposed scheme over the scheme in [9], whereas negative values mean the proposed scheme generates a larger bit-count than [9]. It is observed that the proposed scheme can substantially reduce the size of secondary SP-frames, by up to 30%, 12% and 10% in Foreman, Table Tennis and Salesman, respectively. The significant improvement comes from performing motion estimation and compensation in the QDCT domain. In [9], even though a proper target frame is selected for motion estimation, the gain is still limited, because only the conventional pixel-domain motion estimation technique is employed for coding secondary SP-frames.
In this situation, most of the transform coefficients become non-zero after transformation and quantization, as shown in Figure 5, which hampers entropy coding; consequently, more bits are required to encode secondary SP-frames. Our proposed scheme, on the other hand, produces secondary SP-frames using motion estimation in the QDCT domain, so the quantized transform coefficients themselves are used to calculate the distortion in the Lagrangian cost function. The new SATD finds the motion vector that drives more coefficients to zero, which benefits the entropy coding of secondary SP-frames and yields the remarkable size reduction shown in Figures 7(a), 7(b) and 7(c).
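The Y-axis quantity plotted in Figure 7 corresponds to a percentage-difference calculation of this form (a trivial sketch; the function name and the byte counts are ours):

```python
# The Y-axis of Figure 7: secondary SP-frame size reduction of the proposed
# scheme over the scheme in [9], as a percentage difference (positive means
# the proposed frame is smaller). Function name and numbers are illustrative.

def size_reduction_percent(bits_ref, bits_proposed):
    """Percentage size reduction relative to the reference scheme [9]."""
    return 100.0 * (bits_ref - bits_proposed) / bits_ref

print(size_reduction_percent(1000, 700))   # 30.0: proposed frame 30% smaller
print(size_reduction_percent(1000, 1100))  # -10.0: proposed frame larger
```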

Fig. 7. Size reduction of secondary SP-frames, in percentage difference, achieved by the proposed scheme over the scheme in [9]: (a) Foreman, (b) Salesman, and (c) Table Tennis.

5 Conclusion

In this paper, an efficient scheme for coding H.264 secondary SP-frames has been proposed. We found that conventional pixel-domain motion estimation is not appropriate for a secondary SP-frame encoder, as it incurs a considerable size of secondary SP-frames. To alleviate this, we have incorporated a QDCT-domain motion estimation technique into the encoding process of secondary SP-frames. Experimental results show that the proposed scheme can significantly reduce the size of H.264 secondary SP-frames. Moreover, the proposed technique does not affect the coding efficiency of primary SP-frames.

Acknowledgments. The work described in this paper is partially supported by the Centre for Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (PolyU 525/06E). Ki-Kit Lai acknowledges the research studentships provided by the University.

References
1. Joint Video Team of ISO/IEC MPEG and ITU-T VCEG: ITU-T Recommendation H.264: Advanced video coding for generic audiovisual services (2005)
2. ITU-T Recommendation H.263: Video coding for low bitrate communication (1998)
3. Chang, C.P., Lin, C.W.: R-D optimized quantization of H.264 SP-frames for bitstream switching under storage constraints. IEEE International Symposium on Circuits and Systems, Vol. 2 (2005) 242-235
4. Karczewicz, M., Kurceren, R.: The SP- and SI-frames design for H.264/AVC. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 7 (2003) 637-644
5. Sun, X., Li, S., Wu, F., Shen, K., Gao, W.: The improved SP frame coding technique for the JVT standard. IEEE International Conference on Image Processing, Vol. 2 (2003) 297-300
6. Kurceren, R., Karczewicz, M.: Synchronization-predictive coding for video compression: The SP frames design for JVT/H.26L. IEEE International Conference on Image Processing, Vol.
2 (2002) 497-500
7. Schäfer, R., Wiegand, T., Schwarz, H.: The emerging H.264/AVC standard. EBU Technical Review (2003)
8. Sühring, K.: H.264 Reference Software JM. http://iphome.hhi.de/suehring/tml/ (2006)
9. Tan, W.T., Shen, B.: Methods to improve coding efficiency of SP frames. IEEE International Conference on Image Processing, Atlanta, USA (2006)