EXPERIMENTAL ANALYSIS AND MODELING OF DIGITAL VIDEO QUALITY

Mylène C. Q. Farias,(a) Michael S. Moore,(a) John M. Foley,(b) and Sanjit K. Mitra(a)
(a) Department of Electrical and Computer Engineering, (b) Department of Psychology, University of California Santa Barbara, Santa Barbara, CA, USA

ABSTRACT

In this paper we present a review of the work done at the University of California Santa Barbara (UCSB) in the area of video quality. We performed a series of specially designed psychophysical experiments with the goal of studying the appearance, visibility, annoyance, and importance of different types of artifacts commonly found in digital videos. We used both real and synthetically generated artifacts. Full and reduced reference video quality metrics were developed based on a simple human visual system model and on annoyance and detection models. For real-time applications, we have developed reduced-reference and no-reference quality metrics based on measurements of individual artifacts. The proposed metrics have good performance and relatively low complexity.

1. INTRODUCTION

A video impairment is any change in a video signal that, if sufficiently strong, will reduce the perceived quality. Video impairments can be introduced during capture, transmission, storage, and/or display, as well as by any image processing algorithm (e.g., compression) that may be applied along the way. Most impairments have more than one perceptual feature, but it is possible to produce impairments that are relatively pure. We will use the term artifacts to refer to the perceptual features of impairments and artifact signal to refer to the physical signal that produces the artifact. Examples of artifacts introduced by digital video systems are blurriness, noisiness, ringing, and blockiness [1].

There is an ongoing effort to develop video quality metrics that are able to detect impairments and estimate their annoyance as perceived by human viewers. Most of the quality metrics proposed in the literature are Full Reference (FR) metrics. These metrics estimate the quality of a video by comparing the reference and impaired videos. Some examples include the works by Daly [2], Lubin [3], Watson [4], Wolf and Pinson [5], and Winkler [6]. A more complete survey of the available FR video quality metrics is presented in [7]. The major drawback of the FR approach is that a large amount of reference information has to be provided at the final comparison point. Also, a very precise spatial and temporal alignment of the reference and impaired videos is needed to guarantee the accuracy of the metric.

Reduced Reference (RR) quality metrics require only partial information about the reference video. In general, certain features or physical measures are extracted from the reference and transmitted to the receiver as side information to help evaluate the quality of the video. Metrics in this class may be less accurate than FR metrics, but they are also less complex and make real-time implementations more affordable. Some examples include the works of Webster et al. [8] and Bretillon et al. [9].

Requiring the reference video, or even limited information about it, becomes a serious impediment in many real-time transmission applications. In these cases, it becomes essential to develop ways of blindly estimating the quality of a video using a No-Reference (NR) video quality metric.
It turns out that, although human observers can usually assess the quality of a video without using the reference, creating a metric that does this is a difficult task and, most frequently, results in a loss of performance in comparison to the FR approach. Most of the proposed NR metrics estimate annoyance by detecting and estimating the strength of commonly found artifact signals. For example, the metrics by Wu et al. and Wang et al. estimate quality based on blockiness measurements [10, 11], while the metric by Caviedes et al. takes into account measurements of five types of artifacts [12].

In this paper we present a review of two different projects completed over the last several years [13, 14]. The main goal of the first project was to test some assumptions made in video quality metric research and to develop FR and RR metrics, with the goal that the RR metric would have similar accuracy but reduced computational complexity. First, a set of experiments was performed that measured the detectability, annoyance, and importance of MPEG-2 compression impairments. Then, visibility and annoyance models were developed using a simplified model of the human visual system that took into account factors such as defect size, duration, location, importance, and video content. The goal of the second project was to investigate the possibility of designing a quality metric based on blind artifact measurements.

We first studied the visibility, annoyance, and strength of four of the more commonly found artifacts (blockiness, blurriness, noisiness, and ringing) by performing experiments using synthetically generated artifacts. Then, we created an annoyance model by developing a set of NR artifact metrics for estimating the strength of each of these artifacts and a rule for combining the outputs of these metrics to predict overall annoyance.

2. EXPERIMENTAL METHODOLOGY

The usual approach to subjective quality testing is to degrade a video by a variable amount and ask the test subjects for a quality or annoyance rating. The degradation is introduced by compressing, transmitting, or processing an entire video in a way that produces different degrees of impairments. The impairments introduced using this approach are rarely evenly spread.

In this research, we have developed an experimental paradigm that introduces brief, spatially and temporally limited impairments of varying strength into video sequences [13]. We control the impairment strength by scaling the pixel-by-pixel difference between the corrupted and the original videos and inserting it into specific regions of the video for a short time interval. Different spatial regions and time intervals (defect regions) are used for each original to prevent the test subjects from learning the locations where the defects appear [13]. This method allows varying the strength of an impairment without affecting its waveform.
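As a rough illustration of this insertion paradigm, the following sketch (a minimal Python/NumPy example, not the code used in [13]) scales the difference between a corrupted and an original video and adds it back inside a rectangular spatio-temporal defect region. The function names, the array layout, and the clipping to an 8-bit pixel range are assumptions made only for the example.

```python
import numpy as np

def insert_scaled_defect(original, corrupted, strength,
                         t_range, y_range, x_range):
    """Insert a spatio-temporally localized impairment into a video.

    The artifact signal is the pixel-by-pixel difference between the
    corrupted and original videos.  Scaling this difference by `strength`
    changes the impairment strength without changing its waveform.

    original, corrupted : arrays of shape (frames, height, width)
    strength            : scale factor applied to the difference signal
    t_range, y_range, x_range : (start, stop) tuples defining the defect region
    """
    test = original.astype(np.float64).copy()
    t0, t1 = t_range
    y0, y1 = y_range
    x0, x1 = x_range

    # Artifact signal restricted to the defect region.
    diff = (corrupted[t0:t1, y0:y1, x0:x1].astype(np.float64)
            - original[t0:t1, y0:y1, x0:x1].astype(np.float64))

    # Scale, re-insert, and clip back to the assumed 8-bit pixel range.
    test[t0:t1, y0:y1, x0:x1] += strength * diff
    return np.clip(test, 0, 255)

def log_tse(test, original, t_range, y_range, x_range):
    """Log10 of the total squared error of the inserted impairment."""
    t0, t1 = t_range; y0, y1 = y_range; x0, x1 = x_range
    err = (test[t0:t1, y0:y1, x0:x1].astype(np.float64)
           - original[t0:t1, y0:y1, x0:x1].astype(np.float64))
    return np.log10(np.sum(err ** 2))
```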
Each experimental session includes five stages: instructions, training, practice trials, experimental trials, and interview. During the instructions, the experimenter orally describes the task to the subject. In the training stage, examples of the experimental sequences representing the impairment extremes are shown to the subjects to establish the score range. The practice trials are used to familiarize the subjects with the task and to stabilize their responses before the experimental trials. In the experimental trials, subjects are asked to perform two or three of the following tasks: detection, annoyance, strength, appearance (description), and importance [13, 14].

The detection task consists of detecting a spatially and temporally localized impairment in a five-second video sequence. After each test sequence the subjects are asked "Did you see a defect or an impairment?" and respond yes or no. The annoyance task consists of giving a numerical judgment of how annoying the detected impairment is: the subject is instructed to enter a positive numerical value, with any defect as annoying as the worst impairments in the training stage given 100, one half as annoying 50, one twice as annoying 200, and so forth. The description task consists of indicating the appearance of the detected impairment. For this task, the subject is given a set of classifiers and asked to choose the appropriate ones to describe the appearance of the impairment she/he has just seen. The strength task consists of estimating how strong each artifact is in the detected impairment. After the video is played, the subject is asked to enter a number on a scale from 0 to 10, with any artifact as strong as the strongest artifacts in the training stage given 10, one half as strong 5, and so forth. The content importance task consists of rating the importance of a region that is missing from the video. For this type of experiment, we generate a series of test sequences in which the video content in the defect region is simply missing. The subjects are asked to give a response on a fixed scale ranging from 0 (insignificant) to 10 (vitally important).

From the data gathered in the detection task, for each test sequence we compute the probability of detection (PD) by dividing the number of subjects who detected the artifact by the total number of subjects. PD as a function of the log of the total squared error (LogTSE), i.e., the psychometric function, is fitted with the Weibull function [15]:

P(x) = 1 - 2^(-(x / x_T)^k),    (1)

where P(x) is the probability of detection, x is the LogTSE, x_T is the 50% detection threshold, and k is the slope of the transition.

From the data gathered in the annoyance, importance, and strength tasks we compute the mean annoyance values (MAV), mean importance values (MIV), and mean strength values (MSV), respectively, by averaging the subjective scores over all observers for each test video. The MAV as a function of the LogTSE, i.e., the annoyance function, is fitted with the standard logistic function [15]:

y = 100 / (1 + exp(-β (x - x̄))),    (2)

where y is the predicted annoyance and x is the LogTSE. The parameter x̄ (the mid-annoyance LogTSE) translates the curve along the x-axis and β controls its steepness.
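The fitting of Eqs. (1) and (2) can be illustrated with a short SciPy sketch. This is a generic least-squares fit with placeholder data and starting values, not the fitting procedure or the data of [13, 14].

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_pd(x, x_t, k):
    """Psychometric function, Eq. (1): probability of detection vs. LogTSE."""
    return 1.0 - 2.0 ** (-(x / x_t) ** k)

def logistic_mav(x, x_mid, beta):
    """Annoyance function, Eq. (2): mean annoyance value vs. LogTSE."""
    return 100.0 / (1.0 + np.exp(-beta * (x - x_mid)))

# Placeholder data: LogTSE of each test sequence, the fraction of subjects
# who detected the defect, and the mean annoyance value.
log_tse = np.array([3.2, 3.6, 4.0, 4.4, 4.8, 5.2])
pd      = np.array([0.05, 0.20, 0.55, 0.80, 0.95, 1.00])
mav     = np.array([2.0, 8.0, 25.0, 48.0, 70.0, 88.0])

(x_t, k), _ = curve_fit(weibull_pd, log_tse, pd, p0=[4.0, 4.0])
(x_mid, beta), _ = curve_fit(logistic_mav, log_tse, mav, p0=[4.5, 2.0])

print(f"detection threshold x_T = {x_t:.2f}, slope k = {k:.2f}")
print(f"mid-annoyance LogTSE = {x_mid:.2f}, steepness beta = {beta:.2f}")
```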

3. FR AND RR METRICS BASED ON DETECTION AND ANNOYANCE MODELS

In our first project we performed psychophysical experiments in which the appearance, strength, size, duration, and location of MPEG-2 impairments were varied [16]. The subjects performed detection, annoyance, and importance tasks. We found a strong correlation between the mid-annoyance defect strength (x̄) and the detection threshold (x_T). In fact, x̄ could be approximated using the linear expression x̄ = c x_T, where c was found to be equal to 1.14, 1.19, and 1.15 for three of the experiments. Figure 1 depicts the plot of x̄ versus x_T obtained from the data of one of the experiments [16]. We can conclude that the common assumption that annoyance can be predicted using a good detection model is correct.

Figure 1. Plot of mid-annoyance versus detection threshold.

Analysis of variance of the parameters x̄ and x_T showed that the statistically most significant sources of variance were the original and the location of the defect region. They were more significant than the variance due to size, duration, or defect type; indeed, in most experiments the defect type did not have a statistically significant effect. This means that, for errors of constant TSE, the context, and not the nature of the defect, is the primary determinant of its visibility and annoyance. The location and size of the defect region were the variables with the largest effect on the importance of a region. Surprisingly, the detection threshold did not depend significantly on importance, and annoyance was only weakly correlated with importance. This may be a consequence of our task, which required subjects to actively search for defects.

Based on these results, separate models were developed for the dependence of the threshold and annoyance values on defect strength. Figure 2 presents a generic block diagram for both models, where the output corresponds to an estimate of either the detection probability or the mean annoyance value. For the detection model, the high-level model corresponds to the psychometric function, while for the annoyance model it corresponds to a logistic function. Both models include a human visual system (HVS) model that takes into account human contrast sensitivity and spatial masking effects, as shown in Figure 3.

Figure 2. A generic fidelity measure block diagram (blocks: register videos, adjust, HVS model applied to both videos, difference, summarize, high-level model, fidelity measure).

Figure 3. Schematic illustration of the HVS model applied to the reference and test videos (display model, lowpass filtering, contrast computation, spatial CSF, masking with a variability measure, and a sum-of-squares stage).

The HVS model used in this work was developed starting with a simple HVS model and then adding complexity as necessary to improve prediction performance. A similar approach was also explored by Watson and Malo [17]. The HVS model implementation computed contrast using a single channel and Peli's definition of contrast [18]. For this study, the lowpass filter L_LP was a simple FIR filter and the bandpass filter was its delay complement. The best filters to use for these calculations are not known. Many models use a multichannel approach, where a bank of filters implements a pyramid decomposition of the input [2]. Several early attempts to create perceptual quality metrics filtered the difference pattern using the human spatial contrast sensitivity function (CSF), in effect treating this part of the visual system as a linear system [2]. Although the visual system is known to be non-linear, a linear spatial filter can improve the performance of a fidelity metric by de-emphasizing high-frequency components. The filter used for this work was derived from a CSF published in [19].
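A minimal sketch of a single-channel, Peli-style contrast computation of the kind described above is given below. It assumes a Gaussian lowpass in place of the unspecified FIR filter and uses a crude difference-of-Gaussians weighting as a stand-in for the CSF-derived filter of [19]; the filter choices and parameters are purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_channel_contrast(frame, sigma_lp=2.0, eps=1.0):
    """Peli-style band-limited contrast for one grayscale, float frame.

    The lowpass output stands in for the local mean luminance; the
    "bandpass" signal is its complement (frame minus lowpass), and the
    contrast is their ratio.  A Gaussian lowpass is used here purely for
    illustration; the metric in [13] used a simple FIR filter.
    """
    lowpass = gaussian_filter(frame, sigma=sigma_lp)
    bandpass = frame - lowpass            # complement of the lowpass output
    return bandpass / (lowpass + eps)     # local contrast

def csf_weight(contrast, sigma_fine=0.8, sigma_coarse=3.0):
    """Crude CSF-like spatial weighting: a difference-of-Gaussians bandpass
    that de-emphasizes very low and very high spatial frequencies.
    (Only a stand-in for the CSF-derived filter of [19].)"""
    return (gaussian_filter(contrast, sigma_fine)
            - 0.3 * gaussian_filter(contrast, sigma_coarse))

# Example: process one synthetic frame.
frame = np.random.rand(144, 176) * 255.0
c = single_channel_contrast(frame)
c_csf = csf_weight(c)
```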
There are many possible approaches to modeling masking, including contrast gain control models, noise masking, entropy masking, and activity masking. Our system used an activity masking model. Activity may be measured by summing deviations from a local mean or by summing the high-frequency coefficients in the frequency domain [20], and it can be computed locally or globally [21]. The implemented masking model used an estimate of the local contrast variation to weight the contrast values. Specifically,

I_M = I_FC / (M_0 v^q + 1),

where I_FC are the CSF-filtered contrast values, M_0 is a scaling factor, v is the range of the values in a 12′ local neighborhood around each pixel, and q is a compression factor. M_0 and q were set to 40 and 0.7, respectively. The unit offset ensures that the denominator never equals zero. The local range is not a typical measure of variability, but it can be a good approximation to the local standard deviation and can be computed rapidly.
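The activity-masking weighting just described can be sketched directly. The only assumption beyond the text is the size of the pixel window used to compute the local range (the paper specifies the neighborhood in units of visual angle).

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def activity_mask(i_fc, m0=40.0, q=0.7, window=5):
    """Weight CSF-filtered contrast values by local activity.

    Implements I_M = I_FC / (M_0 * v**q + 1), where v is the local range
    (max minus min) of the contrast values in a small neighborhood.  The
    window size in pixels is an assumption made for this sketch.
    """
    local_max = maximum_filter(i_fc, size=window)
    local_min = minimum_filter(i_fc, size=window)
    v = local_max - local_min              # local range as a variability proxy
    return i_fc / (m0 * v ** q + 1.0)

# Example: mask the CSF-weighted contrast of a frame.
i_fc = np.random.randn(144, 176) * 0.1
i_masked = activity_mask(i_fc)
```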

After the model transformation has been applied separately to the impaired and the original (reference) videos, the difference between the two signals is computed for each pixel. These differences are each raised to a power greater than 1, weighted according to their position in space and time, and summed over the video. The common logarithm of this total squared perceived error is related to the detection probability by a Weibull function and to annoyance by a logistic function.

The annoyance and detection models performed well on several experimental data sets using these fairly simple implementations of the contrast sensitivity and masking stages. The correlation of the FR metric was compared to that of the Sarnoff model [22], and the execution time was much shorter than for the Sarnoff model. Figure 4 shows a plot of the measured annoyance versus the FR annoyance model output for the set containing data from four experiments.

Figure 4. FR annoyance model.

In the RR model there are two processing stages. In the first, the reference video is processed to compute thresholds for faux patterns that resemble MPEG-2 impairments. In the second stage, both videos are heavily downsampled and the difference is computed. The thresholds for the faux patterns are then used to adjust these crude difference measures. The RR model performed equivalently to the FR model and, in some cases, actually improved on the FR performance. The amount of reference information required to process a test video was decreased by a factor of 400 and the execution time required by the comparison stage was decreased by a factor of 10,000. These implementation gains were significant and did not decrease model accuracy. Nevertheless, it is important to emphasize that the RR model requires considerable time to compute the thresholds and allows for fast quality assessment only when there has been prior stage 1 analysis.

5. NR AND RR METRICS BASED ON ARTIFACT MEASUREMENTS

When the reference is not available, or when insufficient processing time is available, no-reference metrics are needed. Their design is often difficult because it is hard to differentiate the natural content of the video from the artifact signals. The approach taken in this project consists of designing individual artifact metrics and then combining them into an overall annoyance model. The assumption here is that, instead of trying to detect and estimate the strength of an unknown impairment signal, it is easier to detect individual artifact signals and estimate their strength, because we know their appearance and the type of process that generates them. Figure 5 shows a simplified block diagram of our NR model.

Figure 5. Block diagram of the NR annoyance model.

The development of an NR model requires a study of the visibility, annoyance, and strength of the most relevant artifacts, since we still do not have a good understanding of how each individual artifact depends on the physical properties of the video and how the artifacts combine to produce the overall annoyance. For this purpose we have used synthetic artifacts that look like real artifacts, yet are simpler, purer, and easier to describe. This approach offers a great degree of control with respect to the amplitude, distribution, and mixture of different types of artifacts and makes it possible, for example, to study the perceptual importance of each individual artifact. We have implemented algorithms for generating four types of synthetic artifacts commonly found in compressed video sequences: blurriness, blockiness, ringing, and noisiness [23-26]. Using these synthetic artifacts, we have performed two sets of experiments. In the first set of experiments, subjects performed annoyance, detection, and description tasks.
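For illustration, the sketch below generates two simple synthetic degradations of the same general kind (a blockiness-like and a noisiness-like signal). These are generic examples, not the specific artifact-generation algorithms of [23-26]; the block size and noise level are arbitrary.

```python
import numpy as np

def synthetic_blockiness(frame, block=8):
    """Blockiness-like artifact: replace each block by its mean value,
    which creates artificial block boundaries (illustrative only)."""
    out = frame.astype(np.float64).copy()
    h, w = out.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            out[y:y + block, x:x + block] = out[y:y + block, x:x + block].mean()
    return out

def synthetic_noisiness(frame, sigma=10.0, rng=None):
    """Noisiness-like artifact: additive white Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = frame.astype(np.float64) + rng.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0, 255)

# The artifact signal (degraded minus original) can then be scaled and
# inserted into a defect region exactly as in the paradigm of Section 2.
frame = np.random.rand(144, 176) * 255.0
blocky = synthetic_blockiness(frame)
noisy = synthetic_noisiness(frame)
```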
The results showed that the synthetic artifacts, besides being visually similar to real compression (MPEG-2) impairments, had similar psychometric and annoyance functions. Figure 6 shows the annoyance functions of four test videos containing only blockiness, blurriness, ringing, and noisiness for the original video "Cheerleader".

Figure 6. Annoyance functions for four test videos containing blockiness, blurriness, ringing, and noisiness.

ANOVA tests performed on the data showed that the original had a significant effect on x̄ and x_T, while the artifact signal did not. It was also found that x̄ and x_T were highly correlated and related by x̄ = a x_T + b, where a and b varied with the type of artifact.

In the second set of experiments, half of the subjects performed strength tasks while the other half performed annoyance and detection tasks. When the individual synthetic artifact signals were presented by themselves at high TSE, the strength judgments indicated that they were correctly identified. At low TSE, on the other hand, other artifacts were also reported.

Strength judgments for combinations of artifact signals showed that noisy artifact signals seemed to decrease the perceived strength of the other artifacts, while blurry artifact signals seemed to increase them. Annoyance increased with both the number of artifact signals and their strengths. We examined both a linear model and a weighted Minkowski model for describing the relation between artifact strengths and overall annoyance. The optimal value for the Minkowski exponent was 1.16 and the coefficients were 6.18, 9.35, and 7.74 for blockiness, blurriness, and noisiness, respectively. For the linear model, the coefficients were 4.05, 6.36, and 5.31 for blockiness, blurriness, and noisiness. Both models produced a very good correlation (r = 0.87) with the data, with no statistical difference in performance. Figure 7 shows a plot of the measured versus predicted annoyance using the linear model. The coefficients found for ringing were very low compared to the coefficients of the other three artifacts, so this artifact was excluded from the final model. This may be due to the fact that the other artifacts are more visible than ringing.

Figure 7. Perceptual annoyance model (linear).

We then attempted to predict the perceived artifact strengths from the physical artifact signals. We have designed a set of NR artifact metrics that are simple enough to be used in real-time applications. The metrics were tested using videos that contained only the desired artifact signal or combinations of artifact signals at several strengths. We obtained a model for overall annoyance based on a combination of the best artifact metrics, using both a Minkowski metric and a linear model. Both models produced a good correlation (r = 0.86) with the data, with no statistical difference in performance. The fit for the Minkowski metric returned an exponent equal to 0.66 and scaling coefficients equal to 0.91, 3.40, and 2.51 for blockiness, blurriness, and noisiness, while for the linear model the scaling coefficients were 3.41, 7.40, and 5.39 for blockiness, blurriness, and noisiness. Figure 8 shows a plot of the measured versus predicted annoyance using the linear metric.

Figure 8. NR physical annoyance model (linear).

The annoyance models using the artifact metrics were somewhat similar to the annoyance models obtained using the perceptual artifact strengths, i.e., similar parameters were found for both the perceptual and physical models. Nevertheless, the model using perceptual strengths produced a better fit than the one using physical strengths. This may be an indication that the perceptual and physical strengths of the artifacts are related by a non-linear function.

An RR approach was also proposed that did not require any changes to the algorithms. It simply consisted of sending a single number per video, corresponding to the output value of the artifact metric for the original (reference) video. The RR approach produced better fits, with a correlation equal to 0.88, while the NR model produced a slightly lower correlation.
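The two combination rules can be written down compactly. The sketch below uses the coefficients reported above for the perceptual-strength models; the exact form of the weighted Minkowski combination and the normalization of the artifact-strength inputs are assumptions, so the numbers it produces are only illustrative.

```python
import numpy as np

# Coefficients reported for the perceptual-strength annoyance models
# (blockiness, blurriness, noisiness); ringing was excluded from the model.
LINEAR_W = np.array([4.05, 6.36, 5.31])
MINKOWSKI_W = np.array([6.18, 9.35, 7.74])
MINKOWSKI_P = 1.16

def annoyance_linear(strengths, w=LINEAR_W):
    """Linear combination of artifact strengths into predicted annoyance."""
    return float(np.dot(w, np.asarray(strengths)))

def annoyance_minkowski(strengths, w=MINKOWSKI_W, p=MINKOWSKI_P):
    """One common form of a weighted Minkowski combination,
    (sum_i (w_i * s_i)^p)^(1/p); the exact form used in [14] may differ."""
    return float(np.sum((w * np.asarray(strengths)) ** p) ** (1.0 / p))

# Example: perceptual strengths (0-10 scale) for blockiness, blurriness, noisiness.
s = [3.0, 1.5, 0.5]
print(annoyance_linear(s), annoyance_minkowski(s))
```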
SUMMARY AND CONCLUSIONS

In this paper we present a review of two different projects completed over the last several years. We performed a series of specially designed psychophysical experiments with the goal of studying the appearance, visibility, annoyance, strength, and importance of different types of artifacts commonly found in digital videos. We used both real and synthetically generated artifacts.

The results showed that the detection threshold and the mid-annoyance defect strength are correlated and related by a linear expression. Also, the original had a significant effect on the mid-annoyance defect strength and the detection threshold, while in most cases the artifact signal did not. FR and RR quality metrics using a simplified human visual system model produced good results. Nevertheless, these models are not fast enough to operate in real time. For real-time applications, we proposed an NR and RR approach based on simple artifact metrics. The outputs of the artifact metrics were combined in order to obtain an annoyance model. The model produced slightly worse performance than the FR model, but it has the advantage of being simpler and not requiring the original.

5. ACKNOWLEDGEMENTS

This work was supported in part by CAPES (Brazil), in part by a National Science Foundation Grant, and in part by a University of California MICRO Grant with matching support from Philips Research Laboratories.

6. REFERENCES

[1] M. Yuen and H. R. Wu, "A survey of hybrid MC/DPCM/DCT video coding distortions," Signal Processing, vol. 70, pp. ,
[2] S. Daly, "The visible differences predictor: an algorithm for the assessment of image fidelity," in Digital Images and Human Vision, A. B. Watson, Ed. Cambridge, MA: MIT Press, 1993, pp.
[3] J. Lubin, "A human vision system model for objective picture quality measurements," IEE International Broadcasting Conference, Amsterdam, The Netherlands, pp. ,
[4] A. B. Watson, "Perceptual-components architecture for digital video," Journal of the Optical Society of America A: Optics & Image Science, vol. 7, pp. ,
[5] S. Wolf and M. H. Pinson, "Spatial-temporal distortion metric for in-service quality monitoring of any digital video system," SPIE Multimedia Systems and Applications II, Boston, MA, USA, pp. ,
[6] S. Winkler, "A perceptual distortion metric for digital color video," SPIE Human Vision and Electronic Imaging, San Jose, CA, USA, pp. ,
[7] S. Winkler, "Issues in vision modeling for perceptual video quality assessment," Signal Processing, vol. 78, pp. ,
[8] A. A. Webster, C. T. Jones, M. H. Pinson, S. D. Voran, and S. Wolf, "An objective video quality assessment system based on human perception," SPIE Human Vision, Visual Processing, and Digital Display IV, San Jose, CA, USA, pp. ,
[9] P. Bretillon, N. Montard, J. Baina, and G. Goudezeune, "Quality meter and digital television applications," SPIE Visual Communications and Image Processing, Perth, WA, Australia, pp. ,
[10] Z. Wang, A. C. Bovik, and B. L. Evans, "Blind measurement of blocking artifacts in images," IEEE International Conference on Image Processing, pp. ,
[11] H. R. Wu and M. Yuen, "A generalized block-edge impairment metric for video coding," IEEE Signal Processing Letters, vol. 4, pp. ,
[12] J. Caviedes and J. Jung, "No-Reference Metric for a Video Quality Control Loop," Int. Conf. on Information Systems, Analysis and Synthesis,
[13] M. S. Moore, Psychophysical Measurement and Prediction of Digital Video Quality, Ph.D. thesis, Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA, USA,
[14] M. C. Q. Farias, No-Reference and Reduced Reference Video Quality Metrics: New Contributions, Ph.D. dissertation, Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA, USA,
[15] ITU-R Recommendation BT.500-8, "Methodology for the subjective assessment of the quality of television pictures,"
Mitra, "Defect visibility and content importance: Effects on perceived impairment," Image Communication, [17] A. B. Watson and J. Malo, "Video quality measures based on the standard spatial observer," Proceedings International Conference on Image Processing, vol. 3, pp. 41-4, [18] S. Winkler and P. Vandergheynst, "Computing isotropic local contrast from oriented pyramid decompositions," IEEE International Conference on Image Processing (ICIP), Kobe, Japan, pp , [19] M. Nadenau, Integration of human color vision models into high quality image compression, Doctor of Philosophy, Signal Processing Laboratory, Ecole Polytechnique Federale de Lausanne, Lausanne, [20] R. Rosenholtz and A. B. Watson, "Perceptual adaptive JPEG coding," IEEE International Conference on Image Processing, Lausanne, Switzerland, pp , [21] Barry G. Haskell, Atul Puri, and Arun N. Netravali, Digital video: an introduction to MPEG-2. New York, NY, USA: Chapman & Hall: International Thomson Pub., [22] J. Lubin, "Sarnoff JND Vision," Sarnoff Corp. [23] M.C.Q. Farias, J.M. Foley, and S.K. Mitra, "Detectability and annoyance of synthetic blurring and ringing in video sequences," IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Montreal, Canada, [24] M.C.Q. Farias, J.M. Foley, and S.K. Mitra, "Perceptual Analysis of Video Impairments that Combine Blocky, Blurry, Noisy, and Ringing Synthetic Artifacts," SPIE Human Vision and Electronic Imaging, San Jose, CA, USA, [25] M.C.Q. Farias, J.M. Foley, and S.K. Mitra, "Some Properties of Synthetic Blocky and Blurry Artifacts," SPIE Human Vision and Electronic Imaging, Santa Clara, CA, USA, pp , [26] M.C.Q. Farias, J.M. Foley, and S.K. Mitra, "Perceptual Contributions of Blocky, Blurry and Noisy Artifacts to Overall Annoyance," IEEE Intl. Conf. on Multimedia & Expo, Baltimore, MD, USA, pp , 2003.


SVD FILTER BASED MULTISCALE APPROACH FOR IMAGE QUALITY ASSESSMENT. Ashirbani Saha, Gaurav Bhatnagar and Q.M. Jonathan Wu 2012 IEEE International Conference on Multimedia and Expo Workshops SVD FILTER BASED MULTISCALE APPROACH FOR IMAGE QUALITY ASSESSMENT Ashirbani Saha, Gaurav Bhatnagar and Q.M. Jonathan Wu Department of

More information

AUDIOVISUAL COMMUNICATION

AUDIOVISUAL COMMUNICATION AUDIOVISUAL COMMUNICATION Laboratory Session: Audio Processing and Coding The objective of this lab session is to get the students familiar with audio processing and coding, notably psychoacoustic analysis

More information

Comparative Study of Partial Closed-loop Versus Open-loop Motion Estimation for Coding of HDTV

Comparative Study of Partial Closed-loop Versus Open-loop Motion Estimation for Coding of HDTV Comparative Study of Partial Closed-loop Versus Open-loop Motion Estimation for Coding of HDTV Jeffrey S. McVeigh 1 and Siu-Wai Wu 2 1 Carnegie Mellon University Department of Electrical and Computer Engineering

More information

MIXDES Methods of 3D Images Quality Assesment

MIXDES Methods of 3D Images Quality Assesment Methods of 3D Images Quality Assesment, Marek Kamiński, Robert Ritter, Rafał Kotas, Paweł Marciniak, Joanna Kupis, Przemysław Sękalski, Andrzej Napieralski LODZ UNIVERSITY OF TECHNOLOGY Faculty of Electrical,

More information

VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING

VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING Engineering Review Vol. 32, Issue 2, 64-69, 2012. 64 VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING David BARTOVČAK Miroslav VRANKIĆ Abstract: This paper proposes a video denoising algorithm based

More information

ON EVALUATING METRICS FOR VIDEO SEGMENTATION ALGORITHMS. Elisa Drelie Gelasca, Touradj Ebrahimi

ON EVALUATING METRICS FOR VIDEO SEGMENTATION ALGORITHMS. Elisa Drelie Gelasca, Touradj Ebrahimi ON EVALUATING METRICS FOR VIDEO SEGMENTATION ALGORITHMS Elisa Drelie Gelasca, Touradj Ebrahimi Ecole Polytechnique Fédérale de Lausanne (EPFL) CH-1015 Lausanne, Switzerland. {elisa.drelie,touradj.ebrahimi}@epfl.ch.

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

No-reference Harmony-guided Quality Assessment

No-reference Harmony-guided Quality Assessment 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops No-reference Harmony-guided Quality Assessment Christel Chamaret and Fabrice Urban Technicolor 975, avenue des Champs Blancs ZAC

More information

Quality Estimation of Video Transmitted over an Additive WGN Channel based on Digital Watermarking and Wavelet Transform

Quality Estimation of Video Transmitted over an Additive WGN Channel based on Digital Watermarking and Wavelet Transform Quality Estimation of Video Transmitted over an Additive WGN Channel based on Digital Watermarking and Wavelet Transform Mohamed S. El-Mahallawy, Attalah Hashad, Hazem Hassan Ali, and Heba Sami Zaky Abstract

More information

Adaptive Image Coding With Perceptual Distortion Control

Adaptive Image Coding With Perceptual Distortion Control Adaptive Image Coding With Perceptual Distortion Control Ingo Höntsch, Member, IEEE, and Lina J. Karam, Member, IEEE Abstract This paper presents a discrete cosine transform (DCT)-based locally adaptive

More information

Attention modeling for video quality assessment balancing global quality and local quality

Attention modeling for video quality assessment balancing global quality and local quality Downloaded from orbit.dtu.dk on: Jul 02, 2018 Attention modeling for video quality assessment balancing global quality and local quality You, Junyong; Korhonen, Jari; Perkis, Andrew Published in: proceedings

More information

Voice Quality Assessment for Mobile to SIP Call over Live 3G Network

Voice Quality Assessment for Mobile to SIP Call over Live 3G Network Abstract 132 Voice Quality Assessment for Mobile to SIP Call over Live 3G Network G.Venkatakrishnan, I-H.Mkwawa and L.Sun Signal Processing and Multimedia Communications, University of Plymouth, Plymouth,

More information

perceptual quality metric design for wireless image and video communication

perceptual quality metric design for wireless image and video communication perceptual quality metric design for wireless image and video communication Ulrich Engelke Blekinge Institute of Technology Licentiate Dissertation Series No. 2008:08 School of Engineering Perceptual Quality

More information

Coding and Scheduling for Efficient Loss-Resilient Data Broadcasting

Coding and Scheduling for Efficient Loss-Resilient Data Broadcasting Coding and Scheduling for Efficient Loss-Resilient Data Broadcasting Kevin Foltz Lihao Xu Jehoshua Bruck California Institute of Technology Department of Computer Science Department of Electrical Engineering

More information

BLIND QUALITY ASSESSMENT OF VIDEOS USING A MODEL OF NATURAL SCENE STATISTICS AND MOTION COHERENCY

BLIND QUALITY ASSESSMENT OF VIDEOS USING A MODEL OF NATURAL SCENE STATISTICS AND MOTION COHERENCY BLIND QUALITY ASSESSMENT OF VIDEOS USING A MODEL OF NATURAL SCENE STATISTICS AND MOTION COHERENCY Michele A. Saad The University of Texas at Austin Department of Electrical and Computer Engineering Alan

More information