MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES
Angela D'Angelo, Mauro Barni

Department of Information Engineering, University of Siena

(The research described in this paper has been partially funded by the Italian Ministry of Research and Education under FIRB project no. RBIN0AC9W.)

ABSTRACT

In multimedia applications there has been an increasing interest in the use of quality measures based on human perception; however, only a few works have dealt with distortions due to geometric transformations. In this paper we describe the solutions proposed so far in the literature for the quality evaluation of geometric distortions in images, and we improve the performance of two recently described metrics through the introduction of a multiresolution framework and a perceptibility map. The results show the good performance of the proposed techniques compared to the previous versions of the metrics.

1. INTRODUCTION

Digital images are subject to various kinds of distortions that may degrade visual quality during acquisition, processing, compression, storage, transmission and reproduction. In many applications it is therefore necessary to quantify the image quality degradation that occurs in a system, so that the quality of the images it produces can be controlled and enhanced. Despite the existence of several studies on human perception of image quality, only a few works can be found dealing with geometric distortions.

A geometric distortion can be seen as a transformation of the positions of the pixels in the image. Let I be an original image; neglecting border effects, the geometrically distorted image Z is obtained by assigning to each pixel I(x, y) a vector D(x, y) = (D_h(x, y), D_v(x, y)), where D_h(x, y) and D_v(x, y) are the horizontal and vertical displacements, respectively. In the following, D will be referred to as the displacement field; depending on how it is produced, it is possible to distinguish between global and local geometric distortions. Roughly speaking, a global transformation is defined by a mapping function that relates the points in the input image to the corresponding points in the output image. The mapping function is defined by a set of operational parameters and applied to all the image pixels, that is, the same operation, under the same parameters, affects all the image pixels. Local geometric distortions, instead, refer to transformations affecting the positions of the pixels of the same image in different ways, or affecting only part of the image.

The goal of this paper is to improve the performance of two recently introduced image quality metrics, the metric based on the Markov Random Field potential function [1] (called MF metric in the sequel) and the metric based on structural distortion evaluation [2] (called Gabor-based metric in the sequel), through the introduction of a multiresolution framework that incorporates image details at different levels of resolution. The MF metric is also improved with the introduction of a perceptibility mask that takes into account the characteristics of the images. The performance of these new measures is compared with the previous versions of the metrics. The proposed objective metrics deal with the problem of local geometric transformations.
Global transformations, in fact, usually do not affect image quality at all, or they introduce a degradation proportional to the parameters defining the distortion, which is therefore easily quantifiable, for example by linking the perceptual degradation to the parameters of the transformation. Moreover, in some applications, like digital watermarking, it is more difficult to deal with local transformations than with global distortions, for which many watermarking schemes have been proposed over the last years.

This paper is organized as follows. In Section 2 we describe the solutions proposed so far in the literature for the assessment of geometric distortions. In Section 3 we present the mathematical background behind the two methods and their multiresolution extensions. In Section 4 a psychovisual experiment is performed to tune the objective metrics with psychovisual data in order to obtain the perceptual metrics. In Section 5 a second subjective test is described to validate the proposed metrics. Finally, in Section 6, we draw our conclusions and propose some ideas for future research.
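To make the notion of a displacement field concrete, the following minimal Python sketch warps a grayscale image with a given per-pixel field D = (D_h, D_v). It is not the authors' code: the sampling convention (Z taken at I(x + D_h, y + D_v)), the cubic interpolation and the border handling are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_displacement(I, Dh, Dv):
    """Warp image I with a per-pixel displacement field (Dh, Dv).

    Z(x, y) is sampled at I(x + Dh(x, y), y + Dv(x, y)); border effects are
    neglected, as in the paper, by clamping coordinates at the image edges.
    """
    rows, cols = I.shape
    y, x = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # map_coordinates expects (row, col) coordinates to sample from
    coords = np.vstack([(y + Dv).ravel(), (x + Dh).ravel()])
    Z = map_coordinates(I, coords, order=3, mode="nearest")
    return Z.reshape(I.shape)

# Example: a mild sinusoidal local distortion on a random test image
I = np.random.rand(256, 256)
xx, yy = np.meshgrid(np.arange(256), np.arange(256))
Dh = 1.5 * np.sin(2 * np.pi * yy / 64)   # horizontal displacement
Dv = 1.5 * np.cos(2 * np.pi * xx / 64)   # vertical displacement
Z = apply_displacement(I, Dh, Dv)
```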

2. QUALITY ASSESSMENT OF GEOMETRIC DISTORTIONS

In the last three decades, a great deal of effort has been devoted to developing quality assessment methods that take advantage of known characteristics of the HVS. Many image quality assessment algorithms have been proposed, and some of them have been shown to behave consistently when applied to certain kinds of distortions (e.g. JPEG compression, Gaussian noise, median filtering, etc.), but the effectiveness of these metrics degrades when they are applied to images affected by geometric distortions. To the best of our knowledge, only a few works can be found in the literature regarding the assessment of geometric distortions.

A simple measure, proposed by Licks et al. [3], is based on the evaluation of the variance of the sampling grid jitter. This method does not take into account the spectral features of the jitter, on which the perception of the transformation depends; this means that two jitter noises with completely different spectral characteristics but the same variance will be evaluated in the same way. The method proposed by Desurmont et al. [4], called Mean Scaling of the Geometric Transformation (MSGT), is based on the average gradient of the sampling grid transformation. Setyawan et al. [5] proposed an objective quality measurement scheme based on the hypothesis that the perceptual quality of a geometrically distorted image depends on the homogeneity of the geometric distortion, that is, on whether the underlying complex transform can be approximated by a simpler transformation model applied on a more local scale: the less homogeneous the distortion, the worse the visual quality. This method is very expensive from a computational point of view due to the optimization involved in its computation. In [1], D'Angelo et al. proposed an objective metric based on the theory of Markov Random Fields; it relies on the assumption that the potential function of the configuration defining the geometric distortion gives an indication of the degradation of the distorted image. The experimental results show quite good performance of that metric.

The limitation of all the above works is that they rely only on the displacement field defining the distortion, without taking into account the characteristics of the images: the same distortion applied to different images returns the same value of the objective metric, while the visual quality could be drastically different. An efficient image-quality measure needs to consider the structural information in the image. To overcome this problem, in a recent work [2] we proposed a method based on image features processed by human vision. The approach relies on Gabor filters, which have received considerable attention because the characteristics of certain cells in the visual cortex of some mammals can be approximated by these filters. The novelty of that technique is that it considers both the displacement field describing the distortion and the structure of the image. The experimental results show the good performance of the metric.

As already explained in the introduction, the goal of this paper is to improve the performance of the MF metric and the Gabor-based metric by introducing a multiresolution framework.
In fact, since real-world images contain distinct features at various resolutions, in order to manage the level of detail of the image and to accommodate a wide range of viewing contexts, we need to represent them with a multiresolution model. Furthermore, because the MF metric does not take into account the characteristics of the images, we modify it by introducing a perceptibility map that weights the potential function, so as to link the displacement field generating the distortion to the characteristics of the image at the same locations.

3. THEORETICAL APPROACH

In this section we present the mathematical background behind the two methods and their multiresolution extensions. Psychophysical studies show that human vision is sensitive to edges and bars in images; hence we use Gabor filters to extract bar and edge information from the images and use these features to evaluate the perceptibility of the distortions. In Section 3.1 we describe how to use Gabor filters for edge and bar extraction, and in Sections 3.2 and 3.3 we apply these considerations to the development of the multiresolution versions of the MF and Gabor-based metrics, respectively.

3.1. Gabor filters

A 2D Gabor kernel can be mathematically defined as:

$$f_{\lambda,\theta,\sigma,\gamma,\varphi}(x,y) = e^{-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}} \cos\!\left(2\pi\frac{x'}{\lambda} + \varphi\right) \qquad (1)$$

where x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ. Here λ (whose value is specified in pixels) is the wavelength of the cosine factor of the Gabor filter kernel, φ is the phase offset of the cosine factor, θ specifies the orientation of the normal to the parallel stripes of the filter, γ describes the ellipticity of the support of the Gabor function, and σ is the standard deviation of the Gaussian factor of the function. The value of σ cannot be specified directly; it can only be changed through the half-response spatial frequency bandwidth b (the default is b = 1, in which case σ and λ are connected as follows: σ = 0.56λ). For φ = 90 degrees (or -90 degrees) the filter in Eq. (1) is an antisymmetric Gabor function and gives a maximum response at an edge. A symmetric Gabor function (φ = 0 or 180 degrees) can be used for bar detection. For the design of the filters we adopt the widely used parameters γ = 0.5 and b = 1.

An efficient feature extraction requires filtering across several scales. In our approach this corresponds to finding the correct value of σ (or λ) for each level of resolution L, and we set experimentally the value of λ equal to 10/2^L pixels. A 2D Gabor kernel for a given level of resolution L in the θ direction is then given, with the above parameters, by the following equation:

$$f_{L,\theta,\varphi}(x,y) = e^{-\frac{4^L\left(x'^2 + 0.25\,y'^2\right)}{62.72}} \cos\!\left(\frac{2^L \pi x'}{5} + \varphi\right) \qquad (2)$$

where x' and y' are defined as in Eq. (1). Once the level of resolution and θ are fixed, we use the function described by Eq. (2) to filter the original image and to find edges and bars in the direction orthogonal to θ. The filtering function we use is described by the following equation:

$$If_{L,\theta}(x,y) = If_{L,\theta,\mathrm{bar}}(x,y) + If_{L,\theta,\mathrm{edge}}(x,y) \qquad (3)$$

where $If_{L,\theta}$ is the filtered image along θ, and $If_{L,\theta,\mathrm{edge}}$ and $If_{L,\theta,\mathrm{bar}}$ are the original image convolved with the Gabor filters described by Eq. (2) with φ = 90 and φ = 0, respectively. The overall filtered image along the different orientations is given by:

$$If_L(x,y) = \sum_{\theta \in S} If_{L,\theta}(x,y) \qquad (4)$$

where S = {0, π/2}. (We restrict S to the horizontal and vertical orientations because the performance for a large variety of perceptual tasks is superior for stimuli aligned in horizontal or vertical orientations, as compared to stimuli in oblique orientations; please refer to [ ] for details.) In the next subsections we show how to apply these considerations to build the MF and Gabor-based metrics in a multiresolution framework.
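As an illustration of Eqs. (1)-(4), here is a minimal Python sketch of the Gabor filtering stage, using the parameter choices reconstructed above (γ = 0.5, b = 1 so σ = 0.56λ, λ = 10/2^L). The kernel truncation radius and the use of response magnitudes in the sum of Eq. (3) are implementation assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(L, theta, phi, lam0=10.0, gamma=0.5):
    """2-D Gabor kernel of Eq. (1) at resolution level L (sketch).

    lam0 is the wavelength at level 0; sigma = 0.56 * lambda (b = 1).
    The truncation radius (3 * sigma) is an implementation choice.
    """
    lam = lam0 / 2 ** L
    sigma = 0.56 * lam
    r = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xp / lam + phi))

def filtered_image(I, L, theta):
    """If_{L,theta} of Eq. (3): bar (phi = 0) plus edge (phi = 90 deg) responses.
    Magnitudes are taken so the map is non-negative (an assumption; Eq. (3)
    as reconstructed sums the raw responses)."""
    bar = fftconvolve(I, gabor_kernel(L, theta, 0.0), mode="same")
    edge = fftconvolve(I, gabor_kernel(L, theta, np.pi / 2), mode="same")
    return np.abs(bar) + np.abs(edge)

def overall_filtered_image(I, L, S=(0.0, np.pi / 2)):
    """If_L of Eq. (4): sum of the responses over the orientations in S."""
    return sum(filtered_image(I, L, theta) for theta in S)
```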
3.2. Multiresolution MF metric

In [1] we showed that a general geometric distortion can be described as a Markov Random Field defined on the set of the image pixels, and that the displacement field generating the distortion is a possible configuration of the field. The idea behind this metric is that the potential function of the Markov Random Field describing the distortion gives an indication of the perceptual degradation of the distorted image. Thus, once a potential function has been defined, given a particular distortion and the corresponding configuration, we can evaluate the probability of occurrence of that configuration by applying the Hammersley-Clifford theorem (for the sake of brevity we do not describe the theory here; please refer to [1] for details): the more probable the configuration, the more invisible the corresponding distortion, or, equivalently, invisible distortions correspond to configurations that result in a low value of the potential function.

The potential function we use to evaluate the degradation introduced by a geometric distortion is a bivariate normal distribution, and the score associated to each pixel of the image, quantifying the perceived distortion in that pixel, is given by the following equation (the overall score is simply obtained by summing the potentials over all the image pixels):

$$V_{\sigma,N}(x,y) = \sum_{(\tilde{x},\tilde{y}) \in N} \frac{1}{2\pi\sigma_x\sigma_y} \exp\left\{ -\frac{1}{2}\left[ \frac{(D_h - D_{\tilde{h}})^2}{\sigma_x^2} + \frac{(D_v - D_{\tilde{v}})^2}{\sigma_y^2} \right] \right\} \qquad (5)$$

where $D_h$ and $D_v$ are the components of the displacement vector D(x, y) associated to the pixel (x, y), $(\tilde{x},\tilde{y})$ is a point belonging to N, the first-order neighborhood system of (x, y), $D_{\tilde{h}}$ and $D_{\tilde{v}}$ are the components of the displacement vector $D(\tilde{x},\tilde{y})$ associated to the pixel $(\tilde{x},\tilde{y})$, and $\sigma_x$ and $\sigma_y$ are the two components of the standard deviation vector σ. This last parameter is linked to the resolution of the image; we set experimentally $\sigma_x = \sigma_y = 8/2^L$, where L is the level of resolution. The equation for the potential function, for a given level of resolution L and a neighborhood system N, then becomes:

$$V_{L,N}(x,y) = \sum_{(\tilde{x},\tilde{y}) \in N} \frac{4^L}{128\pi} \exp\left\{ -\frac{4^L\left[(D_h - D_{\tilde{h}})^2 + (D_v - D_{\tilde{v}})^2\right]}{128} \right\} \qquad (6)$$

To apply the metric at different levels of resolution, we first resize both the image and the displacement field by means of bicubic interpolation, from the original dimension S × S (for simplicity let us consider a square image) to S/2^L × S/2^L, and we then apply Eq. (6) to evaluate the potential function for each pixel. As explained in the previous section, we improve the performance of the MF metric by introducing both the multiresolution framework and a perceptibility mask based on the Gabor filters. The perceptibility map to be applied to the potential computed by Eq. (6) is obtained by filtering the image along the different orientations as described by Eq. (4). The final metric, that is, the multiresolution version of the MF metric, is obtained by summing the contributions of each level of resolution, as described by the following equation:

$$\text{M-MF} = \sum_{L=0}^{\log_2(S/16)} \sum_{(x,y) \in I} V_{L,N}(x,y)\, If_L(x,y) \qquad (7)$$

where the upper bound on L is dictated by visual constraints, in order not to lose too many details in the image.
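A compact sketch of Eqs. (5)-(7) as reconstructed above, reusing overall_filtered_image from the previous sketch; the wrap-around border handling of np.roll and the joint bicubic resizing of the displacement field are implementation assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def mf_potential(Dh, Dv, L):
    """Per-pixel potential V_{L,N} of Eq. (6), with sigma_x = sigma_y = 8 / 2**L
    and a first-order (4-connected) neighborhood. Borders wrap around (np.roll),
    which is a simplification."""
    sigma = 8.0 / 2 ** L
    norm = 1.0 / (2.0 * np.pi * sigma ** 2)
    V = np.zeros_like(Dh, dtype=float)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):          # neighborhood N
        diff2 = ((Dh - np.roll(Dh, (dy, dx), axis=(0, 1))) ** 2
                 + (Dv - np.roll(Dv, (dy, dx), axis=(0, 1))) ** 2)
        V += norm * np.exp(-diff2 / (2.0 * sigma ** 2))
    return V

def multires_mf(I, Dh, Dv):
    """M-MF of Eq. (7): potential weighted by the Gabor perceptibility map,
    accumulated over the resolution levels L = 0 .. log2(S/16)."""
    S = I.shape[0]                                 # square image assumed
    n_levels = int(np.log2(S / 16)) + 1
    score = 0.0
    for L in range(n_levels):
        f = 1.0 / 2 ** L
        IL, DhL, DvL = (zoom(a, f, order=3) for a in (I, Dh, Dv))   # bicubic resize
        score += np.sum(mf_potential(DhL, DvL, L) * overall_filtered_image(IL, L))
    return score
```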

3.3. Multiresolution Gabor-based metric

The main function of the HVS when looking at an image is to extract structural information from the viewing field; therefore a measurement of structural distortions should be a good approximation of perceived image distortion, since the more a distortion affects the structure of the objects in the visual scene, the more the corresponding degradation is visible and annoying. Structures of objects in images are typically outlined by edges and bars. Hence, we expect that a measure linking the displacement field describing the distortion with the presence of edges and bars at the corresponding locations of the image is likely to provide an adequate measure. Specifically, having fixed a particular θ, we consider the displacement field $D_\theta$ (the projection of D along θ), orthogonal to the bars and edges of the image. Then, to estimate the loss of structure in the image, we evaluate the gradient of $D_\theta$ with respect to the direction orthogonal to θ. The assumption is that this gradient measures the smoothness of the displacement field and gives us an indication of the perceptual degradation of the image.

To obtain a multiresolution version of the Gabor-based metric, we first resize both the image and the displacement fields by means of bicubic interpolation from the original dimension S × S to S/2^L × S/2^L, and we then apply, for each level of resolution L, the following formula:

$$\mathrm{Gb} = \sum_{x,y} \sum_{\theta \in S} \left( \frac{dD_\theta}{d\bar{\theta}}\, If_{L,\theta} \right)(x,y) \qquad (8)$$

where $If_{L,\theta}(x,y)$ is the filtered image described by Eq. (3) in the θ direction, S = {0, π/2}, and the notation $dD_\theta/d\bar{\theta}$ indicates the gradient of the displacement field in the θ direction with respect to the direction orthogonal to θ. The multiresolution version of the Gabor-based metric is then simply given by:

$$\text{M-Gb} = \sum_{L=0}^{\log_2(S/16)} 2^L \sum_{x,y} \sum_{\theta \in S} \left( \frac{dD_\theta}{d\bar{\theta}}\, If_{L,\theta} \right)(x,y) \qquad (9)$$
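A sketch of Eqs. (8)-(9) under the same assumptions as the previous sketches (it reuses filtered_image and the bicubic resize). For θ in {0, π/2} the projected displacement is D_h and D_v respectively, the orthogonal derivative is approximated with np.gradient, and the 2^L level weight follows the reconstruction above; all of these should be read as assumptions, not as the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def gb_level(I, Dh, Dv, L):
    """Gb of Eq. (8) at level L: derivative of the projected displacement taken
    along the direction orthogonal to theta, weighted by the corresponding
    Gabor-filtered image (filtered_image from the sketch above)."""
    dDh = np.gradient(Dh, axis=0)            # theta = 0:    d D_h / dy
    dDv = np.gradient(Dv, axis=1)            # theta = pi/2: d D_v / dx
    return np.sum(dDh * filtered_image(I, L, 0.0)
                  + dDv * filtered_image(I, L, np.pi / 2))

def multires_gb(I, Dh, Dv):
    """M-Gb of Eq. (9), accumulating the level scores with the 2**L weight
    used in the reconstruction above."""
    S = I.shape[0]
    n_levels = int(np.log2(S / 16)) + 1
    score = 0.0
    for L in range(n_levels):
        f = 1.0 / 2 ** L
        IL, DhL, DvL = (zoom(a, f, order=3) for a in (I, Dh, Dv))
        score += 2 ** L * gb_level(IL, DhL, DvL, L)
    return score
```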
4. METRIC TUNING

Two sets of subjective experiments were carried out, with different purposes. The first set of experiments was performed to tune the objective metrics with psychovisual data in order to transform them into perceptual metrics: the goal is to define two functions that, given the metrics defined by Eq. (7) and Eq. (9), return a numerical value that quantifies the image quality. The second set of experiments was conducted to validate the proposed metrics, and it is discussed in Section 5.

4.1. The image database

The image database used for the test included twelve gray-scale, high-quality images and was derived from a set of source images that reflects adequate diversity in image content. The images of the database include pictures of faces, houses and natural scenes. To generate the local geometric distortions to be applied to the images, and to obtain a broad range of image impairments, we used the Constrained LPCD distortion [6] and the Markov Random Field distortion [6]. We used ten different distortions for each image, for a total of 120 distorted images, ranging from invisible to very annoying distortions.

4.2. The test methodology

We need to measure the perceived quality of geometrically distorted images through subjective scaling methods. The subjective scaling method we used is the Absolute Category Rating (ACR) method, a category judgement in which the test images are presented one at a time and rated independently on a category scale (this method is also called the Single Stimulus Method). The ACR method specifies that, after each presentation, the subjects are asked to evaluate the quality of the stimulus shown using a five-level scale. The procedures for the ACR experiment were designed following ITU-T Recommendation P.910. The experiments were conducted in a dark room using an ad hoc setup. The tests involved a panel of fifteen subjects with good vision, all naive with respect to image quality assessment methods and image impairments.

4.3. Processing of data

The subjective scores must be analyzed with statistical techniques to yield results that summarize the performance of the metrics. We used standard methods based on the Kurtosis coefficient to screen the judgments provided by the subjects. (The image database and the software we used for the experiments are available on the web site http://www.dii.unisi.it/vipp.)
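For concreteness, a minimal sketch of the score processing: MOS as the per-stimulus mean after a simple kurtosis-based observer screening in the spirit of the standard procedures mentioned above. The subjects-by-stimuli score layout and the 2σ / sqrt(20)·σ thresholds follow common ITU-style screening and are assumptions here, not details taken from the paper.

```python
import numpy as np
from scipy.stats import kurtosis

# scores: (n_subjects, n_stimuli) matrix of ACR ratings on the 1..5 scale
def mos_with_screening(scores, reject_fraction=0.05):
    """MOS per stimulus after a simplified kurtosis-based observer screening
    (loosely following ITU-style procedures; thresholds are assumptions)."""
    mean = scores.mean(axis=0)
    std = scores.std(axis=0, ddof=1)
    beta2 = kurtosis(scores, axis=0, fisher=False)            # Pearson kurtosis
    width = np.where((beta2 >= 2) & (beta2 <= 4), 2.0, np.sqrt(20.0))
    outside = (scores > mean + width * std) | (scores < mean - width * std)
    keep = outside.mean(axis=1) <= reject_fraction            # screen observers
    return scores[keep].mean(axis=0)                          # MOS of retained subjects
```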

The MOS values (that is, the arithmetic mean of all the individual scores) obtained following the above approach were then used to derive the perceptual metrics by fitting them with a psychometric curve. The purpose of a psychometric curve is to associate the values given by the objective metric with the scores provided by the subjects. Psychometric curves exhibit a typical sigmoid shape that accounts for the saturation effects typical of human senses. Through this psychometric mapping, a match between the human perception of geometric artifacts and the objective metrics is established. In particular, in order to obtain a numerical score ranging from 1 to 5, with 1 corresponding to a bad image quality and 5 to an excellent image quality, we used the Weibull function defined as follows:

$$y = 4\,e^{-\left(\frac{x}{c}\right)^k} + 1 \qquad (10)$$

where x is the score obtained with Eq. (7) or Eq. (9), and c and k are parameters to be estimated by fitting the objective metrics to the subjective data. We opted for this psychometric curve since it provided the best fit for our data among the commonly used curves, i.e., the Gaussian, logistic and Weibull curves. To estimate c and k we used nonlinear least-squares data fitting by the Gauss-Newton method, and we found c = .69 and k = 0.96 for the Multiresolution MF metric, and c = 0.7 and k = 0.968 for the Multiresolution Gabor-based metric. The final perceptual metrics are obtained by applying the Weibull function so defined to the two metrics described in the previous section (Eq. (7) and Eq. (9)).
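A sketch of this tuning step: fitting the Weibull mapping of Eq. (10), as reconstructed above, to pairs of objective scores and MOS values. scipy's curve_fit (Levenberg-Marquardt for unconstrained problems) is used here in place of a hand-rolled Gauss-Newton solver, and the initial guess is arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_map(x, c, k):
    """Psychometric mapping of Eq. (10): objective score -> predicted MOS in [1, 5]."""
    return 4.0 * np.exp(-(x / c) ** k) + 1.0

def fit_psychometric(objective, mos, p0=(1.0, 1.0)):
    """Estimate (c, k) from training data (objective: metric values, mos: MOS)."""
    params, _ = curve_fit(weibull_map, objective, mos, p0=p0)
    return params

# Usage (hypothetical data): map new objective scores to predicted quality
# c, k = fit_psychometric(train_scores, train_mos)
# predicted_mos = weibull_map(test_scores, c, k)
```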
5. EXPERIMENTAL RESULTS

To validate the proposed metrics, another subjective test was designed and performed. We used the ACR test as explained in the previous section. A new database of twelve images was used, and ten new distortions for each image were generated by using the same kinds of distortions. The results of the test are shown in Fig. 1 for the Multiresolution MF metric and in Fig. 2 for the Multiresolution Gabor-based metric.

Fig. 1. Scatter plot of the MOS versus the Multiresolution MF metric.
Fig. 2. Scatter plot of the MOS versus the Multiresolution Gabor-based metric.

The correlation between the objective data and the users' responses is evident in both plots. Nevertheless, in order to provide quantitative measures of the performance of the proposed models, we followed the performance evaluation procedures employed in the Video Quality Experts Group (VQEG) Phase I FR-TV test. The relationship between the objective data and the subjective ratings was estimated by using a nonlinear regression (the Weibull functions described by the solid curves in the figures). Then, the objective metrics were evaluated through three performance measures, applied to the fitted values, as specified in the report of the VQEG group: the Pearson linear correlation coefficient, the Spearman rank-order correlation coefficient, and the outlier ratio (the percentage of predictions outside the range of ±2 times the standard deviation). The results of the performance evaluation of the proposed algorithms through these three coefficients are shown in Table 1.

        Pearson   Spearman   Outlier ratio
M-MF    0.86      0.898      0
M-Gb    0.88      0.879      0

Table 1. Performance of the proposed metrics.

By referring to this table and by looking at the scatter plots in Fig. 1 and Fig. 2, we can observe that the outlier ratio is always equal to zero, meaning that the metrics maintain prediction accuracy over the whole range of test images, and that both the Pearson and the Spearman coefficients are quite high, revealing good prediction accuracy and monotonicity of both models. The Multiresolution Gabor-based metric performs better than the Multiresolution MF metric.
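The three VQEG-style figures of merit used above can be computed as in the sketch below; scipy.stats provides the correlation coefficients, and the two-standard-deviation outlier criterion matches the description given in the text.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def vqeg_measures(predicted_mos, mos, mos_std):
    """Pearson / Spearman correlation and outlier ratio between the fitted
    objective predictions and the subjective MOS (mos_std: per-image standard
    deviation of the subjective scores)."""
    pearson = pearsonr(predicted_mos, mos)[0]
    spearman = spearmanr(predicted_mos, mos)[0]
    outliers = np.abs(predicted_mos - mos) > 2.0 * mos_std
    return pearson, spearman, outliers.mean()
```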

We now compare the proposed techniques with the plain MF and Gabor-based metrics. Fig. 3 shows the results obtained by using the MF metric, and Fig. 4 shows the scatter plot of the MOS versus the Gabor-based metric.

Fig. 3. Scatter plot of the MOS versus the MF metric.
Fig. 4. Scatter plot of the MOS versus the Gabor-based metric.

The results of the performance evaluation of these algorithms are shown in Table 2.

            Pearson   Spearman   Outlier ratio
MF metric   0.790     0.80       0
Gb metric   0.8       0.88       0

Table 2. Performance of the MF metric and the Gb metric.

Looking at Table 2 and comparing these results with the ones reported in Table 1, we can observe that, as expected, the multiresolution framework improves the performance of both metrics.

6. CONCLUSIONS

In this paper we improved the performance of two recently introduced full-reference methods for objectively assessing the perceptual quality of geometric distortions, through the introduction of a multiresolution framework and a perceptibility map. The experimental results show the good performance of the described techniques with respect to their previous, plain versions. An idea for future work is to consider higher-level perceptual factors in the proposed techniques, such as a visual attention model. An attention model takes into account several factors that are known to influence visual attention and eye movements, and these factors must be considered in order to include the higher-level visual processing in human vision that is not captured by the low-level processing of edges and bars.

7. REFERENCES

[1] A. D'Angelo, M. Pacitto, and M. Barni, "A psychovisual experiment on the use of Gibbs potential for the quality assessment of geometrically distorted images," in Proc. SPIE Human Vision and Electronic Imaging XIII, vol. 69, 2008.

[2] A. D'Angelo and M. Barni, "A structural method for quality evaluation of desynchronization attacks in image watermarking," in Proc. International Workshop on Multimedia Signal Processing, October 2008.

[3] V. Licks and R. Jordan, "Geometric attacks on image watermarking systems," IEEE MultiMedia, pp. 68-78, 2005.

[4] X. Desurmont, J. Delaigle, and B. Macq, "Characterization of geometric distortions attacks in robust watermarking," in Proc. SPIE, vol. 06, pp. 870-878.

[5] I. Setyawan, D. Delannay, B. Macq, and R. Lagendijk, "Perceptual quality evaluation of geometrically distorted images using relevant geometric transformation modeling," in Proc. SPIE, Security and Watermarking of Multimedia Contents V, vol. 5020, 2003.

[6] A. D'Angelo, M. Barni, and N. Merhav, "Stochastic image warping for improved watermark desynchronization," EURASIP Journal on Information Security, vol. 2008, 2008.

[7] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.