F-MAD: A Feature-Based Extension of the Most Apparent Distortion Algorithm for Image Quality Assessment


Punit Singh and Damon M. Chandler
Laboratory of Computational Perception and Image Quality, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078 USA

ABSTRACT

In this paper, we describe the results of a study designed to investigate the effectiveness of peak signal-to-noise ratio (PSNR) as a quality estimator when measured in various feature domains. Although PSNR is well known to be a poor predictor of image quality, it has been shown to be quite effective for additive, pixel-based distortions. We hypothesized that PSNR might also be effective for other types of distortions which induce changes to other visual features, as long as PSNR is measured between local measures of such features. Given a reference and distorted image, five feature maps are measured for each image (lightness distance, color distance, contrast, edge strength, and sharpness). We describe a variant of PSNR in which quality is estimated based on the extent to which these feature maps for the reference image differ from the corresponding maps for the distorted image. We demonstrate how this feature-based approach can lead to improved estimators of image quality.

1. INTRODUCTION

A crucial requirement for any system that processes images is a means of assessing the impact of such processing on the visual quality of the resulting images. Over the last several decades, numerous algorithms for image quality assessment (IQA) have been developed to meet this requirement. IQA algorithms aim to predict the quality of an image in a manner that agrees with quality as judged by human subjects. Here, we specifically focus on full-reference IQA algorithms, which require both a reference image and a distorted image. The simplest approach to full-reference IQA is to measure local pixelwise differences, and then to collapse these local measurements into a scalar which represents the overall quality.
The mean-squared error (MSE) and its log-based counterpart, peak signal-to-noise ratio (PSNR), were the earliest and simplest measures of local pixelwise differences. To improve predictive performance, variants of MSE/PSNR have been measured in the luminance domain,1 with frequency weighting based on the human contrast sensitivity function (see, e.g., Ref. 2), and with further adjustments for other low-level properties of the human visual system (HVS) (e.g., Ref. 3). More recent and complete IQA algorithms have employed a wide variety of approaches. Numerous IQA algorithms have been designed based on computational models of the HVS (e.g., Refs. 2, 4-9). Numerous IQA algorithms have also been designed based on structural similarity (e.g., Refs. 10, 11). Other IQA algorithms have been designed based on various statistical and information-theoretic approaches (e.g., Refs. 12, 13), based on machine learning (e.g., Refs. 14, 15), and based on many other techniques (see Ref. 16 for a review).

All of the aforementioned IQA approaches have been shown to outperform PSNR when tested across images and distortion types from various IQA databases. However, one important observation when examining the performances of these IQA algorithms vs. PSNR is that the latter is still quite competitive (and can even outperform most IQA algorithms) on certain types of distortions, most notably additive noise. For example, on the TID database,17 PSNR outperforms the vast majority of IQA algorithms on most additive noise types (white grayscale noise, white color noise, correlated noise, impulse noise, high-frequency noise, etc.). Thus, PSNR, which is measured between the pixel values of the reference vs. distorted images, appears to be quite effective at capturing quality differences when the changes are perceived as pixel-based degradation. Following this argument, it would seem that PSNR might also be effective when measured between feature values of the reference vs. distorted images, when the changes are perceived as degradations to the corresponding features (e.g., degradation of perceived contrast, perceived sharpness, perceived edge clarity, etc.).

P.S.: E-mail: punit.singh.banga@okstate.edu; D.M.C.: E-mail: damon.chandler@okstate.edu

In this paper, we describe the results of a study designed to investigate the effectiveness of PSNR as a quality estimator when PSNR is measured in various feature domains. We specifically investigated measuring PSNR between the same feature maps used in our algorithm for detecting main subjects in images.18 Given a reference and distorted image, we measure, for each block in each image, five low-level features: (1) lightness distance, (2) color distance, (3) contrast, (4) edge strength, and (5) sharpness. These block-based measures thus result in five feature maps for the reference image, and five feature maps for the distorted image. From these feature maps, quality is estimated based on the extent to which the feature maps for the reference image differ from the corresponding maps for the distorted image. We specifically present a measure of quality, F-PSNR, in which the differences between the feature maps are quantified based on a combination of the average PSNR and the average Pearson correlation coefficient. We also describe a straightforward technique for integrating F-PSNR into the MAD (Most Apparent Distortion) IQA algorithm,9 resulting in what we have termed F-MAD. As we will demonstrate, this feature-based approach can lead to improved estimates of quality.

This paper is organized as follows. Section 2 describes the feature maps and how they are used to estimate quality (F-PSNR and F-MAD). Section 3 analyzes the performances of these feature-based IQA measures in predicting subjective ratings of quality. General conclusions are provided in Section 4.

2. ALGORITHM

In this section, we first provide details of the feature maps (Section 2.1), and then we describe how these maps are used in the F-PSNR and F-MAD measures of quality (Sections 2.2 and 2.3).

2.1. Feature Maps

Given a reference and distorted image, five feature maps are computed for each image: (1) a map of local lightness distance (the distance between the average lightness of a region and the average lightness of the entire image); (2) a map of local color distance; (3) a map of local luminance contrast; (4) a map of local edge strength; and (5) a map of local sharpness. We have previously shown that these feature maps are effective for detecting main subjects in images.18 Here, we argue that these maps can also be effective for quality assessment.

Let X denote a (reference or distorted) image, and let x denote an 8x8 block of X with 50% overlap between neighboring blocks. Let f_i(x), i in [1, 5], denote the i-th feature value measured for x. From all f_i(x), x in X, we form the i-th feature map, which we denote as f_i(X).

2.1.1. Lightness and Color Distance

Let f_1(x) denote the Euclidean distance between the average lightness of block x and the average lightness of the entire image. Let f_2(x) denote the Euclidean distance between the average color of block x and the average color of the entire image. These two features are given by:

f_1(x) = \left| \bar{L}_x - \bar{L}_X \right|,  (1)

f_2(x) = \sqrt{ (\bar{a}_x - \bar{a}_X)^2 + (\bar{b}_x - \bar{b}_X)^2 },  (2)

where \bar{L}, \bar{a}, \bar{b} denote the averages of L*, a*, b* measured in the CIE 1976 (L*, a*, b*) color space (CIELAB). See Ref. 18 for details on how RGB values are converted to (L*, a*, b*) values. The second and third rows of Figure 1 show the lightness-distance maps f_1(X) and color-distance maps f_2(X), respectively, for various images.
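The lightness- and color-distance maps (Eqs. 1 and 2) are straightforward to compute once the image is in CIELAB. Below is a minimal illustrative sketch in NumPy (not the original implementation): it assumes the input is already a CIELAB array of shape (H, W, 3) (the RGB-to-CIELAB conversion of Ref. 18 is omitted), and it uses 8x8 blocks with a step of 4 pixels to realize the 50% overlap.

```python
import numpy as np

def lightness_color_distance_maps(lab, size=8, step=4):
    """Eqs. (1)-(2): per-block lightness distance f1 and color distance f2.

    `lab` is an (H, W, 3) CIELAB image; blocks are `size` x `size`,
    and `step` = size/2 gives 50% overlap between neighboring blocks.
    """
    H, W = lab.shape[:2]
    rows = (H - size) // step + 1
    cols = (W - size) // step + 1
    f1 = np.empty((rows, cols))
    f2 = np.empty((rows, cols))
    # Global (whole-image) averages of L*, a*, b*.
    L_img, a_img, b_img = (lab[..., ch].mean() for ch in range(3))
    for i in range(rows):
        for j in range(cols):
            blk = lab[i * step:i * step + size, j * step:j * step + size]
            f1[i, j] = abs(blk[..., 0].mean() - L_img)            # Eq. (1)
            f2[i, j] = np.hypot(blk[..., 1].mean() - a_img,
                                blk[..., 2].mean() - b_img)       # Eq. (2)
    return f1, f2
```

As a sanity check, a uniform image yields all-zero maps, since every block average equals the global average.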

Figure 1. Example images (Monument, Fisher, Sparrow, Swarm, Native US) and their feature maps. The images in the first row are select reference images from the CSIQ database.19 The second through sixth rows show maps of lightness distance, color distance, contrast, edge strength, and sharpness.

2.1.2. Contrast

Local contrast can also be an important factor which influences an image's visual appearance. To measure this, we first convert the image into the luminance domain. Then, the root mean square (RMS) contrast of each block is given by the ratio of the standard deviation of the luminances to the mean luminance of the respective block. The result is a map in which each value represents local RMS contrast.

Specifically, let f_3(x) denote the RMS contrast of block x. To compute f_3(x), we first convert the image X into a grayscale image X_g via X_g = 0.299R + 0.587G + 0.114B. Let x_g denote the corresponding block in X_g. Let l(x) = (b + k x_g)^\gamma denote the luminance-valued block, with b = 0.7297, k = 0.0376, and \gamma = 2.2, assuming sRGB display conditions. The quantity f_3(x) is then computed via:

f_3(x) = \begin{cases} \sigma_{l(x)} / \mu_{l(x)}, & \mu_{l(x)} > 0, \\ 0, & \mu_{l(x)} = 0, \end{cases}  (3)

where \sigma_{l(x)} and \mu_{l(x)} denote the standard deviation and the mean of l(x), respectively. The fourth row of Figure 1 shows the contrast maps f_3(X) for various images.

2.1.3. Edge Strength

To quantify similarity between object boundaries, we use maps of local edge strength. First, edges are detected by using Roberts' edge detector.20 Then the edge strength of each block is computed by averaging the number of detected edge pixels within that block. The result is a map in which each value represents local edge strength.

Specifically, let f_4(x) denote the edge strength of block x. Let E(X) denote the binary edge map computed by running Roberts' edge detector on X. The feature f_4(x) is then given by:

f_4(x) = \mu_{E(x)} = \frac{1}{m^2} \sum_{j} e_j,  (4)

where E(x) is the block of E(X) corresponding to x, e_j is a pixel of E(x), and m^2 is the number of pixels in the block. The fifth row of Figure 1 shows the edge-strength maps f_4(X) for various images.

2.1.4. Sharpness

In general, the sharper an image, the better its quality. If the image is blurred, it is difficult to distinguish between neighboring objects; blurring also reduces the ability to visually recognize objects. Thus, sharpness can potentially be a useful feature for estimating image quality.

Let f_5(X) denote the sharpness map for image X. To measure local sharpness, we employ our own S3 sharpness estimator,21 in which local sharpness is measured in both the frequency domain and the spatial domain. In the frequency domain, the image is divided into 32x32-pixel blocks with 75% overlap. The slope of the power spectrum, averaged across all orientations, serves as the spectral sharpness measure.
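The contrast and edge-strength features (Eqs. 3 and 4) can likewise be sketched in a few lines of NumPy. This is an illustrative reading of the text, not the original implementation: the Roberts-cross gradient magnitude is binarized with an arbitrary threshold of 20 (the paper does not state one), and the grayscale input is assumed to lie in [0, 255].

```python
import numpy as np

def rms_contrast_map(gray, size=8, step=4, b=0.7297, k=0.0376, gamma=2.2):
    """Eq. (3): per-block RMS contrast (std/mean) of the luminance image
    l = (b + k * gray)**gamma, the sRGB display model given in the text."""
    lum = (b + k * gray) ** gamma
    H, W = lum.shape
    rows, cols = (H - size) // step + 1, (W - size) // step + 1
    f3 = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            blk = lum[i * step:i * step + size, j * step:j * step + size]
            mu = blk.mean()
            f3[i, j] = blk.std() / mu if mu > 0 else 0.0
    return f3

def edge_strength_map(gray, size=8, step=4, thresh=20.0):
    """Eq. (4): average number of edge pixels per block. Edges come from
    the Roberts cross operator; `thresh` is an assumed binarization level."""
    gx = gray[:-1, :-1] - gray[1:, 1:]   # Roberts diagonal gradients
    gy = gray[:-1, 1:] - gray[1:, :-1]
    edges = np.zeros(gray.shape, dtype=bool)
    edges[:-1, :-1] = np.hypot(gx, gy) > thresh
    H, W = edges.shape
    rows, cols = (H - size) // step + 1, (W - size) // step + 1
    f4 = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            f4[i, j] = edges[i * step:i * step + size,
                             j * step:j * step + size].mean()
    return f4
```

A flat image produces zero contrast and zero edge strength everywhere; a step edge produces nonzero values in the blocks that straddle it.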
In the spatial domain, the image is divided into 8x8-pixel blocks, and then a measure of the local total variation serves as the spatial sharpness measure. The two sharpness measures are then combined via a geometric mean. The result is a map in which each value represents local sharpness. The sixth row of Figure 1 shows the sharpness maps f_5(X) for various images.

2.2. PSNR and Correlation Between Feature Maps

Given the five feature maps, we estimate quality based on the extent to which the feature maps of the distorted image differ from the feature maps of the reference image. We employ PSNR and the Pearson correlation coefficient to quantify the overall difference between each pair of maps (distorted image's map vs. reference image's map). Let F-PSNR denote this feature-based quality measure. A block diagram of the F-PSNR computation is shown in Figure 2.

Let X_r and X_d denote the reference and distorted images, respectively. The PSNR between each pair of feature maps is given by:

\mathrm{PSNR}(f_i(X_r), f_i(X_d)) = 10 \log_{10} \left( \frac{R^2}{\mathrm{MSE}} \right),  (5)

where f_i(X_r) and f_i(X_d) denote the i-th feature map for images X_r and X_d, respectively; R denotes the peak value of the signal; and MSE denotes the mean-squared error between f_i(X_r) and f_i(X_d).

Figure 2. Block diagram of the F-PSNR quality measure.

We also compute the linear correlation coefficient between the corresponding maps from the two images, given by:

\mathrm{CORR}(f_i(X_r), f_i(X_d)) = \frac{ \sum_{n_1} \sum_{n_2} \left( f_i(X_r)_{n_1,n_2} - \overline{f_i(X_r)} \right) \left( f_i(X_d)_{n_1,n_2} - \overline{f_i(X_d)} \right) }{ \sqrt{ \sum_{n_1} \sum_{n_2} \left( f_i(X_r)_{n_1,n_2} - \overline{f_i(X_r)} \right)^2 } \sqrt{ \sum_{n_1} \sum_{n_2} \left( f_i(X_d)_{n_1,n_2} - \overline{f_i(X_d)} \right)^2 } },  (6)

where f_i(X_r)_{n_1,n_2} and f_i(X_d)_{n_1,n_2} denote the (n_1, n_2) element of f_i(X_r) and f_i(X_d), respectively; and \overline{f_i(X_r)} and \overline{f_i(X_d)} denote the means of f_i(X_r) and f_i(X_d), respectively.

Finally, F-PSNR is computed by multiplying the correlation coefficients with the corresponding PSNRs, and then averaging the products:

\mathrm{F\text{-}PSNR} = \frac{1}{5} \sum_{i=1}^{5} \mathrm{PSNR}(f_i(X_r), f_i(X_d)) \cdot \mathrm{CORR}(f_i(X_r), f_i(X_d)).  (7)
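Eqs. (5)-(7) can be sketched directly. One detail is left open by the text: the peak value R. The sketch below takes R as the maximum of the reference feature map, which is one plausible reading, not necessarily the original choice.

```python
import numpy as np

def map_psnr(ref_map, dst_map):
    """Eq. (5): PSNR between a reference and a distorted feature map.
    R is taken as the peak of the reference map (an assumption)."""
    mse = np.mean((ref_map - dst_map) ** 2)
    if mse == 0.0:
        return float('inf')  # identical maps
    R = ref_map.max()
    return 10.0 * np.log10(R ** 2 / mse)

def map_corr(ref_map, dst_map):
    """Eq. (6): Pearson linear correlation between two feature maps."""
    r = ref_map - ref_map.mean()
    d = dst_map - dst_map.mean()
    denom = np.sqrt((r ** 2).sum() * (d ** 2).sum())
    return (r * d).sum() / denom if denom > 0 else 0.0

def f_psnr(ref_maps, dst_maps):
    """Eq. (7): average of PSNR * correlation over the feature maps."""
    terms = [map_psnr(r, d) * map_corr(r, d)
             for r, d in zip(ref_maps, dst_maps)]
    return sum(terms) / len(terms)
```

In practice, `ref_maps` and `dst_maps` would each hold the five feature maps f_1 through f_5 for the reference and distorted images, respectively.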

As we will demonstrate in Section 3, F-PSNR on its own performs quite competitively with current state-of-the-art IQA algorithms in predicting quality. However, additional improvements in predictive performance can potentially be gained by combining F-PSNR with an existing IQA algorithm. In the following section, we describe a combination of F-PSNR and the MAD IQA algorithm.9

2.3. F-MAD: Augmenting MAD with F-PSNR

To investigate the effectiveness of F-PSNR as a supplement to existing IQA algorithms, we augmented the MAD (Most Apparent Distortion)9 algorithm with F-PSNR. MAD was one of the first algorithms to demonstrate that quality can be predicted by modeling two strategies employed by the HVS, and by adapting these strategies based on the amount of distortion. For high-quality images, in which the distortion is less noticeable, the image is most apparent, and thus the HVS attempts to look past the image and look for the distortion (a detection-based strategy). For low-quality images, the distortion is most apparent, and thus the HVS attempts to look past the distortion and look for the image's subject matter (an appearance-based strategy).

In MAD, two main stages are employed: (1) a detection-based stage, which computes the perceived distortion due to visual detection of distortions, d_detect; and (2) an appearance-based stage, which computes the perceived distortion due to visual appearance changes, d_appear. The detection-based stage of MAD computes d_detect by using a masking-weighted block-based mean-squared error computed in the lightness domain. The appearance-based stage of MAD computes d_appear by computing the average difference between the block-based log-Gabor statistics of the original image and those of the distorted image. To augment MAD with F-PSNR, we employ the following weighted geometric mean:

\mathrm{F\text{-}MAD} = (d_{\mathrm{detect}})^{\alpha} (d_{\mathrm{appear}})^{\beta} (\mathrm{F\text{-}PSNR})^{\gamma},  (8)

where d_detect and d_appear denote the outputs of MAD's detection-based and appearance-based stages, respectively.
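Assuming d_detect and d_appear are available from a MAD implementation and F-PSNR has been computed as in Section 2.2, the blend of Eq. (8), with the weights defined next, reduces to a few lines. This is a sketch of one reading of the weight formulas, not a reference implementation; the constants and the blending rule should be checked against Ref. 9.

```python
def f_mad(d_detect, d_appear, f_psnr_score, beta1=0.32, beta2=0.132):
    """Eq. (8): weighted geometric mean of MAD's two stages and F-PSNR.
    alpha follows MAD's blending rule (Eq. (9)); beta = (1 - alpha) / 2
    and gamma = 1 - alpha - beta split the remaining weight between the
    appearance stage and F-PSNR."""
    alpha = 1.0 / (1.0 + beta1 * d_detect ** beta2)
    beta = (1.0 - alpha) / 2.0
    gamma = 1.0 - alpha - beta
    return (d_detect ** alpha) * (d_appear ** beta) * (f_psnr_score ** gamma)
```

Because alpha depends only on d_detect, the detection stage dominates for high-quality (small-distortion) images, as intended by the original MAD design.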
The parameters \beta and \gamma are given by \beta = (1 - \alpha)/2 and \gamma = 1 - \alpha - \beta, where \alpha is the blending parameter computed in the original MAD algorithm:

\alpha = \frac{1}{1 + \beta_1 (d_{\mathrm{detect}})^{\beta_2}},  (9)

where \beta_1 = 0.32 and \beta_2 = 0.132. As argued in Ref. 9, Equation (9) was designed to give greater weight to d_detect for high-quality images and greater weight to d_appear for low-quality images. Here, because F-PSNR does not take into account visual masking, we chose \beta = (1 - \alpha)/2 and \gamma = 1 - \alpha - \beta so that F-PSNR supplements MAD's appearance-based stage rather than its detection-based stage.

3. RESULTS

We applied F-PSNR and F-MAD to two publicly available databases of subjective image quality: LIVE22 and CSIQ.19 We compared F-PSNR and F-MAD with normal PSNR and five other modern full-reference IQA algorithms for which code is publicly available: SSIM,10 MS-SSIM,11 VIF,12 VSNR,7 and MAD.9 Four measures of performance were employed: Pearson correlation coefficient (CC), Spearman rank-order correlation coefficient (SROCC), outlier ratio (OR), and outlier distance (OD). For all IQA algorithms, a four-parameter sigmoid was applied before computing CC, OR, and OD to compensate for nonlinear relations between the predictions and the subjective scores.

Table 1 lists the resulting CC, SROCC, OR, and OD of each algorithm on each database. Notice from Table 1 that F-PSNR outperforms PSNR, SSIM, MS-SSIM, and VSNR. In terms of CC, F-PSNR yields values of 0.949 and 0.931 on LIVE and CSIQ, respectively. This finding suggests that changes to the feature maps caused by the distortions can be an effective proxy for estimating quality.

For F-MAD, the results in Table 1 demonstrate that the combination of F-PSNR and MAD may or may not lead to improved predictions over MAD alone. In terms of CC, F-MAD yields values of 0.970 and 0.962 on LIVE and CSIQ, respectively; MAD alone yields 0.968 and 0.950 on these databases. The improvement on the LIVE database is negligible; however, the improvement on the CSIQ database is significant. (For comparison, the next overall best performer, VIF, yields CC values of 0.960 and 0.925 on the respective databases.) Although F-PSNR on its own shows promise, there is clearly a need for further research into proper techniques of combining F-PSNR with existing IQA algorithms.

Table 1. Performances of F-MAD and other quality assessment algorithms on images from the LIVE and CSIQ databases. The results in the Average rows denote averages weighted by the number of images in the databases. The best performances are bolded.

               PSNR   SSIM   MS-SSIM  VSNR   VIF    MAD    F-PSNR  F-MAD
CC     LIVE    0.871  0.938  0.933    0.923  0.960  0.968  0.949   0.970
       CSIQ    0.800  0.815  0.897    0.800  0.925  0.950  0.931   0.962
       Average 0.835  0.876  0.915    0.862  0.942  0.959  0.940   0.966
SROCC  LIVE    0.876  0.947  0.944    0.928  0.963  0.968  0.953   0.970
       CSIQ    0.806  0.837  0.914    0.811  0.919  0.947  0.929   0.956
       Average 0.841  0.892  0.929    0.869  0.941  0.957  0.941   0.963
OR     LIVE    0.682  0.592  0.619    0.588  0.546  0.415  0.557   0.398
       CSIQ    0.343  0.335  0.245    0.311  0.226  0.180  0.216   0.170
       Average 0.512  0.463  0.432    0.449  0.386  0.297  0.387   0.284
OD     LIVE    4943   2814   2960     3247   1890   1370   2331    1282
       CSIQ    3178   2896   1528     3325   1218   626    936     579

4. CONCLUSIONS

This paper described the results of a study designed to investigate the effectiveness of PSNR as a quality estimator when measured between feature maps for the reference and distorted images. Given a reference and distorted image, five feature maps are measured for each image (lightness distance, color distance, contrast, edge strength, and sharpness). Quality is then estimated based on the extent to which the feature maps for the reference image differ from the corresponding maps for the distorted image.
We demonstrated how this feature-map-based approach (F-PSNR) can yield a competitive IQA strategy, and how it can be used to augment and improve an existing IQA algorithm (F-MAD).

5. ACKNOWLEDGMENTS

This material is based upon work supported by, or in part by, the National Science Foundation Award 0917014.

REFERENCES

1. B. Moulden, F. A. A. Kingdom, and L. F. Gatley, "The standard deviation of luminance as a metric for contrast in random-dot images," Perception 19, pp. 79-101, 1990.
2. N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, "Image quality assessment based on a degradation model," IEEE Transactions on Image Processing 9, 2000.
3. K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli, "New full-reference quality metrics based on HVS," in Proceedings of the Second International Workshop on Video Processing and Quality Metrics, Scottsdale, AZ, USA.
4. P. Le Callet, A. Saadane, and D. Barba, "Frequency and spatial pooling of visual differences for still image quality assessment," Proc. SPIE Human Vision and Electronic Imaging V 3959, pp. 595-603, 2000.
5. JNDMetrix technology. Sarnoff Corporation.
6. A. Ninassi, O. Le Meur, P. Le Callet, and D. Barba, "Which semi-local visual masking model for wavelet based image quality metric?," in Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on, pp. 1180-1183, Oct. 2008.

7. D. M. Chandler and S. S. Hemami, "VSNR: A wavelet-based visual signal-to-noise ratio for natural images," IEEE Transactions on Image Processing 16(9), pp. 2284-2298, 2007.
8. V. Laparra, J. Muñoz-Marí, and J. Malo, "Divisive normalization image quality metric revisited," J. Opt. Soc. Am. A 27, pp. 852-864, Apr. 2010.
9. E. C. Larson and D. M. Chandler, "Most apparent distortion: full-reference image quality assessment and the role of strategy," Journal of Electronic Imaging 19(1), p. 011006, 2010.
10. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing 13, pp. 600-612, 2004.
11. Z. Wang, E. Simoncelli, and A. Bovik, "Multiscale structural similarity for image quality assessment," in Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, 2, pp. 1398-1402, Nov. 2003.
12. H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing 15(2), pp. 430-444, 2006.
13. A. Shnayderman, A. Gusev, and A. M. Eskicioglu, "An SVD-based grayscale image quality measure for local and global assessment," IEEE Transactions on Image Processing 15(2), pp. 422-429, 2006.
14. M. Liu and X. Yang, "A new image quality approach based on decision fusion," Fuzzy Systems and Knowledge Discovery, Fourth International Conference on 4, pp. 10-14, 2008.
15. P. Peng and Z. Li, "Image quality assessment based on distortion-aware decision fusion," in Proceedings of the Second Sino-foreign-interchange Conference on Intelligent Science and Intelligent Data Engineering, pp. 644-651, 2012.
16. D. M. Chandler, "Seven challenges in image quality assessment: Past, present, and future research," ISRN Signal Processing, 2012, in press.
17. N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti, "TID2008 - A database for evaluation of full-reference visual quality assessment metrics," Advances of Modern Radioelectronics 10, pp. 30-45, 2009.
18. C. Vu and D. M. Chandler, "Main subject detection via adaptive feature refinement," Journal of Electronic Imaging 20, Mar. 2011.
19. E. C. Larson and D. M. Chandler, "Categorical subjective image quality (CSIQ) database," 2009.
20. L. G. Roberts, "Machine perception of three-dimensional solids," Optical and Electro-Optical Information Processing, MIT Press, Cambridge, MA, 1965.
21. C. T. Vu, T. D. Phan, and D. M. Chandler, "S3: A spectral and spatial measure of local perceived sharpness in natural images," IEEE Transactions on Image Processing 21, pp. 934-945, Mar. 2012.
22. H. Sheikh, Z. Wang, L. Cormack, and A. Bovik, "LIVE image quality assessment database Release 2."