IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 2, FEBRUARY 2013

Image-Difference Prediction: From Grayscale to Color

Ingmar Lissner, Jens Preiss, Philipp Urban, Matthias Scheller Lichtenauer, and Peter Zolliker

Abstract—Existing image-difference measures show excellent accuracy in predicting distortions, such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an image-difference framework that comprises image normalization, feature extraction, and feature combination. Based on this framework, we create image-difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped images. Our best image-difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures.

Index Terms—Color, image difference, image quality.

I. INTRODUCTION

"With respect to quality assessment, the full-reference still-image problem is essentially solved" [1]. This recent, somewhat controversial statement by A. C. Bovik, co-creator of the SSIM index [2], sounds surprising at first. It is, however, confirmed by the excellent prediction accuracy of the multiscale SSIM index [3] on the LIVE database [4]: the Spearman correlation between subjective quality assessments and corresponding predictions is greater than 0.95 for all included distortions (lossy compression, noise, blur, and channel fading). Given that the SSIM index operates on grayscale data, color information is obviously not required to predict these distortions. Nevertheless, the above statement about image-quality assessment is only true to a certain extent:

1) Changes of image semantics cannot be detected.
If, for instance, a particular distortion affects a human face in a portrait, the subjective image quality is greatly reduced. A similar change to an object in the background may not even be noticed.

2) Changes in the chromatic components (chroma and hue) may not affect the lightness component. This occurs frequently in gamut-mapping [5] and tone-mapping [6] applications. The accuracy of grayscale-based quality measures in predicting such distortions has room for improvement [7]–[9].

Although several extensions of the SSIM index for color images have been proposed [10], [11], we believe that further improvements are possible. In this paper we address the color-related aspects of image-difference assessment. We focus on full-reference measures, which predict the perceived difference of two input images. Apart from the SSIM index, many such measures have been proposed [3], [12]–[17] (to cite only a few).

Manuscript received February 6, 2012; revised July 23, 2012; accepted August 5, 2012. Date of publication September 9, 2012; date of current version January 8, 2013. This work was supported in part by the German Research Foundation and the Swiss National Science Foundation under SNF Project 200021_. I. Lissner and J. Preiss contributed equally to this work. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Erhardt Barth.

I. Lissner, J. Preiss, and P. Urban are with the Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt 64289, Germany (e-mail: lissner@idd.tu-darmstadt.de; preiss@idd.tu-darmstadt.de; urban@idd.tu-darmstadt.de).

M. Scheller Lichtenauer and P. Zolliker are with the Laboratory for Media Technology, Empa, Swiss Federal Laboratories for Materials Science and Technology, Dübendorf 8600, Switzerland (e-mail: matthias.scheller@empa.ch; peter.zolliker@empa.ch).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIP
Ideally, they reflect the actual visual mechanisms responsible for image-difference assessment. These mechanisms, however, are poorly understood; this applies especially to the cortical processing of complex visual stimuli. As a result, assumptions are made about how the human visual system (HVS) extracts and processes image information. Hypotheses on which information is extracted [2], [18], [19] and how it is weighted and combined [7] can be found in the literature.

II. IMAGE-DIFFERENCE FRAMEWORK

The image-difference framework we present in this paper normalizes the input images with an image-appearance model and transforms them into a working color space. An image-difference prediction is then computed using so-called image-difference features (IDFs) that are extracted from the images. An overview of our framework is provided in Fig. 1.

A. Image Normalization

The interpretation of an image by the visual system depends on the viewing conditions, e.g., viewing distance, illuminant, and luminance level. Consequently, the images should be normalized to specific viewing conditions before any information is extracted. So-called image-appearance models [20] have been developed for this purpose. Among the mechanisms that they model are chromatic adaptation, contrast sensitivity, and various appearance phenomena such as the Hunt effect and the Stevens effect [20]. Fig. 2 illustrates the image normalization: a subthreshold distortion may turn into a suprathreshold distortion if the viewing conditions change.

Image-appearance modeling is still in its infancy. For example, the contrast-sensitivity mechanism is often modeled as a convolution in an intensity-linear opponent color space. Different filters are applied to the achromatic and chromatic

channels in the frequency domain. This involves several simplifications, because the contrast-sensitivity mechanism is orientation-dependent [22], is age-dependent [23], depends on the luminance level [24], and is usually measured for sinusoidal gratings instead of complex stimuli [25]. Similar limitations apply to other components of current image-appearance models. In addition, it is unlikely that the results of various individual studies on certain aspects of the HVS can be seamlessly combined into an overall model of the visual processing. In this paper we test, among other things, if simple image-appearance models can improve the prediction performance of image-difference measures.

Fig. 1. General overview of our image-difference framework: the input images in an RGB color space are normalized by an image-appearance model and transformed into a working color space; feature extraction and feature combination then yield the prediction.

Fig. 2. Influence of the viewing conditions. (a) Continuous-tone and (b) halftone image as seen on a display with a white-point luminance of 80 cd/m². Left: the adapting luminance is 1000 cd/m² (e.g., outdoor daylight environment). Middle: the adapting luminance is 16 cd/m² (adaptation takes place on the display; dark surround). The images were rendered pixel-wise using the CIECAM02 color-appearance model assuming an adapting luminance of 20% of the white-point luminance in the scene [20]. The subthreshold distortion (left) turns into a suprathreshold distortion (middle) if the adapting luminance is changed. At a closer viewing distance (right), the perceived image difference increases. The original image is part of the Kodak Lossless True Color Image Suite [21].

B. Transformation into Working Color Space

In the final step of the normalization process, the images are transformed into a working color space.
This color space should provide simple access to the color attributes lightness, chroma, and hue, and it should be free of cross-contamination between these attributes. One of the most important properties is perceptual uniformity, meaning that Euclidean distances in the space match perceived color differences. This is required for an accurate representation of image features such as edges and gradients. In an RGB color space, such features may be over- or underestimated, i.e., their computed magnitudes exceed their perceived magnitudes or vice versa. Although a perfectly perceptually uniform color space does not exist [26], various approximations have been proposed [27]–[30]. Note that the underlying color-difference data were collected using uniform color patches and may not fully apply to complex visual stimuli.

C. Information Extraction

We extract image-difference features (IDFs) from the normalized input images. These features are mathematical formulations of hypotheses on the visual processing. They are combined into an overall image-difference prediction using a combination model. The parameters of this model are optimized using image-difference datasets.
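As a concrete illustration of the attribute access described above, the following minimal sketch splits an image in a Lab-type working color space (such as LAB2000HL) into lightness, chroma, and hue angle. The function name and array layout are our assumptions, not part of the paper:

```python
import numpy as np

def lab_to_lch(lab):
    """Split a working-color-space image (H x W x 3, channels L, a, b)
    into the attributes lightness, chroma, and hue angle (degrees)."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)                         # chroma: sqrt(a^2 + b^2)
    h = np.degrees(np.arctan2(b, a)) % 360.0   # hue angle in [0, 360)
    return L, C, h
```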

In this paper we extend our previous work [18], [31] in several key aspects:
1) There are various ways of normalizing the images to specific viewing conditions. We test how a normalization to a specific viewing distance affects the prediction accuracy of our image-difference measures.
2) We derive our lightness-, chroma-, and hue-comparison terms from the SSIM luminance function and adapt it to a perceptually uniform color space.
3) The sensitivity of the HVS to visible distortions (sometimes modeled by a suprathreshold contrast-sensitivity function [32]–[34]) depends on the viewing distance. We investigate if the prediction of gamut-mapping distortions is improved by an existing multiscale approach.
4) We evaluate whether chromatic IDFs adversely affect the prediction of conventional image distortions (e.g., lossy compression, noise, and blur).

III. EXTRACTING IMAGE-DIFFERENCE FEATURES

An image-difference feature (IDF) is a transformation

\mathrm{IDF}: \mathcal{I}^{M,N} \times \mathcal{I}^{M,N} \times P \to [0, 1] \quad (1)

where \mathcal{I}^{M,N} is the set of all colorimetrically specified RGB images with M rows and N columns; P is a set of parameter arrays, each of which parametrizes the employed image-appearance model. Depending on the model, P may include the viewing distance, the luminance level, and the adaptation state of the observer.

According to the proposed modular framework, an IDF may be expressed as the concatenation of a transformation N that normalizes the images to the viewing conditions and a transformation F that expresses the actual feature extraction, i.e.,

\mathrm{IDF} = F \circ N \quad (2)

where

N: \mathcal{I}^{M,N} \times \mathcal{I}^{M,N} \times P \to \mathcal{W}^{M,N} \times \mathcal{W}^{M,N} \quad (3)

F: \mathcal{W}^{M,N} \times \mathcal{W}^{M,N} \to [0, 1] \quad (4)

and \mathcal{W}^{M,N} is the set of images in the working color space with M rows and N columns. The feature-extraction transformation F can be realized in various ways.
Each transformation used in this paper is based upon a specific image-comparison transformation

t: \mathcal{W}^{k,k} \times \mathcal{W}^{k,k} \to [0, 1] \quad (5)

which compares pixels within corresponding k \times k windows (k \le \min\{M, N\}) of the input images. The feature-extraction transformation F is computed by averaging the local differences as follows:

F(X_{\mathrm{norm}}, Y_{\mathrm{norm}}) = \frac{1}{K} \sum_{i=1}^{K} t(x_i, y_i) \quad (6)

where K is the number of considered windows within the normalized images X_{\mathrm{norm}}, Y_{\mathrm{norm}} \in \mathcal{W}^{M,N}, and x_i and y_i are the corresponding pixel arrays defined by the i-th window. Although we compute the mean of the difference maps, more complex pooling methods may be in better agreement with human perception. A comprehensive analysis is provided by Wang and Li [7].

Scale-dependent IDFs include a transformation that extracts a specific image scale:

S: \mathcal{W}^{M,N} \times \mathcal{W}^{M,N} \to \mathcal{W}^{\acute{M},\acute{N}} \times \mathcal{W}^{\acute{M},\acute{N}} \quad (7)

where \acute{M} \le M and \acute{N} \le N. The IDF that operates on this scale is defined by concatenation:

\mathrm{IDF} = F \circ S \circ N \quad (8)

where F is adjusted to the scale defined by S.

A. Image-Comparison Transformations

To ensure a high prediction accuracy, we utilize established terms describing image-difference features. We adjust these terms to our framework and extend them to assess chromatic distortions. All terms are either adopted or derived from the SSIM index [2] because of its wide use and good prediction accuracy on various image distortions. In addition, its modular structure (three comparison terms are evaluated separately and then multiplied) is well suited for our image-difference framework.

The terms are computed within sliding windows in the compared images X and Y. The arguments x and y are the pixel arrays within these windows. In the working color space, each pixel x consists of a lightness and two chromatic values: x = (L_x, a_x, b_x). The chroma of the pixel is defined as C_x = \sqrt{a_x^2 + b_x^2}.
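The pooling in (6) can be sketched as follows. The window size, stride, and all names are illustrative choices of ours, not the paper's implementation:

```python
import numpy as np

def extract_feature(X, Y, t, k=11, stride=1):
    """Feature extraction F from (6): average an image-comparison
    transformation t over corresponding k x k windows of the two
    normalized images X and Y (2-D arrays of equal shape)."""
    M, N = X.shape
    vals = []
    for i in range(0, M - k + 1, stride):
        for j in range(0, N - k + 1, stride):
            vals.append(t(X[i:i + k, j:j + k], Y[i:i + k, j:j + k]))
    return float(np.mean(vals))   # simple mean pooling, as in (6)
```

More elaborate pooling (e.g., percentile or saliency-weighted pooling) would replace only the final line.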
1) Lightness, chroma, and hue comparisons:

l_L(x, y) = \frac{1}{c_1 \, \overline{\Delta L(x, y)}^2 + 1} \quad (9)

l_C(x, y) = \frac{1}{c_4 \, \overline{\Delta C(x, y)}^2 + 1} \quad (10)

l_H(x, y) = \frac{1}{c_5 \, \overline{\Delta H(x, y)}^2 + 1} \quad (11)

where \overline{f(x, y)} denotes the Gaussian-weighted mean of f(x, y) computed for each pixel pair (x, y) in the window. The pixel-wise transformations used above are defined as:

\Delta L(x, y) = L_x - L_y \quad (12)

\Delta C(x, y) = C_x - C_y \quad (13)

\Delta H(x, y) = \sqrt{(a_x - a_y)^2 + (b_x - b_y)^2 - \Delta C(x, y)^2}. \quad (14)

These terms are based upon the hypothesis that the HVS is sensitive to lightness, chroma, and hue differences. Their structure is derived from the luminance function of the SSIM index [2], which is designed for an intensity-linear space. We transformed it into our perceptually uniform working space as shown in the Appendix. We chose the terms \Delta L, \Delta C, and \Delta H such that they return similar results for similar perceived differences

in a perceptually uniform color space. Note that this applies only to small color differences [35]; for gamut-mapped images, the chroma differences to the original are usually quite large. An adjustment to large color differences is possible using the parameters c_i. Please note that \Delta H as defined in (14) is a Euclidean rather than a hue-angle difference. This is required because the perceived hue difference of colors increases with chroma if their hue-angle difference stays constant [36]. It also serves to adjust the scaling of hue differences to that of lightness and chroma differences (in a perceptually uniform color space).

2) Lightness-contrast comparison according to [2]:

c_L(x, y) = \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \quad (15)

where \sigma_x and \sigma_y are the standard deviations of the lightness components in the sliding windows mentioned above. The term reflects the visual system's sensitivity to achromatic contrast differences and its so-called contrast-masking property [37]. The impact of this property is modeled by adjusting the parameter c_2 to the working color space. This is illustrated in Fig. 3: contrast deviations in low-contrast areas (red feathers) are highly disturbing and should be considered accordingly.

3) Lightness-structure comparison according to [2]:

s_L(x, y) = \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3} \quad (16)

where \sigma_{xy} corresponds to the cosine of the angle between x - \bar{x} and y - \bar{y} [2] in the lightness component. The term incorporates the assumption that the HVS is sensitive to achromatic structural differences.

Computing the terms in (9), (10), (11), (15), and (16) for sliding windows within the images X and Y results in five difference maps (for an example, see Fig. 5).

B. Resulting Image-Difference Features

Each comparison term is incorporated into an individual IDF as shown in (2) and (6).
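The five comparison terms can be sketched as follows, assuming the Gaussian-weighted mean is taken before squaring in (9)–(11); uniform weights stand in for the Gaussian window when none are supplied, and all variable names are ours:

```python
import numpy as np

def comparison_terms(x, y, c1, c2, c3, c4, c5, weights=None):
    """Comparison terms (9)-(11), (15), and (16) for two pixel windows
    x and y (k x k x 3 arrays with channels L, a, b in the working
    color space)."""
    k0, k1 = x.shape[:2]
    if weights is None:
        weights = np.full((k0, k1), 1.0 / (k0 * k1))
    wmean = lambda f: float(np.sum(weights * f))

    Lx, Ly = x[..., 0], y[..., 0]
    Cx = np.hypot(x[..., 1], x[..., 2])   # chroma sqrt(a^2 + b^2)
    Cy = np.hypot(y[..., 1], y[..., 2])

    dL = Lx - Ly                          # (12)
    dC = Cx - Cy                          # (13)
    dH = np.sqrt(np.maximum((x[..., 1] - y[..., 1]) ** 2
                            + (x[..., 2] - y[..., 2]) ** 2
                            - dC ** 2, 0.0))   # (14)

    lL = 1.0 / (c1 * wmean(dL) ** 2 + 1.0)     # (9)
    lC = 1.0 / (c4 * wmean(dC) ** 2 + 1.0)     # (10)
    lH = 1.0 / (c5 * wmean(dH) ** 2 + 1.0)     # (11)

    # Weighted lightness moments for the contrast and structure terms.
    mx, my = wmean(Lx), wmean(Ly)
    sx = np.sqrt(max(wmean((Lx - mx) ** 2), 0.0))
    sy = np.sqrt(max(wmean((Ly - my) ** 2), 0.0))
    sxy = wmean((Lx - mx) * (Ly - my))

    cL = (2.0 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)   # (15)
    sL = (sxy + c3) / (sx * sy + c3)                       # (16)
    return lL, lC, lH, cL, sL
```

All five terms return 1 for identical windows and decrease toward 0 with increasing differences.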
To distinguish between terms and IDFs, we use L, C, and S to denote the IDFs based upon the l-, c-, and s-terms. The visual system is more sensitive to high-frequency distortions (such as noise [38]) in the lightness component than in the chromatic components. Therefore, we create three lightness-based IDFs using the l_L-term shown in (9) and the terms from (15) and (16), c_L and s_L. The lightness-contrast and lightness-structure IDFs C_L and S_L are computed on several scales (see (8)), because the visual system's response to differences in contrast and structure varies between scales [3]. On the first scale, the unaltered input images are used. They are then lowpass-filtered and downsampled by a factor of two to determine the images for the next smaller scale.

C. Image-Difference Measure

In the context of our framework, an image-difference measure (IDM) is a transformation that combines several IDFs to predict image differences. It has the same structure as an IDF (shown in (1)). All IDFs that are combined into an IDM share the same normalization transformation N (see (3)). In the following, the arguments of the IDFs and IDMs are omitted for the sake of brevity.

Fig. 3. Lightness-contrast comparison of an original and a distorted image as in (15) using different parameters c_2 (c_2 = 58 vs. c_2 = 0.5). A small c_2 emphasizes contrast differences in low-contrast regions. The original image is part of the Kodak Lossless True Color Image Suite [21].

TABLE I
PARAMETERS α_i AND c_i OF OUR MODEL IN (17)
Scale weights α_1, α_2, α_3, α_4, α_5 of the multiscale SSIM index [3], and proposed parameters c_1, c_2, c_3, c_4, c_5. Although our results were computed with different parameters, the prediction accuracy is not significantly affected if these simple parameters are used.
We employ a factorial combination model:

\mathrm{IDM} = \left(L_L^n\right)^{\alpha_n} \prod_{i=1}^{n} \left(C_L^i \, S_L^i\right)^{\alpha_i} \quad (17)

where n is the number of scales used by the multiscale model, L_L^n is the lightness-comparison IDF on the n-th (smallest) scale, C_L^i and S_L^i are the lightness-contrast and lightness-structure IDFs on the i-th scale, and \alpha_i is the weight of this scale. The multiscale model in (17) and the \alpha_i (see Table I) are adopted from the multiscale SSIM index [3]. The \alpha_i weight the contribution of each scale to the overall image-difference prediction and were obtained through psychophysical experiments on n = 5 scales [3]. The product over all scales is a weighted geometric mean, i.e., \sum_i \alpha_i = 1. The model can be adjusted to the working color space and the training data with the parameters c_i of the individual IDFs. Additive or hybrid combination models did not yield significantly different prediction accuracies [18].
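The scale decomposition (lowpass filtering and downsampling by two) and the factorial combination (17) can be sketched together. The 2 × 2 mean filter is a simple stand-in for the actual lowpass filter, and all names are ours:

```python
import numpy as np

def next_scale(img):
    """One multiscale step: 2 x 2 mean lowpass followed by downsampling
    by a factor of two (a stand-in for the lowpass filter used by the
    multiscale SSIM index [3])."""
    M, N = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:M, :N]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def combine_idfs(l_smallest, c_per_scale, s_per_scale, alphas):
    """Factorial combination model (17): the lightness IDF on the
    smallest scale, weighted by alpha_n, times the product over all n
    scales of the contrast and structure IDFs, each weighted by its
    alpha_i. The alphas sum to 1 (weighted geometric mean)."""
    n = len(alphas)
    idm = l_smallest ** alphas[n - 1]
    for i in range(n):
        idm *= (c_per_scale[i] * s_per_scale[i]) ** alphas[i]
    return idm
```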

IV. EXPERIMENTS

A. Experimental Data

We train and test our IDMs using two types of image-difference datasets that differ primarily in the distortions they include:
1) The Tampere Image Database 2008 (TID2008) [39], [40] comprises 1,700 distorted images derived from 25 reference images. It is based on more than 256,000 paired comparisons by more than 800 observers. The distortions include, among others, lossy compression, noise, and blur. We will refer to these distortions as conventional distortions in the following.
2) Six gamut-mapping datasets collected in different pair-comparison experiments [11], [41]–[44]. These datasets comprise 326 reference images (some of them show the same scene) and 2,659 distorted images. A total of 29,665 decisions were collected, not counting ties.

In each so-called trial of a pair-comparison experiment, the observers are shown two distorted images and the corresponding reference image side by side. They select the image that is more similar to the reference (left or right); the resulting binary choices for all trials and all observers are the raw experimental data (we do not consider tie decisions here).

B. Hit Rates

A common performance indicator of image-difference measures is the correlation between human judgments and corresponding predictions. The Spearman and Kendall rank-order correlations are widely used [4], [39], [40]. The human judgments are usually expressed as mean opinion scores (MOS) that are derived from the raw data (the observers' choices). There are, however, three main problems with this approach:
1) To convert the raw results of a pair-comparison experiment into MOS, a model of the choice distribution has to be assumed [45], e.g., Thurstone's [46] or Bradley-Terry's [47] model.
2) It is not straightforward to include inter-observer and intra-observer uncertainties into the MOS.
Some MOS may be affected by higher uncertainty than others; this information is important for an accurate interpretation of the data.
3) For most image-difference experiments, several distorted images are derived from each reference image [4], [39], [43]. Only images derived from the same reference are compared by the observers; that is, all compared images show the same scene. Consequently, subjective scores of images showing different scenes cannot be compared. This is especially important if the corresponding distortions depend on the image content: e.g., if an image with highly chromatic colors is gamut-mapped, the loss of chroma will be much greater than for an image whose colors are close to the gray axis. However, this is not reflected by the MOS; depending on the scene, the same score can be assigned to images of very different deviation from the reference. Distortions like noise and blur depend on image content to a much lesser extent.

For these reasons, we use hit rates to determine the prediction performance of our IDMs. The hit rate \hat{p} is defined as

\hat{p} = \frac{i}{m} \quad (18)

where m is the total number of choices in an experiment and i is the number of correctly predicted choices. A choice is correctly predicted if an IDM computes a better score (smaller difference to the reference) for the image selected by the observer. Tie decisions are excluded. Since we operate on the raw visual data, no assumptions about the choice distribution are necessary.

An IDM that returns completely random predictions is expected to achieve a hit rate of \hat{p} = 0.5. This indicates the lowest possible prediction accuracy: IDMs with lower accuracy become more accurate by inverting their predictions for all image pairs. Note that, if all image pairs are compared exactly once, the hit rate of an IDM is linearly related to the Kendall correlation of the corresponding MOS.
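The hit rate (18) and the significance test of Section IV-C (Yule's two-sample binomial interval, (19)) can be sketched as follows; the function names are ours:

```python
import math

def hit_rate(correct, total):
    """Hit rate (18): fraction of observer choices an IDM predicts
    correctly (ties excluded); 0.5 corresponds to random predictions."""
    return correct / total

def yule_interval(i1, i2, m, z=1.96):
    """Yule's two-sample binomial confidence interval (19) for the
    difference of the success probabilities of two IDMs evaluated on
    the same m observer choices (z = 1.96 for alpha = 0.05). The hit
    rates differ significantly if 0 lies outside the interval."""
    p1, p2 = i1 / m, i2 / m
    p_bar = (i1 + i2) / (2 * m)
    q_bar = 1.0 - p_bar
    psi = z * math.sqrt((2.0 / m) * p_bar * q_bar)
    d = p1 - p2
    return d - psi, d + psi
```

For example, 60 vs. 50 correct predictions out of 100 choices yields an interval that contains 0, so the difference is not significant at this sample size.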
It is particularly interesting to compare a hit rate with the maximum achievable hit rate on the same data, which we call the majority hit rate (\hat{p}_m). Usually, each image pair is compared by several observers whose choices may differ. An IDM reaches the majority hit rate if its predictions agree with the majority of choices for all image pairs. We define the achievable hit-rate range as the interval [0.5, \hat{p}_m], where 0.5 is the hit rate of random predictions. The ratio \hat{p}/\hat{p}_m may be used to compare IDM predictions for different datasets. In addition, it is not affected by inter- and intra-observer uncertainties. All hit rates we provide in this paper are absolute hit rates, i.e., they have not been rescaled to the achievable hit-rate range.

C. Significance Analysis

Even if an IDM has a higher hit rate than another on the same data, this may have happened by chance. To determine whether the hit rates are significantly different, we assume that the IDMs' predictions of observer choices can be modeled as binomial distributions. The respective success probabilities p_1 and p_2, i.e., the probabilities of a correct prediction, are unknown. We denote m as the total number of choices; i_1 and i_2 are the correctly predicted choices by the first and second IDM. Yule's two-sample binomial confidence interval [48] for p_1 - p_2 with \alpha = 0.05 is then computed as follows:

I = \left[\hat{p}_1 - \hat{p}_2 - \psi;\ \hat{p}_1 - \hat{p}_2 + \psi\right] \quad \text{with} \quad \psi = z_{\alpha/2} \sqrt{(2/m)\,\bar{p}\,\bar{q}} \quad (19)

where \hat{p}_1 = i_1/m, \hat{p}_2 = i_2/m, \bar{p} = (i_1 + i_2)/(2m), \bar{q} = 1 - \bar{p}, and z_{\alpha/2} is the upper \alpha/2 quantile of the standard normal distribution [48]. The hit rates are assumed to be significantly different if 0 \notin I.

D. Working Color Space

We chose the LAB2000HL space [30] as our working color space because it was designed to satisfy the requirements stated in Section II-B. Its perceptual uniformity is based upon the CIEDE2000 color-difference formula [49] and only holds

for small color differences. In addition, this color space is hue-linear with respect to the Hung and Berns data of constant perceived hue [50]. This means that the perceived hue remains constant on lines of constant predicted hue in the color space. Other possible working spaces are IPT [51] and CIECAM02 [28], [52].

Fig. 4. Structure of the IDMs proposed in this paper. All IDMs are based on our image-difference framework from Fig. 1.

E. Image-Appearance Models

Some important viewing-condition parameters of the visual data were not available, e.g., the luminance level. As a result, our normalization step is limited to contrast-sensitivity filtering of the input images. However, the gamut-mapping experiments were conducted on liquid crystal displays in a typical office environment. Since the working color space LAB2000HL was designed for related viewing conditions, the normalization to an average viewing distance is probably the most important adjustment.

From the variety of existing contrast-sensitivity functions (CSFs), we included two in our evaluation:
1) The chromatic and achromatic CSFs proposed for evaluating image differences with the iCAM framework [53]. As suggested by Johnson and Fairchild [54], the bandpass-shaped achromatic CSF was turned into a lowpass filter and clipped above 1 (dashed line in Fig. 3 of Ref. [54]). These CSFs were applied in two different color spaces: the working color space LAB2000HL [30] and the intensity-linear orthogonal opponent color space YCC [53]. Filtering was performed in the frequency domain. The corresponding IDMs were denoted as IDM-CSF1 (filtering in LAB2000HL) and IDM-CSF2 (filtering in YCC).
2) The chromatic and achromatic CSFs used by the S-CIELAB model, applied in the intensity-linear AC_1C_2 opponent color space as proposed by Zhang and Wandell [55]. The images were transformed into this color space and convolved with the CSFs in the spatial domain.
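Frequency-domain filtering of a single opponent channel can be sketched as follows. The Gaussian lowpass is only a stand-in for the actual CSFs of [53]–[55], and the parameterization (cutoff in cycles per degree, display resolution in pixels per degree, which ties the filter to the assumed viewing distance) is our assumption:

```python
import numpy as np

def csf_filter_channel(channel, cutoff_cpd, ppd):
    """Illustrative contrast-sensitivity filtering of one opponent
    channel in the frequency domain. channel: 2-D array; cutoff_cpd:
    Gaussian cutoff in cycles per degree; ppd: pixels per degree."""
    M, N = channel.shape
    fy = np.fft.fftfreq(M)[:, None] * ppd   # vertical frequency in cpd
    fx = np.fft.fftfreq(N)[None, :] * ppd   # horizontal frequency in cpd
    f = np.hypot(fy, fx)                    # radial spatial frequency
    gauss = np.exp(-0.5 * (f / cutoff_cpd) ** 2)   # lowpass stand-in
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * gauss))
```

Applying different cutoffs to the achromatic and the two chromatic channels mimics the channel-dependent filtering described above; the DC component (mean color) is always preserved.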
The corresponding IDM was denoted as IDM-CSF3. Please note that both contrast-sensitivity models apply different filters to the achromatic and chromatic channels, respectively. To adjust the CSFs to the viewing distance, we assumed a spatial frequency of 20 cycles per degree, which corresponds to a viewing distance of 75 cm for the average pixel pitch of the utilized displays. We also created an IDM without CSF filtering and denoted it as IDM-None.

F. Fitting the Parameters

A part of the gamut-mapping datasets (see Section IV-A) was used to determine the parameters c_1, ..., c_5 of L_L, C_L, S_L, L_C, and L_H. We selected 50% of the reference images from each dataset and combined them into a training set. This set contained 162 reference images with 1,320 corresponding distorted images and 14,239 observer choices. The parameters of the IDMs were optimized by maximizing their hit rates on the training set. The remaining images composed the test set, on which we obtained the following results. The majority hit rate on these test data is \hat{p}_m = 0.80. A hit-rate difference of about 0.01 or more indicates a significant difference on a 95% confidence level (see Section IV-C for details).

For every IDM, the corresponding hit rates did not change appreciably if the parameters were varied to some extent. Since a unified parameter set is desirable, we propose a set of simple parameters as listed in Table I. Using these instead of the optimized parameters does not affect the hit rates significantly. Note that the results presented in the following are based on the optimized parameters.

The parameters c_2, c_3 cannot be compared with c_1, c_4, c_5, because they are used differently in their respective IDFs. However, since c_2 = c_3, lightness-contrast and -structure differences are weighted equally. The parameter c_5 is considerably greater than both c_1 and c_4, indicating that deviations in hue have greater influence on the predictions than deviations in lightness and chroma of similar magnitude.
This agrees with heuristics commonly employed by gamut-mapping algorithms [5]. The structure of the IDMs we test in this paper is provided in Fig. 4. All difference maps computed by IDM-CSF3 for a test pair are shown in Fig. 5.

Fig. 5. Example of all difference maps computed for the images from Fig. 3 using IDM-CSF3 (S-CIELAB filtering). The L_L^1 map (largest scale) illustrates this concept; however, in accordance with Fig. 4, only the smallest scale L_L^n is used by our IDMs. The original image is part of the Kodak Lossless True Color Image Suite [21].

V. RESULTS AND DISCUSSION

The major aim of the experiments is to determine the impact of each IDF on the hit rate. We are also interested in how different contrast-sensitivity models and the multiscale approach affect the results. The SSIM index serves as a reference in our evaluation; it performs significantly better on the experimental data (see Section IV-A) than all other image-quality measures included in the MeTriX MuX package [56] and the PSNR-HVS [57] measure.

A. How Do Our IDFs Affect the Prediction Performance?

Hit rates for all combinations of single-scale IDFs are shown in Fig. 6. All IDF combinations that use a particular contrast-sensitivity model share the same parameters c_i. They were optimized for the IDMs with all five IDFs (last column in Fig. 6) on the training data. SSIM's hit rate on the test data (0.650) is marked by a red line. To ensure a fair comparison, the parameters c_i of the SSIM index were also optimized on the training data. However, the SSIM index with default parameters shows almost the same performance (hit rate = 0.649).

Fig. 6 allows the following conclusions:
1) Most hit rates of IDMs that use CSF filtering are not significantly different. Based on these results, we cannot recommend a particular contrast-sensitivity model. Neglecting the viewing distance, however, results in an inferior prediction performance in most cases.
2) The combination of the three lightness-based IDFs (L_L, C_L, and S_L) performs better than the SSIM index, but not significantly better. It seems that a perceptually uniform lightness scale in combination with our adjusted IDF (see (9)) has a positive but minor effect on the prediction performance.
3) The lightness-contrast IDF C_L is the most important IDF. Adding C_L to any combination of IDFs significantly improves the hit rate in all cases.
4) Adding one or both chromatic IDFs (L_C and L_H) to the lightness-based IDFs (L_L, C_L, and S_L) results in hit rates that are significantly higher than that of SSIM. The best IDM shows an improvement of 10% of the achievable hit-rate range (see Section IV-B) compared to the SSIM index.
5) There is still much room for improvement: the best IDM has a hit rate of 0.68, which is far below the majority hit rate (0.80). However, it is unlikely that this gap can be closed by adding low-level features without considering other factors such as image semantics.

B. Does a Multiscale Approach Improve the Predictions?

Fig. 7 provides hit rates of the multiscale IDMs operating on 1-5 scales. All hit rates were computed on our gamut-mapping test set. The hit rates of the SSIM index (red line) and the multiscale SSIM index (M-SSIM, dashed black line) are included for comparison. Unlike for conventional distortions, the prediction performance of the M-SSIM index on the gamut-mapping distortions is significantly lower (0.632) than that of its single-scale counterpart (0.650). This also applies to our multiscale approach, which uses the same concept and weighting parameters \alpha_i as the M-SSIM index. In most cases, the hit rates do not change significantly if 1-3 scales are employed. They drop if more scales are used.

One possible explanation is the disagreement between the viewing conditions in the gamut-mapping studies (40 pixels per degree) and the experiment to determine the \alpha_i of the M-SSIM index (32 pixels per degree) [3]. To investigate the influence of this disagreement on the hit rates, we adjusted the \alpha_i to the gamut-mapping conditions by interpolating the original parameters.
The resulting hit rate of the adjusted M-SSIM index is the same (0.632). It is therefore unlikely that this minor difference in viewing distances has a great influence on the hit rates. This raises the question of whether lightness distortions resulting from gamut mapping are fundamentally different from conventional distortions. To investigate this issue with respect

to our lightness-based IDFs, we calculated the Pearson correlations between IDFs extracted from the first scale, C_L^1 S_L^1, and those extracted from higher scales, C_L^i S_L^i, i = 2, ..., 5. Fig. 8 shows the results.

Fig. 6. Hit rates for all possible combinations of IDFs on the gamut-mapping test data. A hit-rate difference of about 0.01 (or more) is significant. The contrast-sensitivity models are abbreviated as CSF1 (Johnson/Fairchild in LAB2000HL), CSF2 (Johnson/Fairchild in YCC), and CSF3 (S-CIELAB).

Fig. 7. Relationship between the number of scales and the hit rate on the gamut-mapping test data. A hit-rate difference of about 0.01 (or more) is significant on a 95% confidence level.

Fig. 8. Pearson correlations between C_L^1 S_L^1 (largest scale) and C_L^i S_L^i, i = 1, ..., 5, for the gamut-mapping database and the TID2008. The correlations of the corresponding scale values used by the M-SSIM index are given for comparison.

For the gamut-mapping database, the correlations between IDFs across scales are high. This means that the image-difference data extracted from scales 2-5 are very similar to those extracted from scale 1.
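The inter-scale analysis can be sketched as follows. The feature below is a generic SSIM-style contrast-structure score standing in for C_L·S_L, and the image pairs are synthetic, so this only illustrates the computation, not the paper's implementation or results:

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom
from scipy.stats import pearsonr

def contrast_structure(ref, dist, win=8, eps=1e-4):
    """Pooled SSIM-style contrast-structure score of one image pair."""
    mu_r = uniform_filter(ref, win)
    mu_d = uniform_filter(dist, win)
    var_r = uniform_filter(ref * ref, win) - mu_r * mu_r
    var_d = uniform_filter(dist * dist, win) - mu_d * mu_d
    cov = uniform_filter(ref * dist, win) - mu_r * mu_d
    return float(np.mean((2 * cov + eps) / (var_r + var_d + eps)))

rng = np.random.default_rng(1)
features = {s: [] for s in range(1, 6)}      # per-scale IDF values
for _ in range(20):                          # 20 synthetic image pairs
    ref = rng.random((128, 128))
    dist = np.clip(ref + 0.05 * rng.standard_normal((128, 128)), 0, 1)
    for s in features:                       # dyadic pyramid, scales 1..5
        factor = 0.5 ** (s - 1)
        features[s].append(contrast_structure(zoom(ref, factor),
                                              zoom(dist, factor)))

# Correlate scale-1 feature values with each higher scale across the set.
for s in range(2, 6):
    r, _ = pearsonr(features[1], features[s])
    print(f"scale 1 vs scale {s}: r = {r:+.2f}")
```

High correlations here would mean, as in the paper's analysis, that the coarser scales contribute little information beyond scale 1.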

TABLE II. Spearman correlations on the TID2008 for the single-scale and multiscale (5 scales) variants of the SSIM index, IDM-CSF1, IDM-CSF2, IDM-CSF3, and IDM-None; hue- and chroma-based IDFs omitted.

In contrast, correlations across scales are much smaller for the TID2008. These findings also apply to the corresponding terms of the M-SSIM index. It appears that gamut-mapping distortions in the lightness component are indeed very different from conventional distortions. Further analysis of such image degradations is required to find multiscale strategies that improve the prediction performance. As we focus on color-related aspects of image-difference prediction, we leave this to future research.

C. Is Color Important for Judging Conventional Distortions?

To investigate this question, we tested our IDMs with and without the hue- and chroma-based IDFs on conventional distortions from the TID2008. We compared the results with those of the M-SSIM index and two implementations of the SSIM index, one of which (denoted SSIM09) takes the viewing distance into account. Both SSIM implementations show similar performance on the gamut-mapping data, but differ considerably on the TID2008. As only the mean opinion scores (MOS) were available, the Spearman correlation was used as a performance indicator. The results are summarized in Table II. The multiscale IDMs use five scales, just like the M-SSIM index. Note that the parameters c_i were the same as in the gamut-mapping evaluation. Our results allow the following conclusions:
1) Hue- and chroma-based IDFs do not considerably affect the prediction performance of IDMs that use CSF filtering. They have a negative influence on the accuracy if the viewing distance is not taken into account.
2) The performance of the single-scale IDM-CSF1 and IDM-CSF2 is comparable to that of the SSIM index. The S-CIELAB-based IDM-CSF3 performs better; it almost matches the SSIM09 index.
3) In contrast to our results on the gamut-mapping data, all IDMs benefited from the multiscale approach. The M-SSIM index performs better than all proposed multiscale IDMs on the TID2008, even though the underlying concepts are similar.
In conclusion, color information is not essential for judging conventional distortions, nor does it adversely affect the predictions of our single-scale IDMs. Although our multiscale IDMs are inferior to the M-SSIM index on the TID2008, we should keep in mind that our parameters were optimized only on gamut-mapping data.

VI. CONCLUSION

We presented a framework for the assessment of perceived image differences. It normalizes the images to specific viewing conditions with an image-appearance model, extracts image-difference features (IDFs) that are based upon hypotheses about perceptually important distortions, and combines them into an overall image-difference prediction. Particular emphasis was placed on color distortions, especially those resulting from gamut-mapping transformations. We created image-difference measures (IDMs) based on this framework using IDFs adopted from the terms of the SSIM index. They are numerical representations of assumptions about perceptually important achromatic and chromatic distortions.
We tested the framework on gamut-mapping distortions using several datasets. Only the viewing distance was considered in the normalization step, because other viewing-condition parameters were not available. Our main goal was to investigate the impact of chromatic IDFs on the prediction performance. We also tested whether viewing-distance normalization as well as multiscale IDFs adopted from the M-SSIM index significantly affect the prediction of gamut-mapping distortions.
On gamut-mapped images, the achromatic IDFs achieve a prediction performance similar to that of the SSIM index.
Our most important conclusion is that adding a chroma- or hue-based IDF (or both) significantly improves the predictions on the gamut-mapping data. This illustrates the benefit of incorporating color information into image-difference measures. The most accurate IDM proposed in this paper is 10% more accurate than the SSIM index on gamut-mapped images. Furthermore, chromatic IDFs do not adversely affect the prediction performance on conventional distortions, such as noise and blur, from the Tampere Image Database 2008 (TID2008). It should be mentioned that our best hit rate is still far below the maximum achievable hit rate, i.e., there is room for improvement in predicting gamut-mapping distortions.
Our results show the importance of normalizing the input images to a specific viewing distance. The prediction performance with normalization is generally higher than without normalization. This applies to gamut-mapping distortions as well as conventional distortions.
Finally, using lightness-based multiscale IDFs adopted from the M-SSIM index decreases the prediction performance on gamut-mapped images. This is in contrast to our results on conventional distortions. We performed a multiscale analysis of lightness distortions resulting from gamut mapping. Our results show that lightness-based IDFs extracted from different scales exhibit a much higher inter-scale correlation than for conventional distortions. This suggests that a more suitable multiscale approach could further increase the prediction accuracy.
We believe that our most important contributions are the image-difference framework, the chromatic difference features, and the hit-rate-based significance analysis of the prediction performance. These concepts could aid the creation and testing of image-difference measures.
Future research should focus on the creation of an improved image-difference database of gamut-mapped images. The images used in most gamut-mapping experiments exhibit similar distortions, e.g., reduced chroma and almost no change in hue. IDMs trained on such data may underestimate the importance of chroma changes because all images exhibit reduced chroma. For optimal results, a database with highly uncorrelated distortions is required. To test whether further improvements are possible using only low-level image-difference features, both semantic and nonsemantic distortions should be included in such a database.
Implementations of our IDMs are provided as MATLAB code on our website [58]. For the sake of simplicity, these IDMs can be seen as different configurations of a single IDM, which we call the CID measure ("color-image-difference measure"). By default, it uses only a single scale, S-CIELAB as an image-appearance model (at 20 cycles per degree), and the proposed parameters from Table I.

APPENDIX
TRANSFORMING THE SSIM LUMINANCE FUNCTION FROM INTENSITY LINEARITY INTO PERCEPTUAL UNIFORMITY

The SSIM luminance function reads as follows [2]:

l(x, y) = \frac{2 \mu_x \mu_y + c}{\mu_x^2 + \mu_y^2 + c}.   (20)

It depends strongly on the intensity level, i.e., for a constant difference between \mu_x and \mu_y the function value increases with increasing absolute values of \mu_x and \mu_y. This is illustrated in Fig. 9.

Fig. 9. SSIM l-function from (20) for different values of \mu_x and \mu_y (\mu_y = 20, 40, 60, 80, ...). The constant c was set as in [2].

We neglect the parameter c, which was included to stabilize the term if the denominator is close to zero.
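This level dependence, and the behavior of the transformed term derived below, can be checked numerically; a minimal sketch (not the authors' code):

```python
import math

def l_intensity(mu_x, mu_y, c=0.0):
    """SSIM luminance term (20), here evaluated with c = 0."""
    return (2 * mu_x * mu_y + c) / (mu_x ** 2 + mu_y ** 2 + c)

def l_uniform(L_x, L_y, L_max=100.0):
    """Transformed term 1/cosh((L_x - L_y)/L_max) from (26)."""
    return 1.0 / math.cosh((L_x - L_y) / L_max)

# A fixed intensity difference of 10 is penalized less at high levels:
for mu in (20, 40, 60, 80):
    print(mu, round(l_intensity(mu, mu + 10), 4))
# -> 20 0.9231, 40 0.9756, 60 0.9882, 80 0.9931

# The perceptually uniform counterpart depends only on the difference:
print(l_uniform(20, 30) == l_uniform(80, 90))  # -> True
```

This mirrors the point of Fig. 9: in the intensity domain the same difference yields different luminance scores, whereas after the Fechner transform the score is a function of the lightness difference alone.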
Instead of the mean values \mu_x and \mu_y we write I_x and I_y to emphasize that we are in the intensity domain:

l(x, y) = \frac{2 I_x I_y}{I_x^2 + I_y^2}.   (21)

Applying Fechner's law [59] yields a logarithmic relation between intensity I and perceptually uniform lightness L:

L_x = L_{\max} \ln\frac{I_x}{I_{\max}} \quad \text{and} \quad L_y = L_{\max} \ln\frac{I_y}{I_{\max}},   (22)

or, equivalently,

I_x = I_{\max} e^{L_x / L_{\max}} \quad \text{and} \quad I_y = I_{\max} e^{L_y / L_{\max}},   (23)

where I_{\max} and L_{\max} represent the maximum intensity I and lightness L, respectively. Stevens showed that Fechner's law is not quite correct [60]. However, because the difference between Fechner's logarithmic function and Stevens' power function is rather small, we use Fechner's law for the sake of simplicity. Substituting (23) into (21) leads to:

l_L(x, y) = \frac{2 e^{L_x/L_{\max}} e^{L_y/L_{\max}}}{e^{2L_x/L_{\max}} + e^{2L_y/L_{\max}}}.   (24)

With

\Delta L_{xy} = \frac{L_x}{L_{\max}} - \frac{L_y}{L_{\max}},   (25)

(24) reads:

l_L(x, y) = \frac{2 e^{\Delta L_{xy}}}{e^{2\Delta L_{xy}} + 1} = \frac{1}{\cosh(\Delta L_{xy})}.   (26)

If \Delta L_{xy} \ll 1, we can make the following approximation using the first two terms of the corresponding Taylor series:

l_L(x, y) = \frac{1}{\cosh(\Delta L_{xy})} \approx \frac{1}{1 + (\Delta L_{xy})^2 / 2}.   (27)

Thus, in a perceptually uniform color space, the term in (27) corresponds closely to the term in (20) in an intensity-linear color space.

REFERENCES

[1] A. C. Bovik, "New dimensions in visual quality," presented at the Electronic Imaging Conf., 2011. [Online].
[2] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
[3] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in Proc. IEEE 37th Asilomar Conf. Signals, Syst. Comput., vol. 2, Pacific Grove, CA, Nov. 2003, pp. 1398-1402.
[4] H. R. Sheikh, M. F. Sabir, and A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms," IEEE Trans. Image Process., vol. 15, no. 11, pp. 3440-3451, Nov. 2006.
[5] J. Morovič, Color Gamut Mapping.
Chichester, U.K.: Wiley, 2008.
[6] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, 1st ed. San Francisco, CA: Morgan Kaufmann, 2005.
[7] Z. Wang and Q. Li, "Information content weighting for perceptual image quality assessment," IEEE Trans. Image Process., vol. 20, no. 5, pp. 1185-1198, May 2011.
[8] J. Preiss, I. Lissner, P. Urban, M. Scheller Lichtenauer, and P. Zolliker, "The impact of image-difference features on perceived image differences," in Proc. 6th Eur. Conf. Color Graph., Imag., Vis., Amsterdam, The Netherlands, 2012.

[9] M. Scheller Lichtenauer, P. Zolliker, I. Lissner, J. Preiss, and P. Urban, "Learning image similarity measures from choice data," in Proc. 6th Eur. Conf. Color Graph., Imag., Vis., Amsterdam, The Netherlands, 2012.
[10] N. Bonnier, F. Schmitt, H. Brettel, and S. Berche, "Evaluation of spatial gamut mapping algorithms," in Proc. 14th Color Imag. Conf., Scottsdale, AZ, 2006.
[11] P. Zolliker, Z. Barańczuk, and J. Giesen, "Image fusion for optimizing gamut mapping," in Proc. 19th Color Imag. Conf., San Jose, CA, 2011.
[12] N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, "Image quality assessment based on a degradation model," IEEE Trans. Image Process., vol. 9, no. 4, pp. 636-650, Apr. 2000.
[13] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81-84, Mar. 2002.
[14] H. R. Sheikh, A. C. Bovik, and G. de Veciana, "An information fidelity criterion for image quality assessment using natural scene statistics," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2117-2128, Dec. 2005.
[15] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Trans. Image Process., vol. 15, no. 2, pp. 430-444, Feb. 2006.
[16] D. M. Chandler and S. S. Hemami, "VSNR: A wavelet-based visual signal-to-noise ratio for natural images," IEEE Trans. Image Process., vol. 16, no. 9, pp. 2284-2298, Sep. 2007.
[17] J. Hardeberg, E. Bando, and M. Pedersen, "Evaluating colour image difference metrics for gamut-mapped images," Color. Technol., vol. 124, no. 4, 2008.
[18] J. Morovič and P.-L. Sun, "Predicting image differences in color reproduction from their colorimetric correlates," J. Imag. Sci. Technol., vol. 47, no. 6, 2003.
[19] Z. M. Parvez Sazzad, Y. Kawayoke, and Y. Horita, "No reference image quality assessment for JPEG2000 based on spatial features," Signal Process., Image Commun., vol. 23, no. 4, 2008.
[20] M. D. Fairchild, Color Appearance Models, 2nd ed.
Chichester, U.K.: Wiley, 2005.
[21] Kodak Lossless True Color Image Suite. (2012). [Online].
[22] F. W. Campbell, J. J. Kulikowski, and J. Levinson, "The effect of orientation on the visual resolution of gratings," J. Physiol., vol. 187, no. 2, pp. 427-436, 1966.
[23] C. Owsley, R. Sekuler, and D. Siemsen, "Contrast sensitivity throughout adulthood," Vis. Res., vol. 23, no. 7, 1983.
[24] F. L. van Nes and M. A. Bouman, "Spatial modulation transfer in the human eye," J. Opt. Soc. Amer., vol. 57, no. 3, pp. 401-406, 1967.
[25] E. Peli, "Contrast in complex images," J. Opt. Soc. Amer. A, vol. 7, no. 10, pp. 2032-2040, 1990.
[26] D. B. Judd, "Ideal color space: Curvature of color space and its implications for industrial color tolerances," Palette, vol. 29, pp. 25-31, 1968.
[27] G. Cui, M. R. Luo, B. Rigg, G. Roesler, and K. Witt, "Uniform colour spaces based on the DIN99 colour-difference formula," Color Res. Appl., vol. 27, no. 4, 2002.
[28] N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, "The CIECAM02 color appearance model," in Proc. 10th Color Imag. Conf., Scottsdale, AZ, 2002.
[29] P. Urban, D. Schleicher, M. R. Rosen, and R. S. Berns, "Embedding non-Euclidean color spaces into Euclidean color spaces with minimal isometric disagreement," J. Opt. Soc. Amer. A, vol. 24, no. 6, pp. 1516-1528, 2007.
[30] I. Lissner and P. Urban, "Toward a unified color space for perception-based image processing," IEEE Trans. Image Process., vol. 21, no. 3, pp. 1153-1168, Mar. 2012.
[31] I. Lissner, J. Preiss, and P. Urban, "Predicting image differences based on image-difference features," in Proc. 19th Color Imag. Conf., San Jose, CA, 2011.
[32] P. J. Bex and K. Langley, "The perception of suprathreshold contrast and fast adaptive filtering," J. Vis., vol. 7, no. 12, 2007.
[33] G. M. Johnson, X. Song, E. D. Montag, and M. D. Fairchild, "Derivation of a color space for image color difference measurement," Color Res. Appl., vol. 35, no. 6, 2010.
[34] F. Zhang, L. Ma, S. Li, and K. N.
Ngan, "Practical image quality metric applied to image coding," IEEE Trans. Multimedia, vol. 13, no. 4, Aug. 2011.
[35] R. S. Berns, F. W. Billmeyer, K. Ikeda, A. R. Robertson, and K. Witt, "Parametric effects in colour-difference evaluation," Central Bureau of the CIE, Vienna, Austria, Tech. Rep. 101, 1993.
[36] R. G. Kuehni, Color Space and Its Divisions, 1st ed. Hoboken, NJ: Wiley, 2003.
[37] G. E. Legge and J. M. Foley, "Contrast masking in human vision," J. Opt. Soc. Amer., vol. 70, no. 12, pp. 1458-1471, 1980.
[38] X. Song, G. M. Johnson, and M. D. Fairchild, "Minimizing the perception of chromatic noise in digital images," in Proc. 12th Color Imag. Conf., Scottsdale, AZ, 2004.
[39] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti, "TID2008: A database for evaluation of full-reference visual quality assessment metrics," Adv. Modern Radioelectron., vol. 10, pp. 30-45, 2009.
[40] N. Ponomarenko, F. Battisti, K. Egiazarian, J. Astola, and V. Lukin, "Metrics performance comparison for color image database," in Proc. 4th Int. Workshop Video Process. Qual. Metrics Consumer Electron., Scottsdale, AZ, 2009.
[41] P. Zolliker and K. Simon, "Retaining local image information in gamut mapping algorithms," IEEE Trans. Image Process., vol. 16, no. 3, Mar. 2007.
[42] J. Giesen, E. Schuberth, K. Simon, P. Zolliker, and O. Zweifel, "Image-dependent gamut mapping as optimization problem," IEEE Trans. Image Process., vol. 16, no. 10, Oct. 2007.
[43] F. Dugay, I. Farup, and J. Y. Hardeberg, "Perceptual evaluation of color gamut mapping algorithms," Color Res. Appl., vol. 33, no. 6, 2008.
[44] Z. Barańczuk, P. Zolliker, and J. Giesen, "Image-individualized gamut mapping algorithms," J. Imag. Sci. Technol., vol. 54, no. 3, 2010.
[45] P. Zolliker, Z. Barańczuk, I. Sprow, and J. Giesen, "Conjoint analysis for evaluating parameterized gamut mapping algorithms," IEEE Trans. Image Process., vol. 19, no. 3, Mar. 2010.
[46] L. L. Thurstone, "A law of comparative judgment," Psychol. Rev., vol. 34, no. 4, pp. 273-286, 1927.
[47] R. A.
Bradley and M. E. Terry, "Rank analysis of incomplete block designs: I. The method of paired comparisons," Biometrika, vol. 39, nos. 3-4, pp. 324-345, 1952.
[48] L. Brown and X. Li, "Confidence intervals for two sample binomial distribution," J. Stat. Plan. Inference, vol. 130, nos. 1-2, 2005.
[49] D. H. Alman, R. S. Berns, H. Komatsubara, W. Li, M. R. Luo, M. Melgosa, J. H. Nobbs, B. Rigg, A. R. Robertson, and K. Witt, "Improvement to industrial colour-difference evaluation," Central Bureau of the CIE, Vienna, Austria, Tech. Rep. 142, 2001.
[50] P.-C. Hung and R. S. Berns, "Determination of constant hue loci for a CRT gamut and their predictions using color appearance spaces," Color Res. Appl., vol. 20, no. 5, pp. 285-295, 1995.
[51] F. Ebner and M. D. Fairchild, "Development and testing of a color space (IPT) with improved hue uniformity," in Proc. 6th Color Imag. Conf., Scottsdale, AZ, 1998.
[52] A Colour Appearance Model for Colour Management Systems: CIECAM02, Central Bureau of the CIE, Vienna, Austria, Tech. Rep. 159, 2004.
[53] E. Reinhard, E. A. Khan, A. O. Akyüz, and G. M. Johnson, Color Imaging: Fundamentals and Applications. Wellesley, MA: A K Peters, 2008.
[54] G. M. Johnson and M. D. Fairchild, "Darwinism of color image difference models," in Proc. 9th Color Imag. Conf., Scottsdale, AZ, 2001.
[55] X. Zhang and B. A. Wandell, "A spatial extension of CIELAB for digital color image reproduction," in Soc. Inf. Display Symp. Tech. Dig., vol. 27, 1996, pp. 731-734.
[56] MeTriX MuX Visual Quality Assessment Package. (2012). [Online].
[57] K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli, "Two new full-reference quality metrics based on HVS," in Proc. 2nd Int. Workshop Video Process. Qual. Metrics Consumer Electron., Scottsdale, AZ, 2006.
[58] MATLAB Implementation of the Color-Image-Difference (CID) Measure. (2012). [Online].
[59] G. T. Fechner, Elemente der Psychophysik, Erster Theil. Leipzig, Germany: Breitkopf und Härtel, 1860.
[60] S. S.
Stevens, "To honor Fechner and repeal his law," Science, vol. 133, no. 3446, pp. 80-86, 1961.

Ingmar Lissner received the Engineering degree in computer science and engineering from the Hamburg University of Technology, Hamburg, Germany. He is currently pursuing the Ph.D. degree with the Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt, Germany, where he is also a Research Assistant. His current research interests include color perception, uniform color spaces, and image-difference measures for color images.

Matthias Scheller Lichtenauer received the Master of Science degree in computer science from ETH Zurich, Zurich, Switzerland, in 2008, and is currently pursuing the Ph.D. degree with the group of Joachim Giesen, Friedrich-Schiller-University, Jena, Germany. He is currently with the Laboratory of Media Technology, Empa, Dübendorf, Switzerland, where he researches the design and analysis of psychometric measurements.

Jens Preiss received the Diploma degree in physics (equivalent to the M.S. degree) from the University of Freiburg, Freiburg, Germany, in 2010, and is currently pursuing the Doctoral degree in color and imaging science with the Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt, Germany, where he is a Research Assistant.

Peter Zolliker received the Diploma degree in physics from ETH Zurich, Zurich, Switzerland, and the Ph.D. degree in crystallography from the University of Geneva, Geneva, Switzerland, in 1987. He was a Post-Doctoral Fellow with Brookhaven National Laboratory, Upton, NY. He joined Gretag Imaging, Inc., Regensdorf, Switzerland, in 1988. Since 2003, he has been with Empa, Dübendorf, Switzerland, where he has been engaged in research on image quality, psychometrics, color management, and statistical analysis.

Philipp Urban received the M.S. degree in mathematics from the University of Hamburg, Hamburg, Germany, and the Ph.D. degree from the Hamburg University of Technology, Hamburg. He was a Visiting Scientist with the Munsell Color Science Laboratory, Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, from 2006 to 2008. Since 2009, he has been the Head of the Color Research Group, Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt, Germany. His current research interests include spectral-based acquisition, processing, and reproduction of color images, considering the limited metameric and spectral gamut as well as the low dynamic range of output devices.


More information

SVD FILTER BASED MULTISCALE APPROACH FOR IMAGE QUALITY ASSESSMENT. Ashirbani Saha, Gaurav Bhatnagar and Q.M. Jonathan Wu

SVD FILTER BASED MULTISCALE APPROACH FOR IMAGE QUALITY ASSESSMENT. Ashirbani Saha, Gaurav Bhatnagar and Q.M. Jonathan Wu 2012 IEEE International Conference on Multimedia and Expo Workshops SVD FILTER BASED MULTISCALE APPROACH FOR IMAGE QUALITY ASSESSMENT Ashirbani Saha, Gaurav Bhatnagar and Q.M. Jonathan Wu Department of

More information

A Statistical Model of Tristimulus Measurements Within and Between OLED Displays

A Statistical Model of Tristimulus Measurements Within and Between OLED Displays 7 th European Signal Processing Conference (EUSIPCO) A Statistical Model of Tristimulus Measurements Within and Between OLED Displays Matti Raitoharju Department of Automation Science and Engineering Tampere

More information

Color Vision. Spectral Distributions Various Light Sources

Color Vision. Spectral Distributions Various Light Sources Color Vision Light enters the eye Absorbed by cones Transmitted to brain Interpreted to perceive color Foundations of Vision Brian Wandell Spectral Distributions Various Light Sources Cones and Rods Cones:

More information

Application of CIE with Associated CRI-based Colour Rendition Properties

Application of CIE with Associated CRI-based Colour Rendition Properties Application of CIE 13.3-1995 with Associated CRI-based Colour Rendition December 2018 Global Lighting Association 2018 Summary On September 18 th 2015, the Global Lighting Association (GLA) issued a position

More information

SCREEN CONTENT IMAGE QUALITY ASSESSMENT USING EDGE MODEL

SCREEN CONTENT IMAGE QUALITY ASSESSMENT USING EDGE MODEL SCREEN CONTENT IMAGE QUALITY ASSESSMENT USING EDGE MODEL Zhangkai Ni 1, Lin Ma, Huanqiang Zeng 1,, Canhui Cai 1, and Kai-Kuang Ma 3 1 School of Information Science and Engineering, Huaqiao University,

More information

MULTICHANNEL image processing is studied in this

MULTICHANNEL image processing is studied in this 186 IEEE SIGNAL PROCESSING LETTERS, VOL. 6, NO. 7, JULY 1999 Vector Median-Rational Hybrid Filters for Multichannel Image Processing Lazhar Khriji and Moncef Gabbouj, Senior Member, IEEE Abstract In this

More information

IMAGE QUALITY ASSESSMENT BASED ON EDGE

IMAGE QUALITY ASSESSMENT BASED ON EDGE IMAGE QUALITY ASSESSMENT BASED ON EDGE Xuanqin Mou 1, Min Zhang 1, Wufeng Xue 1 and Lei Zhang 1 Institute of Image Processing and Pattern Recognition, Xi'an Jiaotong University, China Department of Computing,

More information

Characterizing and Controlling the. Spectral Output of an HDR Display

Characterizing and Controlling the. Spectral Output of an HDR Display Characterizing and Controlling the Spectral Output of an HDR Display Ana Radonjić, Christopher G. Broussard, and David H. Brainard Department of Psychology, University of Pennsylvania, Philadelphia, PA

More information

Full Reference Image Quality Assessment Based on Saliency Map Analysis

Full Reference Image Quality Assessment Based on Saliency Map Analysis Full Reference Image Quality Assessment Based on Saliency Map Analysis Tong Yubing *, Hubert Konik *, Faouzi Alaya Cheikh ** and Alain Tremeau * * Laboratoire Hubert Crurien UMR 5516, Université Jean Monnet

More information

Using modern colour difference formulae in the graphic arts

Using modern colour difference formulae in the graphic arts Using modern colour difference formulae in the graphic arts Funded project: Evaluating modern colour difference formulae. AiF-Nr.: 14893 N 1 Agenda 1. Graphic arts image assessment 2. Impact of the background

More information

Meet icam: A Next-Generation Color Appearance Model

Meet icam: A Next-Generation Color Appearance Model Meet icam: A Next-Generation Color Appearance Model Why Are We Here? CIC X, 2002 Mark D. Fairchild & Garrett M. Johnson RIT Munsell Color Science Laboratory www.cis.rit.edu/mcsl Spatial, Temporal, & Image

More information

AN IMAGE, before it is displayed to a human, is often

AN IMAGE, before it is displayed to a human, is often IEEE SIGNAL PROCESSING LETTERS, VOL. 23, NO. 1, JANUARY 2016 65 Decision Fusion for Image Quality Assessment using an Optimization Approach Mariusz Oszust Abstract The proliferation of electronic means

More information

Compressive Sensing for Multimedia. Communications in Wireless Sensor Networks

Compressive Sensing for Multimedia. Communications in Wireless Sensor Networks Compressive Sensing for Multimedia 1 Communications in Wireless Sensor Networks Wael Barakat & Rabih Saliba MDDSP Project Final Report Prof. Brian L. Evans May 9, 2008 Abstract Compressive Sensing is an

More information

Edge-directed Image Interpolation Using Color Gradient Information

Edge-directed Image Interpolation Using Color Gradient Information Edge-directed Image Interpolation Using Color Gradient Information Andrey Krylov and Andrey Nasonov Laboratory of Mathematical Methods of Image Processing, Faculty of Computational Mathematics and Cybernetics,

More information

Gamut Mapping through Perceptually-Based Contrast Reduction

Gamut Mapping through Perceptually-Based Contrast Reduction Gamut Mapping through Perceptually-Based Contrast Reduction Syed Waqas Zamir, Javier Vazquez-Corral, and Marcelo Bertalmío Department of Information and Communication Technologies, Universitat Pompeu Fabra,

More information

NO-REFERENCE IMAGE QUALITY ASSESSMENT ALGORITHM FOR CONTRAST-DISTORTED IMAGES BASED ON LOCAL STATISTICS FEATURES

NO-REFERENCE IMAGE QUALITY ASSESSMENT ALGORITHM FOR CONTRAST-DISTORTED IMAGES BASED ON LOCAL STATISTICS FEATURES NO-REFERENCE IMAGE QUALITY ASSESSMENT ALGORITHM FOR CONTRAST-DISTORTED IMAGES BASED ON LOCAL STATISTICS FEATURES Ismail T. Ahmed 1, 2 and Chen Soong Der 3 1 College of Computer Science and Information

More information

MIXDES Methods of 3D Images Quality Assesment

MIXDES Methods of 3D Images Quality Assesment Methods of 3D Images Quality Assesment, Marek Kamiński, Robert Ritter, Rafał Kotas, Paweł Marciniak, Joanna Kupis, Przemysław Sękalski, Andrzej Napieralski LODZ UNIVERSITY OF TECHNOLOGY Faculty of Electrical,

More information

COLOR imaging and reproduction devices are extremely

COLOR imaging and reproduction devices are extremely IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 10, OCTOBER 2004 1319 Indexing of Multidimensional Lookup Tables in Embedded Systems Michael J. Vrhel, Senior Member, IEEE Abstract The proliferation

More information

AN important task of low level video analysis is to extract

AN important task of low level video analysis is to extract 584 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 17, NO. 5, MAY 2007 Spatio Temporal Regularity Flow (SPREF): Its Estimation and Applications Orkun Alatas, Pingkun Yan, Member,

More information

Stimulus Synthesis for Efficient Evaluation and Refinement of Perceptual Image Quality Metrics

Stimulus Synthesis for Efficient Evaluation and Refinement of Perceptual Image Quality Metrics Presented at: IS&T/SPIE s 16th Annual Symposium on Electronic Imaging San Jose, CA, Jan. 18-22, 2004 Published in: Human Vision and Electronic Imaging IX, Proc. SPIE, vol. 5292. c SPIE Stimulus Synthesis

More information

Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction Supplemental Material for ACM Transactions on Graphics 07 paper Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction Michael Waechter, Mate Beljan, Simon Fuhrmann, Nils Moehrle, Johannes

More information

SUBJECTIVE ANALYSIS OF VIDEO QUALITY ON MOBILE DEVICES. Anush K. Moorthy, Lark K. Choi, Gustavo de Veciana and Alan C. Bovik

SUBJECTIVE ANALYSIS OF VIDEO QUALITY ON MOBILE DEVICES. Anush K. Moorthy, Lark K. Choi, Gustavo de Veciana and Alan C. Bovik SUBJECTIVE ANALYSIS OF VIDEO QUALITY ON MOBILE DEVICES Anush K. Moorthy, Lark K. Choi, Gustavo de Veciana and Alan C. Bovik Department of Electrical and Computer Engineering, The University of Texas at

More information

No-reference perceptual quality metric for H.264/AVC encoded video. Maria Paula Queluz

No-reference perceptual quality metric for H.264/AVC encoded video. Maria Paula Queluz No-reference perceptual quality metric for H.264/AVC encoded video Tomás Brandão Maria Paula Queluz IT ISCTE IT IST VPQM 2010, Scottsdale, USA, January 2010 Outline 1. Motivation and proposed work 2. Technical

More information

Encoding Color Difference Signals for High Dynamic Range and Wide Gamut Imagery

Encoding Color Difference Signals for High Dynamic Range and Wide Gamut Imagery Encoding Color Difference Signals for High Dynamic Range and Wide Gamut Imagery Jan Froehlich, Timo Kunkel, Robin Atkins, Jaclyn Pytlarz, Scott Daly, Andreas Schilling, Bernd Eberhardt March 17, 2016 Outline

More information

Spatially Resolved Joint Spectral Gamut Mapping and Separation

Spatially Resolved Joint Spectral Gamut Mapping and Separation Spatially Resolved Joint Spectral Gamut Mapping and Separation Sepideh Samadzadegan and Philipp Urban; Institute of Printing Science and Technology, Technische Universität Darmstadt; Magdalenenstr. 2,

More information

Consistent Colour Appearance: proposed work at the NTNU ColourLab, Gjøvik, Norway

Consistent Colour Appearance: proposed work at the NTNU ColourLab, Gjøvik, Norway Consistent Colour Appearance: proposed work at the NTNU ColourLab, Gjøvik, Norway Gregory High The Norwegian Colour and Visual Computing Laboratory Faculty of Information Technology and Electrical Engineering

More information

Brightness, Lightness, and Specifying Color in High-Dynamic-Range Scenes and Images

Brightness, Lightness, and Specifying Color in High-Dynamic-Range Scenes and Images Brightness, Lightness, and Specifying Color in High-Dynamic-Range Scenes and Images Mark D. Fairchild and Ping-Hsu Chen* Munsell Color Science Laboratory, Chester F. Carlson Center for Imaging Science,

More information

3D Unsharp Masking for Scene Coherent Enhancement Supplemental Material 1: Experimental Validation of the Algorithm

3D Unsharp Masking for Scene Coherent Enhancement Supplemental Material 1: Experimental Validation of the Algorithm 3D Unsharp Masking for Scene Coherent Enhancement Supplemental Material 1: Experimental Validation of the Algorithm Tobias Ritschel Kaleigh Smith Matthias Ihrke Thorsten Grosch Karol Myszkowski Hans-Peter

More information

A Novel Approach for Deblocking JPEG Images

A Novel Approach for Deblocking JPEG Images A Novel Approach for Deblocking JPEG Images Multidimensional DSP Final Report Eric Heinen 5/9/08 Abstract This paper presents a novel approach for deblocking JPEG images. First, original-image pixels are

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 3. HIGH DYNAMIC RANGE Computer Vision 2 Dr. Benjamin Guthier Pixel Value Content of this

More information

Edge-Directed Image Interpolation Using Color Gradient Information

Edge-Directed Image Interpolation Using Color Gradient Information Edge-Directed Image Interpolation Using Color Gradient Information Andrey Krylov and Andrey Nasonov Laboratory of Mathematical Methods of Image Processing, Faculty of Computational Mathematics and Cybernetics,

More information

VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING

VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING Engineering Review Vol. 32, Issue 2, 64-69, 2012. 64 VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING David BARTOVČAK Miroslav VRANKIĆ Abstract: This paper proposes a video denoising algorithm based

More information

Express Letters. A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation. Jianhua Lu and Ming L. Liou

Express Letters. A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation. Jianhua Lu and Ming L. Liou IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 7, NO. 2, APRIL 1997 429 Express Letters A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation Jianhua Lu and

More information

Extensions of One-Dimensional Gray-level Nonlinear Image Processing Filters to Three-Dimensional Color Space

Extensions of One-Dimensional Gray-level Nonlinear Image Processing Filters to Three-Dimensional Color Space Extensions of One-Dimensional Gray-level Nonlinear Image Processing Filters to Three-Dimensional Color Space Orlando HERNANDEZ and Richard KNOWLES Department Electrical and Computer Engineering, The College

More information

DEEP LEARNING OF COMPRESSED SENSING OPERATORS WITH STRUCTURAL SIMILARITY (SSIM) LOSS

DEEP LEARNING OF COMPRESSED SENSING OPERATORS WITH STRUCTURAL SIMILARITY (SSIM) LOSS DEEP LEARNING OF COMPRESSED SENSING OPERATORS WITH STRUCTURAL SIMILARITY (SSIM) LOSS ABSTRACT Compressed sensing (CS) is a signal processing framework for efficiently reconstructing a signal from a small

More information

RANKING CONSISTENT RATE: NEW EVALUATION CRITERION ON PAIRWISE SUBJECTIVE EXPERIMENTS. Yeji Shen, Tingting Jiang

RANKING CONSISTENT RATE: NEW EVALUATION CRITERION ON PAIRWISE SUBJECTIVE EXPERIMENTS. Yeji Shen, Tingting Jiang RANKING CONSISTENT RATE: NEW EVALUATION CRITERION ON PAIRWISE SUBJECTIVE EXPERIMENTS Yeji Shen, Tingting Jiang National Engineering Laboratory for Video Technology Cooperative Medianet Innovation Center

More information

GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS. Andrey Nasonov, and Andrey Krylov

GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS. Andrey Nasonov, and Andrey Krylov GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS Andrey Nasonov, and Andrey Krylov Lomonosov Moscow State University, Moscow, Department of Computational Mathematics and Cybernetics, e-mail: nasonov@cs.msu.ru,

More information

Color Content Based Image Classification

Color Content Based Image Classification Color Content Based Image Classification Szabolcs Sergyán Budapest Tech sergyan.szabolcs@nik.bmf.hu Abstract: In content based image retrieval systems the most efficient and simple searches are the color

More information

Objective Quality Assessment of Screen Content Images by Structure Information

Objective Quality Assessment of Screen Content Images by Structure Information Objective Quality Assessment of Screen Content Images by Structure Information Yuming Fang 1, Jiebin Yan 1, Jiaying Liu 2, Shiqi Wang 3, Qiaohong Li 3, and Zongming Guo 2 1 Jiangxi University of Finance

More information

STUDY ON DISTORTION CONSPICUITY IN STEREOSCOPICALLY VIEWED 3D IMAGES

STUDY ON DISTORTION CONSPICUITY IN STEREOSCOPICALLY VIEWED 3D IMAGES STUDY ON DISTORTION CONSPICUITY IN STEREOSCOPICALLY VIEWED 3D IMAGES Ming-Jun Chen, 1,3, Alan C. Bovik 1,3, Lawrence K. Cormack 2,3 Department of Electrical & Computer Engineering, The University of Texas

More information

Non-Linear Masking based Contrast Enhancement via Illumination Estimation

Non-Linear Masking based Contrast Enhancement via Illumination Estimation https://doi.org/10.2352/issn.2470-1173.2018.13.ipas-389 2018, Society for Imaging Science and Technology Non-Linear Masking based Contrast Enhancement via Illumination Estimation Soonyoung Hong, Minsub

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION

IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION Chiruvella Suresh Assistant professor, Department of Electronics & Communication

More information

A PERCEPTUALLY RELEVANT SHEARLET-BASED ADAPTATION OF THE PSNR

A PERCEPTUALLY RELEVANT SHEARLET-BASED ADAPTATION OF THE PSNR A PERCEPTUALLY RELEVANT SHEARLET-BASED ADAPTATION OF THE PSNR Sebastian Bosse, Mischa Siekmann, Wojciech Samek, Member, IEEE, and Thomas Wiegand,2, Fellow, IEEE Fraunhofer Institute for Telecommunications,

More information

Black generation using lightness scaling

Black generation using lightness scaling Black generation using lightness scaling Tomasz J. Cholewo Software Research, Lexmark International, Inc. 740 New Circle Rd NW, Lexington, KY 40511 e-mail: cholewo@lexmark.com ABSTRACT This paper describes

More information

Information Content Weighting for Perceptual Image Quality Assessment Zhou Wang, Member, IEEE, and Qiang Li, Member, IEEE

Information Content Weighting for Perceptual Image Quality Assessment Zhou Wang, Member, IEEE, and Qiang Li, Member, IEEE IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 20, NO. 5, MAY 2011 1185 Information Content Weighting for Perceptual Image Quality Assessment Zhou Wang, Member, IEEE, and Qiang Li, Member, IEEE Abstract Many

More information

Perceptually Uniform Color Spaces for Color Texture Analysis: An Empirical Evaluation

Perceptually Uniform Color Spaces for Color Texture Analysis: An Empirical Evaluation 932 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 10, NO. 6, JUNE 2001 Perceptually Uniform Color Spaces for Color Texture Analysis: An Empirical Evaluation George Paschos Abstract, a nonuniform color space,

More information

Colour appearance modelling between physical samples and their representation on large liquid crystal display

Colour appearance modelling between physical samples and their representation on large liquid crystal display Colour appearance modelling between physical samples and their representation on large liquid crystal display Chrysiida Kitsara, M Ronnier Luo, Peter A Rhodes and Vien Cheung School of Design, University

More information

SALIENCY WEIGHTED QUALITY ASSESSMENT OF TONE-MAPPED IMAGES

SALIENCY WEIGHTED QUALITY ASSESSMENT OF TONE-MAPPED IMAGES SALIENCY WEIGHTED QUALITY ASSESSMENT OF TONE-MAPPED IMAGES Hamid Reza Nasrinpour Department of Electrical & Computer Engineering University of Manitoba Winnipeg, MB, Canada hamid.nasrinpour@umanitoba.ca

More information

Structural Similarity Optimized Wiener Filter: A Way to Fight Image Noise

Structural Similarity Optimized Wiener Filter: A Way to Fight Image Noise Structural Similarity Optimized Wiener Filter: A Way to Fight Image Noise Mahmud Hasan and Mahmoud R. El-Sakka (B) Department of Computer Science, University of Western Ontario, London, ON, Canada {mhasan62,melsakka}@uwo.ca

More information

Gray-World assumption on perceptual color spaces. Universidad de Guanajuato División de Ingenierías Campus Irapuato-Salamanca

Gray-World assumption on perceptual color spaces. Universidad de Guanajuato División de Ingenierías Campus Irapuato-Salamanca Gray-World assumption on perceptual color spaces Jonathan Cepeda-Negrete jonathancn@laviria.org Raul E. Sanchez-Yanez sanchezy@ugto.mx Universidad de Guanajuato División de Ingenierías Campus Irapuato-Salamanca

More information

MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES. Angela D Angelo, Mauro Barni

MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES. Angela D Angelo, Mauro Barni MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES Angela D Angelo, Mauro Barni Department of Information Engineering University of Siena ABSTRACT In multimedia applications there has

More information

A Similarity Measure for Large Color Differences

A Similarity Measure for Large Color Differences A Similarity Measure for Large Color Differences Nathan Moroney, Ingeborg Tastl and Melanie Gottwals Hewlett-Packard Laboratories, Palo Alto, CA Abstract Hundreds of large color differences, of magnitude

More information

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom

More information

A New Time-Dependent Tone Mapping Model

A New Time-Dependent Tone Mapping Model A New Time-Dependent Tone Mapping Model Alessandro Artusi Christian Faisstnauer Alexander Wilkie Institute of Computer Graphics and Algorithms Vienna University of Technology Abstract In this article we

More information

Comparison of No-Reference Image Quality Assessment Machine Learning-based Algorithms on Compressed Images

Comparison of No-Reference Image Quality Assessment Machine Learning-based Algorithms on Compressed Images Comparison of No-Reference Image Quality Assessment Machine Learning-based Algorithms on Compressed Images Christophe Charrier 1 AbdelHakim Saadane 2 Christine Fernandez-Maloigne 3 1 Université de Caen-Basse

More information

Satellite Image Processing Using Singular Value Decomposition and Discrete Wavelet Transform

Satellite Image Processing Using Singular Value Decomposition and Discrete Wavelet Transform Satellite Image Processing Using Singular Value Decomposition and Discrete Wavelet Transform Kodhinayaki E 1, vinothkumar S 2, Karthikeyan T 3 Department of ECE 1, 2, 3, p.g scholar 1, project coordinator

More information

Patch-Based Color Image Denoising using efficient Pixel-Wise Weighting Techniques

Patch-Based Color Image Denoising using efficient Pixel-Wise Weighting Techniques Patch-Based Color Image Denoising using efficient Pixel-Wise Weighting Techniques Syed Gilani Pasha Assistant Professor, Dept. of ECE, School of Engineering, Central University of Karnataka, Gulbarga,

More information

Automated Control of The Color Rendering Index for LED RGBW Modules in Industrial Lighting

Automated Control of The Color Rendering Index for LED RGBW Modules in Industrial Lighting Automated Control of The Color Rendering Index for LED RGBW Modules in Industrial Lighting Julia L. Suvorova usuvorova2106@gmail.com Sergey Yu. Arapov arapov66@yandex.ru Svetlana P. Arapova arapova66@yandex.ru

More information

An Efficient Saliency Based Lossless Video Compression Based On Block-By-Block Basis Method

An Efficient Saliency Based Lossless Video Compression Based On Block-By-Block Basis Method An Efficient Saliency Based Lossless Video Compression Based On Block-By-Block Basis Method Ms. P.MUTHUSELVI, M.E(CSE), V.P.M.M Engineering College for Women, Krishnankoil, Virudhungar(dt),Tamil Nadu Sukirthanagarajan@gmail.com

More information

Optimization of optical systems for LED spot lights concerning the color uniformity

Optimization of optical systems for LED spot lights concerning the color uniformity Optimization of optical systems for LED spot lights concerning the color uniformity Anne Teupner* a, Krister Bergenek b, Ralph Wirth b, Juan C. Miñano a, Pablo Benítez a a Technical University of Madrid,

More information

Attention modeling for video quality assessment balancing global quality and local quality

Attention modeling for video quality assessment balancing global quality and local quality Downloaded from orbit.dtu.dk on: Jul 02, 2018 Attention modeling for video quality assessment balancing global quality and local quality You, Junyong; Korhonen, Jari; Perkis, Andrew Published in: proceedings

More information

BLIND QUALITY ASSESSMENT OF VIDEOS USING A MODEL OF NATURAL SCENE STATISTICS AND MOTION COHERENCY

BLIND QUALITY ASSESSMENT OF VIDEOS USING A MODEL OF NATURAL SCENE STATISTICS AND MOTION COHERENCY BLIND QUALITY ASSESSMENT OF VIDEOS USING A MODEL OF NATURAL SCENE STATISTICS AND MOTION COHERENCY Michele A. Saad The University of Texas at Austin Department of Electrical and Computer Engineering Alan

More information

Blind Measurement of Blocking Artifact in Images

Blind Measurement of Blocking Artifact in Images The University of Texas at Austin Department of Electrical and Computer Engineering EE 38K: Multidimensional Digital Signal Processing Course Project Final Report Blind Measurement of Blocking Artifact

More information

Low Cost Colour Measurements with Improved Accuracy

Low Cost Colour Measurements with Improved Accuracy Low Cost Colour Measurements with Improved Accuracy Daniel Wiese, Karlheinz Blankenbach Pforzheim University, Engineering Department Electronics and Information Technology Tiefenbronner Str. 65, D-75175

More information

COMPARISONS OF DCT-BASED AND DWT-BASED WATERMARKING TECHNIQUES

COMPARISONS OF DCT-BASED AND DWT-BASED WATERMARKING TECHNIQUES COMPARISONS OF DCT-BASED AND DWT-BASED WATERMARKING TECHNIQUES H. I. Saleh 1, M. E. Elhadedy 2, M. A. Ashour 1, M. A. Aboelsaud 3 1 Radiation Engineering Dept., NCRRT, AEA, Egypt. 2 Reactor Dept., NRC,

More information

CONTENT ADAPTIVE SCREEN IMAGE SCALING

CONTENT ADAPTIVE SCREEN IMAGE SCALING CONTENT ADAPTIVE SCREEN IMAGE SCALING Yao Zhai (*), Qifei Wang, Yan Lu, Shipeng Li University of Science and Technology of China, Hefei, Anhui, 37, China Microsoft Research, Beijing, 8, China ABSTRACT

More information

Image and Video Quality Assessment Using Neural Network and SVM

Image and Video Quality Assessment Using Neural Network and SVM TSINGHUA SCIENCE AND TECHNOLOGY ISSN 1007-0214 18/19 pp112-116 Volume 13, Number 1, February 2008 Image and Video Quality Assessment Using Neural Network and SVM DING Wenrui (), TONG Yubing (), ZHANG Qishan

More information