
VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment


Most cited references (35)


FSIM: a feature similarity index for image quality assessment.

Image quality assessment (IQA) aims to use computational models to measure image quality consistently with subjective evaluations. The well-known structural similarity index moved IQA from the pixel-based to the structure-based stage. In this paper, a novel feature similarity (FSIM) index for full-reference IQA is proposed, based on the fact that the human visual system (HVS) understands an image mainly according to its low-level features. Specifically, phase congruency (PC), a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while contrast information does affect the HVS's perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing local image quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments on six benchmark IQA databases demonstrate that FSIM achieves much higher consistency with subjective evaluations than state-of-the-art IQA metrics.
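The FSIM construction summarized above (a pointwise similarity between two feature maps, pooled with a perceptual weighting function) can be sketched in pure Python. This is an illustrative toy, not the authors' implementation: real FSIM uses phase congruency as the primary feature and as the pooling weight, which is approximated here by the gradient magnitude; the constant `C` and the function names `gradient_magnitude` and `fsim_like` are assumptions for this sketch.

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D list of floats."""
    h, w = len(img), len(img[0])
    gm = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
            gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
            gm[y][x] = (gx * gx + gy * gy) ** 0.5
    return gm

def fsim_like(ref, dist, C=0.01):
    """FSIM-style score: local similarity S = (2ab + C) / (a^2 + b^2 + C)
    between feature maps a, b, pooled with a perceptual weight (real FSIM
    weights by phase congruency; here we weight by max gradient magnitude)."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    num = den = 0.0
    for row1, row2 in zip(g1, g2):
        for a, b in zip(row1, row2):
            s = (2 * a * b + C) / (a * a + b * b + C)  # in (0, 1], equals 1 iff a == b
            w = max(a, b)                              # stand-in for the PC weight
            num += s * w
            den += w
    return num / den if den else 1.0
```

An identical pair of images yields a score of 1.0, and any distortion pushes the pooled score below 1, mirroring the behavior of the full-reference index described in the abstract.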

State-of-the-art in visual attention modeling.

Modeling visual attention, particularly stimulus-driven, saliency-based attention, has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for the qualitative comparison of attention models. Furthermore, we address several challenging issues with these models, including the biological plausibility of the computations, correlation with eye-movement datasets, the dissociation of bottom-up and top-down factors, and the construction of meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future research.

Mean squared error: Love it or leave it? A new look at signal fidelity measures.


Author and article information

Journal: IEEE Transactions on Image Processing (IEEE Trans. on Image Process.)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1057-7149 (print); 1941-0042 (electronic)
Publication date: October 2014
Volume: 23, Issue: 10, Pages: 4270-4281
DOI: 10.1109/TIP.2014.2346028
PMID: 25122572
© 2014