A cross-media public sentiment analysis system for microblog

Multimedia Systems
Springer Nature

Most cited references (11)


Thumbs up?

Large-scale visual sentiment ontology and detectors using adjective noun pairs


Visual-textual joint relevance learning for tag-based social image search.

Due to the popularity of social media websites, extensive research efforts have been dedicated to tag-based social image search. Both visual information and tags have been investigated in this research field. However, most existing methods use tags and visual characteristics either separately or sequentially in order to estimate the relevance of images. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user-tagged images. The relevance estimation is determined with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is achieved with the use of a set of pseudo-positive images, where the weights of hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words can be automatically modulated. Comparative results of the experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
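
As a rough illustration of the hypergraph learning step described in that abstract, the sketch below estimates image relevance scores from an incidence matrix whose vertices are images and whose hyperedges are visual or textual terms. It is a minimal NumPy reconstruction under stated assumptions: the function name hypergraph_relevance, the trade-off parameter mu, and the specific closed-form solve are illustrative choices, not details taken from the paper, and the alternating hyperedge-weight update mentioned in the abstract is omitted.

import numpy as np

# Illustrative sketch of hypergraph-based relevance estimation (not the paper's exact formulation).
# Vertices are images; hyperedges are visual words or tags; H is the binary incidence matrix.
def hypergraph_relevance(H, w, y, mu=1.0):
    """Estimate relevance scores for all images.

    H  : (n_images, n_edges) binary incidence matrix (image i belongs to hyperedge j)
    w  : (n_edges,) hyperedge weights
    y  : (n_images,) pseudo-positive indicator vector (1 for pseudo-positive images)
    mu : trade-off between hypergraph smoothness and fitting y
    """
    d_v = H @ w                      # vertex degrees
    d_e = H.sum(axis=0)              # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d_v, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(d_e, 1e-12))
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    n = H.shape[0]
    # Minimiser of f^T (I - Theta) f + mu * ||f - y||^2 (hyperedge-weight updates omitted here)
    return np.linalg.solve(np.eye(n) - Theta / (1.0 + mu), (mu / (1.0 + mu)) * y)

# Hypothetical usage: 4 images, 3 hyperedges (e.g. two tags and one visual word), image 0 pseudo-positive
H = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
scores = hypergraph_relevance(H, w=np.ones(3), y=np.array([1.0, 0.0, 0.0, 0.0]))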

Author and article information

Journal: Multimedia Systems
Publisher: Springer Nature
ISSN (print): 0942-4962
ISSN (electronic): 1432-1882
Issue date: July 2016
Published online: August 2014
Volume: 22
Issue: 4
Pages: 479-486
DOI: 10.1007/s00530-014-0407-8
ScienceOpen record ID: 6b4d521d-a79a-421a-bf34-dcf86bef8940
Copyright: © 2016
