Open Access

      Exploring Hate Speech Detection in Multimodal Publications

      Preprint

          Abstract

In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large-scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research.
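
The abstract compares joint text-and-image models against text-only baselines but does not spell out the architectures. As a purely illustrative sketch (not the paper's actual models), the PyTorch snippet below assumes pre-extracted tweet and image embeddings and fuses them by simple concatenation; all dimensions and layer sizes are made-up placeholders.

```python
import torch
import torch.nn as nn


class TextOnlyClassifier(nn.Module):
    """Unimodal baseline: predict hate speech from a text embedding alone."""

    def __init__(self, text_dim=300, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single logit: hate vs. not hate
        )

    def forward(self, text_feat):
        return self.net(text_feat)


class MultimodalClassifier(nn.Module):
    """Joint model: fuse text and image embeddings by concatenation."""

    def __init__(self, text_dim=300, image_dim=2048, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.net(fused)


# Stand-in features, e.g. from an LSTM text encoder and a CNN image backbone.
text_feat = torch.randn(8, 300)
image_feat = torch.randn(8, 2048)
logits = MultimodalClassifier()(text_feat, image_feat)
print(logits.shape)  # torch.Size([8, 1])
```

The unimodal baseline consumes only the text features, while the multimodal variant also sees the image features; that is the comparison the abstract refers to.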

Most cited references (3)

Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter (conference proceedings)

Deep Visual-Semantic Alignments for Generating Image Descriptions (article)

            We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks (RNN) over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions outperform retrieval baselines on both full images and on a new dataset of region-level annotations. Finally, we conduct large-scale analysis of our RNN language model on the Visual Genome dataset of 4.1 million captions and highlight the differences between image and region-level caption statistics.
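
For context on the alignment idea this reference describes, here is a minimal, hypothetical sketch of an image-sentence joint embedding trained with a max-margin ranking loss. It simplifies the reference's region-level alignments and structured objective to whole-image and whole-sentence features; all module names and dimensions are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointEmbedding(nn.Module):
    """Project image and sentence features into a shared embedding space."""

    def __init__(self, image_dim=2048, text_dim=300, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, image_feat, text_feat):
        img = F.normalize(self.image_proj(image_feat), dim=-1)
        txt = F.normalize(self.text_proj(text_feat), dim=-1)
        return img, txt


def ranking_loss(img, txt, margin=0.2):
    """Max-margin ranking loss: a matched image-sentence pair should score
    at least `margin` higher than any mismatched pair in the batch."""
    scores = img @ txt.t()                    # cosine similarities (B x B)
    pos = scores.diag().unsqueeze(1)          # matched pairs on the diagonal
    cost_s = (margin + scores - pos).clamp(min=0)      # image vs. wrong sentence
    cost_i = (margin + scores - pos.t()).clamp(min=0)  # sentence vs. wrong image
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_s.masked_fill(mask, 0).mean() + cost_i.masked_fill(mask, 0).mean()


# One illustrative training step on random stand-in CNN / RNN features.
model = JointEmbedding()
img, txt = model(torch.randn(16, 2048), torch.randn(16, 300))
loss = ranking_loss(img, txt)
loss.backward()
```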

Analyzing Labeled Cyberbullying Incidents on the Instagram Social Network (book chapter)

                Author and article information

Journal: arXiv (preprint server)
Date: 09 October 2019
Type: Article
arXiv ID: 1910.03814
Record ID: 989f9641-aac1-4a9b-85cc-4482d47b3e35
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
arXiv categories: cs.CV, cs.CL
Subjects: Computer vision & Pattern recognition, Theoretical computer science
