

      Is Open Access

      Do I sound American?: How message attributes of Internet Research Agency (IRA) disinformation relate to Twitter engagement


          Abstract

          Ongoing research into how states coordinate foreign disinformation campaigns has raised concerns over social media’s influence on democracies. One example is the spread of Russian disinformation during the 2016 US presidential election. Twitter accounts run by Russia’s Internet Research Agency (IRA) are known to have delivered messages with strategic intent and political goals. We use publicly available IRA Twitter data created during and after the 2016 US election campaign (2016 and 2017) to examine the strategic message features of foreign-sponsored online disinformation and how they relate to social media sharing. We use computational approaches to identify syntactic features that distinguish IRA disinformation tweets from American Twitter corpora, reflecting their functional and situational differences. More importantly, we examine which message features of IRA tweets, across syntax, topic, and sentiment, were associated with more sharing (retweets). Implications are discussed.
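          The abstract describes computationally comparing syntactic features of IRA tweets against American Twitter corpora. As a rough illustration of that kind of corpus comparison only — the category names and word lists below are hypothetical placeholders, not the paper's actual feature set — one might compute per-corpus rates of function-word categories:

          ```python
          import re
          from collections import Counter

          # Hypothetical function-word categories for illustration; the study's
          # syntactic features come from its own computational analysis.
          CATEGORIES = {
              "articles": {"a", "an", "the"},
              "first_person": {"i", "me", "my", "we", "our"},
          }

          def category_rates(tweets):
              """Rate of each word category per 100 tokens across a corpus."""
              tokens = [t for tw in tweets for t in re.findall(r"[a-z']+", tw.lower())]
              counts = Counter(tokens)
              total = sum(counts.values()) or 1
              return {cat: 100 * sum(counts[w] for w in words) / total
                      for cat, words in CATEGORIES.items()}

          # Toy corpora: compare rates between two sets of tweets.
          ira_like = ["the election is rigged", "we must act now"]
          baseline = ["i love my dog", "my day was great"]
          print(category_rates(ira_like))
          print(category_rates(baseline))
          ```

          Differences in such rates between corpora are what a syntactic comparison of this kind would surface; the actual study used richer features across syntax, topic, and sentiment.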


          Most cited references: 94


          SMOTE: Synthetic Minority Over-sampling Technique

          An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
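            The synthetic over-sampling step described above can be sketched as follows. This is a simplified illustration, not SMOTE proper: it interpolates toward a random other minority point, whereas SMOTE interpolates toward one of the point's k nearest minority-class neighbors.

            ```python
            import random

            def smote_like(minority, n_synthetic, seed=0):
                """Generate synthetic minority examples by linear interpolation
                between a sampled minority point and another random minority
                point (simplified stand-in for SMOTE's k-NN neighbor choice)."""
                rng = random.Random(seed)
                synthetic = []
                for _ in range(n_synthetic):
                    a = rng.choice(minority)
                    b = rng.choice([p for p in minority if p is not a])
                    gap = rng.random()  # position along the segment from a to b
                    synthetic.append(tuple(ai + gap * (bi - ai)
                                           for ai, bi in zip(a, b)))
                return synthetic

            minority = [(1.0, 2.0), (2.0, 3.0), (3.0, 1.0)]
            new_points = smote_like(minority, 4)
            ```

            Because each synthetic point lies on a segment between two real minority points, the new examples stay inside the minority class's region of feature space rather than duplicating existing points.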

            Lying words: predicting deception from linguistic styles.

            Telling lies often requires creating a story about an experience or attitude that does not exist. As a result, false stories may be qualitatively different from true stories. The current project investigated the features of linguistic style that distinguish between true and false stories. In an analysis of five independent samples, a computer-based text analysis program correctly classified liars and truth-tellers at a rate of 67% when the topic was constant and a rate of 61% overall. Compared to truth-tellers, liars showed lower cognitive complexity, used fewer self-references and other-references, and used more negative emotion words.
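              The linguistic-style cues mentioned above (self-references, negative emotion words) can be illustrated with a minimal feature extractor. This is a hedged sketch with tiny, hypothetical word lists — the study itself used a computer-based text analysis program with far larger category dictionaries.

              ```python
              import re

              # Illustrative word lists only; not the dictionaries used in the study.
              SELF_REFS = {"i", "me", "my", "mine", "myself"}
              NEG_EMOTION = {"hate", "awful", "sad", "angry", "terrible"}

              def style_features(text):
                  """Per-token rates of self-references and negative emotion words."""
                  tokens = re.findall(r"[a-z']+", text.lower())
                  n = len(tokens) or 1
                  return {
                      "self_ref_rate": sum(t in SELF_REFS for t in tokens) / n,
                      "neg_emotion_rate": sum(t in NEG_EMOTION for t in tokens) / n,
                  }

              print(style_features("I hate that my story sounds awful"))
              ```

              Feeding such per-text rates into a classifier is the general shape of the approach; the reported 61–67% accuracy came from the study's own feature set and samples.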

              Supervised Learning of Universal Sentence Representations from Natural Language Inference Data


                Author and article information

                Contributors
                Journal
                CCR
                Computational Communication Research
                Amsterdam University Press (Amsterdam)
                2665-9085
                October 2022
                Volume 4, Issue 2: 590-628
                Affiliations
                University of Connecticut
                The University of Texas at Austin
                University of Wisconsin-Madison
                University of Wisconsin-Madison
                Cornell University
                Amazon Alexa AI
                Curai
                Article
                CCR2022.2.008.SUK
                10.5117/CCR2022.2.008.SUK
                6dbf53ab-cc45-4061-9ff1-864585992ca9
                © Jiyoun Suk, Josephine Lukito, Min-Hsin Su, Sang Jung Kim, Chau Tong, Zhongkai Sun & Prathusha Sarma

                This is an open access article distributed under the terms of the CC BY-NC 4.0 license. http://creativecommons.org/licenses/by-nc/4.0

                Categories
                Article

                Twitter, Internet Research Agency, Disinformation, computational social science, corpus linguistics
