
      Characterizing the role of bots’ in polarized stance on social media

Research article · Social Network Analysis and Mining · Springer Vienna
Keywords: Stance, Bots, Social media


          Abstract

There is rising concern about social bots that imitate humans and manipulate opinions on social media. Current studies assessing the overall effect of bots on social media users mainly focus on evaluating how bots diffuse discussions across social networks. Yet, these studies do not establish the relationship between bots and users’ stances. This study fills that gap by analyzing whether bots are among the signals that shaped social media users’ stances towards controversial topics. We analyze the online interactions that are predictive of users’ stances and identify the bots within these interactions. We applied our analysis to a dataset of more than 4000 Twitter users who expressed a stance on seven different topics, examining those users’ direct interactions with, and indirect exposure to, more than 19 million accounts. We identify the bot accounts associated with supporting and opposing stances, and compare them to other types of accounts, such as those of influential and famous users. Our analysis shows that bot interactions with users who held specific stances were minimal compared to those of influential accounts. Nevertheless, we found that the presence of bots was still connected to users’ stances, especially in an indirect manner: users are exposed to the content of the bots they follow, rather than interacting with them directly by retweeting, mentioning, or replying.
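The direct-versus-indirect distinction in the abstract can be illustrated with a toy tally. This is a minimal sketch with invented accounts, field names, and a hypothetical bot list — not the authors' pipeline:

```python
from collections import Counter

# Hypothetical interaction log: (user, account, interaction_type).
# "retweet"/"mention"/"reply" are direct interactions; "follow" stands in
# for indirect exposure to an account's content via the home timeline.
interactions = [
    ("u1", "botA", "follow"),
    ("u1", "newsX", "retweet"),
    ("u2", "botA", "follow"),
    ("u2", "botB", "reply"),
    ("u3", "celebY", "mention"),
]
bots = {"botA", "botB"}  # assumed output of a separate bot-detection step

DIRECT = {"retweet", "mention", "reply"}

def bot_exposure(interactions, bots):
    """Count direct interactions with bots vs indirect exposure (follows)."""
    counts = Counter()
    for user, account, kind in interactions:
        if account in bots:
            counts["direct" if kind in DIRECT else "indirect"] += 1
    return counts

print(bot_exposure(interactions, bots))  # Counter({'indirect': 2, 'direct': 1})
```

In this toy log, indirect exposure to bots (follows) outnumbers direct interaction with them, mirroring the pattern the abstract reports.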

Most cited references (19)

          When Corrections Fail: The Persistence of Political Misperceptions


            Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate

Objectives. To understand how Twitter bots and trolls (“bots”) promote online health content. Methods. We compared bots’ to average users’ rates of vaccine-relevant messages, which we collected online from July 2014 through September 2017. We estimated the likelihood that users were bots, comparing proportions of polarized and antivaccine tweets across user types. We conducted a content analysis of a Twitter hashtag associated with Russian troll activity. Results. Compared with average users, Russian trolls (χ²(1) = 102.0; P < .001), sophisticated bots (χ²(1) = 28.6; P < .001), and “content polluters” (χ²(1) = 7.0; P < .001) tweeted about vaccination at higher rates. Whereas content polluters posted more antivaccine content (χ²(1) = 11.18; P < .001), Russian trolls amplified both sides. Unidentifiable accounts were more polarized (χ²(1) = 12.1; P < .001) and antivaccine (χ²(1) = 35.9; P < .001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive. Conclusions. Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination. Public Health Implications. Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content.
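The comparisons above are Pearson chi-square tests (1 df) on 2×2 contingency tables of tweet counts across user types. A minimal sketch with invented counts (not the paper's data) shows the shape of the computation:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    # Expected counts under independence, from the row/column marginals
    expected = [
        ((a + b) * (a + c)) / n, ((a + b) * (b + d)) / n,
        ((c + d) * (a + c)) / n, ((c + d) * (b + d)) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented counts: rows are user types (e.g. trolls vs average users),
# columns are vaccine-related vs other tweets.
stat = chi_square_2x2(120, 880, 60, 940)
print(round(stat, 2))  # 21.98
```

A statistic this large is far beyond the 1-df critical value of 3.84 at P = .05, which is why the abstract's much larger values all come with P < .001.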

              The spread of low-credibility content by social bots

              The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.

                Author and article information

                Contributors
                a.aldayel@ed.ac.uk
                w.magdy@inf.ed.ac.uk
                Journal
Soc Netw Anal Min
                Social Network Analysis and Mining
Springer Vienna (Vienna)
                1869-5450
                1869-5469
4 February 2022
2022
Volume 12, Issue 1, Article 30
                Affiliations
School of Informatics, University of Edinburgh, Edinburgh, UK (GRID grid.4305.2; ISNI 0000 0004 1936 7988)
                Article
                858
DOI: 10.1007/s13278-022-00858-z
                8814794
                a9d3d170-95f6-431b-82ef-57ca7d68c3a8
                © The Author(s) 2022

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

History: 16 May 2021; 6 January 2022; 8 January 2022
                Categories
                Original Article
                Custom metadata
                © Springer-Verlag GmbH Austria, part of Springer Nature 2022

Keywords: stance, bots, social media
