
      Humans perceive warmth and competence in artificial intelligence

      research-article


          Summary

          Artificial intelligence (A.I.) increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This challenges both the deployment of beneficial A.I. technology and the development of deep-learning systems that depend on humans for oversight, direction, and regulation. Nine studies (N = 3,300) demonstrate that social-cognitive processes guide human interactions across a diverse range of real-world A.I. systems. Across studies, perceived warmth and competence emerge prominently in participants’ impressions of A.I. systems. Judgments of warmth and competence systematically depend on human-A.I. interdependence and autonomy. In particular, participants perceive systems that optimize interests aligned with human interests as warmer and systems that operate independently from human direction as more competent. Finally, a prisoner’s dilemma game shows that warmth and competence judgments predict participants’ willingness to cooperate with a deep-learning system. These results underscore the generality of intent detection to perceptions of a broad array of algorithmic actors.
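          The prisoner’s dilemma mentioned above follows the standard two-player payoff structure. As a minimal illustration (the payoff values below are the textbook defaults satisfying T > R > P > S, not necessarily those used in the paper, and the function name is illustrative), a one-shot round between a human and an A.I. co-player can be sketched as:

```python
# Sketch of a one-shot prisoner's dilemma of the kind used to
# measure willingness to cooperate with a deep-learning co-player.
# Payoff values are the canonical defaults, not taken from the study.

PAYOFFS = {
    # (human_move, ai_move): (human_payoff, ai_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation (reward R)
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff S vs. temptation T
    ("defect",    "cooperate"): (5, 0),  # temptation T vs. sucker's payoff S
    ("defect",    "defect"):    (1, 1),  # mutual defection (punishment P)
}

def play_round(human_move: str, ai_move: str) -> tuple[int, int]:
    """Return (human_payoff, ai_payoff) for one round."""
    return PAYOFFS[(human_move, ai_move)]
```

          In this structure, defection dominates individually while mutual cooperation pays more than mutual defection, which is what makes a participant’s choice to cooperate a meaningful behavioral measure of trust in the A.I. co-player.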

          Graphical abstract

          Highlights

          • Nine studies show that humans think of A.I. in social terms

          • Modern A.I. systems evoke perceptions of both warmth and competence

          • People perceive systems that pursue interests aligned with human interests as warmer

          • People see systems that operate independently from human oversight as more competent

          Keywords

          Artificial intelligence; Human-computer interaction; Psychology


                Author and article information

                Journal: iScience
                Publisher: Elsevier
                ISSN: 2589-0042
                Published online: 04 July 2023
                Issue date: 18 August 2023
                Volume 26, Issue 8, Article 107256
                Affiliations
                [1 ]DeepMind, N1C 4DN London, UK
                [2 ]Department of Psychology, Princeton University, Princeton, NJ 08540, USA
                [3 ]School of Public and International Affairs, Princeton University, Princeton, NJ 08540, USA
                Author notes
                Corresponding author: kevinrmckee@deepmind.com

                Lead contact

                Article
                PII: S2589-0042(23)01333-0
                DOI: 10.1016/j.isci.2023.107256
                PMCID: PMC10371826
                PMID: 37520710
                © 2023 The Author(s)

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

                History
                Received: 13 December 2022
                Revised: 4 May 2023
                Accepted: 27 June 2023
                Categories
                Article

