


      Of Like Mind: The (Mostly) Similar Mentalizing of Robots and Humans




Mentalizing is the process of inferring others’ mental states, and it contributes to an inferential system known as Theory of Mind (ToM), a system critical to human interaction because it facilitates sense-making and the prediction of future behavior. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency, and are increasingly integrated into contemporary social life, it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.

          Related collections

Most cited references: 73


          On seeing human: a three-factor theory of anthropomorphism.

Anthropomorphism describes the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions. Although surprisingly common, anthropomorphism is not invariant. This article describes a theory to explain when people are likely to anthropomorphize and when they are not, focused on three psychological determinants: the accessibility and applicability of anthropocentric knowledge (elicited agent knowledge), the motivation to explain and understand the behavior of other agents (effectance motivation), and the desire for social contact and affiliation (sociality motivation). This theory predicts that people are more likely to anthropomorphize when anthropocentric knowledge is accessible and applicable, when motivated to be effective social agents, and when lacking a sense of social connection to other humans. These factors help to explain why anthropomorphism is so variable; organize diverse research; and offer testable predictions about dispositional, situational, developmental, and cultural influences on anthropomorphism. Discussion addresses extensions of this theory into the specific psychological processes underlying anthropomorphism, applications of this theory to robotics and human–computer interaction, and the insights offered by this theory into the inverse process of dehumanization.

            Coding In-depth Semistructured Interviews: Problems of Unitization and Intercoder Reliability and Agreement


              Does the chimpanzee have a theory of mind?

An individual has a theory of mind if he imputes mental states to himself and others. A system of inferences of this kind is properly viewed as a theory because such states are not directly observable, and the system can be used to make predictions about the behavior of others. As to the mental states the chimpanzee may infer, consider those inferred by our own species, for example, purpose or intention, as well as knowledge, belief, thinking, doubt, guessing, pretending, liking, and so forth. To determine whether or not the chimpanzee infers states of this kind, we showed an adult chimpanzee a series of videotaped scenes of a human actor struggling with a variety of problems. Some problems were simple, involving inaccessible food – bananas vertically or horizontally out of reach, behind a box, and so forth – as in the original Köhler problems; others were more complex, involving an actor unable to extricate himself from a locked cage, shivering because of a malfunctioning heater, or unable to play a phonograph because it was unplugged. With each videotape the chimpanzee was given several photographs, one a solution to the problem, such as a stick for the inaccessible bananas, a key for the locked-up actor, a lit wick for the malfunctioning heater. The chimpanzee's consistent choice of the correct photographs can be understood by assuming that the animal recognized the videotape as representing a problem, understood the actor's purpose, and chose alternatives compatible with that purpose.

                Author and article information

                Technology, Mind, and Behavior
                American Psychological Association
                January 28, 2021
                Volume: 1
                Issue: 2
                [1]College of Media & Communication, Texas Tech University
                Author notes
                Action Editor: Danielle S. McNamara.
                The author has no conflicts of interest to report. This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0006. The author gratefully acknowledges the assistance of the following people for their support in the production of this article: Ambrosia Luzius (lab data collection), Zack Hlatky (lab data collection and data coding), Cloris Chen (lab data collection and data coding), Madison Wedge (lab data collection and survey video production), Shianne Ferrell (lab data collection and survey video production), Nicholas David Bowman (robot mechanics and analysis consulting), and Zachary Stiffler (robot animation). A portion of this work was conducted at West Virginia University, Department of Communication Studies, and the author thanks WVU for its support. Study materials (including stimuli, data, and analysis documentation) are available at: https://osf.io/9r73y
                Disclaimer: Interactive content is included in the online version of this article.
                Open Science Disclosures:

                The data are available at https://osf.io/9r73y

                The experiment materials are available at https://osf.io/9r73y

                [*] Dr. Jaime Banks, Texas Tech University, Box 43082, Lubbock, TX 79409, USA j.banks@ttu.edu
                Author information
                © 2020 The Author(s)

                This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND). This license permits copying and redistributing the work in any medium or format for noncommercial use provided the original authors and source are credited and a link to the license is included in attribution. No derivative works are permitted under this license.

                Self URI (journal-page): https://tmb.apaopen.org/

                Education, Psychology, Vocational technology, Engineering, Clinical Psychology & Psychiatry
                heuristics, human–machine communication, mentalizing, social presence, social robots

