Artificial intelligence (A.I.) increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This reluctance challenges both the deployment of beneficial A.I. technology and the development of deep-learning systems that depend on humans for oversight, direction, and regulation. Nine studies (N = 3,300) demonstrate that social-cognitive processes guide human interactions with a diverse range of real-world A.I. systems. Across studies, perceived warmth and competence emerge prominently in participants’ impressions of A.I. systems. Judgments of warmth and competence depend systematically on human-A.I. interdependence and A.I. autonomy. In particular, participants perceive systems that optimize interests aligned with human interests as warmer and systems that operate independently of human direction as more competent. Finally, a prisoner’s dilemma game shows that warmth and competence judgments predict participants’ willingness to cooperate with a deep-learning system. These results underscore the generality of intent detection in perceptions of a broad array of algorithmic actors.
Nine studies show that humans think of A.I. in social terms
Modern A.I. systems evoke perceptions of both warmth and competence
People perceive systems that pursue interests aligned with human interests as warmer
People see systems that operate independently from human oversight as more competent
Artificial intelligence; Human-computer interaction; Psychology