Speech interfaces are becoming increasingly common dialogue partners. With the growth of intelligent personal assistants, pervasive and wearable computing, and robot-based technologies, the level of spoken interaction with technology is unprecedented. However, while the technological challenges of producing natural synthetic voices have been widely researched, comparatively little is understood about how speech synthesis affects user experience and behaviour. The CogSIS Project examines the psychological and behavioural consequences of synthesis design decisions in human interactions with speech technology. In particular, we explore how design decisions around politeness, accent, naturalness and expressivity shape the assumptions users make about speech interfaces as communicative actors (i.e. their partner models). The project fuses knowledge, concepts and methods from psycholinguistics, experimental psychology, human-computer interaction and speech technology to understand 1) how synthesis design choices impact users' partner models, 2) how these choices and partner models interact to shape user experience and evaluations, and 3) how these choices impact users' own language production. The project will deliver a set of theory-driven, practical and actionable guidelines for speech synthesis and speech interface design.