Chatbots are increasingly popular, but state-of-the-art chatbots still struggle to meet user expectations, limiting their application in many domains. The factors affecting chatbot use have been studied extensively in laboratory contexts, resulting in context-independent requirements. However, user expectations and experiences with chat interfaces are shaped by the context of use. Research efforts measuring experiences with chat interfaces therefore need to shift from studies in controlled laboratory settings to studies in real-life settings across various domains. This paper explores this field of study by reporting on a small-scale real-life case study of the gap between expectations and experiences with an educational chatbot. More case studies in the wild, such as this one, could contribute to a deeper understanding of the factors affecting acceptance and real-world use. We propose applying the CIMO logic across these studies to build upon previous results.