Is Open Access

Best Practices for Designing Chatbots in Mental Healthcare – A Case Study on iHelpr


Proceedings of the 32nd International BCS Human Computer Interaction Conference (HCI)

Human Computer Interaction Conference

4 - 6 July 2018

Keywords: Chatbot, Microsoft Bot Framework, Mental healthcare, Screening instruments, Coping mechanisms, E-learning, Chatbot Usability, Chatbot Development, Chatbot Methodology, Ethical considerations


      Abstract

      This paper outlines the design and development of iHelpr, a chatbot for mental healthcare that 1) administers self-assessment instruments/scales and 2) provides wellbeing and self-help guidance and information, all within a conversational interface. Chatbots are becoming more prevalent in our daily lives, with bots available to provide daily weather forecasts, book holidays, and even converse with a virtual therapist. It is predicted that users may soon prefer to complete tasks that are traditionally done through a webpage or mobile application using a conversational interface instead. In the context of mental healthcare, demand exceeds supply, waiting lists are ever growing, and populations in rural communities still struggle to access care. Chatbots can be utilised to improve and broaden access to mental healthcare. When designing chatbots for mental healthcare, there are further considerations, such as managing risk and ethical concerns. Furthermore, usability and the design of conversational flow are important factors when developing chatbots for any domain. This paper outlines best practices and experiences drawn from the development of the iHelpr chatbot.
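A core function described above is administering screening instruments turn by turn inside a conversation. A minimal sketch of that pattern, using the publicly documented GAD-7 items as the example instrument — the dialogue loop and the `ask` callable are illustrative assumptions, not the paper's implementation or the Microsoft Bot Framework API:

```python
# Sketch: administering a screening instrument (GAD-7) one question per
# conversational turn. Item wording and the 0-3 response scoring follow
# the published GAD-7; the loop structure itself is illustrative.

GAD7_ITEMS = [
    "Feeling nervous, anxious, or on edge",
    "Not being able to stop or control worrying",
    "Worrying too much about different things",
    "Trouble relaxing",
    "Being so restless that it is hard to sit still",
    "Becoming easily annoyed or irritable",
    "Feeling afraid as if something awful might happen",
]

# Standard GAD-7 response options mapped to their item scores.
RESPONSES = {
    "not at all": 0,
    "several days": 1,
    "more than half the days": 2,
    "nearly every day": 3,
}

def score_answers(answers):
    """Sum the 0-3 value of each answer; the total ranges 0-21."""
    return sum(RESPONSES[a.strip().lower()] for a in answers)

def administer(ask):
    """Ask one item per turn. `ask` is any callable that takes a prompt
    string and returns the user's reply (e.g. a chat send/receive pair)."""
    answers = [ask("Over the last 2 weeks, how often have you been "
                   "bothered by: " + item + "?") for item in GAD7_ITEMS]
    return score_answers(answers)
```

Because `ask` is just a callable, the same flow can be driven by a real messaging channel in deployment or by a stub in unit tests.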

      Related collections

      Most cited references (13)


      A brief measure for assessing generalized anxiety disorder: the GAD-7.

      Generalized anxiety disorder (GAD) is one of the most common mental disorders; however, there is no brief clinical measure for assessing GAD. The objective of this study was to develop a brief self-report scale to identify probable cases of GAD and evaluate its reliability and validity. A criterion-standard study was performed in 15 primary care clinics in the United States from November 2004 through June 2005. Of a total of 2740 adult patients completing a study questionnaire, 965 patients had a telephone interview with a mental health professional within 1 week. For criterion and construct validity, GAD self-report scale diagnoses were compared with independent diagnoses made by mental health professionals; functional status measures; disability days; and health care use. A 7-item anxiety scale (GAD-7) had good reliability, as well as criterion, construct, factorial, and procedural validity. A cut point was identified that optimized sensitivity (89%) and specificity (82%). Increasing scores on the scale were strongly associated with multiple domains of functional impairment (all 6 Medical Outcomes Study Short-Form General Health Survey scales and disability days). Although GAD and depression symptoms frequently co-occurred, factor analysis confirmed them as distinct dimensions. Moreover, GAD and depression symptoms had differing but independent effects on functional impairment and disability. There was good agreement between self-report and interviewer-administered versions of the scale. The GAD-7 is a valid and efficient tool for screening for GAD and assessing its severity in clinical practice and research.
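The cut point above trades off sensitivity (89%) against specificity (82%). As an illustration of how those two figures are computed from scale scores and criterion diagnoses at a given cutoff — the data in the test are invented; the commonly cited GAD-7 cut point is ≥10:

```python
def sensitivity_specificity(scores, has_disorder, cutoff):
    """Classify each score as positive when score >= cutoff, compare
    against the criterion diagnosis, and return (sensitivity, specificity)."""
    tp = fn = tn = fp = 0
    for score, diagnosed in zip(scores, has_disorder):
        if diagnosed:
            if score >= cutoff:
                tp += 1          # true positive: flagged and diagnosed
            else:
                fn += 1          # false negative: missed case
        else:
            if score >= cutoff:
                fp += 1          # false positive: flagged but healthy
            else:
                tn += 1          # true negative: correctly not flagged
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `cutoff` over the score range and picking the point that best balances the two rates is exactly the optimisation the abstract describes.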

        Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial

        Background: Web-based cognitive-behavioral therapeutic (CBT) apps have demonstrated efficacy but are characterized by poor adherence. Conversational agents may offer a convenient, engaging way of getting support at any time.

        Objective: The objective of the study was to determine the feasibility, acceptability, and preliminary efficacy of a fully automated conversational agent to deliver a self-help program for college students who self-identify as having symptoms of anxiety and depression.

        Methods: In an unblinded trial, 70 individuals age 18-28 years were recruited online from a university community social media site and were randomized to receive either 2 weeks (up to 20 sessions) of self-help content derived from CBT principles in a conversational format with a text-based conversational agent (Woebot) (n=34) or were directed to the National Institute of Mental Health ebook, “Depression in College Students,” as an information-only control group (n=36). All participants completed Web-based versions of the 9-item Patient Health Questionnaire (PHQ-9), the 7-item Generalized Anxiety Disorder scale (GAD-7), and the Positive and Negative Affect Scale at baseline and 2-3 weeks later (T2).

        Results: Participants were on average 22.2 years old (SD 2.33), 67% female (47/70), mostly non-Hispanic (93%, 54/58), and Caucasian (79%, 46/58). Participants in the Woebot group engaged with the conversational agent an average of 12.14 (SD 2.23) times over the study period. No significant differences existed between the groups at baseline, and 83% (58/70) of participants provided data at T2 (17% attrition). Intent-to-treat univariate analysis of covariance revealed a significant group difference on depression such that those in the Woebot group significantly reduced their symptoms of depression over the study period as measured by the PHQ-9 (F=6.47; P=.01) while those in the information control group did not. In an analysis of completers, participants in both groups significantly reduced anxiety as measured by the GAD-7 (F(1,54)=9.24; P=.004). Participants’ comments suggest that process factors were more influential on their acceptability of the program than content factors mirroring traditional therapy.

        Conclusions: Conversational agents appear to be a feasible, engaging, and effective way to deliver CBT.

          The Sleep Condition Indicator: a clinical screening tool to evaluate insomnia disorder

          Objective: Describe the development and psychometric validation of a brief scale (the Sleep Condition Indicator (SCI)) to evaluate insomnia disorder in everyday clinical practice.

          Design: The SCI was evaluated across five study samples. Content validity, internal consistency and concurrent validity were investigated.

          Participants: 30 941 individuals (71% female) completed the SCI along with other descriptive demographic and clinical information.

          Setting: Data acquired on dedicated websites.

          Results: The eight-item SCI (concerns about getting to sleep, remaining asleep, sleep quality, daytime personal functioning, daytime performance, duration of sleep problem, nights per week having a sleep problem and extent troubled by poor sleep) had robust internal consistency (α≥0.86) and showed convergent validity with the Pittsburgh Sleep Quality Index and Insomnia Severity Index. A two-item short-form (SCI-02: nights per week having a sleep problem, extent troubled by poor sleep), derived using linear regression modelling, correlated strongly with the SCI total score (r=0.90).

          Conclusions: The SCI has potential as a clinical screening tool for appraising insomnia symptoms against Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria.

            Author and article information

            Affiliations
            Ulster University

            Belfast, N. Ireland
            Inspire Workplaces

            Belfast, N. Ireland
            Conference
            July 2018
            Pages: 1-5
            DOI: 10.14236/ewic/HCI2018.129
            © Cameron et al. Published by BCS Learning and Development Ltd. Proceedings of British HCI 2018. Belfast, UK.

            This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

            Proceedings of the 32nd International BCS Human Computer Interaction Conference
            HCI
            32
            Belfast, UK
            4 - 6 July 2018
            Electronic Workshops in Computing (eWiC)
            Human Computer Interaction Conference
            Product Information: ISSN 1477-9358, BCS Learning & Development
            Self URI (journal page): https://ewic.bcs.org/
            Categories
            Electronic Workshops in Computing
