      Acoustic and Text Features Analysis for Adult ADHD Screening: A Data-Driven Approach Utilizing DIVA Interview

      research-article


          Abstract

Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder, most commonly identified in childhood, that affects social development and communication patterns. It often continues undiagnosed into adulthood, partly because of a global shortage of psychiatrists, and the resulting delays in diagnosis have lasting consequences for individuals' well-being and for society. Recently, machine learning methodologies have been incorporated into healthcare systems to facilitate diagnosis and to improve the prediction of treatment outcomes for mental health conditions. Previous research on ADHD detection has focused on functional magnetic resonance imaging (fMRI) or electroencephalography (EEG) signals, which require costly equipment and trained personnel for data collection. In recent years, the speech and text modalities have attracted increasing attention because their collection is cost-effective and requires no wearable sensors. In this research, conducted in collaboration with the Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, we gathered audio data from both ADHD patients and healthy controls using the clinically established Diagnostic Interview for ADHD in adults (DIVA). We then transcribed the speech into text with the Google Cloud Speech API. From both modalities we extracted features, encompassing traditional acoustic features (e.g., MFCCs), specialized feature sets (e.g., eGeMAPS), and deep-learned linguistic and semantic features derived from pre-trained deep learning models. These features were used in conjunction with a support vector machine for ADHD classification, yielding promising results for effective adult ADHD screening from audio and text data. Clinical impact: This research introduces a transformative approach to ADHD diagnosis, employing speech and text analysis to enable earlier and more accessible detection, particularly beneficial in areas with limited psychiatric resources. Clinical and Translational Impact Statement: The successful application of machine learning techniques to audio and text data for ADHD screening represents a significant advance in mental health diagnostics, paving the way for integration into clinical settings and potentially improving patient outcomes on a broader scale.
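As a minimal sketch of the acoustic branch of the pipeline the abstract describes, the Python snippet below pools per-recording MFCC statistics and trains a support vector machine. The library choices (librosa, scikit-learn), the 16 kHz sample rate, the mean/std pooling, and the file names and labels are illustrative assumptions, not the authors' documented configuration.

import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(wav_path, n_mfcc=13):
    # Load one interview recording and summarize its frame-level MFCCs
    # into a fixed-length vector by mean/std pooling over time.
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical placeholder recordings and labels (1 = ADHD, 0 = control).
wav_paths = ["participant_01.wav", "participant_02.wav"]
labels = np.array([1, 0])

X = np.stack([mfcc_features(p) for p in wav_paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print(clf.predict(X))

Because the classifier only sees fixed-length vectors, the same interface accepts eGeMAPS vectors or text embeddings from a pre-trained model in place of the MFCC statistics, which allows each feature set to be compared under identical conditions.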


Most cited references (67)


          BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

          We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
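Pre-trained models of this kind supply the deep-learned linguistic and semantic features mentioned in the ADHD abstract above. A minimal sketch of one common approach, mean-pooling the last hidden layer of a Hugging Face BERT checkpoint into one vector per transcript, follows; the checkpoint name and the pooling choice are assumptions rather than the paper's documented setup.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def bert_embedding(text):
    # Tokenize one transcribed interview response and mean-pool the final
    # hidden states into a single 768-dimensional feature vector.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0)

vec = bert_embedding("example transcribed response from the DIVA interview")
print(vec.shape)  # torch.Size([768])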

            The World Health Organization adult ADHD self-report scale (ASRS): a short screening scale for use in the general population


              Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences


                Author and article information

Journal
IEEE Journal of Translational Engineering in Health and Medicine (IEEE J Transl Eng Health Med)
Publisher: IEEE
eISSN: 2168-2372
Published: 26 February 2024
Volume 12: 359-370
Affiliations
[1] Intelligent Sensing and Communications Group, School of Engineering, Newcastle University, NE1 7RU Newcastle Upon Tyne, U.K.
[2] Adult ADHD Services, Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, NE3 3XT Newcastle Upon Tyne, U.K.
Article
Manuscript: JTEHM-00161-2023
DOI: 10.1109/JTEHM.2024.3369764
PMCID: PMC11008805
PMID: 38606391
                © 2024 The Authors

                This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/

History
Received: 18 October 2023
Revised: 09 January 2024
Accepted: 15 February 2024
Current version: 14 March 2024
                Page count
                Figures: 4, Tables: 7, Equations: 16, References: 68, Pages: 12
                Categories
                Article

Keywords: adult ADHD, speech modality, text modality, feature study, machine learning
