
      Using Machine Learning in Psychiatry: The Need to Establish a Framework That Nurtures Trustworthiness

      Schizophrenia Bulletin
      Oxford University Press (OUP)


          Abstract

          The rapid embracing of artificial intelligence in psychiatry has a flavor of being the current “wild west”: a multidisciplinary approach that is very technical and complex, yet seems to produce findings that resonate. These studies are hard to review, as the methods are often opaque and it is tricky to find a suitable combination of reviewers. This issue will only become more complex in the absence of a rigorous framework for evaluating such studies and thus nurturing trustworthiness. Our paper therefore discusses the urgent need for the field to develop a framework with which to evaluate this complex methodology, so that the process is done honestly, fairly, scientifically, and accurately. However, evaluation is a complicated process, so we focus on three issues, namely explainability, transparency, and generalizability, that are critical for establishing the viability of using artificial intelligence in psychiatry. We discuss how defining these three issues helps toward building a framework to ensure trustworthiness, but also show how difficult definition can be, as the terms carry different meanings in medicine, computer science, and law. We conclude that it is important to start this discussion now, so that a call for policy can follow and the community takes extra care when reviewing clinical applications of such models.

          Related collections

          Most cited references (6)


          “Why Should I Trust You?”: Explaining the Predictions of Any Classifier

            Is Open Access

            A machine learning approach to predicting psychosis using semantic density and latent content analysis

            Subtle features in people’s everyday language may harbor the signs of future mental illness. Machine learning offers an approach for the rapid and accurate extraction of these signs. Here we investigate two potential linguistic indicators of psychosis in 40 participants of the North American Prodrome Longitudinal Study. We demonstrate how the linguistic marker of semantic density can be obtained using the mathematical method of vector unpacking, a technique that decomposes the meaning of a sentence into its core ideas. We also demonstrate how the latent semantic content of an individual’s speech can be extracted by contrasting it with the contents of conversations generated on social media, here 30,000 contributors to Reddit. The results revealed that conversion to psychosis is signaled by low semantic density and talk about voices and sounds. When combined, these two variables were able to predict the conversion with 93% accuracy in the training and 90% accuracy in the holdout datasets. The results point to a larger project in which automated analyses of language are used to forecast a broad range of mental disorders well in advance of their emergence.
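The paper's vector-unpacking method is not reproduced here, but the intuition behind semantic density, that repetitive speech spans few distinct "core ideas" per spoken word, can be sketched with a hypothetical SVD-based proxy (function name, threshold, and scoring are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def semantic_density(word_vectors, var_threshold=0.9):
    """Illustrative proxy (NOT the paper's vector unpacking): count the
    orthogonal 'core idea' directions needed to capture var_threshold of
    the variance in a sentence's word vectors, divided by sentence
    length. Repetitive speech spans few directions, scoring low."""
    X = np.asarray(word_vectors, dtype=float)
    X = X - X.mean(axis=0)      # center so the SVD reflects spread of meaning
    s = np.linalg.svd(X, compute_uv=False)
    var = s**2 / np.sum(s**2)   # fraction of variance per direction
    k = int(np.searchsorted(np.cumsum(var), var_threshold)) + 1
    return k / len(X)
```

On this proxy, a sentence whose word vectors all point roughly the same way (one core idea restated) scores lower than one whose vectors spread across many directions, mirroring the low-density signal the abstract describes.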

              Towards algorithmic analytics for large-scale datasets

              The traditional goals of quantitative analytics cherish simple, transparent models to generate explainable insights. Large-scale data acquisition, enabled for instance by brain scanning and genomic profiling with microarray-type techniques, has prompted a wave of statistical inventions and innovative applications. Modern analysis approaches 1) tame large variable arrays capitalizing on regularization and dimensionality-reduction strategies, 2) are increasingly backed up by empirical model validations rather than justified by mathematical proofs, 3) will compare against and build on open data and consortium repositories, as well as 4) often embrace more elaborate, less interpretable models in order to maximize prediction accuracy. Here we review these trends in learning from “big data” and illustrate examples from imaging neuroscience.
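The trends listed above can be illustrated in one short sketch (all data, dimensions, and parameter values below are synthetic assumptions): regularization and dimensionality reduction tame a wide variable array, and the model is judged by empirical validation on a held-out split rather than by mathematical proof.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression (trend 1, regularization):
    w = (X^T X + alpha*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def pca_reduce(X, k):
    """Project onto the top-k principal directions
    (trend 1, dimensionality reduction)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Synthetic wide dataset: 50 observed variables driven by 10 latent factors.
rng = np.random.default_rng(42)
n, d, k = 200, 50, 10
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Trend 2: empirical validation on a held-out split.
Z = pca_reduce(X, k)
train, test = slice(0, 150), slice(150, None)
y0 = y[train].mean()
w = ridge_fit(Z[train], y[train] - y0)
pred = Z[test] @ w + y0
r2 = 1.0 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y0) ** 2)
```

The held-out R² here is the empirical check the abstract contrasts with proof-based justification; in this toy setup the 10 principal components recover the latent structure, so the reduced model predicts well despite discarding 40 of the 50 variables.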

                Author and article information

                Journal
                Schizophrenia Bulletin
                Oxford University Press (OUP)
                0586-7614
                1745-1701
                November 01 2019
                Affiliations
                [1 ]Department of Computer Science, University of Colorado Boulder, Boulder, CO
                [2 ]Institute of Cognitive Science, University of Colorado Boulder
                [3 ]Pearson PLC, London, UK
                [4 ]Department of Clinical Medicine, University of Tromsø, Tromsø, Norway
                [5 ]Norwegian Centre for eHealth Research, Tromsø, Norway
                Article
                DOI: 10.1093/schbul/sbz105
                PMC: PMC7145638
                PMID: 31901100
                © 2019

                License: https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model
