
      Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates


          Abstract

          Performance validity tests (PVTs) are used to measure the validity of the obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Therefore, accurate base rate information is needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. Main eligibility criteria were a clinical evaluation context and utilization of stand-alone and well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analyses. The pooled base rate of PVT failure for all included studies was 16%, 95% CI [14, 19]. High heterogeneity existed among these studies (Cochran's Q = 697.97, p < .001; I² = 91%; τ² = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied across clinical context, presence of external incentives, clinical diagnosis, and utilized PVT. Our findings can be used for calculating clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) to increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research is necessary with more detailed recruitment procedures and sample descriptions to further improve the accuracy of the base rate of PVT failure in clinical practice.
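          As an illustration of the clinically applied statistics mentioned above, the minimal sketch below shows how a pooled base rate can be combined with a PVT's sensitivity and specificity to obtain positive and negative predictive values and likelihood ratios. The sensitivity (0.70) and specificity (0.90) used in the example are hypothetical placeholders, not estimates reported in this meta-analysis.

          def pvt_validity_statistics(base_rate, sensitivity, specificity):
              """Return PPV, NPV, and likelihood ratios for a single PVT failure.

              base_rate: proportion of invalid performance in the evaluation context
              sensitivity: P(PVT failure | invalid performance) -- hypothetical here
              specificity: P(PVT pass | valid performance) -- hypothetical here
              """
              fail_invalid = sensitivity * base_rate              # true positives
              fail_valid = (1 - specificity) * (1 - base_rate)    # false positives
              pass_invalid = (1 - sensitivity) * base_rate        # false negatives
              pass_valid = specificity * (1 - base_rate)          # true negatives
              return {
                  "PPV": fail_invalid / (fail_invalid + fail_valid),
                  "NPV": pass_valid / (pass_valid + pass_invalid),
                  "LR+": sensitivity / (1 - specificity),
                  "LR-": (1 - sensitivity) / specificity,
              }

          # Example with the pooled base rate of 16% reported in this review and the
          # hypothetical test characteristics above: PPV is roughly 0.57, i.e., a
          # single PVT failure is far from conclusive at this base rate.
          print(pvt_validity_statistics(base_rate=0.16, sensitivity=0.70, specificity=0.90))

          As the base rate rises (for example, in contexts with external incentives), the same test characteristics yield a higher positive predictive value, which is why context-specific base rates matter for interpreting a PVT failure.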

          Supplementary Information

          The online version contains supplementary material available at 10.1007/s11065-023-09582-7.


          Most cited references (84)


          Measuring inconsistency in meta-analyses.


            Quantifying heterogeneity in a meta-analysis.

            The extent of heterogeneity in a meta-analysis partly determines the difficulty in drawing overall conclusions. This extent may be measured by estimating a between-study variance, but interpretation is then specific to a particular treatment effect metric. A test for the existence of heterogeneity exists, but depends on the number of studies in the meta-analysis. We develop measures of the impact of heterogeneity on a meta-analysis, from mathematical criteria, that are independent of the number of studies and the treatment effect metric. We derive and propose three suitable statistics: H is the square root of the χ² heterogeneity statistic divided by its degrees of freedom; R is the ratio of the standard error of the underlying mean from a random effects meta-analysis to the standard error of a fixed effect meta-analytic estimate; and I² is a transformation of H that describes the proportion of total variation in study estimates that is due to heterogeneity. We discuss interpretation, interval estimates and other properties of these measures and examine them in five example data sets showing different amounts of heterogeneity. We conclude that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity. One or both should be presented in published meta-analyses in preference to the test for heterogeneity. Copyright 2002 John Wiley & Sons, Ltd.
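            For reference, the statistics described above can be written out explicitly. This is a brief sketch using the standard definitions, with Q denoting Cochran's χ² heterogeneity statistic and k the number of pooled estimates (so k − 1 degrees of freedom):

            \[
              H = \sqrt{\frac{Q}{k-1}}, \qquad
              I^{2} = \frac{H^{2}-1}{H^{2}} = \frac{Q-(k-1)}{Q}
            \]

            An I² near 0 indicates that the observed variation is compatible with sampling error alone, whereas a value such as the 91% reported in the abstract above indicates that most of the variation across estimates reflects genuine between-study heterogeneity.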

              Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data.

              There currently does not exist guidance for authors aiming to undertake systematic reviews of observational epidemiological studies, such as those reporting prevalence and incidence information. These reviews are particularly useful to measure global disease burden and changes in disease over time. The aim of this article is to provide guidance for conducting these types of reviews.

                Author and article information

                Contributors
                jeroen.roor@maastrichtuniversity.nl
                Journal
                Neuropsychology Review (Neuropsychol Rev)
                Springer US (New York)
                ISSN: 1040-7308 (print); 1573-6660 (electronic)
                Published online: 6 March 2023
                2024; 34(1): 299–319
                Affiliations
                [1] Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands
                [2] School for Mental Health and Neuroscience, Maastricht University (https://ror.org/02jz4aj89), Maastricht, The Netherlands
                [3] Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University (https://ror.org/02jz4aj89), Maastricht, The Netherlands
                [4] Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, The Netherlands
                [5] Faculty of Psychology, Open University, Heerlen, The Netherlands
                Author information
                ORCID: http://orcid.org/0000-0003-3729-229X
                Article
                9582
                DOI: 10.1007/s11065-023-09582-7
                PMCID: PMC10920461
                PMID: 36872398
                © The Author(s) 2023

                Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 23 July 2022
                Accepted: 16 November 2022
                Categories
                Review
                Custom metadata
                © Springer Science+Business Media, LLC, part of Springer Nature 2024

                Clinical Psychology & Psychiatry
                Keywords: prevalence, base rate, performance validity test, invalid performance, meta-analysis, clinical assessments
