      Embedded validity indicators in Conners’ CPT-II: Do adult cutoffs work the same way in children?

      Applied Neuropsychology: Child
      Informa UK Limited



Most cited references (51)


          Refining clinical diagnosis with likelihood ratios.

Likelihood ratios can refine clinical diagnosis on the basis of signs and symptoms; however, they are underused in patient care. A likelihood ratio is the percentage of ill people with a given test result divided by the percentage of well individuals with the same result. Ideally, abnormal test results should be much more typical in ill individuals than in those who are well (high likelihood ratio), and normal test results should be more frequent in well people than in sick people (low likelihood ratio). Likelihood ratios near unity have little effect on decision-making; by contrast, high or low ratios can greatly shift the clinician's estimate of the probability of disease. Likelihood ratios can be calculated not only for dichotomous (positive or negative) tests but also for tests with multiple levels of results, such as creatine kinase or ventilation-perfusion scans. When combined with an accurate clinical diagnosis, likelihood ratios from ancillary tests improve diagnostic accuracy in a synergistic manner.
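The arithmetic behind this definition is easy to make concrete. The sketch below is illustrative only and is not taken from the cited paper; the sensitivity, specificity, and pre-test probability values are invented for the example. It computes the positive and negative likelihood ratios and applies them to a pre-test probability through the odds form of Bayes' rule.

```python
# Illustrative sketch (values are made up, not from the cited paper):
# likelihood ratios from sensitivity/specificity, and the resulting
# post-test probability via the odds form of Bayes' rule.

def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """LR+ = P(T+ | ill) / P(T+ | well); LR- = P(T- | ill) / P(T- | well)."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

if __name__ == "__main__":
    lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)      # LR+ = 4.5, LR- = 0.125
    print(post_test_probability(0.30, lr_pos))           # ~0.66 after a positive result
    print(post_test_probability(0.30, lr_neg))           # ~0.05 after a negative result
```

A ratio near 1 barely moves the 30% pre-test estimate in this example, whereas the LR+ of 4.5 more than doubles it, which is the shift in the clinician's estimate that the abstract describes.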

            Detection of malingering using atypical performance patterns on standard neuropsychological tests.

Cut-off scores defining clinically atypical patterns of performance were identified for five standard neuropsychological and psychological tests: Benton Visual Form Discrimination (VFD), Finger Tapping (FT), WAIS-R Reliable Digit Span (RDS), Wisconsin Card Sorting Failure-to-Maintain Set (FMS), and the Lees-Haley Fake Bad Scale (FBS) from the MMPI-2. All possible pair-wise combinations of scores beyond cut-off (e.g., for VFD and FT; for RDS and FBS) correctly identified 21 of 24 subjects (87.5%) meeting criteria for definite malingered neurocognitive dysfunction, and 24 of 27 (88.9%) subjects with moderate to severe closed head injury. On cross-validation, 15 of 17 subjects (88.2%) meeting criteria for probable malingered neurocognitive dysfunction were correctly identified, with 13 of 13 nonlitigating neurologic patients and 14 of 14 nonlitigating psychiatric patients correctly classified as having motivationally preserved performance. Combining the derivation and cross-validation samples yielded a sensitivity of 87.8%, specificity of 94.4%, and a combined hit rate of 91.6%.
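The reported percentages can be reproduced directly from the counts in the abstract. The short check below is mine, not the authors' code; pooling the groups as shown is my reading of "combining the derivation and cross-validation samples".

```python
# Re-deriving the combined sensitivity, specificity, and hit rate from the
# counts quoted in the abstract (the pooling is an assumption on my part).

malingering_hits = 21 + 15      # definite (derivation) + probable (cross-validation)
malingering_total = 24 + 17

credible_hits = 24 + 13 + 14    # head injury + nonlitigating neurologic + psychiatric
credible_total = 27 + 13 + 14

sensitivity = malingering_hits / malingering_total       # 36/41 ≈ 87.8%
specificity = credible_hits / credible_total             # 51/54 ≈ 94.4%
hit_rate = (malingering_hits + credible_hits) / (malingering_total + credible_total)  # 87/95 ≈ 91.6%

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, hit rate={hit_rate:.1%}")
```

These values match the 87.8%, 94.4%, and 91.6% figures quoted above.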

              Small-sample precision of ROC-related estimates.

Receiver operating characteristic (ROC) curves are commonly used in biomedical applications to judge the performance of a discriminant across varying decision thresholds. The estimated ROC curve depends on the true positive rate (TPR) and false positive rate (FPR), with the key metric being the area under the curve (AUC). With small samples these rates need to be estimated from the training data, so a natural question arises: how well do the estimates of the AUC, TPR, and FPR compare with the true metrics? Through a simulation study using data models and analysis of real microarray data, we show that (i) for small samples the root mean square differences of the estimated and true metrics are considerable; (ii) even for large samples, there is only weak correlation between the true and estimated metrics; and (iii) generally, there is weak regression of the true metric on the estimated metric. For classification rules, we consider linear discriminant analysis, linear support vector machine (SVM), and radial basis function SVM. For error estimation, we consider resubstitution, three kinds of cross-validation, and bootstrap. Using resampling, we show the unreliability of some published ROC results. Companion web site: http://compbio.tgen.org/paper_supp/ROC/roc.html. Contact: edward@mail.ece.tamu.edu.
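The quantities the study tracks (TPR, FPR, and AUC estimated from a small sample) can be illustrated with a minimal example. The sketch below is a generic demonstration of computing an empirical ROC curve and its AUC; it does not reproduce the paper's data models, classifiers, or error estimators, and the sample size and score distributions are invented.

```python
# Generic illustration (not the paper's simulation design): empirical
# TPR/FPR points from a threshold sweep and AUC by the trapezoidal rule.
import numpy as np

def roc_points(scores: np.ndarray, labels: np.ndarray):
    """Empirical (FPR, TPR) pairs obtained by sweeping the decision threshold."""
    order = np.argsort(-scores)                  # sort scores in descending order
    labels = labels[order]
    tps = np.cumsum(labels)                      # true positives above each cut
    fps = np.cumsum(1 - labels)                  # false positives above each cut
    tpr = np.concatenate(([0.0], tps / labels.sum()))
    fpr = np.concatenate(([0.0], fps / (1 - labels).sum()))
    return fpr, tpr

def auc(fpr: np.ndarray, tpr: np.ndarray) -> float:
    """Area under the empirical ROC curve."""
    return float(np.trapz(tpr, fpr))

rng = np.random.default_rng(0)
labels = np.array([1] * 10 + [0] * 10)            # a deliberately small sample
scores = np.where(labels == 1,
                  rng.normal(1.0, 1.0, size=20),  # cases score higher on average
                  rng.normal(0.0, 1.0, size=20))
fpr, tpr = roc_points(scores, labels)
print(f"AUC on this small sample: {auc(fpr, tpr):.2f}")
```

Rerunning with different seeds shows how much the small-sample AUC estimate fluctuates, which is the kind of instability the abstract quantifies.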

                Author and article information

Journal
Applied Neuropsychology: Child
Informa UK Limited
ISSN: 2162-2965 (print); 2162-2973 (online)
Publication dates: July 06 2016; August 02 2016; October 02 2017
Volume 6, Issue 4, Pages 355–363
                Affiliations
[1] Department of Psychology, University of Windsor, Windsor, Ontario, Canada
[2] Department of Psychiatry, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
[3] Private Practice, Edmonton, Alberta, Canada
Article
DOI: 10.1080/21622965.2016.1198908
                © 2017
