
      Waiting for baseline stability in single-case designs: Is it worth the time and effort?

      research-article


          Abstract

          Researchers and practitioners often use single-case designs (SCDs), or n-of-1 trials, to develop and validate novel treatments. Standards and guidelines have been published to provide guidance as to how to implement SCDs, but many of their recommendations are not derived from the research literature. For example, one of these recommendations suggests that researchers and practitioners should wait for baseline stability prior to introducing an independent variable. However, this recommendation is not strongly supported by empirical evidence. To address this issue, we used Monte Carlo simulations to generate graphs with fixed, response-guided, and random baseline lengths while manipulating trend and variability. Then, our analyses compared the Type I error rate and power produced by two methods of analysis: the conservative dual-criteria method (a structured visual aid) and a support vector classifier (a model derived from machine learning). The conservative dual-criteria method produced fewer errors when using response-guided decision-making (i.e., waiting for stability) and random baseline lengths. In contrast, waiting for stability did not reduce decision-making errors with the support vector classifier. Our findings question the necessity of waiting for baseline stability when using SCDs with machine learning, but the study must be replicated with other designs and graph parameters that change over time to support our results.
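The three baseline-length strategies compared in the abstract (fixed, response-guided, and random) can be sketched as simple data-generating functions. This is an illustrative reconstruction, not the authors' simulation code: the function names, the normal data model, and the particular stability criterion (range of the last few points within 25% of their mean) are assumptions for the sketch.

```python
import random

def fixed_baseline(n=5, mu=10.0, sd=1.0, rng=random):
    # Fixed-length baseline: always collect exactly n observations.
    return [rng.gauss(mu, sd) for _ in range(n)]

def response_guided_baseline(mu=10.0, sd=1.0, min_n=3, max_n=30,
                             tol=0.25, rng=random):
    """Response-guided baseline: keep collecting data until the last
    `min_n` points look 'stable' (range within `tol` of their mean).
    This stability rule is hypothetical; published criteria vary."""
    data = []
    while len(data) < max_n:
        data.append(rng.gauss(mu, sd))
        if len(data) >= min_n:
            window = data[-min_n:]
            m = sum(window) / min_n
            if m != 0 and (max(window) - min(window)) <= tol * abs(m):
                return data
    return data  # give up waiting for stability at max_n points

def random_baseline(mu=10.0, sd=1.0, min_n=3, max_n=15, rng=random):
    # Random-length baseline: phase length drawn uniformly at random.
    n = rng.randint(min_n, max_n)
    return [rng.gauss(mu, sd) for _ in range(n)]
```

In a Monte Carlo study of this kind, each generated baseline would be paired with a simulated treatment phase (with or without a true effect), and the error rates of an analysis method would be tallied across many replications.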

          Most cited references (23)


          The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education


            Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: the 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale.

            Recent literature suggests a revival of interest in single-case methodology (e.g., the randomised n-of-1 trial is now considered Level 1 evidence for treatment decision purposes by the Oxford Centre for Evidence-Based Medicine). Consequently, the availability of tools to critically appraise single-case reports is of great importance. We report on a major revision of our method quality instrument, the Single-Case Experimental Design Scale. Three changes resulted in a radically revised instrument, now entitled the Risk of Bias in N-of-1 Trials (RoBiNT) Scale: (i) item content was revised and increased to 15 items, (ii) two subscales were developed for internal validity (IV; 7 items) and external validity and interpretation (EVI; 8 items), and (iii) the scoring system was changed from a 2-point to a 3-point scale to accommodate currently accepted standards. Psychometric evaluation indicated that the RoBiNT Scale showed evidence of construct (discriminative) validity. Inter-rater reliability was excellent for pairs of both experienced and trained novice raters. Intraclass correlation coefficients of summary scores were as follows: individual (experienced) raters, ICC(TotalScore) = .90, ICC(IVSubscale) = .88, ICC(EVISubscale) = .87; individual (novice) raters, ICC(TotalScore) = .88, ICC(IVSubscale) = .87, ICC(EVISubscale) = .93; consensus ratings between experienced and novice raters, ICC(TotalScore) = .95, ICC(IVSubscale) = .93, ICC(EVISubscale) = .93. The RoBiNT Scale thus shows sound psychometric properties and provides a comprehensive yet efficient examination of important features of single-case methodology.

              Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs.

              Because behavior analysis is a data-driven process, a critical skill for behavior analysts is accurate visual inspection and interpretation of single-case data. Study 1 was a basic study in which we increased the accuracy of visual inspection methods for A-B designs through two refinements of the split-middle (SM) method called the dual-criteria (DC) and conservative dual-criteria (CDC) methods. The accuracy of these visual inspection methods was compared with one another and with two statistical methods (Allison & Gorman, 1993; Gottman, 1981) using a computer-simulated Monte Carlo study. Results indicated that the DC and CDC methods controlled Type I error rates much better than the SM method and had considerably higher power (to detect real treatment effects) than the two statistical methods. In Study 2, brief verbal and written instructions with modeling were used to train 5 staff members to use the DC method, and in Study 3, these training methods were incorporated into a slide presentation and were used to rapidly (i.e., 15 min) train a large group of individuals (N = 87). Interpretation accuracy increased from a baseline mean of 55% to a treatment mean of 94% in Study 2 and from a baseline mean of 71% to a treatment mean of 95% in Study 3. Thus, Study 1 answered basic questions about the accuracy of several methods of interpreting A-B designs; Study 2 showed how that information could be used to increase the accuracy of human visual inspectors; and Study 3 showed how the training procedures from Study 2 could be modified into a format that would facilitate rapid training of large groups of individuals to interpret single-case designs.
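The conservative dual-criteria (CDC) decision rule described above can be sketched as follows. The core logic (shift the baseline mean line and least-squares trend line by 0.25 baseline SD in the expected direction of change, then require that the number of treatment points beyond both lines reach a binomial criterion) follows the method as commonly described; the function name, argument layout, and alpha handling here are illustrative, not the authors' published implementation.

```python
from math import comb
from statistics import mean, stdev

def cdc_test(baseline, treatment, direction="decrease", alpha=0.05):
    """Conservative dual-criteria (CDC) test, a minimal sketch.

    Returns (effect_detected, points_beyond_both_lines, cutoff).
    """
    shift = 0.25 * stdev(baseline)
    sign = -1.0 if direction == "decrease" else 1.0

    # Least-squares trend line fitted on the baseline phase.
    xb = list(range(len(baseline)))
    mx, my = mean(xb), mean(baseline)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xb, baseline))
             / sum((x - mx) ** 2 for x in xb))
    intercept = my - slope * mx

    # Both criterion lines shifted 0.25 SD toward the expected change.
    mean_line = my + sign * shift
    hits = 0
    for i, y in enumerate(treatment, start=len(baseline)):
        trend_line = slope * i + intercept + sign * shift
        if direction == "decrease":
            hits += (y < mean_line and y < trend_line)
        else:
            hits += (y > mean_line and y > trend_line)

    # Binomial criterion: smallest k with P(X >= k | n, .5) < alpha.
    n = len(treatment)
    k = next(kk for kk in range(n + 2)
             if sum(comb(n, i) for i in range(kk, n + 1)) / 2 ** n < alpha)
    return hits >= k, hits, k
```

For example, a stable baseline near 10 followed by eight treatment points at 5 yields all eight points below both shifted lines, exceeding the cutoff of 7 for eight treatment points, so a decrease is flagged.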

                Author and article information

                Contributors
                marc.lanovaz@umontreal.ca
                Journal
                Behav Res Methods (Behavior Research Methods)
                Springer US (New York)
                ISSN: 1554-351X; eISSN: 1554-3528
                25 April 2022
                2023; 55(2): 843–854
                Affiliations
                [1] École de psychoéducation, Université de Montréal, C.P. 6128, succursale Centre-Ville, Montreal, QC H3C 3J7, Canada (GRID grid.14848.31; ISNI 0000 0001 2292 3357)
                [2] Centre de recherche de l’Institut universitaire en santé mentale de Montréal, Montreal, Canada (GRID grid.420732.0; ISNI 0000 0001 0621 4067)
                Article
                1858
                DOI: 10.3758/s13428-022-01858-9
                PMCID: PMC10027773
                PMID: 35469087
                4348cb86-7de7-44a2-a773-1a80103116bc
                © The Author(s) 2022

                Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History: 5 April 2022
                Categories
                Article
                Custom metadata
                © The Psychonomic Society, Inc. 2023

                Clinical Psychology & Psychiatry
                A-B design, baseline, data analysis, machine learning, n-of-1 trial, single-case design
