
      Qualitative evaluation of the Autism Behavior Inventory: use of cognitive interviewing to establish validity of a caregiver report scale for autism spectrum disorder

Research article


          Abstract

          Purpose

The Autism Behavior Inventory (ABI) is an observer-reported outcome scale measuring core and associated features of autism spectrum disorder (ASD). Extensive scale development (reported elsewhere), aligned with the Food and Drug Administration’s patient-reported outcome guidance, was undertaken to address the need for instruments that measure the severity of, and change in, ASD symptoms.

          Methods

Cognitive interviewing was used to confirm understanding and content validity of the scale prior to its use in clinical trials. Respondents were caregivers of individuals with ASD (N = 50). Interviews used a hybrid of the “think-aloud” and verbal probing approaches to assess the ABI’s content validity and participants’ understanding of the instrument, including item clarity and relevance, item interpretation, appropriateness of response scales, and clarity of instructions. Audio recordings of the interviews were transcribed for qualitative data analysis. The scale was revised based on participant feedback and tested in a second round of interviews (round 1: N = 38; round 2: N = 12).

          Results

In total, 67/70 items reached ≥ 90% understandability across participants. Caregivers were able to select an appropriate response from the options available and reported finding the examples helpful. Based on participant feedback, instructions were simplified, 8 items were removed, and 10 items were reworded. The final revised 62-item scale was presented in round 2, where caregivers reported readily understanding the instructions and response options, and 61/62 items reached ≥ 90% understandability.

          Conclusions

          Cognitive interviews with caregivers of a diverse sample of individuals with ASD confirm the content validity and relevance of the ABI to assess core and associated symptoms of ASD.
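To make the 90% understandability criterion in the Results above concrete, here is a minimal sketch (not from the paper; the item IDs, coded responses, and flagging scheme are hypothetical) of how per-item understandability could be tallied from coded interview data, with items falling below the threshold flagged as candidates for rewording or removal:

# Illustrative sketch only: tally per-item understandability against a 90% threshold.
# Item IDs and coded responses below are hypothetical, not data from the study.
from collections import defaultdict

# Each record: (participant_id, item_id, understood) derived from coded transcripts.
coded_responses = [
    ("P01", "item_01", True),
    ("P01", "item_02", True),
    ("P02", "item_01", True),
    ("P02", "item_02", False),
    ("P03", "item_01", True),
    ("P03", "item_02", True),
]

THRESHOLD = 0.90  # proportion of participants who must understand an item

tallies = defaultdict(lambda: [0, 0])  # item_id -> [understood_count, total_count]
for _, item_id, understood in coded_responses:
    tallies[item_id][1] += 1
    if understood:
        tallies[item_id][0] += 1

for item_id, (understood_count, total) in sorted(tallies.items()):
    proportion = understood_count / total
    flag = "meets threshold" if proportion >= THRESHOLD else "flag for review"
    print(f"{item_id}: {understood_count}/{total} ({proportion:.0%}) - {flag}")

The paper reports only the resulting proportions from its qualitative analysis of transcripts; the sketch merely illustrates the arithmetic behind the "≥ 90% understandability" figures reported above.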


Most cited references (24)


          Content validity--establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO Good Research Practices Task Force report: part 2--assessing respondent understanding.

          The importance of content validity in developing patient reported outcomes (PRO) instruments is stressed by both the US Food and Drug Administration and the European Medicines Agency. Content validity is the extent to which an instrument measures the important aspects of concepts developers or users purport it to assess. A PRO instrument measures the concepts most relevant and important to a patient's condition and its treatment. For PRO instruments, items and domains as reflected in the scores of an instrument should be important to the target population and comprehensive with respect to patient concerns. Documentation of target population input in item generation, as well as evaluation of patient understanding through cognitive interviewing, can provide the evidence for content validity. Part 1 of this task force report covers elicitation of key concepts using qualitative focus groups and/or interviews to inform content and structure of a new PRO instrument. Building on qualitative interviews and focus groups used to elicit concepts, cognitive interviews help developers craft items that can be understood by respondents in the target population and can ultimately confirm that the final instrument is appropriate, comprehensive, and understandable in the target population. Part 2 details: 1) the methods for conducting cognitive interviews that address patient understanding of items, instructions, and response options; and 2) the methods for tracking item development through the various stages of research and preparing this tracking for submission to regulatory agencies. The task force report's two parts are meant to be read together. They are intended to offer suggestions for good practice in planning, executing, and documenting qualitative studies that are used to support the content validity of PRO instruments to be used in medical product evaluation. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

            Content validity--establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO good research practices task force report: part 1--eliciting concepts for a new PRO instrument.

            The importance of content validity in developing patient reported outcomes (PRO) instruments is stressed by both the US Food and Drug Administration and the European Medicines Agency. Content validity is the extent to which an instrument measures the important aspects of concepts that developers or users purport it to assess. A PRO instrument measures the concepts most significant and relevant to a patient's condition and its treatment. For PRO instruments, items and domains as reflected in the scores of an instrument should be important to the target population and comprehensive with respect to patient concerns. Documentation of target population input in item generation, as well as evaluation of patient understanding through cognitive interviewing, can provide the evidence for content validity. Developing content for, and assessing respondent understanding of, newly developed PRO instruments for medical product evaluation will be discussed in this two-part ISPOR PRO Good Research Practices Task Force Report. Topics include the methods for generating items, documenting item development, coding of qualitative data from item generation, cognitive interviewing, and tracking item development through the various stages of research and preparing this tracking for submission to regulatory agencies. Part 1 covers elicitation of key concepts using qualitative focus groups and/or interviews to inform content and structure of a new PRO instrument. Part 2 covers the instrument development process, the assessment of patient understanding of the draft instrument using cognitive interviews and steps for instrument revision. The two parts are meant to be read together. They are intended to offer suggestions for good practices in planning, executing, and documenting qualitative studies that are used to support the content validity of PRO instruments to be used in medical product evaluation. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

              Pretesting survey instruments: an overview of cognitive methods.

              This article puts forward the case that survey questionnaires, which are a type of measuring instrument, can and should be tested to ensure they meet their purpose. Traditionally survey researchers have been pre-occupied with 'standardising' data collection instruments and procedures such as question wording and have assumed that experience in questionnaire design, coupled with pilot testing of questionnaires, will then ensure valid and reliable results. However, implicit in the notion of standardisation are the assumptions that respondents are able to understand the questions being asked, that questions are understood in the same way by all respondents, and that respondents are willing and able to answer such questions. The development of cognitive question testing methods has provided social researchers with a number of theories and tools to test these assumptions, and to develop better survey instruments and questionnaires. This paper describes some of these theories and tools, and argues that cognitive testing should be a standard part of the development process of any survey instrument.

                Author and article information

Contributors
gpandina@its.jnj.com
Journal
Health and Quality of Life Outcomes (Health Qual Life Outcomes)
BioMed Central (London)
ISSN: 1477-7525
Published: 20 January 2021
Volume 19, Article number 26 (2021)
Affiliations
[1] Department of Neuroscience, Janssen Research & Development, LLC, Pennington, NJ 08534, USA
[2] Department of Neuroscience, Janssen Research & Development, LLC, Titusville, NJ, USA
[3] Department of Patient Reported Outcomes, Janssen Global Services, Raritan, NJ, USA
[4] Evidera, Pharmaceutical Product Development, LLC, Bethesda, MD, USA
Author information
ORCID: http://orcid.org/0000-0003-0050-3575
Article
DOI: 10.1186/s12955-020-01665-w
PMCID: 7819236
PMID: 33472654
                © The Author(s) 2021

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

History
Received: 24 April 2020
Accepted: 30 December 2020
                Funding
Funded by: Janssen Research and Development (FundRef: http://dx.doi.org/10.13039/100005205)
                Categories
                Research

                Health & Social care
Keywords: autism, cognitive interview, caregiver-reported outcomes
