

      ComprehENotes, an Instrument to Assess Patient Reading Comprehension of Electronic Health Record Notes: Development and Validation

      research-article
      John P Lalor, MS 1; Hao Wu, PhD 2; Li Chen, MS 2; Kathleen M Mazor, EdD 3; Hong Yu, PhD, FACMI 1, 4, 5, 6
      (Reviewer), (Reviewer), (Reviewer), (Reviewer)
      Journal of Medical Internet Research
      JMIR Publications
      electronic health records, health literacy, psychometrics, crowdsourcing


          Abstract

          Background

          Patient portals are widely adopted in the United States and give millions of patients access to their electronic health records (EHRs), including their EHR clinical notes. A patient’s ability to understand the information in the EHR depends on their overall health literacy. Although many tests of health literacy exist, none focuses specifically on EHR note comprehension.

          Objective

          The aim of this paper was to develop an instrument to assess patients’ EHR note comprehension.

          Methods

          We identified 6 common diseases or conditions (heart failure, diabetes, cancer, hypertension, chronic obstructive pulmonary disease, and liver failure) and selected 5 representative EHR notes for each disease or condition. One note that did not contain natural language text was removed. Questions were generated from the remaining notes using the Sentence Verification Technique and were analyzed using item response theory (IRT) to identify a set of questions that together form a good test of EHR note comprehension ability.
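
          As a minimal sketch of the IRT step (not the authors' actual code), a unidimensional two-parameter logistic (2PL) model can be fit to a respondents-by-items matrix of binary answers using the mirt R package listed among the most cited references below; the file name and column layout here are hypothetical.

              # Hypothetical sketch: calibrate candidate questions with a unidimensional 2PL IRT model.
              # 'responses' has one row per crowd worker and one 0/1 column per question.
              library(mirt)

              responses <- read.csv("mturk_responses.csv")          # hypothetical file name
              mod <- mirt(responses, model = 1, itemtype = "2PL")   # one latent ability dimension

              # Discrimination (a) and difficulty (b) estimates for each candidate item
              item_params <- coef(mod, IRTpars = TRUE, simplify = TRUE)$items
              head(item_params)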

          Results

          Using the Sentence Verification Technique, 154 questions were generated from the 29 remaining EHR notes. Of these, 83 were manually selected for inclusion in the Amazon Mechanical Turk crowdsourcing tasks, and 55 were ultimately retained following IRT analysis. A follow-up validation with a second Amazon Mechanical Turk task and IRT analysis confirmed that the 55 questions test a latent ability dimension for EHR note comprehension. A 14-item short test was created alongside the 55-item test.
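
          Continuing the illustrative sketch above (an assumption, not the authors' selection procedure), global fit of the single-dimension model can be checked and a short form drawn from the most discriminating items; the 14-item cutoff simply mirrors the reported short test.

              # Hypothetical follow-up on the fitted model 'mod' from the previous sketch.
              M2(mod)   # limited-information global fit for the unidimensional 2PL model

              # Illustrative short-form selection: keep the 14 most discriminating items.
              a <- item_params[, "a"]
              short_form <- names(sort(a, decreasing = TRUE))[1:14]
              short_form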

          Conclusions

          We developed ComprehENotes, an instrument for assessing EHR note comprehension built from existing EHR notes; we gathered responses using crowdsourcing and analyzed them with IRT, resulting in a set of questions that measures EHR note comprehension. Crowdsourced responses from Amazon Mechanical Turk can be used to estimate item parameters and to select a subset of items for the test using IRT. The final set of questions is the first test of EHR note comprehension.
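
          Once items are calibrated, the instrument can be scored for new respondents. The following sketch (hypothetical file name; an assumed workflow, not code from the paper) uses mirt's fscores to estimate each respondent's EHR note comprehension ability from their answers to the retained items.

              # Hypothetical sketch: score new test-takers on the calibrated item set.
              # 'new_responses' must use the same item columns as the calibration data.
              new_responses <- read.csv("patient_responses.csv")    # hypothetical file name
              theta <- fscores(mod, response.pattern = new_responses, method = "EAP")
              theta[, "F1"]   # estimated comprehension ability (latent trait) per respondent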


          Most cited references (37)


          mirt: A Multidimensional Item Response Theory Package for the R Environment


            Power analysis and determination of sample size for covariance structure modeling.


              Development of a brief test to measure functional health literacy.

              We describe the development of an abbreviated version of the Test of Functional Health Literacy in Adults (TOFHLA) to measure patients' ability to read and understand health-related materials. The TOFHLA was reduced from 17 Numeracy items and 3 prose passages to 4 Numeracy items and 2 prose passages (S-TOFHLA). The maximum time for administration was reduced from 22 minutes to 12. In a group of 211 patients given the S-TOFHLA, Cronbach's alpha was 0.68 for the 4 Numeracy items and 0.97 for the 36 items in the 2 prose passages. The correlation (Spearman) between the S-TOFHLA and the Rapid Estimate of Adult Literacy in Medicine (REALM) was 0.80, although there were important disagreements between the two tests. The S-TOFHLA is a practical measure of functional health literacy with good reliability and validity that can be used by health educators to identify individuals who require special assistance to achieve learning goals.
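
              The reliability figures quoted above are Cronbach's alpha values. As a reminder of the statistic rather than code from either study, a minimal R sketch of the computation for a respondents-by-items score matrix:

                  # Minimal illustrative Cronbach's alpha for a respondents-by-items matrix 'x'.
                  cronbach_alpha <- function(x) {
                    x <- as.matrix(x)
                    k <- ncol(x)                     # number of items
                    item_vars <- apply(x, 2, var)    # variance of each item
                    total_var <- var(rowSums(x))     # variance of the total score
                    (k / (k - 1)) * (1 - sum(item_vars) / total_var)
                  }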

                Author and article information

                Contributors
                Journal
                J Med Internet Res
                J. Med. Internet Res
                JMIR
                Journal of Medical Internet Research
                JMIR Publications (Toronto, Canada)
                ISSN: 1439-4456
                eISSN: 1438-8871
                April 2018
                25 April 2018
                Volume: 20
                Issue: 4
                Article: e139
                Affiliations
                [1] College of Information and Computer Sciences, University of Massachusetts Amherst, MA, United States
                [2] Psychology Department, Boston College, Chestnut Hill, MA, United States
                [3] Meyers Primary Care Institute, University of Massachusetts Medical School / Reliant Medical Group / Fallon Health, Worcester, MA, United States
                [4] Department of Computer Science, University of Massachusetts Lowell, MA, United States
                [5] Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States
                [6] Bedford Veterans Affairs Medical Center, Center for Healthcare Organization and Implementation Research, Bedford, MA, United States
                Author notes
                Corresponding Author: Hong Yu hong.yu@umassmed.edu
                Author information
                http://orcid.org/0000-0003-0848-4786
                http://orcid.org/0000-0001-6471-1774
                http://orcid.org/0000-0001-6876-4398
                http://orcid.org/0000-0002-9491-9872
                http://orcid.org/0000-0001-9263-5035
                Article
                v20i4e139
                DOI: 10.2196/jmir.9380
                PMCID: PMC5943623
                PMID: 29695372
                ©John P Lalor, Hao Wu, Li Chen, Kathleen M Mazor, Hong Yu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 25.04.2018.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                History
                10 November 2017
                7 December 2017
                6 February 2018
                20 February 2018
                Categories
                Original Paper

                Medicine
                electronic health records, health literacy, psychometrics, crowdsourcing
