
      Assessing Competencies Needed to Engage With Digital Health Services: Development of the eHealth Literacy Assessment Toolkit

      research-article


          Abstract

          Background

          To achieve the full potential of user-oriented eHealth projects, we need to ensure a match between the eHealth technology and the user’s eHealth literacy, described here as knowledge and skills. However, there is a lack of multifaceted eHealth literacy assessment tools suitable for screening purposes.

          Objective

          The objective of our study was to develop and validate an eHealth literacy assessment toolkit (eHLA) that assesses individuals’ health literacy and digital literacy using a mix of existing and newly developed scales.

          Methods

          From 2011 to 2015, scales were continuously tested and developed in an iterative process, which led to 7 tools being included in the validation study. The eHLA validation version consisted of 4 health-related tools (tool 1: “functional health literacy,” tool 2: “health literacy self-assessment,” tool 3: “familiarity with health and health care,” and tool 4: “knowledge of health and disease”) and 3 digital literacy tools (tool 5: “technology familiarity,” tool 6: “technology confidence,” and tool 7: “incentives for engaging with technology”), which were administered to 475 respondents from a general population sample and an outpatient clinic. Statistical analyses examined floor and ceiling effects, interitem correlations, item-total correlations, and Cronbach coefficient alpha (CCA). Rasch models (RMs) were used to examine the fit of the data. Items were removed to yield short, robust tools fit for screening purposes; reductions were based on psychometrics, face validity, and content validity.
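The article itself contains no code; as a minimal illustrative sketch (not the authors' analysis, and assuming a complete numeric respondents-by-items score matrix), the internal-consistency statistics mentioned above — Cronbach coefficient alpha and corrected item-total correlations — can be computed like this:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach coefficient alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_total_correlations(scores: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item against the sum of the
    remaining items (the item itself is excluded from the total)."""
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, i], total - scores[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])
```

In an item-reduction workflow of the kind described, items with low corrected item-total correlations would be candidates for removal, with alpha monitored after each removal.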

          Results

          Tool 1 was not reduced in items; it consequently consists of 10 items. The overall fit to the RM was acceptable (Anderson conditional likelihood ratio, CLR=10.8; df=9; P=.29), and CCA was .67. Tool 2 was reduced from 20 to 9 items. The overall fit to a log-linear RM was acceptable (Anderson CLR=78.4, df=45, P=.002), and CCA was .85. Tool 3 was reduced from 23 to 5 items. The final version showed excellent fit to a log-linear RM (Anderson CLR=47.7, df=40, P=.19), and CCA was .90. Tool 4 was reduced from 12 to 6 items. The fit to a log-linear RM was acceptable (Anderson CLR=42.1, df=18, P=.001), and CCA was .59. Tool 5 was reduced from 20 to 6 items. The fit to the RM was acceptable (Anderson CLR=30.3, df=17, P=.02), and CCA was .94. Tool 6 was reduced from 5 to 4 items. The fit to a log-linear RM taking local dependency (LD) into account was acceptable (Anderson CLR=26.1, df=21, P=.20), and CCA was .91. Tool 7 was reduced from 6 to 4 items. The fit to a log-linear RM taking LD and differential item functioning into account was acceptable (Anderson CLR=23.0, df=29, P=.78), and CCA was .90.

          Conclusions

          The eHLA consists of 7 short, robust scales that assess individuals’ knowledge and skills related to digital literacy and health literacy.

          Related collections

          Most cited references (19)


          Coefficient alpha and the internal structure of tests

          Cronbach LJ. Psychometrika. 1951;16(3):297-334

            Critical Values for Yen’s Q3: Identification of Local Dependence in the Rasch Model Using Residual Correlations

            The assumption of local independence is central to all item response theory (IRT) models. Violations can lead to inflated estimates of reliability and problems with construct validity. For the most widely used fit statistic, Q3, there are currently no well-documented suggestions of the critical values that should be used to indicate local dependence (LD), and for this reason, a variety of arbitrary rules of thumb are used. In this study, an empirical data example and Monte Carlo simulation were used to investigate the different factors that can influence the null distribution of residual correlations, with the objective of proposing guidelines that researchers and practitioners can follow when making decisions about LD during scale development and validation. The authors recommend that a parametric bootstrapping procedure be implemented in each separate situation to obtain the critical value of LD applicable to the data set, and they provide example critical values for a number of data structure situations. The results show that for the Q3 fit statistic, no single critical value is appropriate for all situations, as the percentiles in the empirical null distribution are influenced by the number of items, the sample size, and the number of response categories. Furthermore, the results show that LD should be considered relative to the average observed residual correlation, rather than to a uniform value, as this results in more stable percentiles for the null distribution of an adjusted fit statistic.
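The Q3 statistic described above is simply the correlation between item residuals under a fitted Rasch model. A simplified, hypothetical Python sketch of the idea follows (it assumes a dichotomous Rasch model with known person and item parameters; the paper's procedure re-estimates the model on each bootstrap replicate, which this sketch skips):

```python
import numpy as np

def rasch_probs(theta: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """P(X=1) under a dichotomous Rasch model: logistic(theta_v - beta_i)."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))

def q3_matrix(responses: np.ndarray, probs: np.ndarray) -> np.ndarray:
    """Yen's Q3: correlations between standardized item residuals."""
    resid = (responses - probs) / np.sqrt(probs * (1.0 - probs))
    return np.corrcoef(resid, rowvar=False)

def q3_star(responses: np.ndarray, probs: np.ndarray) -> np.ndarray:
    """Q3 relative to the average off-diagonal residual correlation,
    as the paper recommends, rather than relative to a uniform cutoff."""
    q3 = q3_matrix(responses, probs)
    off = q3[~np.eye(q3.shape[0], dtype=bool)]
    return q3 - off.mean()

def bootstrap_critical_value(theta, beta, n_boot=200, pct=95, seed=0):
    """Parametric bootstrap for a data-set-specific critical value of the
    maximum off-diagonal Q3*. NOTE: a full implementation would re-estimate
    theta and beta on each simulated data set; this sketch reuses them."""
    rng = np.random.default_rng(seed)
    probs = rasch_probs(theta, beta)
    maxima = []
    for _ in range(n_boot):
        sim = (rng.random(probs.shape) < probs).astype(float)
        qs = q3_star(sim, probs)
        np.fill_diagonal(qs, -np.inf)  # ignore the diagonal when taking the max
        maxima.append(qs.max())
    return float(np.percentile(maxima, pct))
```

An observed item pair whose Q3* exceeds the bootstrapped critical value would then be flagged as locally dependent, consistent with the data-set-specific approach the abstract argues for.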

              Development of the Digital Health Literacy Instrument: Measuring a Broad Spectrum of Health 1.0 and Health 2.0 Skills

              Background

              With the digitization of health care and the wide availability of Web-based applications, a broad set of skills is essential to properly use such facilities; these skills are called digital health literacy or eHealth literacy. Current instruments to measure digital health literacy focus only on information gathering (Health 1.0 skills) and do not pay attention to interactivity on the Web (Health 2.0). To measure the complete spectrum of Health 1.0 and Health 2.0 skills, including actual competencies, we developed a new instrument. The Digital Health Literacy Instrument (DHLI) measures operational skills, navigation skills, information searching, evaluating reliability, determining relevance, adding self-generated content, and protecting privacy.

              Objective

              Our objective was to study the distributional properties, reliability, content validity, and construct validity of the DHLI’s self-report scale (21 items) and to explore the feasibility of an additional set of performance-based items (7 items).

              Methods

              We used a paper-and-pencil survey among a sample of the general Dutch population, stratified by age, sex, and educational level (T1; N=200). The survey consisted of the DHLI, sociodemographics, Internet use, health status, health literacy, and the eHealth Literacy Scale (eHEALS). After 2 weeks, we asked participants to complete the DHLI again (T2; n=67). Cronbach alpha and intraclass correlation analysis between T1 and T2 were used to investigate reliability. Principal component analysis was performed to determine content validity. Correlation analyses were used to determine construct validity.

              Results

              Respondents (107 female and 93 male) ranged in age from 18 to 84 years (mean 46.4, SD 19.0); 23.0% (46/200) had a lower educational level. Internal consistencies of the total scale (alpha=.87) and the subscales (alpha range .70-.89) were satisfactory, except for protecting privacy (alpha=.57). Distributional properties showed an approximately normal distribution. Test-retest analysis was satisfactory overall (total scale intraclass correlation coefficient=.77; subscale intraclass correlation coefficient range .49-.81). The performance-based items did not together form a single construct (alpha=.47) and should be interpreted individually. Results showed that more complex skills were reflected in a lower number of correct responses. Principal component analysis confirmed the theoretical structure of the self-report scale (76% explained variance). Correlations were as expected, showing significant relations with age (ρ=–.41, P<.001), education (ρ=.14, P=.047), Internet use (ρ=.39, P<.001), health-related Internet use (ρ=.27, P<.001), health status (ρ range .17-.27, P<.001), health literacy (ρ=.31, P<.001), and the eHEALS (ρ=.51, P<.001).

              Conclusions

              This instrument can be accepted as a new self-report measure to assess digital health literacy, using multiple subscales. Its performance-based items provide an indication of actual skills but should be studied and adapted further. Future research should examine the acceptability of this instrument in other languages and among different populations.
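The test-retest reliability reported above is summarized with intraclass correlation coefficients. As an illustrative sketch (not the DHLI authors' analysis code; it assumes complete T1/T2 scores and a two-way random-effects, absolute-agreement model, i.e. Shrout and Fleiss ICC(2,1)), the coefficient can be computed from a two-way ANOVA decomposition:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
    ICC for an (n_subjects, k_occasions) matrix, e.g. T1/T2 scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-occasion means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Unlike a plain Pearson correlation between T1 and T2, this formulation penalizes systematic shifts between occasions, which is why it is preferred for test-retest agreement.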

                Author and article information

                Contributors
                Journal
                J Med Internet Res
                J. Med. Internet Res
                JMIR
                Journal of Medical Internet Research
                JMIR Publications (Toronto, Canada )
                1439-4456
                1438-8871
                May 2018
                10 May 2018
                Volume: 20
                Issue: 5
                Article: e178
                Affiliations
                [1] 1 Department of Public Health University of Copenhagen Copenhagen Denmark
                [2] 2 Danish Multiple Sclerosis Society Valby Denmark
                [3] 3 Danish Cancer Research Center Danish Cancer Society Copenhagen Denmark
                [4] 4 Scias Copenhagen Denmark
                [5] 5 Steno Diabetes Center Copenhagen Gentofte Denmark
                Author notes
                Corresponding Author: Astrid Karnoe askn@sund.ku.dk
                Author information
                http://orcid.org/0000-0002-2571-1111
                http://orcid.org/0000-0003-0537-2121
                http://orcid.org/0000-0003-4518-5187
                http://orcid.org/0000-0002-1681-4338
                http://orcid.org/0000-0002-0909-4088
                Article
                v20i5e178
                DOI: 10.2196/jmir.8347
                PMCID: PMC5968212
                PMID: 29748163
                ©Astrid Karnoe, Dorthe Furstrand, Karl Bang Christensen, Ole Norgaard, Lars Kayser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                History
                : 7 July 2017
                : 21 November 2017
                : 28 February 2018
                : 17 March 2018
                Categories
                Original Paper

                Medicine
                health literacy, computer literacy, questionnaires, telemedicine, consumer health informatics
