
      The French eHealth Acceptability Scale Using the Unified Theory of Acceptance and Use of Technology 2 Model: Instrument Validation Study

      research-article


          Abstract

          Background

Technology-based physical activity offers new opportunities for public health initiatives. Yet only 45% of technology-based interventions are theoretically grounded, and the mechanisms of acceptability have been insufficiently studied. Acceptability and acceptance theories have provided useful insights, particularly the unified theory of acceptance and use of technology 2 (UTAUT2). In several studies, however, the psychometric qualities of acceptability scales have not been well demonstrated.

          Objective

          The aim of this study was to adapt the UTAUT2 to the electronic health (eHealth) context and provide a preliminary validation of the eHealth acceptability scale in a French sample.

          Methods

In line with established validation methodologies, we carried out the following stages of scale validation with a total of 576 volunteers: translation and adaptation, dimensionality tests, reliability tests, and construct validity tests. We used confirmatory factor analysis to validate a 22-item instrument with 7 subscales: Performance Expectancy, Effort Expectancy, Social Influence, Facilitating Conditions, Hedonic Motivation, Price Value, and Habit.
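To make the measurement model concrete, here is a minimal sketch (not part of the study) of how such a 7-factor confirmatory factor analysis could be specified in lavaan-style syntax and fitted with the Python package semopy. The item names and the item-to-factor split are placeholders, since the abstract only states that 22 items load on 7 subscales; the bifactor variant reported in the Results would additionally include a general acceptability factor loading on all items.

```python
# Illustrative sketch only: a 7-factor CFA in lavaan-style syntax, fitted with
# the semopy package (assumed available via `pip install semopy`). The item
# columns (pe1, ee1, ...) and their assignment to factors are hypothetical.
import pandas as pd
import semopy

MODEL_DESC = """
PerformanceExpectancy  =~ pe1 + pe2 + pe3
EffortExpectancy       =~ ee1 + ee2 + ee3
SocialInfluence        =~ si1 + si2 + si3
FacilitatingConditions =~ fc1 + fc2 + fc3 + fc4
HedonicMotivation      =~ hm1 + hm2 + hm3
PriceValue             =~ pv1 + pv2 + pv3
Habit                  =~ hab1 + hab2 + hab3
"""

def fit_cfa(items: pd.DataFrame) -> pd.DataFrame:
    """Fit the CFA on an item-score data frame and return global fit
    statistics (chi-square, CFI, TLI, RMSEA, ...) of the kind reported
    in the Results."""
    model = semopy.Model(MODEL_DESC)
    model.fit(items)
    return semopy.calc_stats(model)
```

Whether the exact fit values reported below are reproduced would of course depend on the real items and data, which are not part of this sketch.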

          Results

The dimensionality tests showed that the bifactor confirmatory model presented the best fit indexes: χ²(173)=434.86 (P<.001), χ²/df=2.51, comparative fit index=.97, Tucker-Lewis index=.95, and root mean square error of approximation=.053 (90% CI .047-.059). The invariance tests of the eHealth acceptability factor structure by sex demonstrated no significant differences between models, except for the strict model. The partial strict model demonstrated no difference from the strong model. Cronbach alphas ranged from .77 to .95 for the 7 factors. We assessed test-retest reliability over a 4-week interval: the intraclass correlation coefficients for each subscale ranged from .62 to .88, and there were no significant differences in the t tests from time 1 to time 2. Assessments of convergent validity demonstrated that the eHealth acceptability constructs were significantly and positively related to behavioral intention, usage, and constructs from the technology acceptance model and the theory of planned behavior.
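As a side note for readers less familiar with the reliability statistics quoted above, Cronbach's alpha for a k-item subscale is α = k/(k−1) × (1 − Σ item variances / variance of the summed score). The following is a minimal, self-contained Python sketch with simulated data, illustrative only and not the study's data or code:

```python
# Minimal illustration of Cronbach's alpha (not the study's data or code).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 100 respondents answering 4 correlated Likert-type items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                      # shared trait
scores = latent + rng.normal(scale=0.8, size=(100, 4))  # item noise
print(f"alpha = {cronbach_alpha(scores):.2f}")
```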

          Conclusions

          The 22-item French-language eHealth acceptability scale, divided into 7 subscales, showed good psychometric qualities. This scale is thus a valid and reliable tool to assess the acceptability of eHealth technology in French-speaking samples and offers promising avenues in research, clinical practice, and marketing.


                Author and article information

                Contributors
                Journal
                J Med Internet Res
                J. Med. Internet Res
                JMIR
                Journal of Medical Internet Research
JMIR Publications (Toronto, Canada)
ISSN: 1439-4456 (print); 1438-8871 (electronic)
April 2020 (published 15 April 2020)
Volume 22, Issue 4: e16520
                Affiliations
[1] Laboratoire Motricité Humaine Expertise Sport Santé, Université Côte d'Azur, Nice, France
[2] Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, Nice, France
                Author notes
Corresponding Author: Meggy Hayotte, meggy.hayotte@etu.univ-cotedazur.fr
                Author information
                https://orcid.org/0000-0003-3418-3485
                https://orcid.org/0000-0001-5238-3275
                https://orcid.org/0000-0001-8194-5812
                https://orcid.org/0000-0001-8672-4955
                https://orcid.org/0000-0002-1176-3279
                Article
                v22i4e16520
DOI: 10.2196/16520
PMCID: PMC7191343
PMID: 32293569
                ©Meggy Hayotte, Pierre Thérouanne, Laura Gray, Karine Corrion, Fabienne D'Arripe-Longueville. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.04.2020.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                History
                : 6 October 2019
                : 10 November 2019
                : 22 November 2019
                : 15 December 2019
                Categories
Original Paper

                Medicine
telemedicine; validation study; factor analysis, statistical; surveys and questionnaires; acceptability
