


      The French eHealth Acceptability Scale Using the Unified Theory of Acceptance and Use of Technology 2 Model: Instrument Validation Study





          Technology-based physical activity programs offer new opportunities for public health initiatives. Yet only 45% of technology-based interventions are theoretically grounded, and the mechanisms of acceptability have been insufficiently studied. Acceptability and acceptance theories have provided useful insights, particularly the unified theory of acceptance and use of technology 2 (UTAUT2). However, in several studies the psychometric qualities of acceptability scales have not been well demonstrated.


          The aim of this study was to adapt the UTAUT2 to the electronic health (eHealth) context and provide a preliminary validation of the eHealth acceptability scale in a French sample.


          Following established validation methodologies, we carried out the stages of scale validation with a total of 576 volunteers: translation and adaptation, dimensionality tests, reliability tests, and construct validity tests. We used confirmatory factor analysis to validate a 22-item instrument with 7 subscales: Performance Expectancy, Effort Expectancy, Social Influence, Facilitating Conditions, Hedonic Motivation, Price Value, and Habit.
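          In confirmatory factor analysis, model fit is commonly summarized by the χ²/df ratio and the root mean square error of approximation (RMSEA). As a minimal sketch (not the authors' code, and using the conventional point-estimate formula, which may differ slightly from the robust estimator a given software package reports), RMSEA can be derived from the χ² statistic, its degrees of freedom, and the sample size:

```python
import math


def rmsea(chi2: float, df: int, n: int) -> float:
    """Conventional RMSEA point estimate:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))


# Values reported in this study: chi2(173) = 434.86, N = 576
print(round(434.86 / 173, 2))             # chi2/df → 2.51
print(round(rmsea(434.86, 173, 576), 3))  # → 0.051
```

          Applied to the reported statistics, this simple formula reproduces the study's χ²/df of 2.51 and comes close to its RMSEA of .053; small discrepancies are expected because the exact value depends on the estimator and software used.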


          The dimensionality tests showed that the bifactor confirmatory model presented the best fit indexes: χ²(173)=434.86 (P<.001), χ²/df=2.51, comparative fit index=.97, Tucker-Lewis index=.95, and root mean square error of approximation=.053 (90% CI .047-.059). The invariance tests of the eHealth acceptability factor structure by sex demonstrated no significant differences between models, except for the strict model; the partial strict model demonstrated no difference from the strong model. Cronbach alphas ranged from .77 to .95 for the 7 factors. We measured test-retest reliability over a 4-week interval: the intraclass correlation coefficients for each subscale ranged from .62 to .88, and t tests showed no significant differences from time 1 to time 2. Assessments of convergent validity demonstrated that the eHealth acceptability constructs were significantly and positively related to behavioral intention, usage, and constructs from the technology acceptance model and the theory of planned behavior.
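          The Cronbach alpha values reported above index internal consistency: the proportion of total score variance attributable to shared variance among a subscale's items. A minimal, self-contained sketch of the computation (illustrative only; the function name and toy data are ours, not the authors'):

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


# Toy example: 3 respondents, 2 perfectly correlated items
scores = np.array([[1, 2],
                   [2, 4],
                   [3, 6]])
print(cronbach_alpha(scores))  # → 0.888... (= 8/9)
```

          In practice a subscale's alpha is computed over its own items only; values of .70 and above are conventionally taken as acceptable, so the reported range of .77 to .95 indicates good internal consistency for all 7 factors.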


          The 22-item French-language eHealth acceptability scale, divided into 7 subscales, showed good psychometric qualities. This scale is thus a valid and reliable tool to assess the acceptability of eHealth technology in French-speaking samples and offers promising avenues in research, clinical practice, and marketing.


                Author and article information

                Journal of Medical Internet Research (J Med Internet Res)
                JMIR Publications (Toronto, Canada)
                15 April 2020; 22(4): e16520
                [1] Laboratoire Motricité Humaine Expertise Sport Santé, Université Côte d'Azur, Nice, France
                [2] Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, Nice, France
                Author notes
                Corresponding Author: Meggy Hayotte, meggy.hayotte@etu.univ-cotedazur.fr
                Author information
                ©Meggy Hayotte, Pierre Thérouanne, Laura Gray, Karine Corrion, Fabienne D'Arripe-Longueville. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.04.2020.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                Article history: 6 October 2019; 10 November 2019; 22 November 2019; 15 December 2019
                Original Paper

                Keywords: telemedicine; validation study; factor analysis, statistical; surveys and questionnaires; acceptability

