
      CONSORT-EHEALTH: Improving and Standardizing Evaluation Reports of Web-based and Mobile Health Interventions


          Abstract

          Background

          Web-based and mobile health interventions (also called “Internet interventions” or “eHealth/mHealth interventions”) are tools or treatments, typically behaviorally based, that are operationalized and transformed for delivery via the Internet or mobile platforms. These include electronic tools for patients, informal caregivers, healthy consumers, and health care providers. The Consolidated Standards of Reporting Trials (CONSORT) statement was developed to improve the suboptimal reporting of randomized controlled trials (RCTs). While the CONSORT statement can be applied to provide broad guidance on how eHealth and mHealth trials should be reported, RCTs of Web-based interventions pose specific issues and challenges, in particular related to reporting sufficient details of the intervention to allow replication and theory-building.

          Objective

          To develop a checklist, dubbed CONSORT-EHEALTH (Consolidated Standards of Reporting Trials of Electronic and Mobile HEalth Applications and onLine TeleHealth), as an extension of the CONSORT statement that provides guidance for authors of reports of eHealth and mHealth trials.

          Methods

          A literature review was conducted, followed by a survey among eHealth experts and a workshop.

          Results

          A checklist instrument was constructed as an extension of the CONSORT statement. The instrument has been adopted by the Journal of Medical Internet Research (JMIR), and authors of eHealth RCTs are required to submit an electronic checklist explaining how they addressed each subitem.
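
          To make the submission requirement concrete, the following is a minimal, purely illustrative sketch of how such an electronic checklist could be represented in software. The item IDs, field names, and example text are invented for this sketch; the actual CONSORT-EHEALTH subitems and JMIR's submission format are defined by the checklist itself.

from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    item_id: str        # hypothetical subitem identifier, e.g. "5-i"
    description: str    # what the subitem asks authors to report
    response: str = ""  # how/where the manuscript addresses it

@dataclass
class ChecklistSubmission:
    manuscript_id: str
    items: list[ChecklistItem] = field(default_factory=list)

    def unaddressed(self) -> list[str]:
        # Subitems the authors have not yet explained.
        return [i.item_id for i in self.items if not i.response.strip()]

# Usage sketch (all values hypothetical):
submission = ChecklistSubmission(
    manuscript_id="ms-0001",
    items=[
        ChecklistItem("5-i", "Describe the intervention in detail", "Methods, paragraph 2"),
        ChecklistItem("5-ii", "Describe the development process", ""),
    ],
)
print(submission.unaddressed())  # ['5-ii']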

          Conclusions

          CONSORT-EHEALTH has the potential to improve reporting and provides a basis for evaluating the validity and applicability of eHealth trials. Subitems describing how the intervention should be reported can also be used for non-RCT evaluation reports. As part of the development process, an evaluation component is essential; therefore, feedback from authors will be solicited, and a before-after study will evaluate whether reporting has been improved.

          Most cited references (10)

          The Effectiveness of Web-Based vs. Non-Web-Based Interventions: A Meta-Analysis of Behavioral Change Outcomes

          Background

          A primary focus of self-care interventions for chronic illness is the encouragement of an individual's behavior change, necessitating knowledge sharing, education, and understanding of the condition. The use of the Internet to deliver Web-based interventions to patients is increasing rapidly. In a 7-year period (1996 to 2003), there was a 12-fold increase in MEDLINE citations for “Web-based therapies.” The use and effectiveness of Web-based interventions to encourage an individual's change in behavior compared to non-Web-based interventions have not been substantially reviewed.

          Objective

          This meta-analysis was undertaken to provide further information on patient/client knowledge and behavioral change outcomes after Web-based interventions as compared to outcomes seen after implementation of non-Web-based interventions.

          Methods

          The MEDLINE, CINAHL, Cochrane Library, EMBASE, ERIC, and PsycINFO databases were searched for relevant citations between the years 1996 and 2003. Identified articles were retrieved, reviewed, and assessed according to established criteria for quality and inclusion/exclusion in the study. Twenty-two articles were deemed appropriate for the study and selected for analysis. Effect sizes were calculated to ascertain a standardized difference between the intervention (Web-based) and control (non-Web-based) groups by applying the appropriate meta-analytic technique. Homogeneity analysis, forest plot review, and sensitivity analyses were performed to ascertain the comparability of the studies.

          Results

          Aggregation of participant data revealed a total of 11,754 participants (5,841 women and 5,729 men). The average age of participants was 41.5 years. In those studies reporting attrition rates, the average dropout rate was 21% for both the intervention and control groups. For the five Web-based studies that reported usage statistics, time spent/session/person ranged from 4.5 to 45 minutes. Session logons/person/week ranged from 2.6 logons/person over 32 weeks to 1008 logons/person over 36 weeks. The intervention designs included one-time Web-participant health outcome studies compared to non-Web participant health outcomes, self-paced interventions, and longitudinal, repeated measure intervention studies. Longitudinal studies ranged from 3 weeks to 78 weeks in duration. The effect sizes for the studied outcomes ranged from -.01 to .75. Broad variability in the focus of the studied outcomes precluded the calculation of an overall effect size for the compared outcome variables in the Web-based compared to the non-Web-based interventions. Homogeneity statistic estimation also revealed widely differing study parameters (Qw(16) = 49.993, P ≤ .001). There was no significant difference between study length and effect size. Sixteen of the 17 studied effect outcomes revealed improved knowledge and/or improved behavioral outcomes for participants using the Web-based interventions. Five studies provided group information to compare the validity of Web-based vs. non-Web-based instruments using one-time cross-sectional studies. These studies revealed effect sizes ranging from -.25 to +.29. Homogeneity statistic estimation again revealed widely differing study parameters (Qw(4) = 18.238, P ≤ .001).

          Conclusions

          The effect size comparisons in the use of Web-based interventions compared to non-Web-based interventions showed an improvement in outcomes for individuals using Web-based interventions to achieve the specified knowledge and/or behavior change for the studied outcome variables. These outcomes included increased exercise time, increased knowledge of nutritional status, increased knowledge of asthma treatment, increased participation in healthcare, slower health decline, improved body shape perception, and 18-month weight loss maintenance.
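
          For readers unfamiliar with the standardized difference used above, here is a minimal sketch of how Cohen's d is computed from two groups' summary statistics. The numbers are invented for illustration and are not drawn from the included studies.

import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Standardized mean difference between an intervention group
    # (e.g., Web-based) and a control group (non-Web-based),
    # using the pooled standard deviation.
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical example: minutes of weekly exercise in each group.
print(round(cohens_d(150, 40, 120, 130, 45, 115), 2))  # ~0.47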

            Online Interventions for Social Marketing Health Behavior Change Campaigns: A Meta-Analysis of Psychological Architectures and Adherence Factors

            Background

            Researchers and practitioners have developed numerous online interventions that encourage people to reduce their drinking, increase their exercise, and better manage their weight. Motivations to develop eHealth interventions may be driven by the Internet’s reach, interactivity, cost-effectiveness, and studies that show online interventions work. However, when designing online interventions suitable for public campaigns, there are few evidence-based guidelines, taxonomies are difficult to apply, many studies lack impact data, and prior meta-analyses are not applicable to large-scale public campaigns targeting voluntary behavioral change.

            Objectives

            This meta-analysis assessed online intervention design features in order to inform the development of online campaigns, such as those employed by social marketers, that seek to encourage voluntary health behavior change. A further objective was to increase understanding of the relationships between intervention adherence, study adherence, and behavioral outcomes.

            Methods

            Drawing on systematic review methods, a combination of 84 query terms were used in 5 bibliographic databases with additional gray literature searches. This resulted in 1271 abstracts and papers; 31 met the inclusion criteria. In total, 29 papers describing 30 interventions were included in the primary meta-analysis, with the 2 additional studies qualifying for the adherence analysis. Using a random effects model, the first analysis estimated the overall effect size, including groupings by control conditions and time factors. The second analysis assessed the impacts of psychological design features that were coded with taxonomies from evidence-based behavioral medicine, persuasive technology, and other behavioral influence fields. These separate systems were integrated into a coding framework model called the communication-based influence components model. Finally, the third analysis assessed the relationships between intervention adherence and behavioral outcomes.

            Results

            The overall impact of online interventions across all studies was small but statistically significant (standardized mean difference effect size d = 0.19, 95% confidence interval [CI] = 0.11 - 0.28, P < .001, number of interventions k = 30). The largest impact with a moderate level of efficacy was exerted from online interventions when compared with waitlists and placebos (d = 0.28, 95% CI = 0.17 - 0.39, P < .001, k = 18), followed by comparison with lower-tech online interventions (d = 0.16, 95% CI = 0.00 - 0.32, P = .04, k = 8); no significant difference was found when compared with sophisticated print interventions (d = –0.11, 95% CI = –0.34 to 0.12, P = .35, k = 4), though online interventions offer a small effect with the advantage of lower costs and larger reach. Time proved to be a critical factor, with shorter interventions generally achieving larger impacts and greater adherence. For psychological design, most interventions drew from the transtheoretical approach and were goal orientated, deploying numerous influence components aimed at showing users the consequences of their behavior, assisting them in reaching goals, and providing normative pressure. Inconclusive results suggest a relationship between the number of influence components and intervention efficacy. Despite one contradictory correlation, the evidence suggests that study adherence, intervention adherence, and behavioral outcomes are correlated.

            Conclusions

            These findings demonstrate that online interventions have the capacity to influence voluntary behaviors, such as those routinely targeted by social marketing campaigns. Given the high reach and low cost of online technologies, the stage may be set for increased public health campaigns that blend interpersonal online systems with mass-media outreach. Such a combination of approaches could help individuals achieve personal goals that, at an individual level, help citizens improve the quality of their lives and at a state level, contribute to healthier societies.
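
            The abstract above reports effects pooled with a random effects model. The sketch below implements one standard approach to such pooling (DerSimonian-Laird); the per-study effect sizes and variances are invented, and the paper's own estimation details may differ.

import math

def random_effects_pool(effects, variances):
    # DerSimonian-Laird random-effects pooling of per-study effect
    # sizes d_i with within-study variances v_i.
    k = len(effects)
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    d_fe = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    d = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return d, (d - 1.96 * se, d + 1.96 * se)           # pooled d and 95% CI

# Hypothetical per-study inputs (illustration only):
d, ci = random_effects_pool([0.30, 0.15, 0.05, 0.40], [0.010, 0.020, 0.015, 0.030])
print(f"pooled d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")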

              Missing Data Approaches in eHealth Research: Simulation Study and a Tutorial for Nonmathematically Inclined Researchers

              Background

              Missing data is a common nuisance in eHealth research: it is hard to prevent and may invalidate research findings.

              Objective

              In this paper several statistical approaches to data “missingness” are discussed and tested in a simulation study. Basic approaches (complete case analysis, mean imputation, and last observation carried forward [LOCF]) and advanced methods (expectation maximization, regression imputation, and multiple imputation) are included in this analysis, and strengths and weaknesses are discussed.

              Methods

              The dataset used for the simulation was obtained from a prospective cohort study following participants in an online self-help program for problem drinkers. It contained 124 nonnormally distributed endpoints, that is, daily alcohol consumption counts of the study respondents. Missingness at random (MAR) was induced in a selected variable for 50% of the cases. Validity, reliability, and coverage of the estimates obtained using the different imputation methods were calculated by performing a bootstrapping simulation study.

              Results

              In the performed simulation study, the use of multiple imputation techniques led to accurate results. Differences were found between the 4 tested multiple imputation programs: NORM, MICE, Amelia II, and SPSS MI. Among the tested approaches, Amelia II outperformed the others, led to the smallest deviation from the reference value (Cohen’s d = 0.06), and had the largest coverage percentage of the reference confidence interval (96%).

              Conclusions

              The use of multiple imputation improves the validity of the results when analyzing datasets with missing observations. Some of the often-used approaches (LOCF, complete case analysis) did not perform well, and, hence, we recommend not using these. Growing support for the analysis of multiply imputed datasets in recent versions of widely used statistical software is making multiple imputation more readily available to less mathematically inclined researchers.
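
              A toy sketch of the core comparison follows: it simulates MAR missingness and contrasts complete case analysis and mean imputation with a simple multiple imputation via stochastic regression. This is not the paper's simulation, and the programs it tested (NORM, MICE, Amelia II, SPSS MI) implement far more sophisticated routines.

import numpy as np

rng = np.random.default_rng(0)

# Simulate two correlated variables; y gets missing values whose
# probability depends on the observed x (missing at random, MAR).
# All numbers are invented for illustration.
n = 1000
x = rng.normal(0, 1, n)
y = 2.0 + 0.8 * x + rng.normal(0, 1, n)
missing = rng.random(n) < 1 / (1 + np.exp(-x))  # P(missing) grows with x
y_obs = np.where(missing, np.nan, y)
obs = ~np.isnan(y_obs)

print("true mean:", y.mean())
print("complete case mean:", y_obs[obs].mean())  # biased under MAR
print("mean imputation:", np.where(obs, y_obs, y_obs[obs].mean()).mean())  # same bias

# Simple multiple imputation: stochastic regression imputation m times,
# then pool the per-imputation means.
m = 20
b1, b0 = np.polyfit(x[obs], y_obs[obs], 1)
resid_sd = np.std(y_obs[obs] - (b0 + b1 * x[obs]))
pooled = np.mean([
    np.where(obs, y_obs, b0 + b1 * x + rng.normal(0, resid_sd, n)).mean()
    for _ in range(m)
])
print("multiple imputation mean:", pooled)  # close to the true mean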

                Author and article information

                Journal
                Journal of Medical Internet Research (J Med Internet Res; JMIR)
                Publisher: Gunther Eysenbach (JMIR Publications Inc., Toronto, Canada)
                ISSN: 1438-8871
                Issue: Oct-Dec 2011 (published 31 December 2011)
                Volume 13, Issue 4, Article e126
                Affiliations
                [1] University Health Network, Centre for Global eHealth Innovation & Techna Institute, Toronto, ON, Canada
                [2] Institute for Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
                [3] JMIR Publications Inc., Toronto, ON, Canada
                [4] See Acknowledgments for contributors
                Article
                Article ID: v13i4e126
                DOI: 10.2196/jmir.1923
                PMCID: PMC3278112
                PMID: 22209829
                © Gunther Eysenbach, CONSORT-EHEALTH Group. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 31.12.2011.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                History
                Received: 29 August 2011
                Accepted: 08 September 2011
                Published: 31 December 2011
                Categories
                Article type: Editorial
                Subject: Medicine
                Keywords: evaluation, internet, mobile health, reporting standards, publishing standards, guidelines, quality control, randomized controlled trials as topic, medical informatics
