
      Do Physicians Respond to Web-Based Patient Ratings? An Analysis of Physicians’ Responses to More Than One Million Web-Based Ratings Over a Six-Year Period


          Abstract

          Background

          Physician-rating websites (PRWs) may lead to quality improvements if they enable and establish peer-to-peer communication between patients and physicians. Yet, we know little about whether and how physicians respond on the Web to patient ratings.

          Objective

          The objective of this study was to describe trends in physicians’ Web-based responses to patient ratings over time, to identify what physician characteristics influence Web-based responses, and to examine the topics physicians are likely to respond to.

          Methods

          We analyzed physician responses to more than 1 million patient ratings displayed on the German PRW, jameda, from 2010 to 2015. Quantitative analysis comprised chi-square analyses and the Mann-Whitney U test. Quantitative content analysis techniques were applied to determine the topics physicians respond to, based on a randomly selected sample of 600 Web-based ratings and the corresponding physician responses.
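          The chi-square analyses mentioned above compare categorical distributions, for example responder status by physician gender. A minimal, self-contained sketch of the Pearson chi-square statistic for a 2x2 contingency table follows; the counts are invented for illustration and do not come from the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (rows = groups, columns = outcomes)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    # Sum (observed - expected)^2 / expected over all four cells.
    for observed, row, col in ((a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = male/female physicians,
# columns = responded / did not respond to Web-based ratings.
table = [[120, 880],
         [60, 940]]
print(round(chi_square_2x2(table), 2))  # 21.98
```

          In practice one would compare the statistic against the chi-square distribution with 1 degree of freedom (or use a library routine that also returns the P value) rather than hand-coding the test.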

          Results

          Overall, physicians responded to 1.58% (16,640/1,052,347) of all Web-based ratings, with an increasing trend over time from 0.70% (157/22,355) in 2010 to 1.88% (6377/339,919) in 2015. Web-based ratings that were responded to had significantly worse rating results than ratings that were not responded to (2.15 vs 1.74, P<.001). Physicians who respond on the Web to patient ratings differ significantly from nonresponders regarding several characteristics such as gender and patient recommendation results (P<.001 each). Regarding scaled-survey rating elements, physicians were most likely to respond to the waiting time within the practice (19.4%, 99/509) and the time spent with the patient (18.3%, 110/600). Almost one-third of topics in narrative comments were answered by the physicians (30.66%, 382/1246).
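          As a quick sanity check, the headline response rates can be recomputed from the raw counts quoted above:

```python
# Response rates reported in the Results, recomputed from the raw counts.
rates = {
    "overall": (16_640, 1_052_347),
    "2010":    (157, 22_355),
    "2015":    (6_377, 339_919),
}
for label, (responded, total) in rates.items():
    print(label, f"{responded / total * 100:.2f}%")
# overall 1.58%
# 2010 0.70%
# 2015 1.88%
```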

          Conclusions

          So far, only a minority of physicians have taken the opportunity to respond on the Web to patient ratings. This is likely because of (1) the low awareness of PRWs among physicians, (2) the fact that only a few PRWs enable physicians to respond on the Web to patient ratings, and (3) the lack of an active moderator to establish peer-to-peer communication. PRW providers should foster more frequent communication between the patient and the physician and encourage physicians to respond on the Web to patient ratings. Further research is needed to learn more about the motivation of physicians to respond or not respond to Web-based patient ratings.


                Author and article information

                Journal: J Med Internet Res (JMIR, Journal of Medical Internet Research)
                Publisher: JMIR Publications (Toronto, Canada)
                ISSN: 1439-4456, 1438-8871
                Published: 26 July 2017 (July 2017 issue)
                Volume 19, Issue 7

                Affiliations
                1 Institute of Management, School of Business and Economics, Health Services Management, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany
                2 Media, Information and Design, Department of Information and Communication, University of Applied Sciences and Arts, Hannover, Germany

                Author notes
                Corresponding Author: Martin Emmert, Martin.Emmert@fau.de

                Article
                Article ID: v19i7e275
                DOI: 10.2196/jmir.7538
                PMCID: 5550732
                PMID: 28747292
                © Martin Emmert, Lisa Sauter, Lisa Jablonski, Uwe Sander, Fatemeh Taheri-Zadeh. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.07.2017.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                Categories
                Original Paper

                Medicine

                Keywords: transparency, internet, online ratings, doctor-patient communication, public reporting
