      A Changing Landscape of Physician Quality Reporting: Analysis of Patients’ Online Ratings of Their Physicians Over a 5-Year Period

Guodong Gordon Gao, PhD, MBA 1; Jeffrey S McCullough, PhD 2; Ritu Agarwal, PhD, MS, MBA 1; Ashish K Jha, MPH, MD 3


Keywords: Physician quality, online reviews, patient empowerment, quality transparency, public reporting


          Abstract

          Background

          Americans increasingly post and consult online physician rankings, yet we know little about this new phenomenon of public physician quality reporting. Physicians worry these rankings will become an outlet for disgruntled patients.

          Objective

To describe trends in patients’ online ratings over time and across specialties, to identify which physician characteristics influence online ratings, and to examine how the value of those ratings reflects physician quality.

          Methods

          We used data from RateMDs.com, which included over 386,000 national ratings from 2005 to 2010 and provided insight into the evolution of patients’ online ratings. We obtained physician demographic data from the US Department of Health and Human Services’ Area Resource File. Finally, we matched patients’ ratings with physician-level data from the Virginia Medical Board and examined the probability of being rated and resultant rating levels.
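As a rough illustration of this study design (not the authors’ actual analysis), matching online ratings to a physician-level file and modeling the probability of being rated might look like the sketch below; the file names, column names (npi, years_since_graduation, board_certified), and the logistic specification are assumptions for illustration only.

    # Illustrative sketch only, not the authors' code: the data files, the
    # column names (npi, years_since_graduation, board_certified), and the
    # logistic specification are hypothetical stand-ins for the kind of
    # physician-level linkage and "probability of being rated" analysis
    # described above.
    import pandas as pd
    import statsmodels.formula.api as smf

    ratings = pd.read_csv("ratings.csv")        # one row per online rating
    physicians = pd.read_csv("physicians.csv")  # one row per licensed physician

    # Flag physicians who received at least one online rating.
    physicians["rated"] = physicians["npi"].isin(ratings["npi"]).astype(int)

    # Model the probability of being rated as a function of physician traits.
    model = smf.logit("rated ~ years_since_graduation + board_certified",
                      data=physicians).fit()
    print(model.summary())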

          Results

We estimate that 1 in 6 practicing US physicians had received an online review by January 2010. Obstetrician/gynecologists were twice as likely as other physicians to be rated ( P < .001). Online reviews were generally quite positive (mean 3.93 on a scale of 1 to 5). Within the Virginia physician population, physicians who graduated longer ago were more likely to be rated, while physicians who graduated more recently received higher average ratings ( P < .001). Patients gave slightly higher ratings to board-certified physicians ( P = .04), to those who graduated from highly rated medical schools ( P = .002), and to those without malpractice claims ( P = .1).

          Conclusion

Online physician ratings are rapidly growing in popularity and becoming commonplace, with no evidence that they are dominated by disgruntled patients. There are statistically significant correlations between the value of ratings and physician experience, board certification, education, and malpractice claims, suggesting a positive association between online ratings and physician quality; however, the magnitude of these correlations is small. The average number of ratings per physician is still low, and most of the variation in ratings reflects evaluations of punctuality and staff. Understanding whether online ratings truly reflect better care, and how patients use them, will be critically important.

Most cited references (29)


          Patients' perception of hospital care in the United States.

Patients' perceptions of their care, especially in the hospital setting, are not well known. Data from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey provide a portrait of patients' experiences in U.S. hospitals. We assessed the performance of hospitals across multiple domains of patients' experiences. We examined whether key characteristics of hospitals that are thought to enhance patients' experiences (i.e., a high ratio of nurses to patient-days, for-profit status, and nonacademic status) were associated with a better experience for patients. We also examined whether a hospital's performance on the HCAHPS survey was related to its performance on indicators of the quality of clinical care.

We found moderately high levels of satisfaction with care (e.g., on average, 67.4% of a hospital's patients said that they would definitely recommend the hospital), with a high degree of correlation among the measures of patients' experiences (Cronbach's alpha, 0.94). As compared with hospitals in the bottom quartile of the ratio of nurses to patient-days, those in the top quartile had a somewhat better performance on the HCAHPS survey (e.g., 63.5% vs. 70.2% of patients responded that they "would definitely recommend" the hospital; P<0.001). Hospitals with a high level of patient satisfaction provided clinical care that was somewhat higher in quality for all conditions examined. For example, those in the top quartile of HCAHPS ratings performed better than those in the bottom quartile with respect to the care that patients received for acute myocardial infarction (actions taken to provide appropriate care as a proportion of all opportunities for providing such actions, 95.8% vs. 93.1% in unadjusted analyses; P<0.001) and for pneumonia (90.5% vs. 88.6% in unadjusted analyses, P<0.001).

This portrait of patients' experiences in U.S. hospitals offers insights into areas that need improvement, suggests that the same characteristics of hospitals that lead to high nurse-staffing levels may be associated with better experiences for patients, and offers evidence that hospitals can provide both a high quality of clinical care and a good experience for the patient.

2008 Massachusetts Medical Society

            Patients' evaluations of health care providers in the era of social networking: an analysis of physician-rating websites.

            Internet-based social networking tools that allow users to share content have enabled a new form of public reporting of physician performance: the physician-rating website. To describe the structure and content of physician-rating websites and to assess the extent to which a patient might find them valuable. We searched Google for websites that allowed patients to review physicians in the US. We included websites that met predetermined criteria, identified common elements of these websites, and recorded website characteristics. We then searched the websites for reviews of a random sample of 300 Boston physicians. Finally, we separately analyzed quantitative and narrative reviews. We identified 33 physician-rating websites, which contained 190 reviews for 81 physicians. Most reviews were positive (88%). Six percent were negative, and six percent were neutral. Generalists and subspecialists did not significantly differ in number or nature of reviews. We identified several narrative reviews that appeared to be written by the physicians themselves. Physician-rating websites offer patients a novel way to provide feedback and obtain information about physician performance. Despite controversy surrounding these sites, their use by patients has been limited to date, and a majority of reviews appear to be positive.

              Analysis of 4999 Online Physician Ratings Indicates That Most Patients Give Physicians a Favorable Rating

Background

Many online physician-rating sites provide patients with information about physicians and allow patients to rate physicians. Understanding what information is available is important given that patients may use this information to choose a physician.

Objectives

The goals of this study were to (1) determine the most frequently visited physician-rating websites with user-generated content, (2) evaluate the available information on these websites, and (3) analyze 4999 individual online ratings of physicians.

Methods

On October 1, 2010, using Google Trends we identified the 10 most frequently visited online physician-rating sites with user-generated content. We then studied each site to evaluate the available information (eg, board certification, years in practice), the types of rating scales (eg, 1–5, 1–4, 1–100), and dimensions of care (eg, recommend to a friend, waiting room time) used to rate physicians. We analyzed data from 4999 selected physician ratings without identifiers to assess how physicians are rated online.

Results

The 10 most commonly visited websites with user-generated content were HealthGrades.com, Vitals.com, Yelp.com, YP.com, RevolutionHealth.com, RateMD.com, Angieslist.com, Checkbook.org, Kudzu.com, and ZocDoc.com. A total of 35 different dimensions of care were rated by patients in the websites, with a median of 4.5 (mean 4.9, SD 2.8, range 1–9) questions per site. Depending on the scale used for each physician-rating website, the average rating was 77 out of 100 for sites using a 100-point scale (SD 11, median 76, range 33–100), 3.84 out of 5 (77%) for sites using a 5-point scale (SD 0.98, median 4, range 1–5), and 3.1 out of 4 (78%) for sites using a 4-point scale (SD 0.72, median 3, range 1–4). The percentage of reviews rated ≥75 on a 100-point scale was 61.5% (246/400), ≥4 on a 5-point scale was 57.74% (2078/3599), and ≥3 on a 4-point scale was 74.0% (740/1000). The patient’s single overall rating of the physician correlated with the other dimensions of care that were rated by patients for the same physician (Pearson correlation, r = .73, P < .001).

Conclusions

Most patients give physicians a favorable rating on online physician-rating sites. A single overall rating to evaluate physicians may be sufficient to assess a patient’s opinion of the physician. The optimal content and rating method that is useful to patients when visiting online physician-rating sites deserves further study. Conducting a qualitative analysis to compare the quantitative ratings would help validate the rating instruments used to evaluate physicians.
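As a small, purely illustrative aside (not part of the cited study), the percentage figures quoted above can be reproduced by expressing each mean rating as a share of its scale maximum:

    # Illustrative only: the percentages quoted in the abstract above follow
    # from expressing each mean rating as a share of its scale maximum.
    def rating_to_percent(mean_rating, scale_max):
        """Express a mean rating as a percentage of the scale maximum."""
        return 100.0 * mean_rating / scale_max

    for mean, top in [(77.0, 100), (3.84, 5), (3.1, 4)]:
        print(f"{mean}/{top} -> {rating_to_percent(mean, top):.0f}%")
    # prints approximately: 77%, 77%, and 78%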

                Author and article information

                Journal
                J Med Internet Res
                J. Med. Internet Res
                JMIR
                Journal of Medical Internet Research
Editor: Gunther Eysenbach (JMIR Publications Inc., Toronto, Canada)
ISSN: 1439-4456
eISSN: 1438-8871
Collection: Jan-Feb 2012
Published: 24 February 2012
Volume: 14
Issue: 1
                Affiliations
1 Center for Health Information and Decision Systems, Robert H Smith School of Business, University of Maryland, College Park, MD, United States
2 Division of Health Policy & Management, School of Public Health, University of Minnesota, Minneapolis, MN, United States
3 Department of Health Policy and Management, Harvard School of Public Health, Harvard University, Boston, MA, United States
Article
Article ID: v14i1e38
DOI: 10.2196/jmir.2003
PMCID: PMC3374528
PMID: 22366336
© Guodong Gordon Gao, Jeffrey S McCullough, Ritu Agarwal, Ashish K Jha. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 24.02.2012.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                Categories
                Original Paper
