
      General Practitioners’ Concerns About Online Patient Feedback: Findings From a Descriptive Exploratory Qualitative Study in England

      research-article


          Abstract

          Background

The growth in the volume of online patient feedback, including online patient ratings and comments, suggests that patients are embracing the opportunity to review their experience of receiving health care online. Very little is known about health care professionals’ attitudes toward online patient feedback, or about whether they are comfortable with its public nature.

          Objective

The aim of the overall study was to explore and describe general practitioners’ attitudes toward online patient feedback. This paper reports the findings for one aim of the study: to explore and understand the concerns that general practitioners (GPs) in England have about online patient feedback. These findings could then be used to improve online patient feedback platforms and help increase the use of online patient feedback by GPs and, by extension, their patients.

          Methods

          A descriptive qualitative approach using face-to-face semistructured interviews was used in this study. A topic guide was developed following a literature review and discussions with key stakeholders. GPs (N=20) were recruited from Cambridgeshire, London, and Northwest England through probability and snowball sampling. Interviews were transcribed verbatim and analyzed in NVivo using the framework method, a form of thematic analysis.

          Results

Most participants in this study had concerns about online patient feedback. They questioned its validity, because of data and user biases and a lack of representativeness; its usability, because the feedback is anonymous; and its transparency, because of the risk of false allegations and breaches of confidentiality. They were also concerned about the resulting impact of these factors on themselves, their professional practice, and their relationship with their patients.

          Conclusions

The majority of GPs interviewed had reservations and concerns about online patient feedback, questioning, among other things, its validity and usefulness. Based on the findings of the study, recommendations are given for online patient feedback website providers in England. These include specific changes to the platforms and the need to promote online patient feedback more among both GPs and health care users, which may help to reduce some of the concerns GPs raised in this study.


Most cited references (55)


          Patient satisfaction revisited: a multilevel approach.

          Patient satisfaction surveys are increasingly used for benchmarking purposes. In the Netherlands, the results of these surveys are reported at the univariate level without taking case mix factors into account. The first objective of the present study was to determine whether differences in patient satisfaction are attributed to the hospital, department or patient characteristics. Our second aim was to investigate which case mix variables could be taken into account when satisfaction surveys are carried out for benchmarking purposes. Patients who either were discharged from eight academic and fourteen general Dutch hospitals or visited the outpatient departments of the same hospitals in 2005 participated in cross-sectional satisfaction surveys. Satisfaction was measured on six dimensions of care and one general dimension. We used multilevel analysis to estimate the proportion of variance in satisfaction scores determined by the hospital and department levels by calculating intra-class correlation coefficients (ICCs). Hospital size, hospital type, population density and response rate are four case mix variables we investigated at the hospital level. We also measured the effects of patient characteristics (gender, age, education, health status, and mother language) on satisfaction. We found ICCs on hospital and department levels ranging from 0% to 4% for all dimensions. This means that only a minor part of the variance in patient satisfaction scores is attributed to the hospital and department levels. Although all patient characteristics had some statistically significant influence on patient satisfaction, age, health status and education appeared to be the most important determinants of patient satisfaction and could be considered for case mix correction. Gender, mother language, hospital type, hospital size, population density and response rate seemed to be less important determinants. 
The explained variance of the patient and hospital characteristics ranged from 3% to 5% for the different dimensions. Our conclusions are, first, that a substantial part of the variance is on the patient level, while only a minor part of the variance is at the hospital and department levels. Second, patient satisfaction outcomes in the Netherlands can be corrected by the case mix variables age, health status and education.
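The multilevel decomposition described above can be illustrated with a one-way intraclass correlation coefficient (ICC): the share of total variance in satisfaction scores attributable to the grouping (hospital or department) level. The study itself fitted full multilevel models, so the following pure-Python one-way ICC is only an illustrative sketch of the idea, not the authors' analysis:

```python
def icc_oneway(groups):
    """One-way ICC(1): proportion of total variance attributable to the group level.

    `groups` is a list of lists of scores, one inner list per group
    (e.g. satisfaction scores per hospital).
    """
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n

    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)

    # Effective average group size (accounts for unequal group sizes)
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)

    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between / (var_between + ms_within)
```

An ICC near 0, as the study found (0% to 4%), means almost all variance sits at the patient level rather than between hospitals or departments.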

            Use of Sentiment Analysis for Capturing Patient Experience From Free-Text Comments Posted Online

Background

There are large amounts of unstructured, free-text information about quality of health care available on the Internet in blogs, social networks, and on physician rating websites that are not captured in a systematic way. New analytical techniques, such as sentiment analysis, may allow us to understand and use this information more effectively to improve the quality of health care.

Objective

We attempted to use machine learning to understand patients’ unstructured comments about their care. We used sentiment analysis techniques to categorize online free-text comments by patients as either positive or negative descriptions of their health care. We tried to automatically predict whether a patient would recommend a hospital, whether the hospital was clean, and whether they were treated with dignity from their free-text description, compared to the patient’s own quantitative rating of their care.

Methods

We applied machine learning techniques to all 6412 online comments about hospitals on the English National Health Service website in 2010 using Weka data-mining software. We also compared the results obtained from sentiment analysis with the paper-based national inpatient survey results at the hospital level using Spearman rank correlation for all 161 acute adult hospital trusts in England.

Results

There was 81%, 84%, and 89% agreement between quantitative ratings of care and those derived from free-text comments using sentiment analysis for cleanliness, being treated with dignity, and overall recommendation of hospital, respectively (kappa scores: .40–.74, P<.001 for all). We observed mild to moderate associations between our machine learning predictions and responses to the large patient survey for the three categories examined (Spearman rho 0.37-0.51, P<.001 for all).

Conclusions

The prediction accuracy that we have achieved using this machine learning process suggests that we are able to predict, from free-text, a reasonably accurate assessment of patients’ opinion about different performance aspects of a hospital and that these machine learning predictions are associated with results of more conventional surveys.
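The agreement statistics this study reports (percent agreement, Cohen’s kappa, Spearman rank correlation) can be computed for any pair of label sequences. A minimal pure-Python sketch of both measures (not the study’s Weka pipeline, which is not reproduced here):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if the two raters labeled independently
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (observed - expected) / (1 - expected)

def spearman_rho(x, y):
    """Spearman rank correlation (average ranks for ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Percent agreement alone overstates performance when one class dominates; kappa corrects for chance agreement, which is why the study reports both.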

              Not another questionnaire! Maximizing the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs.

              Non-response is an important potential source of bias in survey research. With evidence of falling response rates from GPs, it is of increasing importance when undertaking postal questionnaire surveys of GPs to seek to maximize response rates and evaluate the potential for non-response bias. Our aim was to investigate the effectiveness of follow-up procedures when undertaking a postal questionnaire study of GPs, the use of publicly available data in assessing non-response bias and the development of regression models predicting responder behaviour. A postal questionnaire study was carried out of a random sample of 600 GPs in Wales concerning their training and knowledge in palliative care. A cumulative response rate graph permitted optimal timing of follow-up mailings: a final response rate of 67.6% was achieved. Differences were found between responders and non-responders on several parameters and between sample and population on some parameters: some of these may bias the sample data. Logistic regression analysis indicated medical school of qualification and current membership of the Royal College of General Practitioners to be the only significant predictors of responders. Late responders were significantly more likely to have been qualified for longer. This study has several implications for future postal questionnaire studies of GPs. The optimal timing of reminders may be judged from plotting the cumulative response rate: it is worth sending at least three reminders. There are few parameters that significantly predict GPs who are unlikely to respond; more of these may be included in the sample, or they may be targeted for special attention. Publicly available data may be used readily in the analysis of non-response bias and generalizability.

                Author and article information

                Journal
                J Med Internet Res
                J. Med. Internet Res
                JMIR
                Journal of Medical Internet Research
                JMIR Publications Inc. (Toronto, Canada )
ISSN (print): 1439-4456
ISSN (electronic): 1438-8871
Collection: December 2015
Published online: 08 December 2015
Volume: 17
Issue: 12
                Affiliations
1 WMG, University of Warwick, Coventry, United Kingdom
                Author notes
Corresponding Author: Salma Patel, salma.patel@warwick.ac.uk
                Article
                v17i12e276
DOI: 10.2196/jmir.4989
PMCID: PMC4704896
PMID: 26681299
                ©Salma Patel, Rebecca Cain, Kevin Neailey, Lucy Hooberman. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 08.12.2015.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                Categories
                Original Paper
