      Embedding patient and public involvement: Managing tacit and explicit expectations




          Evidencing well‐planned and implemented patient and public involvement (PPI) in a research project is increasingly required in funding bids and dissemination activities. There is a tacit expectation that involving people with experience of the condition under study will improve the integrity and quality of the research. This expectation remains largely unproblematized and unchallenged.


          To critically evaluate the implementation of PPI activity, including co‐research, in a programme of research exploring ways to enhance the independence of people with dementia.


          Using critical cases, we make visible and explicate theoretical and moral challenges of PPI.


          Case 1 explores the challenges of undertaking multiple PPI roles in the same study, making explicit the different responsibilities of being a co‐applicant, a PPI advisory member and a co‐researcher. Case 2 explores tensions that arose when working with carer co‐researchers during data collection; here, the co‐researchers' wish to offer support and advice to research participants, a moral imperative, was in conflict with assumptions about the role of the objective interviewer. Case 3 defines and examines co‐research data coding and interpretation activities undertaken with people with dementia, reporting the theoretical outputs of the activity and questioning whether this was co‐researcher analysis or PPI validation.


          Patient and public involvement activity can empower individual PPI volunteers and improve the relevance and quality of research, but it is a complex activity that is socially constructed in flexible ways, with variable outcomes. It cannot be assumed to be a simple or universal panacea for increasing the relevance and accessibility of research to the public.


                Author and article information

                Role: Professor of Social Research Methodology
                Role: Senior Lecturer
                Role: Research Associate
                Role: Senior Research Associate
                Health Expect
                Health Expectations : An International Journal of Public Participation in Health Care and Health Policy
                John Wiley and Sons Inc. (Hoboken)
                Published online: 20 September 2019
                Issue date: December 2019
                Volume 22, Issue 6 (DOI: 10.1111/hex.v22.6)
                Pages: 1231-1239
                [ 1 ] School of Health Sciences Faculty of Medicine and Health Sciences University of East Anglia Norwich UK
                [ 2 ] Research Department of Clinical, Educational and Health Psychology University College London London UK
                [ 3 ] Research and Development Department North East London NHS Foundation Trust London UK
                [ 4 ] Division of Psychiatry University College London London UK
                Author notes
                [* ] Correspondence

                Linda Birt, Faculty of Medicine and Health Sciences, School of Health Sciences, University of East Anglia, Norwich Research Park, The Queen's Building room 1.26, Norwich NR4 7TJ, UK.

                Email: Linda.birt@uea.ac.uk

                © 2019 The Authors Health Expectations published by John Wiley & Sons Ltd

                This is an open access article under the terms of the Creative Commons Attribution 4.0 License ( http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

                Page count
                Figures: 1, Tables: 1, Pages: 9, Words: 7191
                Funded by: Economic and Social Research Council/National Institute for Health Research
                Award ID: ES/L001802/2
                Funded by: National Institute for Health Research (NIHR)
                Original Research Paper

