      Is Open Access

      Frameworks for supporting patient and public involvement in research: Systematic review and co‐design pilot

      Health Expectations
      Wiley


          Abstract

Background: Numerous frameworks for supporting, evaluating and reporting patient and public involvement in research exist. The literature is diverse and theoretically heterogeneous.

Objectives: To identify and synthesize published frameworks, consider whether and how these have been used, and apply design principles to improve usability.

Search strategy: Keyword search of six databases; hand search of eight journals; ancestry and snowball search; requests to experts.

Inclusion criteria: Published, systematic approaches (frameworks) designed to support, evaluate or report on patient or public involvement in health‐related research.

Data extraction and synthesis: Data were extracted on provenance; collaborators and sponsors; theoretical basis; lay input; intended user(s) and use(s); topics covered; examples of use; critiques; and updates. We used the Canadian Centre for Excellence on Partnerships with Patients and Public (CEPPP) evaluation tool and hermeneutic methodology to grade and synthesize the frameworks. In five co‐design workshops, we tested evidence‐based resources based on the review findings.

Results: Our final data set consisted of 65 frameworks, most of which scored highly on the CEPPP tool. They had different provenances, intended purposes, strengths and limitations. We grouped them into five categories: power‐focused; priority‐setting; study‐focused; report‐focused; and partnership‐focused. Frameworks were used mainly by the groups who developed them. The empirical component of our study generated a structured format and evidence‐based facilitator notes for a “build your own framework” co‐design workshop.

Conclusion: The plethora of frameworks combined with evidence of limited transferability suggests that a single, off‐the‐shelf framework may be less useful than a menu of evidence‐based resources which stakeholders can use to co‐design their own frameworks.

          Related collections

Most cited references (66)

          Is Open Access

          GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research

Background: While the patient and public involvement (PPI) evidence base has expanded over the past decade, the quality of reporting within papers is often inconsistent, limiting our understanding of how it works, in what context, for whom, and why.

Objective: To develop international consensus on the key items to report to enhance the quality, transparency, and consistency of the PPI evidence base, and to collaboratively involve patients as research partners at all stages in the development of GRIPP2.

Methods: The EQUATOR method for developing reporting guidelines was used. The original GRIPP (Guidance for Reporting Involvement of Patients and the Public) checklist was revised, based on updated systematic review evidence. A three-round Delphi survey was used to develop consensus on items to be included in the guideline. A subsequent face-to-face meeting produced agreement on items not reaching consensus during the Delphi process.

Results: One hundred forty-three participants agreed to participate in round one, with an 86% (123/143) response for round two and a 78% (112/143) response for round three. The Delphi survey identified the need for long form (LF) and short form (SF) versions. GRIPP2-LF includes 34 items on aims, definitions, concepts and theory, methods, stages and nature of involvement, context, capture or measurement of impact, outcomes, economic assessment, and reflections, and is suitable for studies where the main focus is PPI. GRIPP2-SF includes five items on aims, methods, results, outcomes, and critical perspective, and is suitable for studies where PPI is a secondary focus.

Conclusions: GRIPP2-LF and GRIPP2-SF represent the first international, evidence-based, consensus-informed guidance for reporting patient and public involvement in research. Both versions of GRIPP2 aim to improve the quality, transparency, and consistency of the international PPI evidence base, to ensure PPI practice is based on the best evidence. In order to encourage its wide dissemination this article is freely accessible on The BMJ and Research Involvement and Engagement journal websites. Electronic supplementary material: The online version of this article (doi:10.1186/s40900-017-0062-2) contains supplementary material, which is available to authorized users.
            Is Open Access

            Time to challenge the spurious hierarchy of systematic over narrative reviews?

Key points
• Systematic reviews are generally placed above narrative reviews in an assumed hierarchy of secondary research evidence.
• We argue that systematic reviews and narrative reviews serve different purposes and should be viewed as complementary.
• Conventional systematic reviews address narrowly focused questions; their key contribution is summarising data.
• Narrative reviews provide interpretation and critique; their key contribution is deepening understanding.

1 BACKGROUND

Cynthia Mulrow's important paper calling for literature reviews to be undertaken more systematically (and hence be more informative and reliable) is now 30 years old.1 A recent paper in BMC Medical Research Methodology compared the proportion of reviews that were systematic (as opposed to narrative) in five leading biomedical journals.2 The authors found significant diversity: from New England Journal of Medicine (0%) and Lancet (11%) to Annals of Internal Medicine (72%). Systematic reviews were assumed by the authors to be superior because they are (i) more likely to have a focused research question, (ii) more methodologically explicit and (iii) less likely to be biased than narrative reviews. This stance reflects the raison d’être of the Cochrane Collaboration, whose use of explicit and auditable quality criteria for undertaking systematic reviews has inspired a weighty methodological handbook,3 numerous tools and checklists4, 5 and structured reporting criteria.6 There is strong emphasis on methodological reproducibility, with the implication that a different review team, using the same search criteria, quality checklists and synthesis tools, should obtain the same result.3 Yet leading medical journals regularly publish clinical topic reviews that may lack a focused research question, methods section or statement on how studies were selected and analysed (see for example7, 8, 9).
These narrative reviews typically draw on expert opinion by deliberately recruiting leading names in the field (eg “The aim of this Commission is to provide the strongest evidence base through involvement of experts from a wide cross‐section of disciplines…”—page 1953, emphasis added8). Reviews crafted through the experience and judgement of experts are often viewed as untrustworthy (“eminence‐based” is a pejorative term). Yet the classical definition of evidence‐based medicine (EBM) as “the conscientious, explicit, and judicious use of current best evidence …” (page 71, emphasis added)10 suggests a key role for judgement in the selection and interpretation of evidence. In short, there appears to be a growing divergence between the assumed “hierarchy” of evidence in secondary research, which defines systematic reviews as superior,11 and what some leading academic journals view as a state‐of‐the‐art (that is, expert‐led narrative) review. We believe this is partly because the systematic review format has been erroneously defined as a universal gold standard and partly because the term “narrative review” is frequently misunderstood, misapplied and unfairly dismissed. Systematic reviews in the Cochrane sense use a highly technical approach to identification, appraisal and synthesis of evidence and typically (although not invariably) privilege randomised controlled trials or previous systematic reviews over other forms of evidence.11 This may be entirely appropriate—especially when the primary purpose is to answer a very specific question about how to treat a particular disease in a particular target group.
But the doctor in the clinic, the nurse on the ward or the social worker in the community will encounter patients with a wide diversity of health states, cultural backgrounds, illnesses, sufferings and resources.12 And those who gather around the policymaking table will find multiple calls on their attention—including burden of need, local availability of different treatments, personal testimony, strength of public opinion and budgetary realities. To produce a meaningful synthesis of research evidence relevant to such complex situations, the reviewer must (i) incorporate a broad range of knowledge sources and strategies for knowing and (ii) undertake multi‐level interpretation using creativity and judgement.12, 13 We align with previous authors, who, drawing on Wittgenstein, distinguish between puzzles or problems that require data (for which a conventional systematic review, with meta‐analysis where appropriate, may be the preferred methodology) and those that require clarification and insight (for which a more interpretive and discursive synthesis of existing literature is needed).14, 15 Below, we explore the strengths, limitations and conceptual confusions of systematic and narrative reviews. We consider three questions: what makes a review systematic; what is a narrative review; and whether these different kinds of review should be viewed as competing or complementary.

2 WHAT MAKES A REVIEW SYSTEMATIC?

The defining characteristic of a systematic review in the Cochrane sense is the use of a predetermined structured method to search, screen, select, appraise and summarise study findings to answer a narrowly focused research question.3, 16 Using an exhaustive search methodology, the reviewer extracts all possibly relevant primary studies, and then limits the dataset using explicit inclusion and exclusion criteria. The review focus is highly circumscribed and quality criteria are tightly enforced.
Typically, a body of hundreds or thousands of potential studies identified in the initial search is whittled down to a mere handful before the reviewer even begins to consider what they collectively mean. The term “systematic” is thus by no means synonymous with “high‐quality”. Rather, it can be viewed as a set of methodologies characterised by tight focus, exhaustive search, high rejection‐to‐inclusion ratio and an emphasis on technical rather than interpretive synthesis methods. The conflation of the quality of a review with the assiduousness of such tasks as searching, applying inclusion and exclusion criteria, creating tables of extracted data and mathematically summing effect sizes (rather than, for example, with the level of critical analysis of the papers’ unstated assumptions and discussion sections) has, we believe, led to a proliferation of systematic reviews that represent aggregations of findings within the narrow body of work that has met the authors’ eligibility criteria.17, 18, 19 Such studies may sometimes add value, especially when additional meta‐analysis confirms whether a clinically significant effect is or is not also statistically significant.20 But sometimes, the term “systematic review” allows a data aggregation to claim a more privileged position within the knowledge hierarchy than it actually deserves.11 We acknowledge that the science of systematic review within the Cochrane and Campbell Collaborations is evolving to embrace a wider range of primary studies and methodologies, with recommended procedures for sampling, assessment and synthesis of evidence compliant with the question asked and the context explored. 
The adjective “systematic” is thus coming to acquire a broader meaning in terms of the transparency and appropriateness of methods, rather than signifying strict adherence to a particular pre‐defined tool or checklist or a privileging of randomised trials (see for example methodological work by Lewin et al,21 Petticrew et al22 and Pluye et al23, 24, 25). All these approaches, however, remain focused on answering a relatively narrow question that is predefined at the outset and with a primary focus on the extraction, tabulation and summation of empirical data.

3 WHAT IS A NARRATIVE REVIEW?

A narrative review is a scholarly summary along with interpretation and critique.26 It can be conducted using a number of distinctive methodologies. While principles and procedures may diverge from the classic methodology of systematic review, they are not unsystematic (in the sense of being ad hoc or careless), and may certainly be conducted and presented in a systematic way, depending on purpose, method and context. Different kinds of reviews offer different kinds of truth: the conventional systematic review with meta‐analysis deals in probabilistic (typically, Bayesian) truth; it is concerned mainly with producing generalisable “facts” to aid prediction. The narrative review, in contrast, deals in plausible truth. Its goal is an authoritative argument, based on informed wisdom that is convincing to an audience of fellow experts. To that end, the author of a narrative review must authentically represent in the written product both the underpinning evidence (including but not limited to primary research) and how this evidence has been drawn upon and drawn together to inform the review's conclusions.
A hermeneutic review takes as its reference point the notion of verstehen, or the process of creating an interpretive understanding.14 It capitalises on the continual deepening of insight that can be obtained by critical reflection on particular elements of a dataset—in this case, individual primary studies—in the context of a wider body of work. It may or may not define its reference body of studies using systematic search methods and inclusion/exclusion criteria, but its primary focus is on the essential tasks of induction and interpretation in relation to the defined sample for the purpose of advancing theoretical understanding.17 A realist review considers the “generative causality,” in which particular mechanisms (for example, peer influence) produce particular outcomes (for example, smoking cessation) in some circumstances (for example, when societal disapproval of smoking is high) but not others (for example, in cultures where smoking is still widely viewed as a mark of sophistication).27 A meta‐narrative review maps the storyline of a research tradition over time.28 Shifting the focus away from comparing findings of studies published at different times, it orients critical reflection to discern how ideas have waxed and waned within different scholarly communities at different points in the development of thinking (see an early example of how the term “diffusion of innovations” was differently defined and explored in different academic disciplines29). Each of these forms of narrative review (along with other specialist approaches to combining primary studies in qualitative research30, 31) reflects an explicit lens that is expected to shape the understandings that will arise from the review process, through analysis and synthesis processes that may be highly systematic. 
Narrative reviews also include a number of more generic styles such as integrative32, 33 and critical,34 the former being the approach generally taken by narrative reviews in clinical journals. All these approaches play an important role in expanding our understanding not only of the topic in question but also of the reasons why it has been studied in a particular way, the interpretations that have been variously made with respect to what we know about it, and the nature of the knowledge base that informs or might inform clinical practice. Because hermeneutic, realist and meta‐narrative reviews have explicit methodologies and accepted standards and criteria for judging their quality,14, 27, 28 a minority of scholars include such approaches within the (broadly defined) category of systematic reviews. However, we have had experience of journal editors rejecting reviews based on these techniques on the grounds that they were “not systematic”. Also of note is the emergence of “how‐to” guides for narrative reviews, which (misleadingly in our view) exhort the reviewer to focus carefully on such tasks as starting with an explicit search strategy and defining strict inclusion and exclusion criteria for primary studies.35, 36 In other words, the boundaries between systematic and narrative reviews are both fuzzy and contested.

4 SYSTEMATIC OR NARRATIVE OR SYSTEMATIC AND NARRATIVE?

The conflation of “systematic” with superior quality (and “narrative” with inferior quality) has played a major role in the muddying of methodological waters in secondary research. This implicit evidence hierarchy (or pyramid) elevates the mechanistic processes of exhaustive search, wide exclusion and mathematical averaging over the thoughtful, in‐depth, critically reflective processes of engagement with ideas.
The emphasis on thinking and interpretation in narrative review has prompted some authors to use the term “evidence‐informed” rather than “evidence‐based”15, 37: the narrative review is both less and more than a methods‐driven exercise in extracting and summating data. Training in systematic reviews has produced a generation of scholars who are skilled in the technical tasks of searching, sorting, checking against inclusion criteria, tabulating extracted data and generating “grand means” and confidence intervals.3 These skills are important, but as the recent article by Faggion et al illustrates, critics may incorrectly assume that they override and make redundant the generation of understanding. To the extent that the term “systematic review” privileges only that which is common in the findings amongst a rigidly defined subset of the available body of work, we risk losing sight of the marvellous diversities and variations that ought to intrigue us. In excluding those aspects of scholarship, systematic reviews hold the potential to significantly skew our knowledge landscape. While there are occasions when systematic review is the ideal approach to answering specific types of question, the absence of thoughtful, interpretive critical reflection can render such products hollow, misleading and potentially harmful. The argument that systematic reviews are less biased than narrative reviews raises the question of what we mean by bias.
Bias is an epidemiological construct, which refers to something that distorts the objective comparisons between groups.20 It presupposes the dispassionate, instrumental and universal “view from nowhere” that has long defined the scientific method.38 When we speak of interpretation, we refer to an analysis that is necessarily perspectival, with the interpreter transparently positioned in order that the reader can understand why this particular perspective, selection process and interpretive methodology was selected in relation to the question at hand.14, 17, 29, 37, 39 Systematic and transparent reflection upon and sharing of such aspects of the research process adds to the scientific quality of interpretive research. Whether “systematic” review techniques can eliminate bias in secondary research is, in any case, open to question. The privileging of freedom from bias over relevance of question and findings wrongly assumes that how the topic is framed, and which questions should be explored, is somehow self‐evident. A recent review of systematic reviews generated by a national knowledge centre to inform policymaking in Norway showed that in most cases, the evidence base addressed only a fraction of relevant policy questions.40 More generally, there is growing evidence that the science of systematic reviews is becoming increasingly distorted by commercial and other conflicts of interest, leading to reviews which—often despite ticking the boxes on various quality checklists—are unnecessary, misleading or partisan.19, 41 The holy grail of a comprehensive database of unambiguous and unbiased evidence summaries (in pursuit of which the Cochrane Collaboration was founded42) continues to recede into the future. A legitimate criticism of narrative reviews is that they may “cherry pick” evidence to bolster a particular perspective.
But this must be weighed against the counter‐argument that the narrative reviewer selects evidence judiciously and purposively with an eye to what is relevant for key policy questions—including the question of which future research programmes should be funded. Whilst we accept that narrative reviews can be performed well or badly, we believe the undervaluing of such reviews is a major contributor to research waste. In the absence of an interpretive overview of a topic that clearly highlights the state of knowledge, ignorance and uncertainty (explaining how we know what we know, and where the intriguing unanswered questions lie), research funding will continue to be ploughed into questions that are of limited importance, and which have often already been answered.40 This principle was illustrated in a recent hermeneutic review of telehealth in heart failure by one of us.43 It identified 7 systematic reviews of systematic reviews, 32 systematic reviews (including 17 meta‐analyses) covering hundreds of primary studies, as well as six mega‐trials—almost all of which had concluded that more research (addressing the same narrow question with yet more randomised trials intended to establish an effect size for telehealth) was needed. The hermeneutic approach revealed numerous questions that had remained under‐explored as researchers had pursued this narrow question—including the complex and changing nature of the co‐morbidities and social determinants associated with heart failure, the varied experiences and priorities of patients with heart failure, the questionable nature of up‐titration as a guiding principle in heart failure management, and the numerous organisational, regulatory and policy‐level complexities associated with introducing telehealth programmes. 
The review concluded that: “The limited adoption of telehealth for heart failure has complex clinical, professional and institutional causes, which are unlikely to be elucidated by adding more randomised trials of technology‐on versus technology‐off to an already‐crowded literature. An alternative approach is proposed, based on naturalistic study designs, application of social and organisational theory, and co‐design of new service models based on socio‐technical principles” (page 156).

5 CONCLUSION

As many authors and journal editors are well aware, the narrative review is not a poor cousin of the systematic review but a different and potentially complementary form of scholarship.22, 44 Nevertheless, the simplistic hierarchy “systematic review good; narrative review less good” persists in some circles. The under‐acknowledged limitations of systematic reviews, along with missed opportunities for undertaking and using narrative reviews to extend understanding within a field, risk legitimising and perpetuating a narrow and unexciting research agenda and contributing to research waste. We call upon policymakers and clinicians (who seek to ensure that their decisions are evidence‐based, but who may have been seduced by a spurious hierarchy of secondary evidence) and on research commissioners (whose decisions will shape the generation of the future evidence base) to re‐evaluate the low status currently afforded to narrative reviews.

AUTHORS’ CONTRIBUTIONS

TG was invited to submit a paper on a topic of her choice to EJCI by the editor. She suggested this topic to ST and KM and wrote an initial outline for the paper. All authors then contributed iteratively and equally to the development of ideas and refinement of the paper.

              Evaluating patient and stakeholder engagement in research: moving from theory to practice.

              Despite the growing demand for research that engages stakeholders, there is limited evidence in the literature to demonstrate its value - or return on investment. This gap indicates a general lack of evaluation of engagement activities. To adequately inform engagement activities, we need to further investigate the dividends of engaged research, and how to evaluate these effects. This paper synthesizes the literature on hypothesized impacts of engagement, shares what has been evaluated and identifies steps needed to reduce the gap between engagement's promises and the underlying evidence supporting its practice. This assessment provides explicit guidance for better alignment of engagement's promised benefits with evaluation efforts and identifies specific areas for development of evaluative measures and better reporting processes.

                Author and article information

                Journal
                Health Expectations
                Health Expect
                Wiley
                1369-6513
                1369-7625
                April 22 2019
                Affiliations
                [1 ]Nuffield Department of Primary Care Health Sciences University of Oxford Oxford UK
                [2 ]North Central London Academic Foundation Programme London UK
                Article
                10.1111/hex.12888
                © 2019

                http://doi.wiley.com/10.1002/tdm_license_1.1

