      Sicily statement on classification and development of evidence-based practice learning assessment tools




          Teaching the steps of evidence-based practice (EBP) has become standard curriculum for health professions at both student and professional levels. Determining the best methods for evaluating EBP learning is hampered by a dearth of valid and practical assessment tools and by the absence of guidelines for classifying the purpose of those that exist. Conceived and developed by delegates of the Fifth International Conference of Evidence-Based Health Care Teachers and Developers, the aim of this statement is to provide guidance for purposeful classification and development of tools to assess EBP learning.


          This paper identifies key principles for designing EBP learning assessment tools, recommends a common taxonomy for new and existing tools, and presents the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework for classifying such tools. Recommendations are provided for developers of EBP learning assessments and priorities are suggested for the types of assessments that are needed. Examples place existing EBP assessments into the CREATE framework to demonstrate how a common taxonomy might facilitate purposeful development and use of EBP learning assessment tools.


          The widespread adoption of EBP into professional education requires valid and reliable measures of learning, yet few tools with established psychometric properties exist. This international consensus statement strives to provide direction for developers of new EBP learning assessment tools and a framework for classifying the purposes of such tools.


          Most cited references (24)


          Effectiveness and efficiency of guideline dissemination and implementation strategies.

          To undertake a systematic review of the effectiveness and costs of different guideline development, dissemination and implementation strategies. To estimate the resource implications of these strategies. To develop a framework for deciding when it is efficient to develop and introduce clinical guidelines.

          MEDLINE, Healthstar, Cochrane Controlled Trial Register, EMBASE, SIGLE and the specialised register of the Cochrane Effective Practice and Organisation of Care (EPOC) group.

          Single estimates of dichotomous process variables were derived for each study comparison based upon the primary end-point or the median measure across several reported end-points. Separate analyses were undertaken for comparisons of different types of intervention. The study also explored whether the effects of multifaceted interventions increased with the number of intervention components. Studies reporting economic data were also critically appraised. A survey to estimate the feasibility and likely resource requirements of guideline dissemination and implementation strategies in UK settings was carried out with key informants from primary and secondary care.

          In total, 235 studies reporting 309 comparisons met the inclusion criteria; of these 73% of comparisons evaluated multifaceted interventions, although the maximum number of replications of a specific multifaceted intervention was 11 comparisons. Overall, the majority of comparisons reporting dichotomous process data observed improvements in care; however, there was considerable variation in the observed effects both within and across interventions. Commonly evaluated single interventions were reminders, dissemination of educational materials, and audit and feedback. There were 23 comparisons of multifaceted interventions involving educational outreach. The majority of interventions observed modest to moderate improvements in care. No relationship was found between the number of component interventions and the effects of multifaceted interventions. Only 29.4% of comparisons reported any economic data. The majority of studies only reported costs of treatment; only 25 studies reported data on the costs of guideline development or guideline dissemination and implementation. The majority of studies used process measures for their primary end-point, despite the fact that only three guidelines were explicitly evidence based (and may not have been efficient). Respondents to the key informant survey rarely identified existing budgets to support guideline dissemination and implementation strategies. In general, the respondents thought that only dissemination of educational materials and short (lunchtime) educational meetings were generally feasible within current resources.

          There is an imperfect evidence base to support decisions about which guideline dissemination and implementation strategies are likely to be efficient under different circumstances. Decision makers need to use considerable judgement about how best to use the limited resources they have for clinical governance and related activities to maximise population benefits. They need to consider the potential clinical areas for clinical effectiveness activities, the likely benefits and costs required to introduce guidelines and the likely benefits and costs as a result of any changes in provider behaviour. Further research is required to: develop and validate a coherent theoretical framework of health professional and organisational behaviour and behaviour change to inform better the choice of interventions in research and service settings, and to estimate the efficiency of dissemination and implementation strategies in the presence of different barriers and effect modifiers.

            The theory of planned behavior

             I Ajzen (1991)

              Instruments for evaluating education in evidence-based practice: a systematic review.

              Evidence-based practice (EBP) is the integration of the best research evidence with patients' values and clinical circumstances in clinical decision making. Teaching of EBP should be evaluated and guided by evidence of its own effectiveness. To appraise, summarize, and describe currently available EBP teaching evaluation instruments.

              We searched the MEDLINE, EMBASE, CINAHL, HAPI, and ERIC databases; reference lists of retrieved articles; EBP Internet sites; and 8 education journals from 1980 through April 2006. For inclusion, studies had to report an instrument evaluating EBP, contain sufficient description to permit analysis, and present quantitative results of administering the instrument. Two raters independently abstracted information on the development, format, learner levels, evaluation domains, feasibility, reliability, and validity of the EBP evaluation instruments from each article. We defined 3 levels of instruments based on the type, extent, methods, and results of psychometric testing and suitability for different evaluation purposes.

              Of 347 articles identified, 115 were included, representing 104 unique instruments. The instruments were most commonly administered to medical students and postgraduate trainees and evaluated EBP skills. Among EBP skills, acquiring evidence and appraising evidence were most commonly evaluated, but newer instruments evaluated asking answerable questions and applying evidence to individual patients. Most behavior instruments measured the performance of EBP steps in practice, but newer instruments documented the performance of evidence-based clinical maneuvers or patient-level outcomes. At least 1 type of validity evidence was demonstrated for 53% of instruments, but 3 or more types of validity evidence were established for only 10%.

              High-quality instruments were identified for evaluating the EBP competence of individual trainees, determining the effectiveness of EBP curricula, and assessing EBP behaviors with objective outcome measures. Instruments with reasonable validity are available for evaluating some domains of EBP and may be targeted to different evaluation needs. Further development and testing are required to evaluate EBP attitudes, behaviors, and more recently articulated EBP skills.

                Author and article information

                BMC Med Educ
                BMC Medical Education
                BioMed Central
                5 October 2011; 11: 78
                [1] Division of Biokinesiology and Physical Therapy, University of Southern California, Los Angeles, CA, USA
                [2] Doctoral Programs in Physical Therapy, Dept. of Rehabilitation and Movement Sciences, University of Medicine and Dentistry of New Jersey, Newark, NJ, USA
                [3] School of Health & Related Research, University of Sheffield, UK
                [4] National Prescribing Centre, National Institute for Health and Clinical Excellence, Liverpool, UK
                [5] Department of Epidemiology & Preventive Medicine, School of Public Health & Preventive Medicine, Monash University, VIC, Australia
                [6] Center for Evidence-Based Dentistry, The Forsyth Institute, Boston, MA, USA
                [7] Palacky University Medical Library, Olomouc, Czech Republic
                [8] Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, the Netherlands
                Copyright ©2011 Tilson et al; licensee BioMed Central Ltd.

                This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



