Open Access

      International Multispecialty Consensus on How to Evaluate Ultrasound Competence: A Delphi Consensus Survey





Objective: To achieve international consensus across multiple specialties on a generic ultrasound rating scale using a Delphi technique.


Methods: Ultrasound experts from Obstetrics-Gynaecology, Surgery, Urology, Radiology, Rheumatology, Emergency Medicine, and Gastroenterology, representing North America, Australia, and Europe, were identified. A multi-round survey was conducted to obtain consensus among these experts. Of 60 invited experts, 44 agreed to participate in the first Delphi round, 41 remained in the second round, and 37 completed the third round. Seven key elements of the ultrasound examination were identified from the existing literature and recommendations from international ultrasound societies. In the first round, all experts rated the importance of these seven elements on a five-point Likert scale and suggested potential new elements for the assessment of ultrasound skills. In the second round, the experts re-rated all elements, and a third round was conducted to allow final comments. Agreement to include an element in the final rating scale was pre-defined as more than 80% of the experts rating that element four or five on importance to the ultrasound examination.
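The pre-defined consensus rule (more than 80% of experts rating an element four or five on a five-point Likert scale) can be sketched in a few lines. The element names and ratings below are invented for illustration only; they are not taken from the study data.

```python
def reaches_consensus(ratings, threshold=0.80):
    """Return True if the share of ratings of 4 or 5 exceeds the threshold.

    The rule mirrors the pre-defined criterion described above: strictly
    more than 80% of experts must rate the element 4 or 5.
    """
    high = sum(1 for r in ratings if r >= 4)
    return high / len(ratings) > threshold

# Hypothetical second-round ratings for two candidate elements
round_two_ratings = {
    "Image optimization": [5, 4, 4, 5, 5, 4, 5, 4, 4, 5],   # all rate 4-5
    "Patient positioning": [3, 4, 2, 5, 3, 3, 4, 2, 3, 4],  # 40% rate 4-5
}

for element, ratings in round_two_ratings.items():
    print(element, reaches_consensus(ratings))
```

Note that the criterion is "more than 80%", so an element rated 4 or 5 by exactly 80% of experts would not be retained under this sketch.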


Results: Two additional elements were suggested by more than 10% of the experts in the first Delphi round. Consensus was obtained to include these two new elements along with five of the original elements in the final assessment instrument: 1) indication for the examination, 2) applied knowledge of ultrasound equipment, 3) image optimization, 4) systematic examination, 5) interpretation of images, 6) documentation of the examination, and 7) medical decision making.


Conclusions: International multispecialty consensus was achieved on the content of a generic ultrasound rating scale. This is the first step to ensure valid assessment of clinicians in different medical specialties using ultrasound.


Most cited references (19)


          Consulting the oracle: ten lessons from using the Delphi technique in nursing research.

The aim of this paper was to provide insight into the Delphi technique by outlining our personal experiences during its use over a 10-year period in a variety of applications. As a means of achieving consensus on an issue, the Delphi research method has become widely used in healthcare research generally and nursing research in particular. The literature on this technique is expanding, mainly addressing what it is and how it should be used. However, there is still much confusion and uncertainty surrounding it, particularly about issues such as modifications, consensus, anonymity, definition of experts, how 'experts' are selected and how non-respondents are pursued. The issues that arise when planning and carrying out a Delphi study include the definition of consensus; the issue of anonymity vs. quasi-anonymity for participants; how to estimate the time needed to collect the data, analyse each 'round', feed back results to participants, and gain their responses to this feedback; how to define and select the 'experts' who will be asked to participate; how to enhance response rates; and how many 'rounds' to conduct. Many challenges and questions are raised when using the Delphi technique, but there is no doubt that it is an important method for achieving consensus on issues where none previously existed. Researchers need to adapt the method to suit their particular study.

            OSCE checklists do not capture increasing levels of expertise.

            To evaluate the effectiveness of binary content checklists in measuring increasing levels of clinical competence. Fourteen clinical clerks, 14 family practice residents, and 14 family physicians participated in two 15-minute standardized patient interviews. An examiner rated each participant's performance using a binary content checklist and a global process rating. The participants provided a diagnosis two minutes into and at the end of the interview. On global scales, the experienced clinicians scored significantly better than did the residents and clerks, but on checklists, the experienced clinicians scored significantly worse than did the residents and clerks. Diagnostic accuracy increased for all groups between the two-minute and 15-minute marks without significant differences between the groups. These findings are consistent with the hypothesis that binary checklists may not be valid measures of increasing clinical competence.

              Analytic global OSCE ratings are sensitive to level of training.

There are several reasons for using global ratings in addition to checklists for scoring objective structured clinical examination (OSCE) stations. However, there has been little evidence collected regarding the validity of these scales. This study assessed the construct validity of an analytic global rating with 4 component subscales: empathy, coherence, verbal and non-verbal expression. A total of 19 Year 3 and 38 Year 4 clinical clerks were scored on content checklists and these global ratings during a 10-station OSCE. T-tests were used to assess differences between groups for overall checklist and global scores, and for each of the 4 subscales. The mean global rating was significantly higher for senior clerks (75.5% versus 71.3%, t(55) = 2.12, P < 0.05) and there were significant differences by level of training for the coherence (t(55) = 3.33, P < 0.01) and verbal communication (t(55) = 2.33, P < 0.05) subscales. Interstation reliability was 0.70 for the global rating and ranged from 0.58 to 0.65 for the subscales. Checklist reliability was 0.54. In this study, a summated analytic global rating demonstrated construct validity, as did 2 of the 4 scales measuring specific traits. In addition, the analytic global rating showed substantially higher internal consistency than did the checklists, a finding consistent with that seen in previous studies cited in the literature. Global ratings are an important element of OSCE measurement and can have good psychometric properties. However, OSCE researchers should clearly describe the type of global ratings they use. Further research is needed to define the most effective global rating scales.
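The between-group comparisons above rely on two-sample t statistics with 55 degrees of freedom (19 + 38 - 2). A minimal sketch of the pooled-variance two-sample t statistic, using hypothetical rating data rather than the study's, assuming equal group variances:

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance assumption).

    Degrees of freedom are len(a) + len(b) - 2, e.g. 19 + 38 - 2 = 55
    for the group sizes in the study described above.
    """
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical global-rating percentages for two training levels
senior = [78.0, 74.0, 76.0, 72.0, 77.0, 75.0]
junior = [70.0, 73.0, 69.0, 72.0, 71.0, 68.0]
print(round(pooled_t(senior, junior), 2))
```

Converting the statistic to a P value requires the t distribution (e.g. `scipy.stats.ttest_ind`, which computes both in one call); the sketch stops at the statistic itself.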

                Author and article information

PLoS ONE
Public Library of Science (San Francisco, USA)
28 February 2013; 8(2)
                [1 ]Department of Obstetrics, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
                [2 ]Centre for Clinical Education, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
                [3 ]Department of Anesthesia, University of Toronto and The Wilson Centre, University Health Network and UoT, Toronto, Canada
                [4 ]Department of Radiology, Herlev University Hospital, Herlev, Denmark
                [5 ]Juliane Marie Centre, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
                [6 ]Department of Obstetrics, Juliane Marie Centre, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
                University of South Australia, Australia
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                Conceived and designed the experiments: MGT TT JLS CR TL BO AT. Performed the experiments: MGT TT JLS CR TL BO AT. Analyzed the data: MGT TT JLS CR TL BO AT. Contributed reagents/materials/analysis tools: MGT TT JLS CR TL BO AT. Wrote the paper: MGT TT JLS CR TL BO AT.


                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Pages: 8
                This study was funded by the Juliane Marie Centre, Copenhagen University Hospital Rigshospitalet. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Research Article
                Non-Clinical Medicine
                Health Care Policy
                Health Education and Awareness
                Health Systems Strengthening
                Health Care Providers
                Academic Medicine
                Medical Education
                Obstetrics and Gynecology
                Diagnostic Radiology


