
      Post-interview Thank-you Communications Influence Both Applicant and Residency Program Rank Lists in Emergency Medicine


          Abstract

          Introduction

The National Resident Matching Program (NRMP) allows post-interview contact between residency applicants and residency programs. Thank-you communications represent one of the most common forms, but data on their value to applicants and program directors (PDs) are limited. The objective of this study was to assess the effect of thank-you communications on applicant and residency-program rank lists.

          Methods

Two anonymous, voluntary surveys were sent after the 2018 NRMP Match: one to applicants who were offered an interview at a single academic site in the 2017–2018 Match cycle, and one to emergency medicine (EM) PDs nationwide. The surveys were designed in conjunction with a nationally recognized survey center, then piloted and revised based on feedback from residents and faculty.

          Results

Of 196 residency applicants, 97 (49.5%) responded to the survey. Of these, 73 of 95 (76.8%) reported sending thank-you communications. Twenty-two of 73 (30.1%) stated that they sent thank-you communications to improve their spot on a program’s rank list, and 16 of 73 (21.9%) reported that they changed their rank list based on the responses they received to their thank-you communications. Of 163 PDs, 99 (60.7%) responded to the survey. Of the responding PDs, 22.6% reported that an applicant could be moved up their program’s rank list, and 10.8% reported that an applicant could be moved down, based on their thank-you communications (or lack thereof).

          Conclusion

The majority of EM applicants send thank-you communications. A significant minority of both applicants and PDs changed their rank lists due to post-interview thank-you communications.

Related collections

Most cited references (11)
          Factors that influence medical student selection of an emergency medicine residency program: implications for training programs.

          An understanding of student decision-making when selecting an emergency medicine (EM) training program is essential for program directors as they enter interview season. To build upon preexisting knowledge, a survey was created to identify and prioritize the factors influencing candidate decision-making of U.S. medical graduates.
            Factors important to anesthesiology residency applicants during recruitment.

            The United States residency application and interview process is expensive and time consuming. The purpose of this study is to better understand and improve the effectiveness and efficiency of the anesthesiology residency application and interview process.

              Proper Applications for Surveys as a Study Methodology

A survey instrument is any series of pre-defined questions intended to collect information from people, whether in person, by Internet, or any other media.1,2 Surveys are ubiquitous in health professions education research, used in approximately half of recently published articles,3 likely because of their low cost, relative speed, and the (often misguided) perception that they are simple to use. A survey instrument is merely the tool used for survey methodology, which encompasses the entire application of the survey instrument, such as selecting a sampling frame, maximizing the response rate, and accounting for nonresponse bias.4 The distinction is important because survey methodology is a research method like any other (e.g., observational cohorts and randomized controlled trials), and there are specific situations for which a particular method is indicated or contraindicated. The goal of this article is to provide guidance to researchers about when a survey is the appropriate methodology for a given research question. The importance of methodology choice is second only to choosing the primary research question itself. For comprehensive survey methodology reviews, readers are encouraged to consult dedicated references.1,2,5 The rest of this article addresses the fundamental question: When should I use a new survey?

WHEN TO USE A NEW SURVEY (INDICATIONS)

The best use of survey methodology is to investigate human phenomena, such as emotions and opinions.2 These are data that are neither directly observable nor available in documents. Moreover, a new survey instrument is only indicated when a prior instrument does not exist or is determined empirically to have insufficient validity and reliability evidence for the sampling frame of interest.1,2 When properly constructed, a survey—regardless of topic and whether exploring an emotion or opinion—has the equivalent rigor of a psychometric instrument.5,6 A psychometric instrument can even be used as a survey to explore emotion. For example, the Maslach Burnout Inventory (MBI) was created to address the (at the time) novel construct of burnout.7 As a construct, burnout is a cohesive idea, explained by supportive ideas (subscales that represent domains), but not fully explained by observable data. Burnout is a human quality and so must be addressed by a survey. Similarly, an opinion is a human quality and must be addressed by a survey, such as a preference for a product or teaching method. It is worth stressing that opinion surveys require the same rigor as psychometric instruments.

WHEN NOT TO USE A SURVEY (CONTRAINDICATIONS)

(Relative) Contraindication #1: Observable or Recorded Data Already Exist

Using a survey when observable or recorded data exist is a relative contraindication because—although direct observation or a primary source is the most accurate method—sometimes a survey is the only practical way to obtain the data. A survey, however, should be the last resort because it is subject to interpretation and recall bias. For example, daily activity (e.g., the amount of time spent with patients versus a computer) is more accurately recorded by a third-party observer than by self-reporting on surveys.8 If direct measurement is not a reasonable possibility, then frequent journal entries, which can be considered a repeated-measures survey method, are the next best option. Circulation has a good decision tree for researchers studying physical activity, and the principles can be applied to any difficult-to-measure activity.9

Another example of observable data is how much students learned. Actual learning gains (i.e., learning something new) are not equivalent to learners’ opinions of their learning gains.10–12 Learners’ opinions are a real entity and sometimes important for a study question. However, researchers should not substitute a survey of learners’ opinions for tangibly measurable learning gains (e.g., test score improvements or patient outcomes) if the study question is about actual learning gains.

Survey methodology can also be used when it is unreasonable to obtain the primary records themselves. For example, a researcher may ask an office of medical education to complete a survey with data such as the total number of residents, how their elective time is used, and how many residents required remediation. Although obtaining the primary documents for each of these questions would be best, it would likely be impractical to obtain the information from all of the different specialties. Thus, the graduate medical education office can complete the survey instrument for the researcher. However, it is important that the survey is completed using the records, not an individual’s recollection. It bears repeating that a survey should be the last resort for observable and recorded data; one of the most common misuses of survey methodology is to obtain such data.

Alternative Approach: Use Direct Observation or Records When Possible

Researchers should carefully evaluate the most accurate way to measure the variable(s) of interest. Offices of medical education or the Association of American Medical Colleges, for example, can be primary sources for population data. Using the most accurate source for different questions within a study may require combining data from an external source and data from a survey.

Example: Straus CM et al. Medical student radiology education: summary and recommendations from a national survey of medical school and radiology department leadership. J Am Coll Radiol. 2014;11(6):606–610.13

Note how Straus and colleagues surveyed radiology department chairs for opinions but requested numerical information (e.g., the number of students matching in radiology each year) from records held by the offices of medical education.13

Contraindication #2: A Pre-Existing Survey Exists

Often a similar—if not exactly the same—concept has been surveyed by other researchers. Although the primary research question may warrant a survey methodology, a suitable existing survey is a contraindication to creating and applying a new one.* We as researchers limit greater concept understanding because we cannot combine findings, such as in a meta-analysis,14 if we do not use pre-existing surveys when they are available. The Figure contains a list of resources for finding pre-existing survey instruments.

Alternative Approach

An early search for pre-existing surveys is essential if a researcher plans to use survey methodology. Use the exact same survey—word for word—if possible, and investigate reliability and validity evidence in the new cohort of interest even if the exact same survey is used.2,15

Example: Galan F et al. Burnout risk in medical students in Spain using the Maslach Burnout Inventory-Student Survey. Int Arch Occup Environ Health. 2011;84:453–459.16

Galan and colleagues defend their need to alter individual words for what they believed to be a unique cohort and successfully re-demonstrated reliability and validity evidence before using the survey.

Contraindication #3: The Concept Is Ill-Defined

Survey methods range from a researcher personally asking respondents each question—with great ability to further explore respondent answers—to third-party questionnaires—without any ability to explore or clarify respondent answers. It is important to recognize the differences in data obtained from each survey format and to apply the methodology appropriately. An ill-defined concept is a contraindication to using a survey, and qualitative grounded-theory interviews or ethnography should be strongly considered instead. This especially applies to designing potential responses for survey questions.2 Researchers who use a questionnaire for a poorly defined concept run the risk of omitting options that respondents would have selected had they been available, because a questionnaire limits response options.† The results become artificially narrow and do not adequately represent the sampling frame.

Alternative Approach

A questionnaire limits response options and should only be used when a concept is understood well enough to supply a full range of response options. Researchers should start with qualitative-method interviews or focus groups17 to explore a wide range of concept interpretations and opinions.2

Example: McLeod et al. Using focus groups to design a valid questionnaire. Acad Med. 2000;75(6):671.18

The authors in this example set out to explore a concept that had previously been overlooked. Since no prior data existed, they started with focus groups to first define the construct, then built a questionnaire to explore the construct in the cohort of interest.18

Contraindication #4: The Sampling Frame Is Not Qualified

The accuracy of a survey is only as strong as the accuracy that each respondent can provide. Although a survey method may be indicated, it may be contraindicated in a certain sampling frame. For example, the meaning of learner evaluations of faculty has long been questioned. Are learners qualified to judge instructors? Are instructor evaluations by learners meaningful?19,20 Researchers who assert that learners are not qualified to evaluate instructors would also assert that a class survey about an instructor’s abilities is inappropriate (although this practice is ubiquitous). Another example of an unqualified sampling frame is when speculative questions are asked, such as, “What do your peers think?” Although the context differs, the underlying principle remains the same: respondents are unqualified to present data on what others may think.

Alternative Approach

Consider the qualifications of a given sampling frame for the particular question of interest. If the primary research question requires the respondents to have expertise, consider a sampling frame with that specific expertise or use a different study methodology, such as observation or testing.

Example: Grover PL. Evaluation of instructional skills of medical teachers: the participant observer in the medical school. Med Educ. 1980;14:12–15.21

Grover introduces the idea of a trained third-party observer to evaluate medical student instructors. Depending on the primary research question (opinion of lecturing abilities versus learning outcomes), student examinations may be more accurate as well.

CONCLUSION

Survey methodology is an important medical education research tool but should mainly be used to characterize unobservable human phenomena such as emotions and opinions. Researchers should use methods other than surveys to gather observable data whenever possible. Moreover, many research questions are well suited to mixed methods that include a survey in addition to other data collection methods.

                Author and article information

Journal
West J Emerg Med (WestJEM)
Western Journal of Emergency Medicine
Department of Emergency Medicine, University of California, Irvine School of Medicine
ISSN (print): 1936-900X
ISSN (electronic): 1936-9018
Published: January 2020 (online 09 December 2019)
Volume: 21
Issue: 1
Pages: 96-101
                Affiliations
University of Wisconsin School of Medicine and Public Health, BerbeeWalsh Department of Emergency Medicine, Madison, Wisconsin
                Author notes
Address for Correspondence: Corlin M. Jewell, MD, University of Wisconsin School of Medicine and Public Health, BerbeeWalsh Department of Emergency Medicine, 800 University Bay Dr., Madison, WI 53705. Email: cjewell@uwhealth.org.
Article
wjem-21-96
DOI: 10.5811/westjem.2019.10.44031
PMCID: PMC6948692
PMID: 31913827
                Copyright: © 2020 Jewell et al.

                This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

History
Received: 09 June 2019
Revision received: 11 September 2019
Accepted: 07 October 2019
                Categories
                Educational Advances
                Original Research

Emergency medicine & Trauma
