      Scaling Up: Adapting a Phage-Hunting Course to Increase Participation of First-Year Students in Research


          Abstract

Authentic research experiences are valuable components of effective undergraduate education. Research experiences during the first years of college are especially critical to increase persistence in science, technology, engineering, and mathematics fields. The Science Education Alliance Phage Hunters Advancing Genomics and Evolutionary Science (SEA-PHAGES) model provides a high-impact research experience to first-year students but is usually available to a limited number of students, and its implementation is costly in faculty time and laboratory space. To offer a research experience to all students taking introductory biology at Gonzaga University (n = 350/yr), we modified the traditional two-semester SEA-PHAGES course by streamlining the first-semester Phage Discovery lab and integrating the second SEA-PHAGES semester into other courses in the biology curriculum. Because most students in the introductory course are not biology majors, the Phage Discovery semester may be their only encounter with research. To discover whether students benefit from the first semester alone, we assessed the effects of the one-semester Phage Discovery course on students’ understanding of course content. Specifically, students showed improvement in knowledge of bacteriophages, lab math skills, and understanding of experimental design and interpretation. They also reported learning gains and benefits comparable with other course-based research experiences. Responses to open-ended questions suggest that students experienced this course as a true undergraduate research experience.


          Most cited references 31


          A Broadly Implementable Research Course in Phage Discovery and Genomics for First-Year Undergraduate Students

INTRODUCTION

In 2012, the President’s Council of Advisors on Science and Technology (PCAST) reported that there is a need for an additional one million science, technology, engineering, and mathematics (STEM) graduates in the United States over the next decade to meet U.S. economic demands (1). It was noted that even a modest increase in the persistence of STEM students in the first 2 years of their undergraduate education would alleviate much of this shortfall (1). Replacing conventional introductory laboratory courses with discovery-based research courses is a key recommendation that is expected to lead to enhanced retention. Providing authentic research experiences to undergraduate students and directing them toward careers in STEM is a priority of science education in the 21st century (1–4). An abundance of evidence shows that involvement of undergraduate students in authentic research experiences has strong benefits for their engagement and interest in science (5–7) and that this often increases student interest in STEM careers (8). It is common for undergraduate students at research colleges and universities to participate in faculty-led research programs, especially during their last 2 years, with graduate students and postdoctoral researchers participating in their mentorship (9). Research experiences promote college retention (10), but the capacity for high-quality mentored undergraduate research within faculty research programs is limited, and this route alone is unlikely to satisfy the economic demands of the coming decade. There have been many successful efforts to develop classroom undergraduate research experiences (11–14; see also http://www.sciencemag.org/site/special/ibi/ and http://www.curenet.org/), but identifying authentic research experiences that scale to larger numbers of undergraduate students often proves elusive (4).
Bioinformatic approaches engaging substantial numbers of students at diverse institutions have been described (15, 16) and are successful in providing research experiences (14) but do not include a wet-bench laboratory component. Taking advantage of research infrastructures at research-intensive institutions to advance missions in undergraduate education is desirable, and community-oriented approaches have been developed (17, 18), although the potential is largely untapped. Some research projects are likely to be more suitable for undergraduate involvement than others, and identifying those both rich in discovery and accessible to early-career students is challenging (19). The Phage Hunters Integrating Research and Education (PHIRE) program, in which undergraduate and high school students isolate novel bacteriophages, sequence their genomes, annotate them, and analyze them from a comparative genomics perspective, is one response to this challenge (19–21). The approach takes advantage of the large, dynamic, old, and highly genetically diverse nature of the bacteriophage population (22, 23). Moreover, although phages play key roles in bacterial pathogenesis (24) and the global climate and ecology (25), we know remarkably little about them outside a few well-studied prototypes. Phages can be easily isolated from the environment, and their relatively small genomes (40 to 150 kbp) are readily sequenced and annotated (26). Phage isolation requires little prior expert knowledge or technical skill, providing an accessible entry point for students from all backgrounds to engage in inquiry-based science (21). Each isolated phage is new, students can name their own phage, and a sense of ownership in their discovery helps to motivate them to explore the secrets of their phage by isolating genomic DNA, determining its sequence, annotating gene predictions, and comparing the sequence to that of other known viruses (21).
This programmatic transition from a broadly accessible and concrete introduction to sophisticated genomic analysis provides a rich and structured education platform (27), applicable to STEM and non-STEM students, including first-year undergraduates (28–30). To investigate whether the PHIRE approach can be extended to environments beyond the expert phage-focused research laboratory, the Howard Hughes Medical Institute (HHMI), the University of Pittsburgh, and James Madison University developed a framework enabling broad usage at diverse institutions, involving large numbers of undergraduate students and nonexpert instructors, and assessed its impact. The approach proved to be scalable (4,800 students at 73 schools over 5 years) and implementable at research-intensive and research-poor institutions; it generated gains in phage biology research and enhanced student retention, and the student-reported gains were equivalent to those from an intense summer research experience.

RESULTS

The attributes of the PHIRE program at the University of Pittsburgh demonstrate that phage discovery and genomics are a platform that supports engagement of students in authentic research without requiring prior mastery of anything other than very basic concepts and content material (21). We therefore examined whether this could be broadly implemented at institutions with a wide spectrum of missions and demographics, without a requirement for resident expertise in bacteriophage biology. Our core hypothesis was that student participation in this research would generate new insights into phage diversity and evolution while simultaneously elevating student engagement in science, stimulating overall academic performance, and encouraging persistence in STEM fields. Below, we report the structure of the HHMI Science Education Alliance Phage Hunters Advancing Genomics and Evolutionary Science (SEA-PHAGES) course and its impacts on both research advances and student learning.
The SEA-PHAGES course. The SEA-PHAGES course (formerly called the National Genomics Research Initiative) is a yearlong research experience targeted at beginning college students. Classes typically enroll 18 to 24 students and are taught by one or two faculty members together with a student teaching assistant. In the first term, students isolate phages from locally collected soil samples using Mycobacterium smegmatis, a nonpathogenic strain relevant to understanding Mycobacterium tuberculosis, as the primary bacterial host. Students purify and characterize their phages, visualize them with electron microscopy, and extract and purify the DNA. The genome of one phage isolate is sequenced between terms, and in the second term, students annotate the genome using bioinformatics tools to define putative genes, understand genomic arrangements, and predict protein functions. Sequence and annotation quality is expertly reviewed and collated on the PhagesDB database (http://www.phagesdb.org) and submitted to GenBank. The Phamerator program (31) is used to explore genome relationships, and all phage samples are archived for use by the research community. The SEA-PHAGES course curriculum aims to introduce students to research methods and approaches, experimental design, and data interpretation but does not seek to instruct students in content matter outside the immediate biological context. But because students are direct participants in scientific discovery, the goal is to engage and excite them, increase their confidence, and draw them into a cycle of self-motivation. If successful, we predicted that this would translate into enhanced performance in other STEM classes, greater retention within STEM training, and an increase in the numbers of students seeking continued research experiences beyond their freshman year. Program faculty and teaching assistants are trained at two weeklong workshops, one for each term of the course.
Detailed manuals are provided, and community discussions are facilitated by a wiki site. Students and faculty present their findings at an annual SEA-PHAGES Research Symposium, at regional and national meetings, and through peer-reviewed publications. In the 5 years of the program, more than 4,800 students have participated (1,800 in 2012–2013), including STEM majors, non-STEM majors, honors students, and “typical” students. The number of participating schools has grown to more than 70 institutions (see Table S1 in the supplemental material), ranging from community colleges to research universities (Table 1). As can be seen from these program design features, the educational model of the SEA-PHAGES program integrates course-based learning within a framework of scientific activity, including a real-world scientific research agenda, professional networking, and scientific dissemination of results. In this way, the cost-effectiveness of course-based learning is combined with professional science with mutual benefits.

TABLE 1  Diversity of institutions participating in SEA-PHAGES

Carnegie classification(a)                                 No. of schools
Research universities; very high/high research activity   30
Master’s degree-granting colleges and universities        18
Baccalaureate colleges                                    22
Associate’s degree-granting colleges                       3

(a) Schools offering the SEA-PHAGES course are organized according to their classification by the Carnegie Foundation for the Advancement of Teaching (2010).

Gains in understanding viral diversity. The contributions of the SEA-PHAGES students have been essential to our current understanding of the diversity of mycobacteriophages, demonstrating the substantial impact of the distributed approach compared to what would be accomplished by a single laboratory, and have resulted in several publications with student authors (29, 31–39).
Since the start of the program in 2008, SEA-PHAGES students have isolated 3,000 new phages (with global positioning system [GPS] coordinates recorded) and characterized their phages by DNA restriction analysis and electron microscopy. More than 450 mycobacteriophage genomes have been sequenced and annotated, and more than 350 sequences have been deposited in GenBank (Fig. 1). These genomes include many distinctly different types and numerous complex variants (40), and the entire genome collection codes for over 48,000 genes representing 3,780 sequence phamilies (groups of proteins in which each member shares similarity with at least one other above threshold BlastP and Clustal values [31]). Correlations between genome and geography or time of isolation have been explored (35, 41), as well as the evolutionary mechanisms contributing to the pervasive genome mosaicism (33). The genomes contain numerous examples of biological intrigue, including novel inteins, introns, mobile elements, immunity systems, and regulatory schemes (33–35, 42–45), as well as potential for developing new tools for understanding tuberculosis (46–49).

FIG 1  SEA-PHAGES students contribute to scientific knowledge. Results are from the first 5 years of the SEA-PHAGES program isolating new phages, showing the cumulative numbers of phages isolated (blue), cumulative numbers of genomes sequenced (orange), cumulative numbers of gene phamilies (purple), and total numbers of mycobacteriophages in GenBank (green). Not all genomes sequenced and annotated in year 5 are yet available in GenBank.

The diversity of phages known to infect a single common host is remarkable; there are many thousands of potential bacterial hosts for phage isolation, and host range studies suggest that simply using a different strain of the same bacterial species will result in distinct profiles of diversity (38).
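The phamily grouping described above (proteins clustered so that each member shares above-threshold similarity with at least one other) is in effect single-linkage clustering, which can be sketched as a union-find pass over pairwise scores. The protein names, scores, and threshold below are invented for illustration and stand in for the BlastP/Clustal comparisons that Phamerator actually uses.

```python
# Sketch of phamily assembly as single-linkage clustering: any protein whose
# pairwise similarity to at least one member of a group exceeds the threshold
# joins that group. All names and scores are hypothetical.

def phamilies(proteins, similarity, threshold):
    """Union-find single-linkage clustering over above-threshold pairs."""
    parent = {p: p for p in proteins}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path compression
            p = parent[p]
        return p

    for i, a in enumerate(proteins):
        for b in proteins[i + 1:]:
            if similarity(a, b) >= threshold:
                parent[find(a)] = find(b)  # merge the two groups

    groups = {}
    for p in proteins:
        groups.setdefault(find(p), []).append(p)
    return list(groups.values())

# Invented pairwise scores; a real pipeline would derive these from BlastP
# and Clustal alignments.
SCORES = {("gp1", "gp2"): 0.92, ("gp2", "gp3"): 0.81, ("gp4", "gp5"): 0.88}

def score(a, b):
    return SCORES.get((a, b), SCORES.get((b, a), 0.0))

fams = phamilies(["gp1", "gp2", "gp3", "gp4", "gp5", "gp6"], score, 0.8)
# gp1-gp2-gp3 form one phamily, gp4-gp5 another, and gp6 is an orphan.
```

Note that gp1 and gp3 end up in the same phamily without being directly similar, which is exactly the "similar to at least one other member" criterion in the definition above.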
With an estimated 1031 phage particles in the biosphere and a population that turns over every few days (23), there is an inexhaustible reservoir for discovery. Impacts on student education and retention. The Survey of Undergraduate Research Experience (SURE) and the Classroom Undergraduate Research Experience (CURE) measure the students’ assessment of their understanding of science and scientists, confidence in their ability to perform research, and their perceived gains in skills (50). The self-perceptions of learning gains, motivation and attitude, and career aspirations of the SEA-PHAGES course participants were assessed with pre- and postcourse SURE-like surveys (see Fig. S1 in the supplemental material). Twenty of the SEA-PHAGES survey items are shared with the regular SURE and CURE surveys, allowing the comparison of the SEA-PHAGES students’ learning gains with those of students who engaged in a dedicated summer research experience (SURE) and students who completed traditional science courses with no research element (CURE) (Fig. 2). The SEA-PHAGES students scored as well as or better on all 20 learning gains compared to the SURE students, reflecting benefits at least equivalent to those accrued through a summer-long apprentice-based undergraduate research experience. The increase in scientific self-efficacy reported by the SEA-PHAGES students is likely to be directly related to their retention in science (51). FIG 2  Student evaluation of learning gains. Mean learning gains for common survey items on the SURE (green diamonds), CURE (blue squares), and the SEA-PHAGES (red triangles) assessment instruments are shown. 
The SURE survey data represent 2,358 students who completed summer research in 2009; the CURE survey data represent 476 students who were enrolled in science courses that were described by their instructors as without a research element (data collected for fall 2007 through spring 2009); the SEA-PHAGES data represent 121 students who evaluated their course following the academic year 2008–2009. Error bars represent 2 standard errors around the mean. To analyze the effect of the SEA-PHAGES course on student persistence, we compared retention of students enrolled in the SEA-PHAGES course (77% first-year students and 95% STEM majors) with two benchmark statistics: the retention of all students and the retention of STEM majors with the same number of years of college experience and enrolled at the same school (Fig. 3A), important parameters given the typical rates for student attrition between first- and second-year STEM undergraduates (52). Data were from 27 comparisons from 20 institutions and show clearly that SEA-PHAGES students matriculated into the second year at significantly higher rates than did either benchmark group. Thus, early engagement in a research experience improves student retention into the second year. The positive impacts of this course-based research experience are similar to what has been reported for apprentice-based research experiences (5, 53), represent an effective response to the call to action in the National Science Foundation (NSF) Vision and Change and PCAST reports (1, 4), and provide validation for this educational model on a larger scale. FIG 3  (A) Retention of SEA-PHAGES participants (red) compared to other students at the same institution (blue), year 1 to year 2 of their college experience. Retention data were gathered from 20 institutions, with some institutions contributing data from multiple years, resulting in 27 sets of comparison data. 
Retention data were analyzed with a between-group analysis of variance with 3 levels of the independent variable (all majors, STEM majors, and SEA-PHAGES students) for 171 reports. The result was interpreted as significant at the 0.05 level. (B) SEA-PHAGES students (red) perform better than peers (blue) in traditional laboratory sections in the introductory lecture course. Results are for 127 SEA-PHAGES students and 1,120 students in the traditional laboratory course from six institutions. In the lecture course, SEA-PHAGES students averaged 2.95 on a 4.0 scale, compared to the 2.58 average of students in traditional lab sections. This difference was significant (t = 2.64; P < 0.05). Anticipating that research-stimulated motivation will influence student performance in other courses, we selected six schools that substituted the SEA-PHAGES course for a regular biology laboratory and compared the grades of participating students in the accompanying biology lecture course (Fig. 3B). We limited this analysis to schools that enrolled “typical” students into the PHAGES lab sections rather than those aimed at honors students or students at academic risk. The biology lecture course grades of SEA-PHAGES students were compared directly to those of peers enrolled in the same lecture course but in the regular biology laboratory. As is the case with most applied research, students were not randomly assigned to conditions, and even among these “typical” students, there may have been some self-selection for registration in the SEA-PHAGES course. We observed substantial differences in both the average grades and the grade distribution of SEA-PHAGES students relative to those of students in traditional lab sections (Fig. 3B), and although these data are preliminary and warrant further study, they suggest that there could be broad educational benefits to the SEA-PHAGES experience. 
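The two comparisons reported for Fig. 3 (a between-group ANOVA on retention rates, and a t test on lecture grades mapped to a 4.0 scale) can be sketched with standard-library code. All numbers below are invented illustrations, not the study's data, and the study's own analysis may differ in detail (for example, a pooled-variance rather than a Welch t test).

```python
# Minimal sketch of the Fig. 3 statistics, using only the standard library.
# Panel A: one-way (between-group) ANOVA over three groups of institution-
# level retention reports. Panel B: two-sample t test on letter grades
# converted to grade points. All data values are hypothetical.
from statistics import mean, variance

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def one_way_anova_f(groups):
    """F statistic for a one-way between-group ANOVA."""
    values = [x for g in groups for x in g]
    grand = mean(values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

# Panel A: invented year-1 to year-2 retention rates (%) by institution.
retention = {
    "all majors": [78, 81, 75, 80],
    "STEM majors": [74, 77, 72, 76],
    "SEA-PHAGES": [88, 91, 86, 90],
}
f_stat = one_way_anova_f(list(retention.values()))

# Panel B: invented lecture-course letter grades converted to grade points.
sea_phages = [GRADE_POINTS[g] for g in "AABBBACB"]
traditional = [GRADE_POINTS[g] for g in "BCCBDCBC"]
t_stat = welch_t(sea_phages, traditional)
```

The F statistic would then be compared against an F distribution with (2, 9) degrees of freedom for these illustrative group sizes, mirroring the 0.05 significance criterion used in the paper.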
Because of the concern that SEA-PHAGES students might suffer from lack of exposure to the broader coverage of subject matter in the regular laboratory course, we developed a 25-item pre- and postcourse survey of biological concepts (see Fig. S2 in the supplemental material), which was administered to students before and after the laboratory courses. There was no significant difference in performance on the test between SEA-PHAGES students and the comparison group of students (see Fig. S3). Both groups improved from pretest to posttest, and there was no significant difference between the groups in the extent of their improvement. The lack of exposure to additional topics in the SEA-PHAGES course thus had no obvious detrimental effect.

DISCUSSION

The HHMI SEA-PHAGES program provides a general model for improving the persistence of students in science by transforming a small-scale scientific inquiry into a cross-institution education platform that engages first-year students. The outcomes are consistent and robust, benefiting diverse groups of students across a variety of institutions. The materials costs are similar to those of other inquiry-based courses, and many institutions have implemented the course without external support, other than assistance with sequencing costs and programmatic and scientific support from HHMI and the University of Pittsburgh (some schools received direct external support for materials during their first 3 years in the program). The size and diversity of the phage population provide an inexhaustible wealth of biological novelty that imposes no obvious limit on the number of students who can participate. Future opportunities include further broadening the implementation of the SEA-PHAGES course as well as extending the model to the development of similar projects in which scientific discovery, project ownership, and simple entry points can be implemented at the first-year college level.
Meeting these opportunities will lead to a broad and sustainable enhancement of undergraduate science education, an advancement of scientific knowledge, and an increase in student persistence in science.

MATERIALS AND METHODS

Participants. The study was conducted with SEA-PHAGES faculty and students in the United States and the Commonwealth of Puerto Rico. David Lopatto and participant institutions obtained appropriate institutional review board (IRB) approval. SEA-PHAGES faculty are trained in a weeklong workshop focusing on in situ procedures and pedagogy in preparation for the fall semester and a weeklong workshop focusing on in silico bioinformatics tools in preparation for the spring semester. Faculty and students are invited to a SEA-PHAGES National Symposium to present their scientific findings. The SEA office conducts annual site visits and provides continuous technical support for institutions year-round. The SEA Wiki maintains an up-to-date depository for announcements, communication forums for faculty and students, curriculum resources, instructional materials, and research archives. SEA-PHAGES faculty members recruited comparison group students on a volunteer basis to enhance the validity of statistical analysis. The comparison group students were recruited among students taking introductory laboratory courses. Except for the student grade analysis, comparison group students could not be matched to SEA-PHAGES students on each campus, so statistical analysis was limited to quasiexperimental analysis based on a nonequivalent comparison group. Systemic Research sent out invitations to all consenting students’ e-mail addresses individually.

Analysis. During academic year 2009–2010, different aspects of the SEA-PHAGES and comparison groups were measured. White/Caucasian students made up the majority of each group, 66% of SEA-PHAGES students and 76% of comparison group students.
The majority of both groups lived in suburban communities (66% SEA-PHAGES and 64% comparison group students), attended public high schools (83% of each group), and were in their first year in college (SEA-PHAGES, 77% first-year students, 18% sophomores; comparison group, 70% first-year students, 20% sophomores). There was a higher percentage of male students in the SEA-PHAGES course (38%) than in the comparison group (29%), but in both groups, female students were the clear majority.

Retention rates. The Institutional Annual Survey measures student retention rates by tracking full-time, first-time entering students who are seeking bachelor’s degrees. The Institutional Annual Survey was conducted among institutions during November to December. Retention rates were calculated for students returning in fall 2008 and fall 2009. An analysis of variance was performed over 3 groups (all majors, STEM majors, and SEA-PHAGES students). The data were reported by institution and category, including 63 reports for all majors, 43 reports for STEM majors, and 65 reports for SEA-PHAGES students.

The SEA CURE survey. The Classroom Undergraduate Research Experience (CURE) survey was specially adapted to the SEA-PHAGES program by David Lopatto (Grinnell College, Grinnell, IA). The CURE survey consists of multiple sections, including institution, class, demographics, science-related activities, major and minor concentration, postgraduate academic goals, experiences in laboratory course elements, experience in research, engagement in activities or endeavors, course benefit, learning experience in laboratory experiments and tools, overall course evaluation, and opinions about science. Systemic Research added a few questions to the postcourse CURE survey to collect data regarding students’ SEA-PHAGES course satisfaction, SEA Wiki access and utilization, SEA-PHAGES research paper and presentation experience, and general comments.
The survey was administered twice a year: the presurvey at the beginning of the fall semester and the postsurvey at the end of the spring semester. As with the Biological Concepts Survey (BCS), Systemic Research developed the online survey forms using the Vovici EFM Community Professional website. The pre- and postcourse survey invitations were e-mailed to individual students according to their academic calendars. Using Vovici’s survey follow-up feature, three reminder e-mails were sent after the initial invitations. The collected survey responses were securely saved in a dedicated Vovici HHMI website and Systemic Research’s NGRI student database. The SURE survey data represent 2,358 students who completed summer research in 2009; the CURE survey data represent 476 students who evaluated science courses that were described by their instructors as without a research element (data collected fall 2007 through spring 2009); the SEA-PHAGES data represent 121 students who evaluated their course following the academic year 2008–2009. Mean learning gains were calculated for each category of the 20 items common to both the CURE and SURE surveys.

Grades. Eleven institutions submitted their SEA-PHAGES students’ laboratory and introductory biology course performance data for fall 2008 and spring 2009 in the academic year 2008–2009 and fall 2009 and spring 2010 in the academic year 2009–2010. Letter grade distributions for both SEA-PHAGES and comparison students were collected. Six institutions had matched data that were utilized in the analysis, with 127 SEA-PHAGES and 1,120 comparison student grades. For statistical analysis, the letter grades were assigned numerical values from 4 (grade A) to 0 (grade F). t tests were performed comparing the mean grades received by SEA-PHAGES students and comparison group students in the biology lecture course.

Biological methods.
Mycobacteriophage isolation was performed using Mycobacterium smegmatis mc²155 as a host, and phages were identified as PFU either by direct plating on bacterial lawns or after enrichment in the presence of M. smegmatis. Following purification and amplification, DNA was isolated and sequenced using Sanger, 454, or Ion Torrent technologies, using a shotgun approach followed by targeted sequencing to resolve ambiguities and determine genome ends. Genome annotations were performed using various software platforms, including GBrowse (54), Apollo (55), DNAMaster (http://cobamide2.bio.pitt.edu/), Glimmer (56), GeneMark (57), and analysis programs available at the National Center for Biotechnology Information (NCBI). Comparative genomics used Phamerator (31) and Gepard (58). Assembled genome sequences and genome annotations were subjected to expert review prior to submission to GenBank. Detailed methods for phage isolation, sequencing, and analysis are available on PhagesDB (http://phagesdb.org).

SUPPLEMENTAL MATERIAL

Figure S1  SEA-CURE postsurvey (PDF, 0.2 MB)
Figure S2  Biological Concepts Survey (BCS) questions (PDF, 0.3 MB)
Figure S3  Biological Concepts Survey results (PDF, 0.1 MB)
Table S1  Institutions offering the SEA-PHAGES course from 2008 to 2013 (PDF, 0.1 MB)

            Assessment of Course-Based Undergraduate Research Experiences: A Meeting Report

“Students can work with the same data at the same time and with the same tools as research scientists.” (iPlant Education, Outreach & Training Group, 2008, personal communication)

INTRODUCTION

Numerous calls for reform in undergraduate biology education have emphasized the value of undergraduate research (e.g., American Association for the Advancement of Science [AAAS], 2011). These calls are based on a growing body of research that documents how students benefit from research experiences (Kremer and Bringle, 1990; Kardash, 2000; Rauckhorst et al., 2001; Hathaway et al., 2002; Bauer and Bennett, 2003; Lopatto, 2004, 2007; Lopatto and Tobias, 2010; Seymour et al., 2004; Hunter et al., 2007; Russell et al., 2007; Laursen et al., 2010; Thiry and Laursen, 2011). Undergraduates who participate in research internships (also called research apprenticeships, undergraduate research experiences, or research experiences for undergraduates [REUs]) report positive outcomes, such as learning to think like a scientist, finding research exciting, and intending to pursue graduate education or careers in science (Kardash, 2000; Laursen et al., 2010; Lopatto and Tobias, 2010). Research experiences are thought to be especially beneficial for women and underrepresented minority students, presumably because they support the development of relationships with more senior scientists and with peers who can offer critical support to students who might otherwise leave the sciences (Gregerman et al., 1998; Barlow and Villarejo, 2004; Eagan et al., 2011). Yet most institutions lack the resources to involve all or even most undergraduates in a research internship (Wood, 2003; Desai et al., 2008; Harrison et al., 2011). Faculty members have developed alternative approaches to engage students in research with the aim of offering these educational benefits to many more students (Wei and Woodin, 2011).
One approach that is garnering increased attention is what we call a course-based undergraduate research experience, or CURE. CUREs involve whole classes of students in addressing a research question or problem that is of interest to the scientific community. As such, CUREs have the potential to expand undergraduates’ access to and involvement in research. We illustrate this in Table 1 by comparing CUREs with research internships, in which undergraduates work one-on-one with a mentor, either a graduate student, technician, postdoctoral researcher, or faculty member.

Table 1. Features of CUREs compared with research internships

                      CUREs                                    Research internships
Scale                 Many students                            Few students
Mentorship structure  One instructor to many students          One instructor to one student
Enrollment            Open to all students in a course         Open to a selected or self-selecting few
Time commitment       Students invest time primarily in class  Students invest time primarily outside class
Setting               Teaching lab                             Faculty research lab

CUREs offer the capacity to involve many students in research (e.g., Rowland et al., 2012) and can serve all students who enroll in a course, not only self-selecting students who seek out research internships or who participate in specialized programs, such as honors programs or programs that support research participation by disadvantaged students. Moreover, CUREs can be integrated into introductory-level courses (Dabney-Smith, 2009; Harrison et al., 2011) and thus have the potential to exert a greater influence on students’ academic and career paths than research internships that occur late in an undergraduate's academic program and thus serve primarily to confirm prior academic or career choices (Hunter et al., 2007). Entry into CUREs is logistically straightforward; students simply enroll in the course.
Research internships often require an application (e.g., to REU sites funded by the National Science Foundation [NSF]) or searching and networking to find faculty interested in involving undergraduates in research. For students, CUREs may reduce the stress associated with balancing a research internship with course work during a regular academic term (Rowland et al., 2012). CUREs may also offer different types of opportunities for students to develop ownership of projects, as they ask their own questions or analyze their own samples. Although this can be the case for research internships, it may be less common, given the pressure on research groups to complete and publish the work outlined in grant proposals. In both environments, beginning undergraduate researchers more often contribute to ongoing projects than develop their own independent projects. Opportunities for the latter are important, as work from Hanauer and colleagues (2012) suggests that students’ development of a sense of ownership can contribute to their persistence in science. The Course-Based Undergraduate Research Experiences Network (CUREnet; http://curenet.franklin.uga.edu) was initiated in 2012 with funding from NSF to support CURE instruction by addressing topics, problems, and opportunities inherent to integrating research experiences into undergraduate courses. During early discussions, the CUREnet community identified a need for a clearer definition of what constitutes a CURE and a need for systematic exploration of how students are affected by participating in CUREs.
Thus, a small working group with expertise in CURE design and assessment was assembled in September 2013 to:

1. Draft an operational definition of a CURE;
2. Summarize research on CUREs, as well as findings from studies of undergraduate research internships that would be useful for thinking about how students are influenced by participating in CUREs; and
3. Identify areas of greatest need with respect to evaluation of CUREs and assessment of CURE outcomes.

In this paper, we summarize the meeting discussion and offer recommendations for next steps in the assessment of CUREs.

CUREs DEFINED

The first aim of the meeting was to define a CURE. We sought to answer the question: How can a CURE be distinguished from other laboratory learning experiences? This allows us to make explicit to students how a CURE may differ from their other science course work and to distinguish a CURE from other types of learning experiences for the purposes of education research and evaluation. We began by discussing what we mean by “research.” We propose that CUREs involve students in the following:

Use of scientific practices. Numerous policy documents, as well as an abundance of research on the nature and practice of science, indicate that science research involves the following activities: asking questions, building and evaluating models, proposing hypotheses, designing studies, selecting methods, using the tools of science, gathering and analyzing data, identifying meaningful variation, navigating the messiness of real-world data, developing and critiquing interpretations and arguments, and communicating findings (National Research Council [NRC], 1996; Singer et al., 2006; Duschl et al., 2007; Bruck et al., 2008; AAAS, 2011; Quinn et al., 2011). Individuals engaged in science make use of a variety of techniques, such as visualization, computation, modeling, and statistical analysis, with the aim of generating new scientific knowledge and understanding (Duschl et al., 2007; AAAS, 2011).
Although it is unrealistic to expect students to meaningfully participate in all of these practices during a single CURE, we propose that the opportunity to engage in multiple scientific practices (e.g., not only data collection) is a CURE hallmark.

Discovery. Discovery is the process by which new knowledge or insights are obtained. Science research aims to generate new understanding of the natural world. As such, discovery in the context of a CURE implies that the outcome of an investigation is unknown to both the students and the instructor. When the outcomes of their work are not predetermined, students must make decisions such as how to interpret their data, when to track down an anomaly and when to ignore it as “noise,” or when results are sufficiently convincing to draw conclusions (Duschl et al., 2007; Quinn et al., 2011). Discovery carries with it the risk of unanticipated outcomes and ambiguous results because the work has not been done before. Discovery also necessitates exploration and evidence-based reasoning. Students and instructors must have some familiarity with the current body of knowledge in order to contribute to it and must determine whether the new evidence gathered is sufficient to support the assertion that new knowledge has been generated (Quinn et al., 2011). We propose that discovery in the context of a CURE means that students are addressing novel scientific questions aimed at generating and testing new hypotheses. In addition, when their work is considered collectively, students’ findings offer some new insight into how the natural world works.

Broadly relevant or important work. Because CUREs provide opportunities for students to build on and contribute to current science knowledge, they also present opportunities for impact and action beyond the classroom. In some CUREs, this may manifest as authorship or acknowledgment in a science research publication (e.g., Leung et al., 2010; Pope et al., 2011).
In other CUREs, students may develop reports of interest to the local community, such as a report on local water quality or evidence-based recommendations for community action (e.g., Savan and Sider, 2003). We propose that CUREs involve students in work that fits into a broader scientific endeavor that has meaning beyond the particular course context. (We choose the language of “broader relevance or importance” rather than the term “authenticity” because views on the authenticity of a learning experience may shift over time [Rahm et al., 2003] and may differ among students, instructors, and the broader scientific community.)

Collaboration. Science research increasingly involves teams of scientists who contribute diverse skills to tackling large and complex problems (Quinn et al., 2011). We propose that group work is not only a common practical necessity but also an important pedagogical element of CUREs because it exposes students to the benefits of bringing together many minds and hands to tackle a problem (Singer et al., 2006). Through collaboration, students can improve their work in response to peer feedback. Collaboration also develops important intellectual and communication skills as students verbalize their thinking and practice communicating biological ideas and interpretations either to fellow students in the same discipline or to students in other disciplines. This may also encourage students’ metacognition—solidifying their thinking and helping them to recognize shortcomings in their knowledge and reasoning (Chi et al., 1994; Lyman, 1996; Smith et al., 2009; Tanner, 2009).

Iteration. Science research is inherently iterative because new knowledge builds on existing knowledge. Hypotheses are tested and theories are developed through the accumulation of evidence over time by repeating studies and by addressing research questions using multiple approaches with diverse methods.
CUREs generally involve students in iterative work, which can occur at multiple levels. Students may design, conduct, and interpret an investigation and, based on their results, repeat or revise aspects of their work to address problems or inconsistencies, rule out alternative explanations, or gather additional data to support assertions (NRC, 1996; Quinn et al., 2011). Students may also build on and revise aspects of other students’ investigations, whether within a single course to accumulate a sufficiently large data set for analysis or across successive offerings of the course to measure and manage variation, further test preliminary hypotheses, or increase confidence in previous findings. Students learn by trying, failing, and trying again, and by critiquing one another's work, especially the extent to which claims can be supported by evidence (NRC, 1996; Duschl et al., 2007; Quinn et al., 2011).

These activities, when considered in isolation, are not unique to CUREs. Rather, we propose that it is the integration of all five dimensions that makes a learning experience a CURE. Of course, CUREs will vary in the frequency and intensity of each type of activity. We present the dimensions in Table 2 and delineate how they are useful for distinguishing among the following four laboratory learning environments:

1. A traditional laboratory course, in which the topic and methods are instructor defined; there are clear “cookbook” directions and a predetermined outcome that is known to students and to the instructor (Domin, 1999; Weaver et al., 2008);
2. An inquiry laboratory course, in which students participate in many of the cognitive and behavioral practices that are commonly performed by scientists; typically, the outcome is unknown to students, and they may be challenged to generate their own methods. The motivation for the inquiry is to challenge the students rather than to contribute to a larger body of knowledge (Domin, 1999; Olson and Loucks-Horsley, 2000; Weaver et al., 2008);
3. A CURE, in which students address a research question or problem that is of interest to the broader community, with an outcome that is unknown both to the students and to the instructor (Domin, 1999; Bruck et al., 2008; Weaver et al., 2008); and
4. A research internship, in which a student is apprenticed to a senior researcher (faculty member, postdoctoral researcher, graduate student, etc.) to help advance a science research project (Seymour et al., 2004).

Table 2. Dimensions of different laboratory learning contexts

Dimension                                            | Traditional                       | Inquiry                       | CURE                          | Internship
Use of science practices
  Students engage in …                               | Few scientific practices          | Multiple scientific practices | Multiple scientific practices | Multiple scientific practices
  Study design and methods are …                     | Instructor driven                 | Student driven                | Student or instructor driven  | Student or instructor driven
Discovery
  Purpose of the investigation is …                  | Instructor defined                | Student defined               | Student or instructor defined | Student or instructor defined
  Outcome is …                                       | Known to students and instructors | Varied                        | Unknown                       | Unknown
  Findings are …                                     | Previously established            | May be novel                  | Novel                         | Novel
Broader relevance or importance
  Relevance of students’ work …                      | Is limited to the course          | Is limited to the course      | Extends beyond the course     | Extends beyond the course
  Students’ work presents opportunities for action … | Rarely                            | Rarely                        | Often                         | Often
Collaboration
  Collaboration occurs …                             | Among students in a course        | Among students in a course    | Among students, teaching assistants, and instructor in a course | Between student and mentor in a research group
  Instructor’s role is …                             | Instruction                       | Facilitation                  | Guidance and mentorship       | Guidance and mentorship
Iteration
  Risk of generating “messy” data is …               | Minimized                         | Significant                   | Inherent                      | Inherent
  Iteration is built into the process …              | Not typically                     | Occasionally                  | Often                         | Often

The five dimensions comprise a framework that can be tested empirically by characterizing how a particular dimension is manifested in a program, developing scales to measure the degree or intensity of each dimension, and determining whether the dimensions in part or as a whole are useful for distinguishing CUREs from other laboratory learning experiences. Once tested, we believe that this framework will be useful to instructors, institutional stakeholders, education researchers, and evaluators. Instructors may use the framework to delineate their instructional approach, clarify what students will be expected to do, and articulate their learning objectives. For example, in traditional laboratory instruction, students may collect and analyze data but generally do not build or evaluate models or communicate their findings to anyone except the instructor. During inquiry laboratory instruction, students may be able to complete a full inquiry cycle and thus engage at some level in the full range of scientific practices. Students in CUREs and research internships may engage in some scientific practices in depth, but neglect others, depending on the particular demands of the research and the structure of the project.
As instructors define how their course activities connect to desired student outcomes, they can also identify directions for formative and summative assessment. Education researchers and evaluators may use the framework to characterize particular instructional interventions with the aim of determining which dimensions, to what degree and intensity, correlate with desired student outcomes. For instance, students who engage in the full range of scientific practices could reasonably be expected to improve their skills across the range of practices, while students who participate in only a subset of practices can only be expected to improve in those specific practices. Similarly, the extent to which students have control over the methods they employ may influence their sense of ownership over the investigation, thus increasing their motivation and perhaps contributing to their self-identification as scientists. Using this framework to identify critical elements of CUREs and how they relate (or not) to important student outcomes can inform both the design of CUREs and their placement in a curriculum.

CURRENT KNOWLEDGE FROM ASSESSMENT OF CUREs

With this definition in mind, the meeting then turned to summarizing what is known from the study of CUREs, primarily in biology and chemistry. Assessment and evaluation of CUREs have been limited to a handful of multisite programs (e.g., Goodner et al., 2003; Hatfull et al., 2006; Lopatto et al., 2008; Caruso et al., 2009; Shaffer et al., 2010; Harrison et al., 2011) and projects led by individual instructors (e.g., Drew and Triplett, 2008; Siritunga et al., 2011). For the most part, these studies have emphasized student perceptions of the outcomes they realize from participating in course-based research, such as the gains they have made in research skills or clarification of their intentions to pursue further education or careers in science.
To date, very few studies of student learning during CUREs have been framed according to learning theories. With a few exceptions, studies of CUREs have not described pathways that students take to arrive at specific outcomes—in other words, what aspects of the CURE are important for students to achieve both short- and long-term gains. Some studies have compared CURE instruction with research internships and have found, in general, that students report many of the same gains (e.g., Shaffer et al., 2010). A handful of studies have compared student outcomes from CUREs with those from other laboratory learning experiences. For example, Russell and Weaver (2011) compared students’ views of the nature of science after completing a traditional laboratory, an inquiry laboratory, or a CURE. The researchers used an established approach developed by Lederman and colleagues (2002) to assess students’ views of the nature of science, but it is not clear whether students in this study chose to enroll in a traditional or CURE course or whether the groups differed in other ways that might influence the extent to which their views changed following their lab experiences. Students in all three environments—traditional, inquiry, and CURE—made gains in their views of the nature of scientific knowledge as experimental and theory based, but only students in the CURE showed progress in their views of science as creative and process based. When students who participated in a CURE or a traditional lab were queried 2 or 3 yr afterward, they continued to differ in their perceptions of the gains they made in understanding how to do research and in their confidence in doing research (Szteinberg and Weaver, 2013). In another study, Rowland and colleagues (2012) compared student reports of outcomes from what they called an active-learning laboratory undergraduate research experience (ALLURE, which is similar to a CURE) with those from a traditional lab course. 
Students could choose the ALLURE or traditional instruction, which may have resulted in a self-selection bias. Students in both environments reported increased confidence in their lab skills, including technical skills (e.g., pipetting) and analytical skills (e.g., deciding whether one experimental approach is better than another). Generally, students reported similar skill gains in both environments, indicating that students can develop confidence in their lab skills during both traditional and CURE/ALLURE experiences. Most studies reporting assessment of CUREs in the life sciences have made use of the Classroom Undergraduate Research Experiences (CURE) Survey (Lopatto and Tobias, 2010). The CURE Survey comprises three elements: 1) instructor report of the extent to which the learning experience resembles the practice of science research (e.g., the outcomes of the research are unknown, students have some input into the focus or design of the research); 2) student report of learning gains; and 3) student report of attitudes toward science. A series of Likert-type items probe students’ attitudes toward science and their educational and career interests, as well as students’ perceptions of the learning experience, the nature of science, their own learning styles, and the science-related skills they developed from participating in a CURE. Use of the CURE Survey has been an important first step in assessing student outcomes of these kinds of experiences. Yet this instrument is limited as a measure of the nature and outcomes of CUREs because some important information is missing about its overall validity. No information is available about its dimensionality—that is, do student responses to survey items meant to represent similar underlying concepts correlate with each other, while correlating less with items meant to represent dissimilar concepts?
For example, do responses to items about career interests correlate highly with one another, but correlate less with items focused on attitudes toward science, a dissimilar concept? Other validity questions are also not addressed. For instance, does the survey measure all important aspects of CUREs and CURE outcomes, or are important variables missing? Is the survey useful for measuring a variety of CUREs in different settings, such as CUREs for majors or nonmajors, or CUREs at introductory or advanced levels? Finally, is the survey a reliable measure—does the survey measure outcomes consistently over time and across different individuals and settings? To be consistent with the definition of CUREs given above, an assessment instrument must both touch on all five dimensions and elicit responses that capture other important aspects of CURE instruction that may be missing from this description. This will help ensure that the instrument has “content validity” (Trochim, 2006), meaning that the instrument can be used to measure all of the features important in a CURE learning experience. The CURE Survey relies on student perceptions of their own knowledge and skill gains, and like other such instruments, it is subject to concerns about the validity of self-report of learning gains. There is a very broad range of correlations between self-report measures of learning and measurements such as tests or expert judgments. Depending on which measures are compared, there may be a strong correlation, or almost no correlation, between self-reported data and relevant criteria (Falchikov and Boud, 1989). Validity problems with self-assessment can result from poor survey design, with survey items interpreted differently by different students, or from items designed in such a way that students are unable to recall key information or experiences (Bowman, 2011; Porter et al., 2011).
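The dimensionality question raised above, namely whether items intended to measure the same underlying concept correlate more strongly with one another than with items from a different scale, can be illustrated with a toy computation. This is a minimal sketch: the scale groupings, item names, and Likert-style response data below are entirely hypothetical, not taken from the CURE Survey.

```python
from itertools import combinations, product

def pearson(x, y):
    """Pearson correlation between two equal-length vectors of item responses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical responses (6 respondents per item), grouped by intended concept.
career_interest = [[1, 2, 3, 4, 5, 6],   # item C1
                   [2, 2, 3, 4, 5, 7]]   # item C2
science_attitude = [[5, 1, 4, 2, 6, 3],  # item A1
                    [6, 2, 4, 1, 5, 3]]  # item A2

# Convergent evidence: items within the same scale should correlate highly.
within = [pearson(x, y)
          for scale in (career_interest, science_attitude)
          for x, y in combinations(scale, 2)]
# Discriminant evidence: cross-scale correlations should be much lower.
across = [pearson(x, y) for x, y in product(career_interest, science_attitude)]

mean_within = sum(within) / len(within)
mean_across = sum(across) / len(across)
print(f"mean within-scale r = {mean_within:.2f}, mean cross-scale r = {mean_across:.2f}")
```

For this fabricated data set the within-scale correlations are high while the cross-scale correlations hover near zero, the pattern one would want an instrument with well-defined dimensions to show; real validation would use factor analysis on actual response data rather than pairwise correlations on toy numbers.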
The tendency of respondents to give socially desirable answers is a familiar problem with self-reporting. Bowman and Hill (2011) found that student self-reporting of educational outcomes is subject to social bias; students respond more positively because they are either implicitly or explicitly aware of the desired response. A guarantee of anonymity mitigates this validity threat (Albanese et al., 2006). Respondents also give more valid responses when they have a clear idea of what they are assessing and have received frequent and clear feedback about their progress and abilities from others, and when respondents can remember what they did during the assessment period (Kuh, 2001). For example, in her study of the outcomes of undergraduate science research internships, Kardash (2000) compared perceptions of both student interns and faculty mentors of the gains interns made from participating in research. She found good agreement between interns and mentors on some skills, such as understanding concepts in the field and collecting data, but statistically significant differences between mentor and intern ratings of other skills, with interns rating themselves more positively on their understanding of the importance of controls in research, their abilities to interpret results in light of original hypotheses, and their abilities to relate results to the “bigger picture.” More research is needed to understand the extent to which different students (majors, nonmajors, introductory, advanced, etc.) are able to accurately self-assess the diverse knowledge and skills they may develop from participating in CUREs. A few studies have focused on the psychosocial outcomes of participating in CUREs. One such study, conducted by Hanauer and colleagues (2012), documented the extent to which students developed a sense of ownership of the science projects they completed in a traditional laboratory course, a CURE involving fieldwork, or a research internship.
Using linguistic analysis, the authors found that students in the CURE reported a stronger sense of ownership of their research projects compared with students who participated in traditional lab courses and research internships (Hanauer et al., 2012; Hanauer and Dolan, in press, 2014); these students also reported higher levels of persistence in science or medicine (Hanauer et al., 2012). Although the inferred relationship needs to be explored with a larger group of students and a more diverse set of CUREs, these results suggest that it is important to consider ownership and other psychosocial outcomes in future research and evaluation of CUREs. Few studies have explored whether and how different students experience CUREs differently and, in turn, realize different outcomes from CUREs. This is an especially noteworthy gap in the knowledge base, given both the calls to engage all students in research experiences and research suggesting that different students may realize different outcomes from participating in research (e.g., AAAS, 2011; Thiry et al., 2012). In one such study, Alkaher and Dolan (in press, 2014) interviewed students enrolled in a CURE, the Partnership for Research and Education in Plants for Undergraduates, at three different types of institutions (i.e., community college, liberal arts college, research university) in order to examine whether and how their sense of scientific self-authorship shifted during the CURE. Baxter-Magolda (1992) defined self-authorship as the “internal capacity to define one's beliefs, relations, and social identity” or, in this context, how one sees oneself with respect to science knowledge—as a consumer, user, or producer.
Developing a sense of scientific self-authorship may be an important predictor of persistence in science, as students move from simply consuming science knowledge as it is presented, to becoming critical users of science, to seeing themselves as capable of contributing to the scientific body of knowledge. Alkaher and Dolan (in press, 2014) found that some CURE students made progress in their self-authorship because they perceived the CURE goals as important to the scientific community while the tasks remained within their capacity, allowing them to make a meaningful contribution. In contrast, other students struggled with the discovery nature of the CURE in comparison with their prior traditional lab learning experiences. They perceived their inability to find the “right answer” as reflecting their inability to do science. More research is needed to determine whether and how students’ backgrounds, motives, and interests influence how they experience CUREs, and whether they realize different outcomes as a result.
NEXT STEPS FOR CURE ASSESSMENT

Our discussion and collective knowledge of research on CUREs and undergraduate research internships revealed several gaps in our understanding of CUREs, which can be addressed by:

1. Defining frameworks and learning theories that may help explain how students are influenced by participating in CUREs, and utilizing these frameworks or theories to design and study CUREs;
2. Identifying and measuring the full range of important outcomes likely to occur in CURE contexts;
3. Using valid and reliable measures, some of which have been used to study research internships or other undergraduate learning experiences and could be adapted for CURE use, as well as developing and testing new tools to assess CUREs specifically (see Weiss and Sosulski [2003] or Trochim [2006] for general explanations of validity and reliability in social science measurement);
4. Establishing which outcomes are best documented using self-reporting, and developing new tools or adapting existing tools to measure other outcomes; and
5. Gathering empirical evidence to identify the distinctive dimensions of CUREs and ways to characterize the degree to which they are present in a given CURE, as well as conducting investigations to characterize relationships between particular CURE dimensions or activities and student outcomes.

Following these recommendations will require a collective, scholarly effort involving many education researchers and evaluators and many CUREs that are diverse in terms of students, instructors, activities, and institutional contexts.
We suggest that priorities of this collective effort should be to:

1. Use current knowledge from the study of CUREs, research internships, and other relevant forms of laboratory instruction (e.g., inquiry) to define short-, medium-, and long-term outcomes that may result from student participation in CUREs;
2. Observe and characterize many diverse CUREs to identify the activities within CUREs likely to directly result in these short-term outcomes, delineating both rewards and difficulties students encounter as they participate;
3. Use frameworks or theories and current knowledge to hypothesize pathways students may take toward achieving long-term outcomes—the connections between activities and short-, medium-, and long-term outcomes;
4. Determine whether one can identify key short- and medium-term outcomes that serve as important “linchpins,” or connecting points, through which students progress to achieve desired long-term outcomes; and
5. Assess the extent to which students achieve these key outcomes as a result of CURE instruction, using existing or novel instruments (e.g., surveys, interview protocols, tests) that have been demonstrated to be valid and reliable measures of the desired outcomes.

At the front end, this process will require increased application of learning theories and consideration of the supporting research literature, but it is likely to result in many highly testable hypotheses and a more focused and informative approach to CURE assessment overall. For example, if we can define pathways from activities to outcomes, instructors will be better able to select activities to include or emphasize during CURE instruction and decide which short-term outcomes to assess. Education researchers and evaluators will be better able to hypothesize which aspects of CURE instruction are most critical for desired student outcomes and the most salient to study.
Drawing from many of the references cited in this report, we have drafted a logic model for CURE instruction (Figure 1) as the first step in this process. (For more on logic models, see guidance from the W. K. Kellogg Foundation [2006].) The model includes the range of contexts, activities, outputs, and outcomes of CUREs that arose during our discussion. The model also illustrates hypothetical relationships between time, participation in CUREs, and short- and long-term outcomes resulting from CURE activities. Figure 1. CURE logic model. This model depicts the set of variables at play in CUREs identified by the authors. During CUREs, students can work individually, in groups, or with faculty (context, green box on left) to perform corresponding activities (middle, red boxes) that yield measurable outputs (middle, pink boxes). Activities and outputs are grouped according to the five related elements of CUREs (orange boxes and arrow). Possible CURE outcomes (blue) are ordered left to right according to when students might be able to demonstrate the outcome (blue arrow) and whether the outcome is likely to be achievable from participation in a single vs. multiple CUREs (blue triangle). It is important to recognize that, given the limited time frame and scope of any single CURE, students will not participate in all possible activities or achieve all possible outcomes depicted in the model. Rather, CURE instructors or evaluators could define a particular path and use it as a guide for designing program evaluations and assessing student outcomes. Figure 2 presents an example of how to do this with a focus on a subset of CURE activities and outcomes. It is a simplified pathway model based on findings from the research on undergraduate research internships and CUREs summarized above.
Boxes in this model are potentially measurable waypoints, or steps, on a path that connects student participation in three CURE activities with the short-term outcomes students may realize during the CURE, medium-term outcomes they may realize at the end of or after the CURE, and potential long-term outcomes. Although each pathway is supported by evidence or hypotheses from the study of CUREs and research internships, these are not the only means to achieve long-term outcomes, and they do not often act alone. Rather, the model is intended to illustrate that certain short- and medium-term outcomes are likely to have a positive effect on linked long-term outcomes. See Urban and Trochim (2009) for a more detailed discussion of this approach. Figure 2. Example of a pathway model to guide CURE assessment. This model identifies a subset of activities (beige) students are likely to do during a CURE and the short- (pink), medium- (blue), and long- (green) term outcomes they may experience as a result. The arrows depict demonstrated or hypothesized relationships between activities and outcomes. (This figure is generated using software from the Cornell Office of Research and Evaluation [2010].) We explain below the example depicted in Figure 2, referencing explicit waypoints on the path with italics. This model is grounded in situated-learning theory (Lave and Wenger, 1991), which proposes that learning involves engagement in a “community of practice,” a group of people working on a common problem or endeavor (e.g., addressing a particular research question) and using a common set of practices (e.g., science practices). Situated-learning theory envisions learning as doing (e.g., presenting and evaluating work) and as belonging (e.g., interacting with faculty and peers, building networks), factors integral to becoming a practitioner (Wenger, 2008)—in the case of CUREs, becoming a scientist. 
Retention in a science major is a desired and measurable long-term outcome (bottom of Figure 2) that indicates students are making progress in becoming scientists and has been shown to result from participation in research (Perna et al., 2009; Eagan et al., 2013). Based on situated-learning theory, we hypothesize that three activities students might engage in are likely to lead to retention in a science major: design methods, present their work, and evaluate their own and others’ work during their research experience (Caruso et al., 2009; Harrison et al., 2011; Hanauer et al., 2012). These activities reflect the dimensions of “use of scientific practices” and “collaboration” described above. Following the right-hand path in the model, when students present their work and evaluate their own and others’ work, they will likely interact with each other and with faculty (Eagan et al., 2011). Interactions with faculty and interactions with peers may lead to improvements in students’ communication and collaboration skills, including their abilities to defend their work, negotiate, and make decisions about their research based on interactions (Ryder et al., 1999; Alexander et al., 2000; Seymour et al., 2004). Through these interactions, students may expand their professional networks, which may in turn offer increased access to mentoring (Packard, 2004; Eagan et al., 2011). Mentoring relationships, especially with faculty, connect undergraduates to networks that promote their education and career development by building their sense of scientific identity and defining their role within the broader scientific community (Crisp and Cruz, 2009; Hanauer, 2010; Thiry et al., 2010; Thiry and Laursen, 2011; Stanton-Salazar, 2011). Peer and faculty relationships also offer socio-emotional support that can foster students’ resilience and their ability to navigate the uncertainty inherent to science research (Chemers et al., 2011; Thiry and Laursen, 2011). 
Finally, research on factors that lead to retention in science majors indicates that increased science identity (Laursen et al., 2010; Estrada et al., 2011), ability to navigate uncertainty, and resilience are important precursors to a sense of belonging and ultimate retention (Gregerman et al., 1998; Zeldin and Pajares, 2000; Maton and Hrabowski, 2004; Seymour et al., 2004). The model also suggests that access to mentoring is a linchpin, a short- to medium-term outcome that serves as a connecting point through which activities are linked to long-term outcomes. Thus, access to mentoring might be assessed to diagnose students’ progress along the top pathway and predict the likelihood that they will achieve long-term outcomes. (For more insight into why assessing linchpins is particularly informative, see Urban and Trochim [2009].) Examples of measures that may be useful for testing aspects of this model and for which validity and reliability information is available include: the scientific identity scale developed by Chemers and colleagues (2011) and revised by Estrada and colleagues (2011); the student cohesiveness, teacher support, and cooperation scales of the What Is Happening in This Class? questionnaire (Dorman, 2003); and the faculty mentorship items published by Eagan and colleagues (2011). Data will need to be collected and analyzed using standard validation procedures to determine the usefulness of these scales for studying CUREs. Qualitative data from interviews or focus groups can be used to determine that students perceive these items as measuring relevant aspects of their CURE experiences and to confirm that they are interpreting the questions as intended. 
For example, developers of the Undergraduate Research Student Self-Assessment instrument used extensive interview data to identify key dimensions of student outcomes from research apprenticeship experiences, and then used think-aloud interviews to test and refine the wording of survey items (Hunter et al., 2009). Interviews can also establish whether items apply to different groups of students. For example, items in the scientific identity scale (e.g., “I feel like I belong in the field of science”) may seem relevant, and thus “valid,” to science majors but not to non–science majors. Similarly, the faculty-mentoring items noted above (Eagan et al., 2011) include questions about whether faculty provided, for example, “encouragement to pursue graduate or professional study” or “an opportunity to work on a research project.” The first item will be most relevant to students who are enrolled in an advanced rather than an introductory CURE, while the second may be relevant only to students early enough in their undergraduate careers to have time to pursue a research internship. In addition, students may interpret the phrase “opportunity to work on a research project” in ways that are unrelated to mentorship by faculty, especially in the context of a CURE class with its research focus. Statistical analyses (e.g., factor analysis, calculation of Cronbach's alpha; Netemeyer et al., 2003) should confirm that the scales are consistent and stable: are they measuring what they are intended to measure, and do they do so consistently? Such analyses would help determine whether students are responding as anticipated to particular items or scales and whether instruments developed to measure student outcomes of research internships can detect student growth from participation in CUREs, which are different experiences.

We can also follow the left-hand path in this model with a focus on the CURE activities of designing methods and presenting work.
This path is grounded in Baxter Magolda's (2003) work on students’ epistemological development and her theory of self-authorship. Specifically, as students take ownership of their learning, they transition from seeing themselves as consumers of knowledge to seeing themselves as producers of knowledge. Some students who design their own methods and present their work report an increased sense of ownership of the research (Hanauer et al., 2012; Hanauer and Dolan, 2014). Increased ownership has been shown to improve motivation and self-efficacy. Self-efficacy and motivation work in a positive-feedback loop to enhance one another and contribute to development of long-term outcomes, such as increased resilience (Graham et al., 2013). Social cognitive theory is useful for explaining this relationship: if people believe they are capable of accomplishing a task—described in the literature as self-efficacy—they are more likely to put forth effort, persist in the task, and be resilient in the face of failure (Bandura, 1986; Zeldin and Pajares, 2000). Self-efficacy has also been positively related to science identity (Zeldin and Pajares, 2000; Seymour et al., 2004; Hanauer, 2010; Estrada et al., 2011; Adedokun et al., 2013). Thus, self-efficacy becomes a linchpin that interacts closely with motivation and can be connected to retention in a science major. Existing measures that may be useful for testing this model and for which validity and reliability information is available include: the Project Ownership Survey (Hanauer and Dolan, 2014); scientific self-efficacy and scientific identity scales (Chemers et al., 2011; Estrada et al., 2011); and the self-authorship items from the Career Decision Making Survey (Creamer et al., 2010). Again, data would need to be collected and analyzed using standard validation procedures to determine the usefulness of these scales for studying CUREs.
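One of the validation checks named above, Cronbach's alpha, estimates how consistently a set of survey items measures a single underlying construct. As a concrete illustration, the sketch below computes alpha from its standard formula; the function name and the Likert-style responses are invented for this example and are not drawn from any of the cited instruments.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate the internal consistency of a multi-item scale.

    items: 2-D array of shape (n_respondents, n_items), one Likert
    response per cell. Returns alpha = (k/(k-1)) * (1 - sum of item
    variances / variance of summed scores).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 students answering 4 science-identity items (1-5 Likert)
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))  # prints 0.97
```

By common convention, values above roughly 0.7 are taken to indicate acceptable internal consistency, though the threshold appropriate for a given scale depends on its intended use.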
When considering what to include in a model or which pathways to emphasize, we encourage CURE stakeholders to remember that each CURE is in its own stage of development and has its own life cycle. Some are just starting and others are well established. CUREs at the beginning stages of implementation are likely to be better served by evaluating how well the program is being implemented before evaluating downstream student outcomes. Thus, early in the development of a CURE, those who are assessing CUREs may want to model a limited set of activities, outputs, and short-term outcomes. CUREs at later stages of development may focus more of their evaluation efforts on long-term student outcomes because earlier evaluations have demonstrated stability of the program's implementation. At this point, findings regarding student outcomes can more readily be attributed to participation in the CURE.

Last, we would like to draw some comparisons between CUREs and research internships because these different experiences are likely to offer unique and complementary ways of engaging undergraduates in research that could be informative for CURE assessment. As noted above, a handful of studies indicate that CURE students may realize some of the same outcomes observed for students in research internships (Goodner et al., 2003; Drew and Triplett, 2008; Lopatto et al., 2008; Caruso et al., 2009; Shaffer et al., 2010; Harrison et al., 2011). Yet, differences between CUREs and research internships (Table 1) are likely to influence the extent to which students achieve any particular outcome. For example, CUREs may offer different opportunities for student input and autonomy (Patel et al., 2009; Hanauer et al., 2012; Hanauer and Dolan, 2014; Table 2). The structure of CUREs may allow undergraduates to assume more responsibility in project decision making and take on leadership roles that are less often available in research internships.
CUREs may involve more structured group work, providing avenues for students to develop analytical and collaboration skills as they explain or defend their thinking and provide feedback to one another. In addition, CURE students may have increased opportunities to develop and express skepticism because they are less likely to see their peers as authority figures. Alternatively, some CURE characteristics may limit the nature or extent of outcomes that students realize. CUREs take place in classroom environments with a much higher student–faculty ratio than is typical of research internships. With fewer experienced researchers to model scientific practices and provide feedback, students may be less likely to develop a strong understanding of the nature of science or a scientific identity. The amount of time students spend doing the work in a CURE course is likely to be significantly less than what they would spend in a research internship. Students who enroll in CURE courses may be less interested in research, which may affect their own and their classmates’ motivation and longer-term outcomes related to motivation. Research interns are more likely to develop close collegial relationships with faculty and other researchers, such as graduate students, postdoctoral researchers, and other research staff, who can in turn expand their professional networks. In addition, CURE instructors may have limited specialized knowledge of the science that underpins the CURE. Thus, CURE students may not have access to sufficient mentorship or expertise to maximize the scientific and learning outcomes.

SUMMARY

This report is a first attempt to capture the distinct characteristics of CUREs and discuss ways in which they can be systematically evaluated. Utilizing current research on CUREs and on research internships, we identify and describe five dimensions of CURE instruction: use of scientific practices, discovery, broader relevance or importance, iteration, and collaboration.
We describe how these elements might vary among different laboratory learning experiences and recommend an approach to CURE assessment that can characterize CURE activities and outcomes. We hope that our discussion draws attention to the importance of developing, observing, and characterizing many diverse CUREs. We also hope that this report successfully highlights the enormous potential of CUREs, not only to support students in becoming scientists, but also to provide research experiences to increasing numbers of students who will enter the workforce as teachers, employers, entrepreneurs, and young professionals. We intend for this report to serve as a starting point for a series of informed discussions and education research projects that will lead to far greater understanding of the uses, value, and impacts of CUREs, ultimately resulting in cost-effective, widely accessible, quality research experiences for a large number of undergraduate students.
Author and article information

Journal: CBE—Life Sciences Education (CBE Life Sci Educ), American Society for Cell Biology; ISSN 1931-7913
Issue: Summer 2016, Volume 15, Issue 2

Affiliations:
Biology Department, Gonzaga University, Spokane, WA 99258
Wisconsin Center for Education Research, University of Wisconsin–Madison, Madison, WI 53716
Department of Psychology, Grinnell College, Grinnell, IA 50112

*Address correspondence to: Nancy L. Staub (staub@gonzaga.edu)

Article: CBE.15-10-0211
DOI: 10.1187/cbe.15-10-0211
PMCID: PMC4909335
PMID: 27146160
Published: June 1, 2016
Subject: Education

© 2016 N. L. Staub et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

“ASCB®” and “The American Society for Cell Biology®” are registered trademarks of The American Society for Cell Biology.
