      Is Open Access

      A systematic review of transparency in Lesson Study research: how do we report on the observation and reflection stages?

      Frontiers in Education
      Frontiers Media SA


          Abstract

          Lesson Study is a method of professional development for teachers that has gained traction in recent decades. However, publications routinely fail to describe crucial details of the implementation or to link the mechanisms that facilitate teachers' learning in Lesson Study to theory. This makes it difficult to meaningfully synthesize and replicate research findings. Using a protocol based on three dimensions of transparency, this systematic review examines 129 articles on Lesson Study published between 2015 and 2020 to identify how transparent they were in their reporting of how teachers observed and reflected together. The findings indicate a lack of transparency across several dimensions of how the Lesson Study intervention is reported and highlight a current lack of theorization and coherence in the field. To address some of these issues, we propose a framing structure to which empirical papers on Lesson Study should give critical attention in order to ensure relevance and transferability.

          Related collections

          Most cited references (222)


          Rayyan—a web and mobile app for systematic reviews

          Background
          Synthesis of multiple randomized controlled trials (RCTs) in a systematic review can summarize the effects of individual outcomes and provide numerical answers about the effectiveness of interventions. Filtering of searches is time consuming, and no single method fulfills the principal requirements of speed with accuracy. Automation of systematic reviews is driven by a necessity to expedite the availability of current best evidence for policy and clinical decision-making. We developed Rayyan (http://rayyan.qcri.org), a free web and mobile app that helps expedite the initial screening of abstracts and titles using a process of semi-automation while incorporating a high level of usability. For the beta testing phase, we used two published Cochrane reviews in which included studies had been selected manually. Their searches, with 1030 records and 273 records, were uploaded to Rayyan. Different features of Rayyan were tested using these two reviews. We also conducted a survey of Rayyan's users and collected feedback through a built-in feature.

          Results
          Pilot testing of Rayyan focused on usability, accuracy against manual methods, and the added value of the prediction feature. The "taster" review (273 records) allowed a quick overview of Rayyan for early comments on usability. The second review (1030 records) required several iterations to identify the previously identified 11 trials. The "suggestions" and "hints," based on the "prediction model," appeared as testing progressed beyond five included studies. Post-rollout user experiences and a reflexive response by the developers enabled real-time modifications and improvements. The survey respondents reported 40% average time savings when using Rayyan compared to other tools, with 34% of the respondents reporting more than 50% time savings. In addition, around 75% of the respondents identified screening and labeling studies, as well as collaborating on reviews, as the two most important features of Rayyan. As of November 2016, Rayyan users exceeded 2000 from over 60 countries, conducting hundreds of reviews totaling more than 1.6M citations. Feedback from users, obtained mostly through the app web site and a recent survey, has highlighted the ease of exploring searches, the time saved, and the simplicity of sharing and comparing include-exclude decisions. The strongest features of the app, identified in user feedback, were its ability to help in screening and collaboration, as well as the time savings it affords users.

          Conclusions
          Rayyan is responsive and intuitive to use, with significant potential to lighten the load of reviewers.

            Interrater reliability: the kappa statistic

            The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, which accounts for the possibility that raters guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from −1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. There is debate about what level of kappa should be considered acceptable for health research. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
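            The two quantities described above — raw percent agreement and Cohen's chance-corrected kappa, κ = (p_o − p_e)/(1 − p_e) — can be sketched in a few lines of Python. This is an illustration of the calculation only, not code from the cited article; the rater labels in the example are invented.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of items on which the two raters assign the same label."""
    assert len(r1) == len(r2) and len(r1) > 0
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected if each rater assigned labels at
    random according to their own marginal label frequencies.
    """
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[label] / n) * (c2[label] / n) for label in set(r1) | set(r2))
    if p_e == 1.0:  # degenerate case: both raters use one identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions from two screeners (illustrative only)
r1 = ["include", "include", "exclude", "exclude", "include", "exclude"]
r2 = ["include", "exclude", "exclude", "exclude", "include", "include"]
print(percent_agreement(r1, r2))  # 4 of 6 items agree: 0.666...
print(cohens_kappa(r1, r2))       # 0.333... after the chance correction
```

            The example shows the point Cohen raised: two-thirds raw agreement shrinks to a kappa of about 0.33 once chance agreement (here p_e = 0.5, since both raters split their labels evenly) is subtracted out.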

              Accommodation of a scientific conception: Toward a theory of conceptual change


                Author and article information

                Journal
                Frontiers in Education
                Front. Educ.
                Frontiers Media SA
                2504-284X
                April 5 2024
                Volume 9
                DOI: 10.3389/feduc.2024.1322624
                © 2024

                Free to read

                https://creativecommons.org/licenses/by/4.0/
