      Student evaluations of teaching (mostly) do not measure teaching effectiveness

Original article

          Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. We show:

          • SET are biased against female instructors by an amount that is large and statistically significant.

          • The bias affects how students rate even putatively objective aspects of teaching, such as how promptly assignments are graded.

          • The bias varies by discipline and by student gender, among other things.

          • It is not possible to adjust for the bias, because it depends on so many factors.

          • SET are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness.

          • Gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.

          These findings are based on nonparametric statistical tests applied to two datasets: 23,001 SET of 379 instructors by 4,423 students in six mandatory first-year courses in a five-year natural experiment at a French university, and 43 SET for four sections of an online course in a randomized, controlled, blind experiment at a US university.
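The paper's actual nonparametric tests are not reproduced here, but the general approach can be sketched with a minimal two-sample permutation test in stdlib Python; the group labels and scores in the usage example are made up for illustration, not the paper's data:

```python
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in mean scores
    between groups a and b (e.g. SET for male vs. female instructors).
    Returns the Monte Carlo p-value: the fraction of random relabelings
    whose mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return count / n_perm

# Toy usage: two hypothetical groups of ratings with clearly different means
p = permutation_test([5, 4, 5, 4, 5, 4], [2, 3, 2, 3, 2, 3])
```

Because the null distribution is built by shuffling the labels rather than assuming normality, the test is nonparametric in the same spirit as the analyses described above.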



                Author and article information

                (View ORCID Profile)
                (View ORCID Profile)
                ScienceOpen Research
                07 January 2016
                : 0 (ID: 818d8ec0-5908-47d8-86b4-5dc38f04b23e )
                : 0
                : 1-11
                [1 ]OFCE, SciencesPo, Paris, France
                [2 ]PSL, Université Paris-Dauphine, LEDa, UMR DIAL, Paris, France
                [3 ]Department of Statistics, University of California, Berkeley, CA, USA
                Author notes
                [* ]Corresponding author's e-mail address: pbstark@ 123456berkeley.edu
                © 2016 Boring et al.

                This work has been published open access under Creative Commons Attribution License CC BY 4.0 , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com .

                Page count
                Figures: 0, Tables: 11, References: 23, Pages: 11


                I have some comments:

                1. Use confidence intervals, not p values (cf. 'New Statistics'). These can be obtained by bootstrapping. If you really want p values, then report both.
                2. Put the effect sizes with confidence intervals in the abstract.
3. Alter the emphasis to be more about the lack of correlation between SET and teaching effectiveness. The focus on gender bias is not warranted given the small effect, which is clearly not "large" as claimed: mean r = 0.09, i.e. d ≈ 0.2, which by Cohen's standards is small. There is too much of a social-justice-warrior tone to this article.
4. It is possible to adjust for the gender bias by simply adding a small correction to female instructors' SET, scaled by the proportion of male students.
5. Note in the abstract which gender was the biased one (male students biased against female instructors, not female students against male instructors; cf. Table 5).
                6. A simpler recommendation is that SETs should not be used at all since they apparently do not correlate with actual performance. Why have them? This side-steps the entire gender bias issue.
                7. Add to the abstract that the gender effect could not be explained by male instructors being better (i.e. findings from Table 4).
                8. The technical aspect of the analyses seemed fine to me.
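Points 1 and 3 above are easy to make concrete. A percentile-bootstrap confidence interval for the male vs. female SET gap needs only a few lines of stdlib Python, and the standard conversion d = 2r/√(1 − r²) turns the reported correlation into a Cohen's d (for r = 0.09 it gives d ≈ 0.18, in line with the "small" characterization). The sample ratings below are made up for illustration:

```python
import random
import math

def bootstrap_ci(sample_a, sample_b, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the difference in mean SET
    between two groups (e.g. male vs. female instructors)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(sample_a) for _ in sample_a]
        rb = [rng.choice(sample_b) for _ in sample_b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def r_to_d(r):
    """Convert a (point-biserial) correlation r to Cohen's d."""
    return 2 * r / math.sqrt(1 - r * r)

# Toy usage with hypothetical ratings for two instructor groups
lo, hi = bootstrap_ci([3, 4, 5, 4, 3, 4, 5, 4], [2, 3, 2, 3, 2, 3, 2, 3])
print(round(r_to_d(0.09), 2))  # 0.18
```

Reporting the interval (lo, hi) alongside, or instead of, a p-value is exactly what the "New Statistics" recommendation in point 1 asks for.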

                Hope these comments are useful. Overall, I liked the article.

                2016-03-22 12:29 UTC