Student evaluations of teaching (mostly) do not measure teaching effectiveness


Abstract

Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. We show:

  • SET are biased against female instructors by an amount that is large and statistically significant.

  • The bias affects how students rate even putatively objective aspects of teaching, such as how promptly assignments are graded.

  • The bias varies by discipline and by student gender, among other things.

  • It is not possible to adjust for the bias, because it depends on so many factors.

  • SET are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness.

  • Gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.

These findings are based on nonparametric statistical tests applied to two datasets: 23,001 SET of 379 instructors by 4,423 students in six mandatory first-year courses in a five-year natural experiment at a French university, and 43 SET for four sections of an online course in a randomized, controlled, blind experiment at a US university.
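The abstract does not spell out the tests. For readers unfamiliar with nonparametric methods, a minimal two-sample permutation test on mean SET by instructor gender is sketched below. This is an illustration only, not the authors' code: the function name and inputs are hypothetical, and the unstratified design is a simplification of whatever conditioning the two experimental designs call for.

```python
import numpy as np

def permutation_test_mean_diff(ratings_male, ratings_female, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in mean SET between
    male- and female-taught sections (illustrative, unstratified)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(ratings_male, dtype=float)
    b = np.asarray(ratings_female, dtype=float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    n_a = a.size
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel ratings at random
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm               # observed gap, permutation p-value

# Hypothetical ratings, for illustration only:
gap, p = permutation_test_mean_diff([4, 5, 3, 4, 5, 4], [3, 4, 3, 4, 3, 4])
print(gap, p)
```

The appeal of such a test is that the p-value rests only on the exchangeability implied by the design, with no distributional assumptions about the ratings.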


Author and article information

Affiliations
[1] OFCE, SciencesPo, Paris, France
[2] PSL, Université Paris-Dauphine, LEDa, UMR DIAL, Paris, France
[3] Department of Statistics, University of California, Berkeley, CA, USA
Author notes
[*] Corresponding author's e-mail address: pbstark@berkeley.edu
Journal
ScienceOpen Research (SOR-EDU)
ScienceOpen
ISSN: 2199-1006
Published: 07 January 2016
Volume: 0
Issue: 0
Pages: 1-11
© 2016 Boring et al.

This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

Counts
Figures: 0, Tables: 11, References: 23, Pages: 11
Categories
Original article

Comments

I have some comments:

  1. Use confidence intervals, not p values (cf. 'New Statistics'). These can be obtained by bootstrapping (a minimal sketch follows this comment). If you really want p values, then report both.
  2. Put the effect sizes with confidence intervals in the abstract.
  3. Alter the emphasis to be more about the lack of correlation between SET and teacher ability. The focus on gender bias is not warranted given the small effect. It is clearly not "large" as claimed: mean r = 0.09 ≈ d = 0.20 (see the conversion after this list), which by Cohen's standards is small. There is too much of the social justice warrior about this article.
  4. It is possible to adjust for the gender bias by simply adding a small correction to the female instructors' SET, multiplied by the proportion of male students.
  5. Note in the abstract which gender was the biased one (males against females, not females against males, cf. Table 5).
  6. A simpler recommendation is that SETs should not be used at all, since they apparently do not correlate with actual performance. Why have them? This sidesteps the entire gender bias issue.
  7. Add to the abstract that the gender effect could not be explained by male instructors being better (i.e. findings from Table 4).
  8. The technical aspect of the analyses seemed fine to me.
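A quick check of the effect-size arithmetic in point 3, under the assumption (the commenter's, not the paper's) that the reported r is a point-biserial correlation with roughly equal group sizes:

```latex
d = \frac{2r}{\sqrt{1 - r^{2}}}
  = \frac{2 \times 0.09}{\sqrt{1 - 0.09^{2}}}
  \approx 0.18
```

which is close to the d ≈ 0.20 quoted above and small by Cohen's conventional benchmarks.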

Hope these comments are useful. Overall, I liked the article.
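On point 1, a percentile bootstrap for the gap in mean SET is easy to compute. The sketch below is illustrative only, with hypothetical inputs; it is not the authors' analysis and it ignores the clustering of ratings within courses and sections.

```python
import numpy as np

def bootstrap_ci_mean_diff(ratings_male, ratings_female, n_boot=10_000,
                           alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the difference in
    mean SET between male- and female-taught sections (illustrative)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(ratings_male, dtype=float)
    b = np.asarray(ratings_female, dtype=float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each group with replacement and record the gap in means.
        diffs[i] = (rng.choice(a, size=a.size, replace=True).mean()
                    - rng.choice(b, size=b.size, replace=True).mean())
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return a.mean() - b.mean(), (lo, hi)

# Hypothetical ratings, for illustration only:
gap, (lo, hi) = bootstrap_ci_mean_diff([4, 5, 3, 4, 5, 4], [3, 4, 3, 4, 3, 4])
print(f"gap = {gap:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The percentile interval is the simplest choice here; BCa or studentized intervals would be natural refinements.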

2016-03-22 12:29 UTC
