There is a growing body of research investigating assessor judgments in complex performance environments such as OSCE examinations. Post hoc analysis can identify some elements of "unwanted" assessor variance. However, the impact of individual, apparently "extreme" assessors on OSCE quality, assessment outcomes, and pass/fail decisions has not been previously explored. This paper uses a range of "case studies" as examples to illustrate the impact that "extreme" examiners can have in OSCEs, and offers pragmatic suggestions for successfully alleviating such problems.