Research on emotion recognition has been dominated by studies of photographs of facial
expressions. A full understanding of emotion perception and its neural substrate will
require investigations that employ dynamic displays and means of expression other
than the face. Our aims were: (i) to develop a set of dynamic and static whole-body
expressions of basic emotions for systematic investigations of clinical populations,
and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification
performance with these stimuli relative to the results of previous studies; and (iii)
to test the hypotheses that more exaggerated whole-body movements would produce (a)
more accurate emotion classification and (b) higher ratings of emotional intensity.
Ten actors portrayed five emotions (anger, disgust, fear, happiness, and sadness) at
three levels of exaggeration, with their faces covered. Two identical sets of 150 emotion
portrayals (full-light and point-light) were created from the same digital footage,
along with corresponding static images of the 'peak' of each emotion portrayal. Recognition
tasks confirmed previous findings that basic emotions are readily identifiable from
body movements, even when static form information is minimised by use of point-light
displays, and that static full-light and even static point-light displays can convey
identifiable emotions, though rather less efficiently than dynamic displays. Recognition success
differed for individual emotions, corroborating earlier results about the importance
of distinguishing differences in movement characteristics for different emotional
expressions. The patterns of misclassifications were in keeping with earlier findings
on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy,
especially for the dynamic point-light displays, but notably not for sadness, and
(b) produced higher emotional-intensity ratings, regardless of lighting condition,
for movies but to a lesser extent for stills, indicating that intensity judgements
of body gestures rely more on movement (or form-from-movement) than on static form information.