In the context of face processing, the skill of processing speech from faces (speechreading) occupies a unique cognitive and neuropsychological niche. Neuropsychological dissociations in two cases (Campbell et al., 1986) suggested a clear pattern: speechreading, but not face recognition, can be impaired by left-hemisphere damage, whereas face-recognition impairment consequent on right-hemisphere damage leaves speechreading unaffected. However, this story soon proved too simple, as neuroimaging techniques began to reveal further, more detailed patterns. These patterns, moreover, were readily accommodated within the Bruce and Young (1986) model. Speechreading requires structural encoding of faces as faces, but further analysis of visible speech is supported by a network comprising several lateral temporal and inferior frontal regions. Posterior superior temporal regions play a significant role in speechreading natural speech, including audiovisual binding in hearing people. In deaf people, similar regions and circuits are implicated. Although these detailed developments were not predicted by Bruce and Young, their model has stood the test of time, affording a structural framework for exploring speechreading in terms of face processing.