Abstract
When participants follow spoken instructions to pick up and move objects in a visual
workspace, their eye movements to the objects are closely time-locked to referential
expressions in the instructions. Two experiments used this methodology to investigate
the processing of the temporary ambiguities that arise because spoken language unfolds
over time. Experiment 1 examined the processing of sentences with a temporarily ambiguous
prepositional phrase (e.g., "Put the apple on the towel in the box") using visual
contexts that supported either the normally preferred initial interpretation (the
apple should be put on the towel) or the less-preferred interpretation (the apple
is already on the towel and should be put in the box). Eye movement patterns clearly
established that the initial interpretation of the ambiguous phrase was the one consistent
with the context. Experiment 2 replicated these results using prerecorded digitized
speech to eliminate any possibility of prosodic differences across conditions or experimenter
demand. Overall, the findings are consistent with a broad theoretical framework in
which real-time language comprehension immediately takes into account a rich array
of relevant nonlinguistic context.