The capacity to broadcast information, particularly in the form of images and video, has grown immensely and will likely continue to grow in the coming years. Whether through digital signage in public places or streaming video in homes and on phones, the quantity and quality of available visual and audio information is remarkable, provided you face no barriers to consuming it. For people with hearing, vision, or cognitive impairments, however, these new channels of communication can be exclusionary. Issues surrounding video and audio content are not new for the deaf and hard-of-hearing community, and subtitles have existed for decades. Yet until recently, little attention has been paid to how subtitles and captions are actually processed by the different groups and individuals who rely on them. How much cognitive effort is required to process subtitled information accurately and quickly? Is there an optimal way to design the timing and pacing of subtitles? Addressing these questions requires a closer look at some fundamental processes of human perception and cognition.