
      The timecourse of multisensory speech processing in unilaterally stimulated cochlear implant users revealed by ERPs


          Highlights

          • Both normal-hearing (NH) and cochlear implant (CI) users show a clear benefit in multisensory speech processing.

          • Group differences in ERP topographies and cortical source activation suggest distinct audiovisual speech processing in CI users when compared to NH listeners.

          • Electrical neuroimaging, including topographic and ERP source analysis, provides a suitable tool to study the timecourse of multisensory speech processing in CI users.
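          The electrical-neuroimaging measures named in the last highlight rest on two reference-independent quantities: global field power (GFP, the spatial standard deviation of the scalp map at each time point) and global map dissimilarity (DISS, the GFP of the difference between two GFP-normalised maps). A minimal NumPy sketch of both measures follows; the function names and the (channels × samples) array layout are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def gfp(erp):
    """Global field power: root-mean-square across electrodes at each
    sample (equal to the spatial SD for average-referenced data).
    erp: array of shape (n_channels, n_samples)."""
    return np.sqrt((erp ** 2).mean(axis=0))

def diss(erp_a, erp_b):
    """Global map dissimilarity between two conditions: the GFP of the
    difference of the GFP-normalised maps. Ranges from 0 (identical
    topographies) to 2 (spatially inverted topographies)."""
    a = erp_a / gfp(erp_a)  # scale every time point to unit GFP
    b = erp_b / gfp(erp_b)
    return gfp(a - b)
```

          Because DISS is insensitive to overall map strength, a significant DISS between conditions or groups (typically assessed with a randomisation test, the so-called TANOVA) points to different configurations of underlying cortical generators rather than a mere gain difference.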

          Abstract

          A cochlear implant (CI) is an auditory prosthesis which can partially restore auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual systems. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response which was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. When compared to the NH listeners, the CI users showed an additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, which allows the CI users to improve their lip-reading skills and to approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing the CI outcome not only in auditory-only, but also in audiovisual speech conditions.
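          The shortened audiovisual response times reported in the abstract are the classic redundant-signals effect. In redundant-target paradigms, the standard way to show that such speeding reflects genuine multisensory integration, rather than statistical facilitation from two independently racing channels, is Miller's (1982) race-model inequality, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). The sketch below, using simulated reaction times, illustrates the test; it is a hedged illustration of the general technique, not the authors' exact procedure.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Miller's race-model inequality: P(RT_AV <= t) must not exceed
    P(RT_A <= t) + P(RT_V <= t) if the two channels race independently.
    Positive return values indicate violations, i.e. integration."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Hypothetical demo with simulated RTs in milliseconds.
rng = np.random.default_rng(0)
t_grid = np.linspace(200, 800, 61)
violation = race_model_violation(rng.normal(420, 60, 200),  # audiovisual
                                 rng.normal(500, 70, 200),  # auditory
                                 rng.normal(520, 80, 200),  # visual
                                 t_grid)
print("max violation:", violation.max())
```

          In practice the violation is usually evaluated per participant over the fast quantiles of the RT distribution and then tested against zero across the group.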


                Author and article information

                Journal
                NeuroImage: Clinical (Neuroimage Clin), Elsevier
                ISSN: 2213-1582
                Published: 04 March 2022, Volume 34
                Affiliations
                [a] University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
                [b] Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
                [c] The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
                [d] The LINE (The Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
                [e] CIBM Center for Biomedical Imaging of Lausanne and Geneva, Lausanne, Switzerland
                [f] Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
                Author notes
                [*] Corresponding author. natalie.layer@uk-koeln.de
                Article
                PII: S2213-1582(22)00047-X, Article No. 102982
                DOI: 10.1016/j.nicl.2022.102982
                PMCID: PMC8927996
                PMID: 35303598
                © 2022 The Authors

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

                Categories
                Regular Article

                Keywords: cochlear implant, event-related potential, cortical plasticity, multisensory integration, audiovisual interaction, audiovisual speech perception
