
      Evidence of visual crossmodal reorganization positively relates to speech outcomes in cochlear implant users

      research-article


          Abstract

          Deaf individuals who use a cochlear implant (CI) have remarkably different outcomes for auditory speech communication ability. One factor assumed to affect CI outcomes is visual crossmodal plasticity in auditory cortex, where deprived auditory regions begin to support non-auditory functions such as vision. Some previous research has viewed crossmodal plasticity as harmful to speech outcomes in CI users if it interferes with sound processing, while other work has demonstrated that plasticity related to visual language may be beneficial for speech recovery. To clarify this relationship, we used electroencephalography (EEG) to measure brain responses to a partial face speaking a silent single-syllable word (visual language) in 15 CI users and 13 age-matched typical-hearing controls. We used source analysis of EEG activity to measure crossmodal visual responses in auditory cortex and then compared them to CI users’ speech-in-noise listening ability. CI users’ brain response to the onset of the video stimulus (face) was larger than controls’ in left auditory cortex, consistent with crossmodal activation after deafness. While watching lip movement, CI users also produced a mixture of alpha (8–12 Hz) synchronization and desynchronization in auditory cortex, whereas controls showed only desynchronization. CI users with higher speech scores had stronger crossmodal responses in auditory cortex to the onset of the video, whereas those with lower speech scores showed increases in alpha power in auditory areas during lip movement. Therefore, evidence of crossmodal reorganization in CI users does not necessarily predict poor speech outcomes, and differences in crossmodal activation during lip reading may instead relate to the strategies that CI users adopt in audiovisual speech communication.

          Related collections

          Most cited references (60)


          EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis

          We have developed a toolbox and graphic user interface, EEGLAB, running under the cross-platform MATLAB environment (The MathWorks, Inc.) for processing collections of single-trial and/or averaged EEG data of any number of channels. Available functions include importing of EEG data, channel and event information, data visualization (scrolling, scalp map and dipole model plotting, plus multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), independent component analysis (ICA), and time/frequency decompositions including channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow users to customize data processing using command history and interactive 'pop' functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A 'plug-in' facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is freely available (http://www.sccn.ucsd.edu/eeglab/) under the GNU public license for noncommercial use and open source development, together with sample data, a user tutorial and extensive documentation.
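          The core decomposition EEGLAB provides, independent component analysis, can be illustrated with a toy FastICA run on synthetic mixtures. This is a minimal sketch, not EEGLAB's own implementation: the sources, mixing matrix, and tanh contrast function below are illustrative assumptions, and a real EEG analysis would use EEGLAB's ICA routines or a maintained library.

```python
import numpy as np

# Toy FastICA (deflation scheme, tanh contrast) illustrating the kind of
# independent component analysis EEGLAB applies to multi-channel EEG.
# All signals here are synthetic; nothing below is EEGLAB code.

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8, n)

s1 = np.sin(2 * np.pi * 2.5 * t)   # rhythmic "source"
s2 = 2 * (t % 1.0) - 1.0           # sawtooth "source"
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])         # assumed mixing ("electrode") matrix
X = A @ S                          # observed channel mixtures

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Extract components one at a time with the FastICA fixed-point update
#   w <- E[z * g(w.z)] - E[g'(w.z)] * w,  with g = tanh.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        u = w @ Z
        g, g_prime = np.tanh(u), 1.0 - np.tanh(u) ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # deflation: stay orthogonal
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-10
        w = w_new
        if converged:
            break
    W[i] = w

S_est = W @ Z  # recovered components (up to order, sign, and scale)
```

On clean, noiseless mixtures like these, each recovered component correlates almost perfectly with one of the true sources; on real EEG, components instead separate brain rhythms from artifacts such as eye blinks.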

            An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest.

            In this study, we have assessed the validity and reliability of an automated labeling system that we have developed for subdividing the human cerebral cortex on magnetic resonance images into gyral based regions of interest (ROIs). Using a dataset of 40 MRI scans we manually identified 34 cortical ROIs in each of the individual hemispheres. This information was then encoded in the form of an atlas that was utilized to automatically label ROIs. To examine the validity, as well as the intra- and inter-rater reliability of the automated system, we used both intraclass correlation coefficients (ICC), and a new method known as mean distance maps, to assess the degree of mismatch between the manual and the automated sets of ROIs. When compared with the manual ROIs, the automated ROIs were highly accurate, with an average ICC of 0.835 across all of the ROIs, and a mean distance error of less than 1 mm. Intra- and inter-rater comparisons yielded little to no difference between the sets of ROIs. These findings suggest that the automated method we have developed for subdividing the human cerebral cortex into standard gyral-based neuroanatomical regions is both anatomically valid and reliable. This method may be useful for both morphometric and functional studies of the cerebral cortex as well as for clinical investigations aimed at tracking the evolution of disease-induced changes over time, including clinical trials in which MRI-based measures are used to examine response to treatment.
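              The intraclass correlation used above to compare manual and automated ROIs can be sketched briefly. The abstract does not state which ICC variant was computed, so the one-way random-effects ICC(1,1) below, and the synthetic ratings, are assumptions for illustration only.

```python
import numpy as np

# One-way random-effects ICC(1,1):
#   ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)
# where MSB is the between-target and MSW the within-target mean square.
# Values near 1 indicate that raters (e.g., manual vs. automated labeling)
# agree closely relative to the variability between targets.

def icc_oneway(ratings):
    """ICC(1,1) for an (n_targets, k_raters) matrix of ratings."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Synthetic example: two "raters" measuring 10 targets with small
# independent errors around a shared true value.
rng = np.random.default_rng(1)
truth = rng.normal(100, 15, size=10)
ratings = np.column_stack([truth + rng.normal(0, 1, 10),
                           truth + rng.normal(0, 1, 10)])
print(round(icc_oneway(ratings), 3))  # close to 1: high agreement
```

Because the simulated rater error (SD 1) is small relative to the between-target spread (SD 15), the ICC comes out close to 1, mirroring the high agreement the study reports between manual and automated ROIs.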

              Event-related EEG/MEG synchronization and desynchronization: basic principles.

              An internally or externally paced event results not only in the generation of an event-related potential (ERP) but also in a change in the ongoing EEG/MEG in the form of an event-related desynchronization (ERD) or event-related synchronization (ERS). The ERP on the one hand and the ERD/ERS on the other are different responses of neuronal structures in the brain: the former is phase-locked to the event, while the latter is not. The most important difference between the two phenomena is that the ERD/ERS is highly frequency band-specific, whereby the same or different locations on the scalp can display ERD and ERS simultaneously. Quantification of ERD/ERS in time and space is demonstrated on data from a number of movement experiments.
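              ERD/ERS is conventionally quantified as the percentage change in band power during the event relative to a pre-event reference interval. A minimal sketch of that computation, using a synthetic alpha-band trace rather than real EEG (sampling rate, window placement, and the amplitude drop are illustrative assumptions):

```python
import numpy as np

# ERD% = (A - R) / R * 100, where A is band power in the activation window
# and R is band power in the pre-event reference window. Negative values
# indicate desynchronization (power decrease), positive values
# synchronization (power increase).

def erd_percent(power, ref_slice, act_slice):
    """Percent power change of an activation window vs. a reference window."""
    r = power[ref_slice].mean()
    a = power[act_slice].mean()
    return (a - r) / r * 100.0

fs = 250                      # assumed sampling rate in Hz
t = np.arange(4 * fs) / fs    # 4 s of samples

# Synthetic single-channel trace: 10 Hz alpha whose amplitude halves after
# t = 2 s, mimicking alpha desynchronization to a stimulus at 2 s.
amplitude = np.where(t < 2.0, 1.0, 0.5)
signal = amplitude * np.sin(2 * np.pi * 10 * t)

power = signal ** 2           # instantaneous power (band-pass step omitted)

ref = slice(0, 2 * fs)        # reference window: 0-2 s (pre-event)
act = slice(2 * fs, 4 * fs)   # activation window: 2-4 s (post-event)

erd = erd_percent(power, ref, act)
print(f"ERD = {erd:.1f}%")    # prints "ERD = -75.0%": halved amplitude
                              # quarters the power, a 75% decrease
```

The same ratio computed per frequency band and per time window yields the ERD/ERS time-frequency maps described in the abstract.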

                Author and article information

                Contributors
                btpaul@ryerson.ca
                andrew.dimitrijevic@sunnybrook.ca
                Journal
                Sci Rep
                Scientific Reports
                Nature Publishing Group UK (London )
                2045-2322
                22 October 2022
                Volume: 12
                Affiliations
                [1] Department of Psychology, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
                [2] Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada (GRID grid.413104.3, ISNI 0000 0000 9743 1587)
                [3] Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada (GRID grid.17063.33, ISNI 0000 0001 2157 2938)
                [4] Faculty of Medicine, Otolaryngology—Head and Neck Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada (GRID grid.17063.33, ISNI 0000 0001 2157 2938)
                Article
                Article number: 22117
                DOI: 10.1038/s41598-022-22117-z
                PMCID: PMC9587996
                PMID: 36273017
                © The Author(s) 2022

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                Categories
                Article

                Subject terms
                cognitive neuroscience, human behaviour
