
      Neural Correlates of Modality-Sensitive Deviance Detection in the Audiovisual Oddball Paradigm

      research-article


          Abstract

The McGurk effect, an incongruent pairing of visual /ga/ and acoustic /ba/, creates the fusion illusion /da/ and is the cornerstone of research in audiovisual speech perception. Combination illusions occur when the input modalities are reversed (auditory /ga/, visual /ba/), yielding the percept /bga/. A robust literature shows that fusion illusions in an oddball paradigm evoke a mismatch negativity (MMN) in the auditory cortex in the absence of changes to the acoustic stimuli. We compared fusion and combination illusions in a passive oddball paradigm to further examine the influence of the visual and auditory aspects of incongruent speech stimuli on the audiovisual MMN. Participants viewed videos under two audiovisual illusion conditions (fusion, with the visual aspect of the stimulus changing, and combination, with the auditory aspect of the stimulus changing) as well as two unimodal, auditory-only and visual-only, conditions. Fusion and combination deviants exerted similar influence in generating congruency predictions, with significant differences between standards and deviants in the N100 time window. The presence of the MMN in early and late time windows differentiated fusion from combination deviants. When the visual signal changes, a new percept is created; when the visual signal is held constant and the auditory signal changes, the response is suppressed, evoking a later MMN. In alignment with models of predictive processing in audiovisual speech perception, we interpreted our results to indicate that visual information can both predict and suppress auditory speech perception.
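
For readers unfamiliar with the analysis, the sketch below illustrates, on simulated single-channel data, how an MMN-style difference wave is conventionally obtained (deviant ERP minus standard ERP) and how mean amplitudes can be compared across an early N100-latency window and a later MMN window. It is a minimal illustration; the sampling rate, channel, window bounds, and data are assumptions, not the authors' pipeline.

# Minimal sketch (simulated data, not the authors' analysis): deviant-minus-standard
# difference wave and mean amplitudes in an early (N100) and a later (MMN) window.
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)       # epoch from -100 to 500 ms

rng = np.random.default_rng(0)
# Simulated single-channel epochs (trials x samples) standing in for real EEG.
standard_epochs = rng.normal(0, 1e-6, (200, times.size))
deviant_epochs = rng.normal(0, 1e-6, (50, times.size))

# Average across trials to obtain the event-related potentials (ERPs).
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)

# The MMN is conventionally the deviant-minus-standard difference wave.
difference_wave = erp_deviant - erp_standard

def mean_amplitude(erp, t_start, t_stop):
    """Mean amplitude of an ERP within a latency window (seconds)."""
    mask = (times >= t_start) & (times <= t_stop)
    return erp[mask].mean()

# Example windows (assumed): N100 ~70-130 ms, later MMN ~150-250 ms.
for label, (t0, t1) in {"N100": (0.07, 0.13), "MMN": (0.15, 0.25)}.items():
    print(label,
          "standard:", mean_amplitude(erp_standard, t0, t1),
          "deviant:", mean_amplitude(erp_deviant, t0, t1),
          "difference:", mean_amplitude(difference_wave, t0, t1))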


Most cited references (52)


          The mismatch negativity (MMN) in basic research of central auditory processing: a review.

In the present article, basic research using the mismatch negativity (MMN), together with analogous results obtained with magnetoencephalography (MEG) and other brain-imaging technologies, is reviewed. This response is elicited by any discriminable change in auditory stimulation, but recent studies have extended the notion of the MMN even to higher-order cognitive processes, such as those involving grammar and semantic meaning. Moreover, MMN data also show the presence of automatic, intelligent processes, such as stimulus anticipation, at the level of the auditory cortex. In addition, the MMN enables one to establish the brain processes underlying the initiation of the attention switch to, and the conscious perception of, sound change in an unattended stimulus stream.

            The free-energy principle: a rough guide to the brain?

            This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.
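
As a toy illustration of the predictive-processing idea associated with this formulation, the sketch below implements a precision-weighted prediction-error update for a single Gaussian belief. All quantities are illustrative assumptions, and the example is far simpler than the free-energy formulation itself.

# Toy sketch of precision-weighted prediction-error updating (Gaussian belief).
# Not Friston's full free-energy scheme; values are illustrative only.
import numpy as np

mu_prior, pi_prior = 0.0, 1.0      # prior belief: mean and precision (1/variance)
pi_sensory = 4.0                   # assumed precision of the sensory channel

observations = np.array([0.8, 1.1, 0.9, 1.2])

mu, pi = mu_prior, pi_prior
for y in observations:
    prediction_error = y - mu                 # mismatch between input and prediction
    # Posterior precision accumulates sensory precision; the belief mean moves
    # toward the data in proportion to how reliable (precise) the input is.
    pi = pi + pi_sensory
    mu = mu + (pi_sensory / pi) * prediction_error
    print(f"y={y:.2f}  error={prediction_error:+.2f}  mu={mu:.3f}  precision={pi:.1f}")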

              Visual speech speeds up the neural processing of auditory speech.

              Synchronous presentation of stimuli to the auditory and visual systems can modify the formation of a percept in either modality. For example, perception of auditory speech is improved when the speaker's facial articulatory movements are visible. Neural convergence onto multisensory sites exhibiting supra-additivity has been proposed as the principal mechanism for integration. Recent findings, however, have suggested that putative sensory-specific cortices are responsive to inputs presented through a different modality. Consequently, when and where audiovisual representations emerge remain unsettled. In combined psychophysical and electroencephalography experiments we show that visual speech speeds up the cortical processing of auditory signals early (within 100 ms of signal onset). The auditory-visual interaction is reflected as an articulator-specific temporal facilitation (as well as a nonspecific amplitude reduction). The latency facilitation systematically depends on the degree to which the visual signal predicts possible auditory targets. The observed auditory-visual data support the view that there exist abstract internal representations that constrain the analysis of subsequent speech inputs. This is evidence for the existence of an "analysis-by-synthesis" mechanism in auditory-visual speech perception.
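
The latency facilitation described here can, in principle, be quantified by comparing N100 peak latencies across conditions. The sketch below does this on toy auditory-only and audiovisual ERPs; the waveforms, peak times, and window bounds are assumptions, not the reported measurements.

# Minimal sketch (assumed data): estimating latency facilitation as the shift in
# N100 peak latency between an auditory-only and an audiovisual ERP.
import numpy as np

fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)

def n100_peak_latency(erp, t_start=0.06, t_stop=0.16):
    """Latency (s) of the most negative deflection within the N100 window."""
    mask = (times >= t_start) & (times <= t_stop)
    return times[mask][np.argmin(erp[mask])]

# Toy ERPs: a negative deflection peaking at 110 ms (auditory only) and at 95 ms
# with reduced amplitude (audiovisual), mimicking the reported facilitation.
erp_auditory = -1e-6 * np.exp(-((times - 0.110) ** 2) / (2 * 0.015 ** 2))
erp_audiovisual = -0.8e-6 * np.exp(-((times - 0.095) ** 2) / (2 * 0.015 ** 2))

shift = n100_peak_latency(erp_auditory) - n100_peak_latency(erp_audiovisual)
print(f"Estimated latency facilitation: {shift * 1000:.1f} ms")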

                Author and article information

Journal
Brain Sciences (Brain Sci), MDPI
ISSN: 2076-3425
Published: 28 May 2020 (June 2020 issue)
Volume 10, Issue 6, Article 328
                Affiliations
[1] Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA; rpriefer@adelphi.edu (R.P.); amandanagler@mail.adelphi.edu (A.N.)
[2] Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA; pjs2194@tc.columbia.edu (P.J.S.); psa2111@tc.columbia.edu (T.A.); kf2119@tc.columbia.edu (K.F.)
                Author notes
[*] Correspondence: mrandazzo@adelphi.edu; Tel.: +1-516-877-4769
                Author information
                https://orcid.org/0000-0003-3055-2503
                https://orcid.org/0000-0003-3343-0288
                Article
Article ID: brainsci-10-00328
DOI: 10.3390/brainsci10060328
PMCID: PMC7348766
PMID: 32481538
                © 2020 by the authors.

Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

History
Received: 23 April 2020
Accepted: 25 May 2020
                Categories
                Article

Keywords: audiovisual, McGurk effect, fusion, mismatch negativity, N1
