
      The role of audiovisual congruence in aesthetic appreciation of contemporary music and visual art

      research-article


          Abstract

Does congruence between auditory and visual modalities affect aesthetic experience? While cross-modal correspondences between vision and hearing are well documented, previous studies report conflicting results on whether audiovisual correspondence affects subjective aesthetic experience. Here, in collaboration with the Kentler International Drawing Space (NYC, USA), we depart from previous research by using music specifically composed to pair with visual art in the professionally curated Music as Image and Metaphor exhibition. Our pre-registered online experiment consisted of four conditions: Audio, Visual, Audio-Visual-Intended (artist-intended pairing of art and music), and Audio-Visual-Random (random shuffling). Participants (N = 201) were presented with 16 pieces and could click to proceed to the next piece whenever they liked. We used time spent as an implicit index of aesthetic interest. Additionally, after each piece, participants were asked about their subjective experience (e.g., feeling moved). We found that participants spent significantly more time with Audio, followed by Audio-Visual, followed by Visual pieces; however, they felt most moved in the two Audio-Visual (bi-modal) conditions. Ratings of audiovisual correspondence were significantly higher in the Audio-Visual-Intended than in the Audio-Visual-Random condition; interestingly, though, there were no significant differences between the intended and random conditions on any other subjective rating scale, or in time spent. Collectively, these results call into question the relationship between cross-modal correspondence and aesthetic appreciation. Additionally, the results complicate the use of time spent as an implicit measure of aesthetic experience.
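As a concrete illustration of the design, here is a minimal Python sketch of how the four conditions could be assembled and how time spent could be logged. Everything in it is hypothetical (PIECES, make_trials, present, and the placeholder filenames); the study itself ran as a web experiment, so this is a stand-in under stated assumptions, not the authors' implementation.

import random
import time

# Hypothetical stimulus set: 16 artworks, each with its artist-intended track.
# Filenames are placeholders; the actual exhibition pieces are not listed here.
PIECES = [{"art": f"art_{i:02d}.jpg", "music": f"track_{i:02d}.mp3"}
          for i in range(1, 17)]

def make_trials(condition, pieces):
    """Build the 16-trial list for one of the four conditions in the abstract."""
    if condition == "Audio":
        return [{"music": p["music"]} for p in pieces]
    if condition == "Visual":
        return [{"art": p["art"]} for p in pieces]
    if condition == "Audio-Visual-Intended":
        return [dict(p) for p in pieces]  # keep the artist-intended pairings
    if condition == "Audio-Visual-Random":
        tracks = [p["music"] for p in pieces]
        random.shuffle(tracks)  # re-pair artworks with randomly drawn tracks
        return [{"art": p["art"], "music": t} for p, t in zip(pieces, tracks)]
    raise ValueError(f"unknown condition: {condition!r}")

def present(trial):
    """Show one piece until the participant moves on; return time spent (s)."""
    start = time.monotonic()
    input(f"Presenting {trial} - press Enter for the next piece... ")
    return time.monotonic() - start

if __name__ == "__main__":
    trials = make_trials("Audio-Visual-Random", PIECES)
    times = [present(t) for t in trials]  # implicit index of aesthetic interest

Note that a plain shuffle can, by chance, leave some artist-intended pairings in place; the abstract specifies only "random shuffling", so no derangement constraint is assumed here.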


Most cited references (34)


          Crossmodal correspondences: a tutorial review.

          In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain "know" which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

            Sound symbolism in infancy: evidence for sound-shape cross-modal correspondences in 4-month-olds.

Perceptual experiences in one modality are often dependent on activity from other sensory modalities. These cross-modal correspondences are also evident in language. Adults and toddlers spontaneously and consistently map particular words (e.g., 'kiki') to particular shapes (e.g., angular shapes). However, the origins of these systematic mappings are unknown. Because adults and toddlers have had significant experience with the language mappings that exist in their environment, it is unclear whether the pairings are the result of language exposure or the product of an initial proclivity. We examined whether 4-month-old infants make the same sound-shape mappings as adults and toddlers. Four-month-olds consistently distinguished between congruent and incongruent sound-shape mappings in a looking-time task (Experiment 1). Furthermore, mapping was based on the combination of consonants and vowels in the words, given that neither consonants (Experiment 2) nor vowels (Experiment 3) alone sufficed for mapping. Finally, we confirmed that adults also made systematic sound-shape mappings (Experiment 4); however, for adults, vowels or consonants alone sufficed. These results suggest that some sound-shape mappings precede language learning, and may in fact aid in language learning by establishing a basis for matching labels to referents and narrowing the hypothesis space for young infants.

              Sound symbolism scaffolds language development in preverbal infants.

A fundamental question in language development is how infants start to assign meaning to words. Here, using three electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed by a novel spoken word that either sound-symbolically matched ("moma") or mismatched ("kipi") the shape. Amplitude increase in the gamma band showed perceptual integration of visual and auditory stimuli in the match condition within 300 msec of word onset. Furthermore, phase synchronization between electrodes at around 400 msec revealed intensified large-scale, left-hemispheric communication between brain regions in the mismatch condition as compared to the match condition, indicating heightened processing effort when integration was more demanding. Finally, event-related brain potentials showed an increased adult-like N400 response, an index of semantic integration difficulty, in the mismatch as compared to the match condition. Together, these findings suggest that 11-month-old infants spontaneously map auditory language onto visual experience by recruiting a cross-modal perceptual processing system and a nascent semantic network within the first year of life.

                Author and article information

                Contributors
                finkl1@mcmaster.ca
Journal
Sci Rep (Scientific Reports)
Nature Publishing Group UK (London)
ISSN: 2045-2322
Published: 9 September 2024
Volume: 14
Article number: 20923
Affiliations
[1] Department of Music, Max Planck Institute for Empirical Aesthetics (https://ror.org/000rdbk18), Frankfurt am Main, HE, Germany
[2] Max Planck-NYU Center for Language, Music, & Emotion, Frankfurt am Main, HE, Germany
[3] Department of Psychology, Neuroscience & Behaviour, McMaster University (https://ror.org/02fa3aq29), Hamilton, ON, Canada
[4] Institute of Psychology, Goethe University (https://ror.org/04cvxnb49), Frankfurt am Main, HE, Germany
Article
Publisher ID: 71399
DOI: 10.1038/s41598-024-71399-y
PMCID: PMC11384752
PMID: 39251764
ScienceOpen record: 594587c7-a3ed-4b59-a5dd-0b0430fb8dbe
                © The Author(s) 2024

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 19 December 2023
Accepted: 27 August 2024
                Funding
                Funded by: Max Planck Institute for Empirical Aesthetics (2)
                Categories
                Article
                Custom metadata
                © Springer Nature Limited 2024

Keywords
multisensory integration, digital art museum, web-based data collection, time spent, feeling moved, enjoyment, human behaviour, sensory processing, perception
