
      Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening

Research Article


          Abstract

Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise for applications such as musical affective brain-computer interfaces (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys emotions to listeners through compositions of musical elements, and distinguishing emotions from EEG signals alone remains challenging. This study assessed the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of musical contents to classify emotional valence and arousal. To this end, it adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, including musical contents did not improve classification performance: the 74–76% accuracy obtained with the EEG modality alone was statistically comparable to that of the multimodal approach. However, when EEG dynamics were available from only a small set of electrodes (the likely case in real-life applications), the music modality played a complementary role, raising accuracy from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features and improved both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provides principles for constructing an EEG-based multimodal approach, but also reveals fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
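To make the fusion strategy concrete, the following Python sketch illustrates feature-level fusion under stated assumptions: hypothetical EEG band-power features from a small electrode set are concatenated with hypothetical musical timbre (MFCC) and loudness features, and a cross-validated SVM compares EEG-only against fused classification. The feature sets, dimensions, classifier, and synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of feature-level fusion for valence classification.
# All feature names, dimensions, and the classifier are illustrative
# assumptions; synthetic random data stands in for real recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical EEG features: spectral band power from a small electrode
# set (e.g., 4 channels x 5 frequency bands).
eeg_features = rng.normal(size=(n_trials, 4 * 5))

# Hypothetical musical features: timbre (13 MFCC means) plus loudness
# statistics (mean and variance).
music_features = rng.normal(size=(n_trials, 13 + 2))

# Binary valence labels (high vs. low); arousal would be handled the
# same way with its own labels.
valence = rng.integers(0, 2, size=n_trials)

# Feature-level fusion: concatenate the two modalities before training.
fused = np.hstack([eeg_features, music_features])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
for name, X in [("EEG only", eeg_features), ("EEG + music", fused)]:
    acc = cross_val_score(clf, X, valence, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```

On real data, the comparison above is what determines whether the music modality is redundant (whole-head EEG) or complementary (few electrodes), as the abstract reports.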


Most cited references


• On combining classifiers


• Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences


• Communication of emotions in vocal expression and music performance: different channels, same code?

Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions, but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.

                Author and article information

Contributors
Yuan-Pin Lin, Yi-Hsuan Yang, Tzyy-Ping Jung
Journal
Frontiers in Neuroscience (Front. Neurosci.)
Frontiers Media S.A.
ISSN: 1662-4548 (print); 1662-453X (electronic)
Published: 01 May 2014
Volume: 8
Article: 94
                Affiliations
                [1] 1Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
                [2] 2Center for Advanced Neurological Engineering, Institute of Engineering in Medicine, University of California San Diego, La Jolla, CA, USA
                [3] 3Music and Audio Computing Lab, Research Center for IT Innovation Academia Sinica, Taipei, Taiwan
                Author notes

                Edited by: Jan B. F. Van Erp, Toegepast Natuurwetenschappelijk Onderzoek, Netherlands

                Reviewed by: Kenji Kansaku, Research Institute of National Rehabilitation Center for Persons with Disabilities, Japan; Dezhong Yao, University of Electronic Science and Technology of China, China

*Correspondence: Yuan-Pin Lin, Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, 9500 Gilman Drive, Mail code 0559, La Jolla, CA 92093-0559, USA. e-mail: yplin@sccn.ucsd.edu

                This article was submitted to Neuroprosthetics, a section of the journal Frontiers in Neuroscience.

Article
DOI: 10.3389/fnins.2014.00094
PMCID: PMC4013455
PMID: 24822035
                Copyright © 2014 Lin, Yang and Jung.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 31 January 2014
Accepted: 12 April 2014
                Page count
                Figures: 8, Tables: 4, Equations: 1, References: 63, Pages: 14, Words: 9624
Categories
Neuroscience, Original Research Article

Keywords
EEG, emotion classification, affective brain-computer interface, music signal processing, music listening
