
      Decoding peak emotional responses to music from computational acoustic and lyrical features.

      Journal: Cognition (Elsevier BV)
      Keywords: Audio, Lyric, Machine learning, Music information retrieval, Natural language processing, Peak emotion


          Abstract

          Music can evoke strong emotions. Research suggests that pleasurable chills (shivering) and tears (weeping) are peak emotional responses to music. The present study examines whether computational acoustic and lyrical features can decode chills and tears. The experiment used 186 pieces of self-selected music to evoke emotional responses in 54 Japanese participants. Machine learning analysis with L2-norm-regularized regression measured decoding accuracy and identified well-defined features. In Study 1, time-series acoustic features significantly decoded emotional chills, tears, and the absence of either response, using information from within a few seconds before and after the onset of the three responses. The classification results showed three significant periods, indicating that complex anticipation-resolution mechanisms lead to chills and tears. Evoking chills was particularly associated with rhythm uncertainty, while evoking tears was related to harmony. Violation of rhythm expectancy may have triggered chills, while the harmonious overlapping of acoustic spectra may have played a role in evoking tears. In Study 2, acoustic and lyrical features from the entire piece decoded tear frequency but not chill frequency. Mixed emotions stemming from happiness were associated with major chords, while lyric content related to sad farewells contributed to the prediction of emotional tears, indicating that distinctive emotions in music may evoke a tear response. Considered in tandem with theoretical studies, these results suggest that the violation of rhythm may biologically boost both the pleasure- and fight-related physiological responses of chills, whereas tears may be evolutionarily embedded in the social-bonding effect of musical harmony and may play a unique role in emotion regulation.
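          The decoding approach named in the abstract (L2-norm-regularized regression over acoustic features) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data, the feature columns, and the regularization strength are all placeholder assumptions, and the binary labels stand in for a response such as chills vs. no chills.

```python
# Illustrative sketch (not the authors' code): decoding a binary emotional
# response from per-excerpt acoustic feature vectors with L2-regularized
# (ridge) regression. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "acoustic features" for 200 musical excerpts; columns might
# stand for quantities like rhythm uncertainty or spectral harmonicity.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 0.0])  # only some features matter
y = np.where(X @ true_w + 0.3 * rng.normal(size=n) > 0, 1.0, -1.0)

def ridge_fit(X, y, lam=1.0):
    """Closed-form L2-regularized least squares: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y, lam=1.0)
pred = np.sign(X @ w)          # classify by the sign of the ridge score
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

          The L2 penalty (`lam`) shrinks the weight vector toward zero, which stabilizes the solution when features are correlated, as time-series acoustic descriptors typically are; inspecting the fitted weights is one way a study like this can point to "well-defined features" driving the decoding.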


          Author and article information

          Journal: Cognition (Elsevier BV)
          ISSN: 0010-0277 (print); 1873-7838 (electronic)
          Published: May 2022, Volume 222
          Affiliations
          [1] Montreal Neurological Institute, McGill University, Canada; Graduate School of Human Sciences, Osaka University, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Japan. Electronic address: kazuma.mori@mcgill.ca.
          Article
          PII: S0010-0277(21)00433-9
          DOI: 10.1016/j.cognition.2021.105010
          PMID: 34998244
          Record ID: 6ef23aaf-eb0e-4480-a556-202e3a215e3f

