
      The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English

research-article
Steven R. Livingstone [1, 2, *], Frank A. Russo [1]
      PLoS ONE
      Public Library of Science


          Abstract

          The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
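          The factors described above (modality, speech vs. song, emotion, intensity, actor) are encoded in each recording's filename according to a seven-field numerical convention documented at the Zenodo record linked above. A minimal sketch of decoding that convention, with the field meanings taken from the dataset's documentation rather than from this abstract:

```python
# Sketch: decode a RAVDESS filename into its stimulus factors.
# Field order and codes follow the dataset's Zenodo documentation
# (modality-channel-emotion-intensity-statement-repetition-actor).

EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}
MODALITIES = {"01": "face-and-voice", "02": "face-only", "03": "voice-only"}

def parse_ravdess_name(filename: str) -> dict:
    """Split e.g. '03-01-06-01-02-01-12.wav' into labelled fields."""
    stem = filename.rsplit(".", 1)[0]
    parts = stem.split("-")
    if len(parts) != 7:
        raise ValueError(f"expected 7 fields, got {len(parts)}: {filename}")
    modality, channel, emotion, intensity, statement, repetition, actor = parts
    return {
        "modality": MODALITIES.get(modality, modality),
        "vocal_channel": "speech" if channel == "01" else "song",
        "emotion": EMOTIONS.get(emotion, emotion),
        "intensity": "normal" if intensity == "01" else "strong",
        "statement": statement,
        "repetition": repetition,
        "actor": int(actor),  # 1-24; odd = male, even = female actors
    }

info = parse_ravdess_name("03-01-06-01-02-01-12.wav")
print(info["emotion"], info["actor"])  # fearful 12
```

          Such a parser makes it straightforward to filter the 7356 recordings by condition (for example, voice-only fearful speech) when selecting stimuli.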

          Related collections

          Most cited references: 122


          Measuring emotion: the Self-Assessment Manikin and the Semantic Differential.

          The Self-Assessment Manikin (SAM) is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal, and dominance associated with a person's affective reaction to a wide variety of stimuli. In this experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgments, to the Semantic Differential scale devised by Mehrabian and Russell (An approach to environmental psychology, 1974) which requires 18 different ratings. Subjective reports were measured to a series of pictures that varied in both affective valence and intensity. Correlations across the two rating methods were high both for reports of experienced pleasure and felt arousal. Differences obtained in the dominance dimension of the two instruments suggest that SAM may better track the personal response to an affective stimulus. SAM is an inexpensive, easy method for quickly assessing reports of affective response in many contexts.

            Core affect and the psychological construction of emotion.

            At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states--called core affect--influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes.

              Communication of emotions in vocal expression and music performance: different channels, same code?

              Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions, but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.

                Author and article information

                Contributors
                Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing
                Roles: Conceptualization, Funding acquisition, Methodology, Resources, Supervision, Writing – original draft, Writing – review & editing
                Role: Editor
                Journal
                PLoS ONE
                Public Library of Science (San Francisco, CA, USA)
                1932-6203
                16 May 2018
                2018; 13(5): e0196391
                Affiliations
                [1 ] Department of Psychology, Ryerson University, Toronto, Canada
                [2 ] Department of Computer Science and Information Systems, University of Wisconsin-River Falls, River Falls, WI, United States of America
                University of Pécs Medical School, Hungary
                Author notes

                Competing Interests: The second author holds a research chair sponsored by a commercial source: SONOVA/Phonak. Research funding related to the chair partly supported the development of the database presented in this paper. The agreement with the commercial sponsor does not entail restrictions on sharing of data and/or materials, and does not alter our adherence to PLOS ONE policies on sharing data and materials. In addition, neither of the authors is or has been on the editorial board of PLOS ONE, acted as an expert witness in relevant legal proceedings, or sat or currently sits on a committee for an organization that may benefit from publication of the paper. Both authors declare that, to the best of their knowledge, there are no other competing interests.

                Author information
                http://orcid.org/0000-0002-6364-6410
                Article
                Manuscript ID: PONE-D-17-28472
                DOI: 10.1371/journal.pone.0196391
                PMCID: PMC5955500
                PMID: 29768426
                © 2018 Livingstone, Russo

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 31 July 2017
                Accepted: 12 April 2018
                Page count
                Figures: 4, Tables: 7, Pages: 35
                Funding
                Funded by: funder-id http://dx.doi.org/10.13039/501100002790, Canadian Network for Research and Innovation in Machining Technology, Natural Sciences and Engineering Research Council of Canada;
                Award ID: 2012-341583
                Award Recipient :
                Funded by: funder-id http://dx.doi.org/10.13039/501100004228, Phonak;
                Award Recipient :
                This work was supported by FAR—Discovery Grant (2012-341583) from the Natural Sciences and Engineering Research Council of Canada, http://www.nserc-crsng.gc.ca/; FAR—Hear the world research chair in music and emotional speech from Phonak, https://www.phonak.com.
                Categories
                Research Article
                Biology and Life Sciences > Psychology > Emotions
                Social Sciences > Psychology > Emotions
                Social Sciences > Linguistics > Speech
                Biology and Life Sciences > Anatomy > Head > Face
                Medicine and Health Sciences > Anatomy > Head > Face
                Biology and Life Sciences > Psychology > Emotions > Fear
                Social Sciences > Psychology > Emotions > Fear
                Research and Analysis Methods > Research Assessment > Research Validity
                Biology and Life Sciences > Neuroscience > Cognitive Science > Cognitive Psychology > Music Cognition
                Biology and Life Sciences > Psychology > Cognitive Psychology > Music Cognition
                Social Sciences > Psychology > Cognitive Psychology > Music Cognition
                Biology and Life Sciences > Anatomy > Musculoskeletal System
                Medicine and Health Sciences > Anatomy > Musculoskeletal System
                Biology and Life Sciences > Behavior > Animal Behavior > Animal Signaling and Communication > Vocalization
                Biology and Life Sciences > Zoology > Animal Behavior > Animal Signaling and Communication > Vocalization
                Custom metadata
                All relevant data are within the paper and its Supporting Information files.

