
      Sensing Emotion in Voices: Negativity Bias and Gender Differences in a Validation Study of the Oxford Vocal (‘OxVoc’) Sounds Database

      research-article


          Abstract

          Emotional expressions are an essential element of human interactions. Recent work has increasingly recognized that emotional vocalizations can color and shape interactions between individuals. Here we present data on the psychometric properties of a recently developed database of authentic nonlinguistic emotional vocalizations from human adults and infants (the Oxford Vocal ‘OxVoc’ Sounds Database; Parsons, Young, Craske, Stein, & Kringelbach, 2014). In a large sample (n = 562), we demonstrate that adults can reliably categorize these sounds (as ‘positive,’ ‘negative,’ or ‘sounds with no emotion’) and rate their valence consistently over time. In an extended sample (n = 945, including the initial n = 562), we also investigated a number of individual difference factors in relation to valence ratings of these vocalizations. Results demonstrated small but significant associations of (a) symptoms of depression and anxiety with more negative ratings of adult neutral vocalizations (R² = .011 and R² = .008, respectively) and (b) listener gender with perceived valence, such that female listeners rated adult neutral vocalizations more positively and infant cry vocalizations more negatively than male listeners (R² = .021 and R² = .010, respectively). Of note, we did not find evidence of negativity bias among other affective vocalizations or of gender differences in the perceived valence of adult laughter, adult cries, infant laughter, or infant neutral vocalizations. Together, these findings largely converge with factors previously shown to impact processing of emotional facial expressions, suggesting a modality-independent impact of depression, anxiety, and listener gender, particularly for vocalizations with more ambiguous valence.


          Most cited references (60)


          Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing


            Detection of postnatal depression. Development of the 10-item Edinburgh Postnatal Depression Scale.

            The development of a 10-item self-report scale (EPDS) to screen for Postnatal Depression in the community is described. After extensive pilot interviews a validation study was carried out on 84 mothers using the Research Diagnostic Criteria for depressive illness obtained from Goldberg's Standardised Psychiatric Interview. The EPDS was found to have satisfactory sensitivity and specificity, and was also sensitive to change in the severity of depression over time. The scale can be completed in about 5 minutes and has a simple method of scoring. The use of the EPDS in the secondary prevention of Postnatal Depression is discussed.

              Communication of emotions in vocal expression and music performance: different channels, same code?

              Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions, but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.

                Author and article information

                Contributors
                Role: Editor
                Journal
                Psychol Assess (Psychological Assessment)
                American Psychological Association
                ISSN: 1040-3590; 1939-134X
                Published online: 22 September 2016
                Issue: August 2017
                Volume 29, Issue 8: 967-977
                Affiliations
                [1 ]Department of Psychology, University of California, Los Angeles and Department of Psychiatry, University of Oxford
                [2 ]Department of Psychiatry, University of Oxford and Department of Clinical Medicine, Aarhus University
                [3 ]Department of Psychology, University of California, Los Angeles
                [4 ]Department of Psychiatry, University of Oxford
                [5 ]Department of Psychiatry, University of Oxford and Department of Clinical Medicine, Aarhus University
                [6 ]Department of Psychology, University of California, Los Angeles
                Author notes
                This research was financially supported in part by a Medical Research Council (MRC) DPhil Studentship (to Katherine S. Young); an NIH postdoctoral fellowship (T32MH15750; to Benjamin A. Tabak); the Barclay Foundation (to Alan Stein); an ERC Consolidator Grant, CAREGIVING (n. 615539; to Morten L. Kringelbach); the TrygFonden Charitable Foundation (to Morten L. Kringelbach); and the Center for Music in the Brain, funded by the Danish National Research Foundation (DNRF117; to Morten L. Kringelbach).
                [*]Correspondence concerning this article should be addressed to Katherine S. Young, Department of Psychology, University of California, 1285 Franz Hall, Los Angeles, CA 90095-1563. E-mail: kyoung@psych.ucla.edu
                Article
                pas_29_8_967; 2016-45383-001
                DOI: 10.1037/pas0000382
                PMC: 5362357
                PMID: 27656902
                © 2016 The Author(s)

                This article has been published under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Copyright for this article is retained by the author(s). Author(s) grant(s) the American Psychological Association the exclusive right to publish the article and identify itself as the original publisher.

                History
                12 April 2016
                28 June 2016
                20 July 2016
                Categories
                Articles

                Clinical Psychology & Psychiatry
                adult, emotional expression, infant, stimulus database, vocalization
