
Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users


Abstract

Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part, helping listeners to segregate the parts better and thus providing greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic mode (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) took part in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference for each piece in different listening modes. Results indicated that dichotic presentation produced small but significant improvements in subjective ratings of perceived clarity. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi than for those with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings or their judgments of intended emotion.
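The three listening modes reduce to simple channel routing. As a rough illustration only, not the authors' actual playback chain, the Python sketch below mixes the two clef parts, assumed here to be equal-length mono signals, into a stereo buffer; the function name and the choice of ear in the monophonic case are hypothetical.

```python
import numpy as np

def mix_listening_mode(bass: np.ndarray, treble: np.ndarray,
                       mode: str) -> np.ndarray:
    """Route two mono parts into stereo under one of the three modes.

    Returns an array of shape (n_samples, 2): columns are (left, right).
    """
    if mode == "dichotic":
        # One part per ear: lateralization cues are maximal.
        left, right = bass, treble
    elif mode == "diotic":
        # Both parts in both ears: left and right are identical.
        left = right = bass + treble
    elif mode == "monophonic":
        # Both parts in a single ear (left here); the other ear is silent.
        left, right = bass + treble, np.zeros_like(bass)
    else:
        raise ValueError(f"unknown mode: {mode!r}")
    return np.stack([left, right], axis=1)
```

In the dichotic case the spatial separation is artificial but unambiguous, which is the cue the study exploits to help listeners segregate the two parts.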

Most cited references (24)


Music and emotion: perceptual determinants, immediacy, and isolation after brain damage.

I. Peretz (1998)
This study grew out of the observation of a remarkable sparing of emotional responses to music in the context of severe deficits in music processing after brain damage in a non-musician. Six experiments were designed to explore the perceptual basis of emotional judgments in music. In each experiment, the same set of 32 excerpts taken from the classical repertoire and intended to convey a happy or sad tone were presented under various transformations and with different task demands. In Expts. 1 to 3, subjects were required to judge on a 10-point scale whether the excerpts were happy or sad. Altogether the results show that emotional judgments are (a) highly consistent across subjects and resistant to brain damage; (b) determined by musical structure (mode and tempo); and (c) immediate. Experiments 4 to 6 were designed to assess whether emotional and non-emotional judgments reflect the operations of a single perceptual analysis system. To this aim, we searched for evidence of dissociation in our brain-damaged patient, I.R., by using tasks that do not require emotional interpretation. These non-emotional tasks were a 'same-different' classification task (Expt. 4), error detection tasks (Expt. 5A,B) and a change monitoring task (Expt. 6). I.R. was impaired in these non-emotional tasks except when the change affected the mode and the tempo of the excerpt, in which case I.R. performed close to normal. The results are discussed in relation to the possibility that emotional and non-emotional judgments are the products of distinct pathways.

How the brain separates sounds.

In everyday life we often listen to one sound, such as someone's voice, in a background of competing sounds. To do this, we must assign simultaneously occurring frequency components to the correct source, and organize sounds appropriately over time. The physical cues that we exploit to do so are well-established; more recent research has focussed on the underlying neural bases, where most progress has been made in the study of a form of sequential organization known as "auditory streaming". Listeners' sensitivity to streaming cues can be captured in the responses of neurons in the primary auditory cortex, and in EEG wave components with a short latency (<200 ms). However, streaming can be strongly affected by attention, suggesting that this early processing either receives input from non-auditory areas, or feeds into processes that do.
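The abstract does not describe specific stimuli, but the sequential organization it names, auditory streaming, is classically probed with ABA- tone triplets (van Noorden's paradigm), in which the A-B frequency separation and presentation rate determine whether one or two streams are heard. A hypothetical generator for such a sequence, with illustrative parameter values:

```python
import numpy as np

def aba_sequence(f_a=500.0, df_semitones=6.0, tone_ms=50, gap_ms=50,
                 n_triplets=10, fs=44100):
    """Generate an ABA- triplet sequence, the classic streaming stimulus.

    Larger df_semitones and faster rates favor hearing two streams
    (one high, one low) rather than a single galloping rhythm.
    """
    f_b = f_a * 2.0 ** (df_semitones / 12.0)  # B tone df semitones above A
    n = int(fs * tone_ms / 1000)
    t = np.arange(n) / fs
    ramp = np.minimum(1.0, np.arange(n) / (0.005 * fs))  # 5 ms linear ramp
    env = ramp * ramp[::-1]                   # symmetric onset/offset ramps

    def tone(f):
        return env * np.sin(2 * np.pi * f * t)

    silence = np.zeros(int(fs * gap_ms / 1000))
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), silence])
    return np.tile(triplet, n_triplets)
```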

Music perception with temporal cues in acoustic and electric hearing.

The first specific aim of the present study is to compare the ability of normal-hearing and cochlear implant listeners to use temporal cues in three music perception tasks: tempo discrimination, rhythmic pattern identification, and melody identification. The second aim is to identify the relative contribution of temporal and spectral cues to melody recognition in acoustic and electric hearing.

Both normal-hearing and cochlear implant listeners participated in the experiments. Tempo discrimination was measured in a two-interval forced-choice procedure in which subjects were asked to choose the faster tempo at four standard tempo conditions (60, 80, 100, and 120 beats per minute). For rhythmic pattern identification, seven different rhythmic patterns were created and subjects were asked to read and choose the musical notation displayed on the screen that corresponded to the rhythmic pattern presented. Melody identification was evaluated with two sets of 12 familiar melodies. One set contained both rhythm and melody information (rhythm condition), whereas the other set contained only melody information (no-rhythm condition). Melody stimuli were also processed to extract the slowly varying temporal envelope from 1, 2, 4, 8, 16, 32, and 64 frequency bands, to create cochlear implant simulations. Subjects listened to a melody and had to respond by choosing one of the 12 names corresponding to the melodies displayed on a computer screen.

In tempo discrimination, the cochlear implant listeners performed similarly to the normal-hearing listeners, with rate discrimination difference limens obtained at 4-6 beats per minute. In rhythmic pattern identification, the cochlear implant listeners performed 5-25 percentage points poorer than the normal-hearing listeners. The normal-hearing listeners achieved perfect scores in melody identification with and without the rhythmic cues. However, the cochlear implant listeners performed significantly poorer than the normal-hearing listeners in both rhythm and no-rhythm conditions. The simulation results from normal-hearing listeners showed a relatively high level of performance for all numbers of frequency bands in the rhythm condition but required as many as 32 bands in the no-rhythm condition.

Cochlear implant listeners performed normally in tempo discrimination, but significantly poorer than normal-hearing listeners in rhythmic pattern identification and melody recognition. While both temporal (rhythmic) and spectral (pitch) cues contribute to melody recognition, cochlear implant listeners mostly relied on the rhythmic cues for melody recognition. Without the rhythmic cues, high spectral resolution with as many as 32 bands was needed for melody recognition for normal-hearing listeners. This result indicates that the present cochlear implants provide sufficient spectral cues to support speech recognition in quiet, but they are not adequate to support music perception. Increasing the number of functional channels and improved encoding of the fine structure information are necessary to improve music perception for cochlear implant listeners.
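The "cochlear implant simulations" mentioned above are channel vocoders: the signal is split into N frequency bands, the slowly varying envelope of each band is extracted, and the envelopes then modulate band-limited carriers that replace the original fine structure. A minimal noise-excited sketch of that idea follows; the Butterworth filters, logarithmic band spacing, and cutoff values are illustrative assumptions, not the paper's exact processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(x, fs, n_bands, f_lo=100.0, f_hi=8000.0, env_cut=50.0):
    """Noise-excited channel vocoder, a common way to simulate CI hearing.

    x       : mono input signal (float array)
    fs      : sample rate in Hz
    n_bands : number of analysis/synthesis bands (e.g. 1, 2, 4, ... 64)
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    env_sos = butter(4, env_cut, btype="lowpass", fs=fs, output="sos")
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        # "Slowly varying temporal envelope": rectify, then low-pass filter.
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        # Replace the band's fine structure with envelope-modulated noise.
        out += env * sosfiltfilt(band_sos, noise)
    return out
```

With only a few bands, just gross envelope (rhythm) cues survive, which is consistent with the finding that melody identification without rhythmic cues needed as many as 32 bands.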

Author and article information

Journal: Trends in Hearing (Trends Hear)
Publisher: SAGE Publications (Sage CA: Los Angeles, CA)
ISSN: 2331-2165
Published: 26 August 2015
Volume: 19
eLocator: 2331216515598971

Affiliations
[1] Centre de Recherche Cerveau et Cognition, Université de Toulouse, UPS, France
[2] CerCo, CNRS, France
[3] Cochlear France S.A.S., France
[4] The Bionics Institute, Melbourne, Australia
[5] Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark

Author notes
[*] Jeremy Marozeau, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads Building 352, Lyngby 2800, Denmark. Email: jemaroz@elektro.dtu.dk

Article identifiers
DOI: 10.1177/2331216515598971
PMC: 4593516
PMID: 26316123
© The Author(s) 2015

This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 3.0 License (http://www.creativecommons.org/licenses/by-nc/3.0/), which permits non-commercial use, reproduction, and distribution of the work without further permission, provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Categories: Original Articles
Issue: January–December 2015

Keywords: cochlear implant, music perception, emotions, auditory scene analysis
