
      Categorization of everyday sounds by cochlear implanted children

      research-article


          Abstract

          Auditory categorization is an important process in the perception and understanding of everyday sounds. The use of cochlear implants (CIs) may affect auditory categorization and result in poorer categorization abilities. The current study was designed to compare how children with normal hearing (NH) and children with CIs categorize a set of everyday sounds. We tested 24 NH children and 24 children with CIs on a free-sorting task of 18 everyday sounds corresponding to four a priori categories: nonlinguistic human vocalizations, environmental sounds, musical sounds, and animal vocalizations. Multiple correspondence analysis revealed considerable variation within both groups of child listeners, although the human vocalizations and musical sounds were similarly categorized. In contrast to NH children, children with CIs categorized some sounds according to their acoustic content rather than their associated semantic information. These results show that despite identification deficits, children with CIs are able to categorize environmental and vocal sounds in a similar way to NH children, and are able to use categorization as an adaptive process when dealing with everyday sounds.


          Most cited references (43)


          Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants.

          Speech recognition was measured as a function of spectral resolution (number of spectral channels) and speech-to-noise ratio in normal-hearing (NH) and cochlear-implant (CI) listeners. Vowel, consonant, word, and sentence recognition were measured in five normal-hearing listeners, ten listeners with the Nucleus-22 cochlear implant, and nine listeners with the Advanced Bionics Clarion cochlear implant. Recognition was measured as a function of the number of spectral channels (noise bands or electrodes) at signal-to-noise ratios of +15, +10, +5, and 0 dB, and in quiet. Performance with three different speech processing strategies (SPEAK, CIS, and SAS) was similar across all conditions, and improved as the number of electrodes increased (up to seven or eight) for all conditions. For all noise levels, vowel and consonant recognition with the SPEAK speech processor did not improve with more than seven electrodes, while for normal-hearing listeners, performance continued to increase up to at least 20 channels. Speech recognition on more difficult speech materials (word and sentence recognition) showed a marginally significant increase in Nucleus-22 listeners from seven to ten electrodes. The average implant score on all processing strategies was poorer than scores of NH listeners with similar processing. However, the best CI scores were similar to the normal-hearing scores for that condition (up to seven channels). CI listeners with the highest performance level increased in performance as the number of electrodes increased up to seven, while CI listeners with low levels of speech recognition did not increase in performance as the number of electrodes was increased beyond four. These results quantify the effect of number of spectral channels on speech recognition in noise and demonstrate that most CI subjects are not able to fully utilize the spectral information provided by the number of electrodes used in their implant.

            Thinking the voice: neural correlates of voice perception.

            The human voice is the carrier of speech, but also an "auditory face" that conveys important affective and identity information. Little is known about the neural bases of our abilities to perceive such paralinguistic information in voice. Results from recent neuroimaging studies suggest that the different types of vocal information could be processed in partially dissociated functional pathways, and support a neurocognitive model of voice perception largely similar to that proposed for face perception.

              What in the World Do We Hear?: An Ecological Approach to Auditory Event Perception


                Author and article information

                Contributors
                Pascal.barone@cnrs.fr
                Journal
                Sci Rep (Scientific Reports)
                Nature Publishing Group UK (London)
                ISSN: 2045-2322
                Published: 5 March 2019
                Volume: 9
                Article number: 3532
                Affiliations
                [1] UMR 5549, Faculté de Médecine Purpan, Centre National de la Recherche Scientifique (GRID grid.457025.1), Toulouse, France
                [2] Centre de Recherche Cerveau et Cognition, Université de Toulouse, Université Paul Sabatier (ISNI 0000 0001 2353 1689; GRID grid.11417.32), Toulouse, France
                [3] Unité de Recherche Interdisciplinaire Octogone, EA4156, Laboratoire Cognition, Communication et Développement, Université de Toulouse Jean‐Jaurès (ISNI 0000 0004 0486 042X; GRID grid.410542.6), Toulouse, France
                [4] Service d’Oto‐Rhino‐Laryngologie et Oto‐Neurologie, Hôpital Purpan, Faculté de Médecine de Purpan (ISNI 0000 0004 0639 4960; GRID grid.414282.9), Toulouse, France
                Author information
                http://orcid.org/0000-0003-0389-1593
                http://orcid.org/0000-0003-1243-4407
                Article
                DOI: 10.1038/s41598-019-39991-9
                PMCID: PMC6401047
                © The Author(s) 2019

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 13 July 2018
                Accepted: 21 January 2019
                Funding
                Funded by: Direction de la Recherche Clinique et de l’Innovation of the University Hospital of Purpan (DRCI, AOL 2011)
                Funded by: Doctoral subvention (Advanced Bionics SARL, France)
                Funded by: Agir pour les Maladies Chroniques (FundRef: https://doi.org/10.13039/501100007772)
                Funded by: “Agir pour l’Audition” (#APA-RD2015-6B)
                Categories
                Article
