
      Statistics of natural reverberation enable perceptual separation of sound and space

      research-article


          Significance

          Sounds produced in the world reflect off surrounding surfaces on their way to our ears. Known as reverberation, these reflections distort sound but provide information about the world around us. We asked whether reverberation exhibits statistical regularities that listeners use to separate its effects from those of a sound’s source. We conducted a large-scale statistical analysis of real-world acoustics, revealing strong regularities of reverberation in natural scenes. We found that human listeners can estimate the contributions of the source and the environment from reverberant sound, but that they depend critically on whether environmental acoustics conform to the observed statistical regularities. The results suggest a separation process constrained by knowledge of environmental acoustics that is internalized over development or evolution.

          Abstract

          In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
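The IR structure described in the abstract — band-limited energy decaying exponentially, with the longest decay times at mid frequencies and faster decay at low and high frequencies — can be sketched numerically. The band edges and decay times below are illustrative values chosen to match the qualitative pattern reported, not the paper's measurements; a reverberant signal is then simulated, as in the authors' manipulation, by convolving a dry source with the synthetic IR.

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def synthetic_ir(fs=16000, dur=1.0,
                 bands=((0, 500, 0.4), (500, 2000, 0.9), (2000, 7900, 0.3))):
    """Synthesize an IR as band-limited Gaussian noise, each band decaying
    exponentially with its own decay time (60 dB decay over t60 seconds).
    Band edges (Hz) and t60 values (s) are illustrative, not measured."""
    t = np.arange(int(fs * dur)) / fs
    ir = np.zeros_like(t)
    for lo, hi, t60 in bands:
        noise = np.random.randn(len(t))
        # Band-limit the noise carrier for this frequency band.
        if lo == 0:
            sos = butter(4, hi, btype="low", fs=fs, output="sos")
        else:
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        carrier = sosfilt(sos, noise)
        # Exponential envelope: 10**(-3*t/t60) is -60 dB at t = t60.
        ir += carrier * 10 ** (-3 * t / t60)
    return ir / np.max(np.abs(ir))

def reverberate(dry, ir):
    """Apply reverberation by convolving the dry source with the IR."""
    return fftconvolve(dry, ir)

fs = 16000
dry = np.random.randn(fs // 2)   # stand-in for a dry source recording
wet = reverberate(dry, synthetic_ir(fs))
print(wet.shape)                 # length = len(dry) + len(ir) - 1
```

Varying the per-band t60 values away from the mid-frequencies-longest pattern is the kind of manipulation the study used to create IRs that deviate from real-world regularities.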


          Author and article information

Journal
Proceedings of the National Academy of Sciences of the United States of America (PNAS)
Publisher: National Academy of Sciences
ISSN: 0027-8424 (print); 1091-6490 (electronic)
29 November 2016
10 November 2016
Volume 113, Issue 48, Pages E7856–E7865
Affiliations
aDepartment of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
Author notes
1To whom correspondence should be addressed. Email: jtraer@mit.edu.

          Edited by David J. Heeger, New York University, New York, NY, and approved September 27, 2016 (received for review July 28, 2016)

          Author contributions: J.T. and J.H.M. designed research; J.T. performed research; J.T. analyzed data; and J.T. and J.H.M. wrote the paper.

Article
DOI: 10.1073/pnas.1612524113
PMC: PMC5137703
PMID: 27834730
          Page count
          Pages: 10
Funding
Funded by: James S. McDonnell (Award ID: Scholar Award)
Funded by: HHS | NIH | National Institute on Deafness and Other Communication Disorders (NIDCD) (Award IDs: R01DC014739, F32DC013703-03)
Categories
PNAS Plus
Biological Sciences
Social Sciences
Psychological and Cognitive Sciences

Keywords: psychoacoustics, psychophysics, environmental acoustics, auditory scene analysis, natural scene statistics
