
      Deep scattering transform applied to note onset detection and instrument recognition

      Preprint


          Abstract

Automatic Music Transcription (AMT) is one of the oldest and most well-studied problems in music information retrieval. Within this challenging field, onset detection and instrument recognition play important roles in transcription systems: they determine the exact onset times of notes and identify the corresponding instrument sources, respectively. This study explores the usefulness of multiscale scattering operators for these two tasks on plucked-string and piano music. After reviewing the theoretical background and illustrating the key features of this sound representation, we evaluate its performance against other classical sound representations. On both MIDI-driven datasets built from real instrument samples and real musical pieces, scattering is shown to outperform the other representations for these AMT subtasks, owing to its richer representation and invariance properties.
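The representation the abstract describes, a cascade of wavelet modulus operators followed by time averaging, can be computed with off-the-shelf tools. Below is a minimal sketch using the Kymatio library; Kymatio, the synthetic 440 Hz test tone, and the parameter choices J=8 and Q=8 are illustrative assumptions, not the preprint's setup:

```python
import numpy as np
from kymatio.numpy import Scattering1D

# Synthetic stand-in for a plucked-string note: an exponentially
# decaying 440 Hz sinusoid sampled at 22.05 kHz.
T = 2 ** 13
t = np.arange(T) / 22050.0
x = (np.exp(-8.0 * t) * np.sin(2.0 * np.pi * 440.0 * t)).astype(np.float64)

# Two-layer scattering transform. J sets the averaging scale to 2**J
# samples (and hence the amount of time-shift invariance); Q is the
# number of first-order wavelets per octave. Both values are illustrative.
scattering = Scattering1D(J=8, shape=T, Q=8)
Sx = scattering(x)          # shape: (n_coefficients, T // 2**J)

# meta()['order'] labels each coefficient as order 0, 1, or 2.
order = scattering.meta()['order']
for m in (0, 1, 2):
    print(f"order-{m} coefficients: {int(np.sum(order == m))}")
print("scattering matrix shape:", Sx.shape)
```

Roughly speaking, the zeroth- and first-order coefficients form a mel-like spectrogram averaged over windows of 2**J samples, while the second-order coefficients recover the amplitude-modulation structure (attack transients, tremolo) that this averaging discards. That combination of invariance with retained fine structure is the property the abstract credits for the gains in onset detection and instrument recognition.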


Most cited references (13)


S. Furui (1986). Speaker-independent isolated word recognition using dynamic features of speech spectrum.

Analysis of sound patterns through wavelet transforms.


E. Scheirer (1998). Tempo and beat analysis of acoustic musical signals.
              A method is presented for using a small number of bandpass filters and banks of parallel comb filters to analyze the tempo of, and extract the beat from, musical signals of arbitrary polyphonic complexity and containing arbitrary timbres. This analysis is performed causally, and can be used predictively to guess when beats will occur in the future. Results in a short validation experiment demonstrate that the performance of the algorithm is similar to the performance of human listeners in a variety of musical situations. Aspects of the algorithm are discussed in relation to previous high-level cognitive models of beat tracking.
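The comb-filter idea in this summary can be made concrete with a toy version: each candidate beat period corresponds to a feedback comb filter, and the filter whose delay matches the periodicity of an onset-strength envelope accumulates the most output energy. The sketch below is a loose reconstruction under assumed parameters (the impulse-train envelope, the 100 frames-per-second rate, and the feedback gain alpha are all illustrative), not Scheirer's implementation:

```python
import numpy as np

def comb_filter_energy(envelope, delay, alpha=0.9):
    """Feedback comb filter y[n] = alpha*y[n-delay] + (1-alpha)*x[n].
    Output energy peaks when `delay` matches the envelope's period."""
    y = np.zeros_like(envelope)
    for n in range(len(envelope)):
        feedback = y[n - delay] if n >= delay else 0.0
        y[n] = alpha * feedback + (1.0 - alpha) * envelope[n]
    return float(np.sum(y ** 2))

# Toy onset-strength envelope: an impulse every 50 frames, i.e. 120 BPM
# at 100 frames per second. In the real system this envelope would come
# from a rectified bandpass-filterbank front end.
fps = 100
env = np.zeros(1000)
env[::50] = 1.0

# Scan candidate beat periods; the resonator tuned to the true period wins.
delays = np.arange(20, 101)
energies = [comb_filter_energy(env, d) for d in delays]
best = int(delays[np.argmax(energies)])
print(f"estimated beat period: {best} frames -> {60 * fps / best:.0f} BPM")
```

In the actual system, such resonator banks run inside several bandpass channels and their outputs are combined, which is what lets the method track the beat across arbitrary timbres and, as the summary notes, predict when future beats will occur from the resonators' internal state.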

                Author and article information

Date: 2017-03-28
Article: arXiv preprint 1703.09775
Record ID: 723304e3-8e36-43a5-a0e9-03e3b8ecc6ad
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

Subjects: stat.ML, cs.SD

                Machine learning, Graphics & Multimedia design
