
      A Bag of Wavelet Features for Snore Sound Classification.

          Abstract

          Snore sound (SnS) classification can support a targeted surgical approach to sleep-related breathing disorders. Using machine listening methods, we aim to find the location of obstruction and vibration within a subject's upper airway. Wavelet features have been demonstrated to be efficient in the recognition of SnSs in previous studies. In this work, we use a bag-of-audio-words approach to enhance the low-level wavelet features extracted from SnS data. A Naïve Bayes model was selected as the classifier based on its superiority in initial experiments. We use SnS data collected from 219 independent subjects under drug-induced sleep endoscopy performed at three medical centres. The unweighted average recall achieved by our proposed method is 69.4%, which significantly ([Formula: see text], one-tailed z-test) outperforms the official baseline (58.5%) and beats the winner (64.2%) of the INTERSPEECH ComParE Challenge 2017 Snoring sub-challenge. In addition, conventionally used features such as formants, mel-frequency cepstral coefficients, subband energy ratios, and spectral frequency features, as well as the feature set extracted by the openSMILE toolkit, are compared with our proposed feature set. The experimental results demonstrate the effectiveness of the proposed method in SnS classification.
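          The bag-of-audio-words step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stand-in random matrices replace the paper's frame-wise wavelet features, and the codebook size, classifier variant, and class count (four, assuming the VOTE obstruction scheme of the Snoring sub-challenge) are illustrative assumptions.

          ```python
          import numpy as np
          from sklearn.cluster import KMeans
          from sklearn.naive_bayes import GaussianNB

          rng = np.random.default_rng(0)

          # Stand-in low-level descriptors: one (n_frames x n_dims) matrix of
          # frame-wise "wavelet" features per recording (random data here).
          recordings = [rng.normal(size=(int(rng.integers(50, 100)), 8)) for _ in range(20)]
          labels = rng.integers(0, 4, size=20)  # hypothetical 4-class VOTE labels

          # 1) Learn an audio-word codebook by clustering all frames from all recordings.
          N_WORDS = 16  # illustrative codebook size
          codebook = KMeans(n_clusters=N_WORDS, n_init=10, random_state=0).fit(np.vstack(recordings))

          # 2) Represent each recording as a normalised histogram of word assignments.
          def boaw_histogram(frames):
              words = codebook.predict(frames)
              hist = np.bincount(words, minlength=N_WORDS).astype(float)
              return hist / hist.sum()

          X = np.array([boaw_histogram(r) for r in recordings])

          # 3) Train a Naive Bayes classifier on the fixed-length BoAW histograms
          #    (Gaussian variant as a stand-in; the paper's exact variant is not stated here).
          clf = GaussianNB().fit(X, labels)
          ```

          The point of the histogram step is that recordings of different lengths all map to the same fixed-length representation, which a standard classifier can consume directly.
          
          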

          Author and article information

          Journal
          Ann Biomed Eng
          Annals of Biomedical Engineering
          Springer Science and Business Media LLC
          eISSN: 1573-9686
          ISSN: 0090-6964
          Apr 2019
          Volume: 47
          Issue: 4
          Affiliations
          [1 ] Machine Intelligence & Signal Processing Group, MMK, Technische Universität München, Arcisstr. 21, 80333, Munich, Germany. andykun.qian@tum.de.
          [2 ] ZD.B Chair of Embedded Intelligence for Health Care & Wellbeing, Universität Augsburg, Eichleitnerstr. 30, 86159, Augsburg, Germany. andykun.qian@tum.de.
          [3 ] ZD.B Chair of Embedded Intelligence for Health Care & Wellbeing, Universität Augsburg, Eichleitnerstr. 30, 86159, Augsburg, Germany.
          [4 ] Munich School of Bioengineering, Technische Universität München, Boltzmannstr. 11, 85748, Garching, Germany.
          [5 ] audEERING GmbH, 82206, Gilching, Germany.
          [6 ] GLAM - Group on Language, Audio & Music, Department of Computing, Imperial College London, 180 Queens' Gate, Huxley Bldg., London, SW7 2AZ, UK.
          [7 ] Department of Otorhinolaryngology/Head and Neck Surgery, Klinikum rechts der Isar, Technische Universität München, Ismaningerstr. 22, 81675, Munich, Germany.
          [8 ] Department of Otorhinolaryngology/Head and Neck Surgery, Alfried Krupp Krankenhaus, Alfried-Krupp-Str. 21, 45131, Essen, Germany.
          [9 ] Department of Otorhinolaryngology/Head and Neck Surgery, Carl-Thiem-Klinikum Cottbus, Thiemstr. 111, 03048, Cottbus, Germany.
          Article
          DOI: 10.1007/s10439-019-02217-0
          PMID: 30701397
