Open Access

      Do Discrete Self-Supervised Representations of Speech Capture Tone Distinctions?

Preprint


          Abstract

Discrete representations of speech, obtained from Self-Supervised Learning (SSL) foundation models, are widely used, especially where there are limited data for the downstream task, such as for a low-resource language. Typically, discretization of speech into a sequence of symbols is achieved by unsupervised clustering of the latents from an SSL model. Our study evaluates whether discrete symbols, found using k-means, adequately capture tone in two example languages, Mandarin and Yoruba. We compare latent vectors with discrete symbols, obtained from HuBERT base, MandarinHuBERT, or XLS-R, for vowel and tone classification. We find that using discrete symbols leads to a substantial loss of tone information, even for language-specialised SSL models. We suggest that discretization needs to be task-aware, particularly for tone-dependent downstream tasks.
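
To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' code) of the standard approach: extract frame-level latents from a pretrained SSL model, discretize them with unsupervised k-means, then train linear probes on the continuous latents versus the one-hot discrete symbols. The HuBERT checkpoint, the layer index, the cluster count k=100, and the placeholder frames and tone labels are all illustrative assumptions.

```python
# Minimal, illustrative sketch of the pipeline described above; not the
# authors' code. Checkpoint, layer index, cluster count, and the placeholder
# frames/labels are assumptions for demonstration only.
import numpy as np
import torch
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from transformers import AutoFeatureExtractor, HubertModel

MODEL = "facebook/hubert-base-ls960"  # HuBERT base; other SSL models swap in
extractor = AutoFeatureExtractor.from_pretrained(MODEL)
model = HubertModel.from_pretrained(MODEL).eval()

def latent_frames(waveform: np.ndarray, sr: int = 16000, layer: int = 9) -> np.ndarray:
    """Return frame-level hidden states (T, D) from one intermediate layer."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].squeeze(0).numpy()

# Placeholder stand-ins for latent frames pooled over a labelled corpus and
# their per-frame tone labels; in practice these come from latent_frames().
rng = np.random.default_rng(0)
frames = rng.normal(size=(2000, 768)).astype(np.float32)
tone_labels = rng.integers(0, 4, size=2000)  # e.g. four Mandarin tones

# Unsupervised discretization: k-means over the latent frames (k=100 assumed).
kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(frames)
one_hot = np.eye(100, dtype=np.float32)[kmeans.predict(frames)]

# Linear probes: continuous latents vs. one-hot discrete symbols.
cont = LogisticRegression(max_iter=1000).fit(frames, tone_labels)
disc = LogisticRegression(max_iter=1000).fit(one_hot, tone_labels)
print("tone accuracy, continuous latents:", cont.score(frames, tone_labels))
print("tone accuracy, discrete symbols: ", disc.score(one_hot, tone_labels))
```

On real labelled audio, the gap between the two probe scores gives a rough measure of how much tone information the k-means discretization step discards.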


          Author and article information

Published: 25 October 2024
arXiv: 2410.19935

License: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

History: Submitted to ICASSP 2025
Subjects: cs.CL, cs.SD, eess.AS

Disciplines: Theoretical computer science, Electrical engineering, Graphics & Multimedia design
