      Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

      Preprint


          Abstract

          This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
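
          The abstract describes two server-side steps: inversely re-mixing the devices' mixed-up uploads, and converting aggregated model outputs into global model parameters for the FL-style downlink. The following is a minimal Python sketch of that pipeline; the helper names (mixup, inverse_mixup, distill_to_parameters), the specific inverse-mixup rule, and the linear distillation model are illustrative assumptions, not the paper's reference implementation.

          import numpy as np

          def mixup(x_i, x_j, lam):
              # Device-side linear mixup: upload a convex combination of two
              # local samples instead of the raw samples themselves.
              return lam * x_i + (1.0 - lam) * x_j

          def inverse_mixup(xm_1, xm_2, lam):
              # Server-side re-mixing across two devices (illustrative rule):
              # form new linear combinations of the already-mixed uploads so
              # each output is dominated by a different device, without ever
              # reconstructing the raw samples.
              y_1 = lam * xm_1 + (1.0 - lam) * xm_2
              y_2 = (1.0 - lam) * xm_1 + lam * xm_2
              return y_1, y_2

          def distill_to_parameters(samples, avg_logits, lr=0.1, steps=500):
              # Output-to-parameter conversion (illustrative): fit a linear
              # model W so that samples @ W matches the devices' averaged
              # logits, turning FD-style outputs into FL-style downloadable
              # model parameters.
              n, _ = samples.shape
              W = np.zeros((samples.shape[1], avg_logits.shape[1]))
              for _ in range(steps):
                  grad = samples.T @ (samples @ W - avg_logits) / n
                  W -= lr * grad
              return W

          rng = np.random.default_rng(0)
          lam = 0.7
          # Each device mixes pairs of its own local samples before uploading.
          xm_1 = mixup(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)), lam)
          xm_2 = mixup(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)), lam)
          # The server inversely mixes the uploads across the two devices ...
          y_1, y_2 = inverse_mixup(xm_1, xm_2, lam)
          samples = np.vstack([y_1, y_2])
          # ... and distills (placeholder) averaged device logits into
          # global parameters ready for the FL-style downlink.
          avg_logits = rng.normal(size=(samples.shape[0], 3))
          W = distill_to_parameters(samples, avg_logits)
          print(W.shape)  # (4, 3)

          Note that under this sketch the server only ever handles convex combinations of samples, never raw device data, which is the privacy intuition behind the two-way mixup.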


          Author and article information

          Journal: arXiv (preprint)
          Publication date: 17 June 2020
          arXiv ID: 2006.09801
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          Comments: 5 pages, 3 figures, 3 tables, accepted to IEEE Communications Letters
          Subjects: cs.LG, stat.ML
          Keywords: Machine learning, Artificial intelligence
