
      Remember What You Want to Forget: Algorithms for Machine Unlearning

      Preprint


          Abstract

          We study the problem of forgetting datapoints from a learnt model. In this setting, the learner first receives a dataset \(S\) drawn i.i.d. from an unknown distribution and outputs a predictor \(w\) that performs well on unseen samples from that distribution. However, at some point in the future, any training datapoint \(z \in S\) can request to be unlearned, prompting the learner to modify its output predictor while still ensuring the same accuracy guarantees. In our work, we initiate a rigorous study of machine unlearning in the population setting, where the goal is to maintain performance on the unseen test loss. For convex loss functions, we provide an unlearning algorithm that can delete up to \(O(n/d^{1/4})\) samples, where \(n\) is the number of training samples and \(d\) is the problem dimension. In comparison, differentially private learning (which implies unlearning) in general only guarantees deletion of \(O(n/d^{1/2})\) samples. This shows that unlearning is at least polynomially more efficient than learning privately, in terms of the dependence on \(d\) in the deletion capacity.
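          As a concrete illustration of the kind of procedure the abstract describes, the sketch below unlearns a few samples from a ridge-regression model with a single Newton step on the retained data, followed by a small Gaussian perturbation. This is only a minimal sketch under assumed choices (the regularizer lam, the noise scale sigma, and the helper names train/unlearn are illustrative), not the paper's exact algorithm or its guarantees; for a quadratic objective the Newton step lands exactly on the retrain-from-scratch solution, which is what makes it a convenient toy example.

import numpy as np

def train(X, y, lam=0.1):
    # Ridge regression: minimize 0.5*||Xw - y||^2 + 0.5*lam*n*||w||^2
    n, d = X.shape
    H = X.T @ X + lam * n * np.eye(d)
    return np.linalg.solve(H, X.T @ y)

def unlearn(w, X, y, idx, lam=0.1, sigma=0.01, rng=None):
    # One Newton step on the retained objective, then a Gaussian perturbation
    # (illustrative noise scale; the paper calibrates it to its guarantees).
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    keep = np.delete(np.arange(n), idx)
    Xr, yr = X[keep], y[keep]
    H = Xr.T @ Xr + lam * len(keep) * np.eye(d)       # Hessian of retained objective
    g = Xr.T @ (Xr @ w - yr) + lam * len(keep) * w    # gradient at the current w
    w_new = w - np.linalg.solve(H, g)                 # Newton step toward retrain solution
    return w_new + sigma * rng.standard_normal(d)     # noise masks the deleted points

rng = np.random.default_rng(0)
X, y = rng.standard_normal((200, 5)), rng.standard_normal(200)
w = train(X, y)
w_minus = unlearn(w, X, y, idx=[0, 1, 2], rng=rng)
w_retrain = train(np.delete(X, [0, 1, 2], axis=0), np.delete(y, [0, 1, 2]))
print(np.linalg.norm(w_minus - w_retrain))   # small: only the injected noise remains

          The added noise is the design choice that matters here: it hides whatever residual dependence on the deleted points survives the approximate update, and how much noise is needed (relative to \(n\) and \(d\)) is what governs the deletion-capacity comparison quoted in the abstract.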


          Author and article information

          Journal
          Date: 04 March 2021
          arXiv ID: 2103.03279
          Record ID: 372685fc-4847-4851-9b17-aa725929f6d7
          License: http://creativecommons.org/licenses/by-nc-sa/4.0/

          Custom metadata: cs.LG, cs.AI

          Artificial intelligence
