      Open Access

      On the rate of convergence of Bregman proximal methods in constrained variational inequalities

      Preprint

          Abstract

          We examine the last-iterate convergence rate of Bregman proximal methods (from mirror descent to mirror-prox) in constrained variational inequalities. Our analysis shows that the convergence speed of a given method depends sharply on the Legendre exponent of the underlying Bregman regularizer (Euclidean, entropic, or otherwise), a notion that measures the growth rate of the regularizer near a solution. In particular, we show that boundary solutions exhibit a clear separation of regimes between methods with a zero and a non-zero Legendre exponent, with linear convergence for the former versus sublinear for the latter. This dichotomy becomes even more pronounced in linearly constrained problems, where Euclidean methods converge along sharp directions in a finite number of steps, compared to a linear rate for entropic methods.
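          The entropic update the abstract alludes to can be sketched concretely. The snippet below runs mirror descent with the entropic (negative-entropy) regularizer on a toy linear variational inequality over the probability simplex; the cost vector `c`, step size, and iteration count are illustrative choices, not taken from the paper. The solution here is a boundary vertex of the simplex, which is precisely the regime where the abstract's rate separation applies.

```python
import numpy as np

# Toy linear variational inequality over the simplex: V(x) = c (constant,
# hence monotone). Its solution is the vertex with the smallest cost
# coordinate -- a boundary solution. (Illustrative example only.)
c = np.array([1.0, 2.0, 3.0])

def mirror_descent_step(x, step):
    """One entropic mirror descent step on the simplex: the Bregman
    proximal update argmin_y <step*V(x), y> + KL(y, x) has the closed
    form x+ proportional to x * exp(-step * V(x))."""
    y = x * np.exp(-step * c)
    return y / y.sum()

x = np.ones(3) / 3            # start at the simplex barycenter
for _ in range(200):
    x = mirror_descent_step(x, step=0.1)

print(x.round(6))             # prints [1. 0. 0.]
```

          After 200 steps the iterate is `x_i` proportional to `exp(-0.1 * 200 * c_i)`, so essentially all mass sits on the first vertex; the residual mass on the other coordinates shrinks geometrically in this constant-operator case.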


          Author and article information

          Published: 15 November 2022
          Article type: Preprint
          arXiv ID: 2211.08043
          Record ID: 83707fc4-27ce-4a68-b0b9-d9f23fa2ada7
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          Custom metadata
          MSC classes: Primary 65K15, 90C33; Secondary 68Q25, 68Q32
          Comments: 34 pages, 2 tables, 3 figures
          arXiv subjects: math.OC, cs.LG
          Keywords: Numerical methods, Artificial intelligence
