Open Access

      The Computational Complexity of Finding Stationary Points in Non-Convex Optimization

      Preprint


          Abstract

Finding approximate stationary points, i.e., points where the gradient is approximately zero, of non-convex but smooth objective functions \(f\) over unrestricted \(d\)-dimensional domains is one of the most fundamental problems in classical non-convex optimization. Nevertheless, the computational and query complexity of this problem are still not well understood when the dimension \(d\) of the problem is independent of the approximation error. In this paper, we show the following computational and query complexity results:

1. The problem of finding approximate stationary points over unrestricted domains is PLS-complete.

2. For \(d = 2\), we provide a zero-order algorithm for finding \(\varepsilon\)-approximate stationary points that requires at most \(O(1/\varepsilon)\) value queries to the objective function.

3. We show that any algorithm needs at least \(\Omega(1/\varepsilon)\) queries to the objective function and/or its gradient to find \(\varepsilon\)-approximate stationary points when \(d = 2\). Combined with the above, this characterizes the query complexity of this problem as \(\Theta(1/\varepsilon)\).

4. For \(d = 2\), we provide a zero-order algorithm for finding \(\varepsilon\)-KKT points in constrained optimization problems that requires at most \(O(1/\sqrt{\varepsilon})\) value queries to the objective function. This closes the gap between the works of Bubeck and Mikulincer [2020] and Vavasis [1993] and characterizes the query complexity of this problem as \(\Theta(1/\sqrt{\varepsilon})\).

5. Combining our results with the recent result of Fearnley et al. [2022], we show that finding approximate KKT points in constrained optimization is reducible to finding approximate stationary points in unconstrained optimization, but the converse is impossible.
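For readers less familiar with the terminology, the stationarity notion used in the abstract can be illustrated concretely: a point \(x\) is an \(\varepsilon\)-approximate stationary point of a smooth \(f\) if \(\|\nabla f(x)\| \le \varepsilon\). The sketch below is a minimal Python illustration of this definition using plain gradient descent on an assumed toy function; it is not the query-efficient algorithm from the paper, and the step size, iteration cap, and test function are illustrative assumptions only.

```python
import numpy as np

def find_approx_stationary_point(grad, x0, eps, step=0.01, max_iters=100_000):
    """Run plain gradient descent until the gradient norm drops below eps.

    Returns an eps-approximate stationary point, i.e. a point x with
    ||grad(x)|| <= eps. This only illustrates the definition; it is not
    the O(1/eps)-query algorithm described in the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # eps-approximate stationarity test
            return x
        x = x - step * g               # standard descent step
    return x

# Hypothetical example: the smooth non-convex function f(x, y) = x^2 + sin(y)
grad_f = lambda x: np.array([2 * x[0], np.cos(x[1])])
x_star = find_approx_stationary_point(grad_f, x0=[1.0, 0.5], eps=1e-3)
```

Note that gradient descent of this kind only certifies approximate stationarity, not optimality; the point returned may be a local minimum or a saddle of the non-convex objective.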


          Author and article information

          Journal
          13 October 2023
          Article
          2310.09157

          http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          History
          Custom metadata
          Full version of COLT 2023 extended abstract
          math.OC cs.CC cs.LG stat.ML

Theoretical computer science, Numerical methods, Machine learning, Artificial intelligence
