The Dantzig selector: Statistical estimation when \(p\) is much larger than \(n\)

Abstract

In many important statistical applications, the number of variables or parameters \(p\) is much larger than the number of observations \(n\). Suppose then that we have observations \(y=X\beta+z\), where \(\beta\in\mathbf{R}^p\) is a parameter vector of interest, \(X\) is a data matrix with possibly far fewer rows than columns, \(n\ll p\), and the \(z_i\)'s are i.i.d. \(N(0,\sigma^2)\). Is it possible to estimate \(\beta\) reliably based on the noisy data \(y\)? To estimate \(\beta\), we introduce a new estimator--we call it the Dantzig selector--which is a solution to the \(\ell_1\)-regularization problem \[\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad\|X^*r\|_{\ell_{\infty}}\le(1+t^{-1})\sqrt{2\log p}\cdot\sigma,\] where \(r\) is the residual vector \(y-X\tilde{\beta}\) and \(t\) is a positive scalar. We show that if \(X\) obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector \(\beta\) is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, \[\|\hat{\beta}-\beta\|_{\ell_2}^2\le C^2\cdot2\log p\cdot\Biggl(\sigma^2+\sum_i\min(\beta_i^2,\sigma^2)\Biggr).\] Our results are nonasymptotic and we give values for the constant \(C\). Even though \(n\) may be much smaller than \(p\), our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
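
The abstract notes that the Dantzig selector can be recast as a linear program. As a concrete illustration, the following is a minimal sketch of one standard recast in Python, assuming SciPy's general-purpose `linprog` solver: introduce auxiliary variables \(u\ge|\beta|\), minimize \(\sum_i u_i\), and impose the two-sided constraint \(-\lambda\le X^*(y-X\beta)\le\lambda\) with \(\lambda=(1+t^{-1})\sqrt{2\log p}\cdot\sigma\). The function name and the default \(t=1\) are our own choices here, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, sigma, t=1.0):
    """Sketch: solve  min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam
    as an LP over z = [b, u] with |b_i| <= u_i (illustrative only)."""
    n, p = X.shape
    lam = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma
    G = X.T @ X                       # Gram matrix, p x p
    Xty = X.T @ y
    I, Z = np.eye(p), np.zeros((p, p))
    c = np.concatenate([np.zeros(p), np.ones(p)])   # objective: sum(u)
    A_ub = np.vstack([
        np.hstack([ I, -I]),          #  b - u <= 0
        np.hstack([-I, -I]),          # -b - u <= 0
        np.hstack([-G,  Z]),          #  X^T y - G b <= lam
        np.hstack([ G,  Z]),          #  G b - X^T y <= lam
    ])
    b_ub = np.concatenate([np.zeros(p), np.zeros(p),
                           lam - Xty, lam + Xty])
    bounds = [(None, None)] * p + [(0.0, None)] * p  # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:p]

# Toy usage: sparse beta with n << p and unit-normed columns, as in the paper.
rng = np.random.default_rng(0)
n, p, sigma = 72, 256, 1.0
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)        # unit-normed columns
beta = np.zeros(p)
beta[:8] = 5.0
y = X @ beta + sigma * rng.standard_normal(n)
beta_hat = dantzig_selector(X, y, sigma)
```

Any off-the-shelf LP solver could be substituted for `linprog`; this recast has \(2p\) variables and \(4p\) inequality constraints, which is what makes the estimator computationally convenient even when \(p\) is large.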


Author and article information

Journal: Annals of Statistics 2007, Vol. 35, No. 6, 2313-2351
DOI: 10.1214/009053606000001523
arXiv: math/0506081
History: submitted 04 June 2005; last revised 21 March 2008
Subjects: math.ST, stat.TH
MSC classes: 62C05, 62G05 (Primary); 94A08, 94A12 (Secondary)
Report number: IMS-AOS-AOS0204
Notes: This paper is discussed in [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134] and [arXiv:0803.3135]; rejoinder in [arXiv:0803.3136]. Published at http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
