      High-dimensional Ising model selection using \(\ell_1\)-regularized logistic regression

      Preprint
      Pradeep Ravikumar, Martin J. Wainwright, John D. Lafferty


          Abstract

          We consider the problem of estimating the graph associated with a binary Ising Markov random field. We describe a method based on \(\ell_1\)-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an \(\ell_1\)-constraint. The method is analyzed under high-dimensional scaling in which both the number of nodes \(p\) and maximum neighborhood size \(d\) are allowed to grow as a function of the number of observations \(n\). Our main results provide sufficient conditions on the triple \((n,p,d)\) and the model parameters for the method to succeed in consistently estimating the neighborhood of every node in the graph simultaneously. With coherence conditions imposed on the population Fisher information matrix, we prove that consistent neighborhood selection can be obtained for sample sizes \(n=\Omega(d^3\log p)\) with exponentially decaying error. When these same conditions are imposed directly on the sample matrices, we show that a reduced sample size of \(n=\Omega(d^2\log p)\) suffices for the method to estimate neighborhoods consistently. Although this paper focuses on binary graphical models, we indicate how a generalization of the method of the paper would apply to general discrete Markov random fields.
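
          As a concrete illustration of the neighborhood-selection idea, the sketch below fits an \(\ell_1\)-penalized logistic regression of each node on all the others and reads the estimated neighborhood off the nonzero coefficients. This is only a minimal sketch, not the authors' code: the scikit-learn estimator, the penalty strength C, and the AND rule used to combine neighborhoods into an edge set are assumptions made for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_ising_graph(X, C=0.1):
    """Sketch of neighborhood selection for a binary Ising model.

    X is an (n, p) array of samples with entries in {-1, +1}; C is the
    (illustrative) inverse strength of the l1 penalty.
    """
    n, p = X.shape
    neighborhoods = []
    for s in range(p):
        y = X[:, s]                    # node s is the binary response
        Z = np.delete(X, s, axis=1)    # remaining nodes are the predictors
        clf = LogisticRegression(penalty="l1", C=C, solver="liblinear")
        clf.fit(Z, y)
        coef = clf.coef_.ravel()
        others = [t for t in range(p) if t != s]
        # nonzero coefficients give the estimated neighborhood of node s
        neighborhoods.append({others[j] for j, c in enumerate(coef) if abs(c) > 1e-8})
    # combine the per-node estimates into an undirected edge set (AND rule)
    return {(s, t) for s in range(p) for t in neighborhoods[s]
            if s < t and s in neighborhoods[t]}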

          Related collections

          Most cited references (12)


          The Dantzig selector: Statistical estimation when \(p\) is much larger than \(n\)

            In many important statistical applications, the number of variables or parameters \(p\) is much larger than the number of observations \(n\). Suppose then that we have observations \(y=X\beta+z\), where \(\beta\in\mathbf{R}^p\) is a parameter vector of interest, \(X\) is a data matrix with possibly far fewer rows than columns, \(n\ll p\), and the \(z_i\)'s are i.i.d. \(N(0,\sigma^2)\). Is it possible to estimate \(\beta\) reliably based on the noisy data \(y\)? To estimate \(\beta\), we introduce a new estimator--we call it the Dantzig selector--which is a solution to the \(\ell_1\)-regularization problem \[\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad \|X^*r\|_{\ell_{\infty}}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,\] where \(r\) is the residual vector \(y-X\tilde{\beta}\) and \(t\) is a positive scalar. We show that if \(X\) obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector \(\beta\) is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, \[\|\hat{\beta}-\beta\|_{\ell_2}^2\le C^2\cdot2\log p\cdot\Biggl(\sigma^2+\sum_i\min(\beta_i^2,\sigma^2)\Biggr).\] Our results are nonasymptotic and we give values for the constant \(C\). Even though \(n\) may be much smaller than \(p\), our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
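
            The closing remark about recasting the problem as a linear program can be made concrete with a short sketch. The version below is an illustrative reduction using scipy.optimize.linprog, not code from the paper; the caller supplies the bound lam, which plays the role of \((1+t^{-1})\sqrt{2\log p}\cdot\sigma\) in the display above.

import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve min ||b||_1 subject to ||X^T (y - X b)||_inf <= lam
    as a linear program in the stacked variables (b, u), with |b_i| <= u_i."""
    n, p = X.shape
    G = X.T @ X
    c = np.concatenate([np.zeros(p), np.ones(p)])    # objective: minimize sum(u)
    I = np.eye(p)
    Z = np.zeros((p, p))
    A_ub = np.vstack([
        np.hstack([ I, -I]),   #  b - u <= 0
        np.hstack([-I, -I]),   # -b - u <= 0
        np.hstack([ G,  Z]),   #  X^T X b <= X^T y + lam
        np.hstack([-G,  Z]),   # -X^T X b <= lam - X^T y
    ])
    b_ub = np.concatenate([np.zeros(2 * p), X.T @ y + lam, lam - X.T @ y])
    bounds = [(None, None)] * p + [(0, None)] * p     # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]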

            Approximating discrete probability distributions with dependence trees


              Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)


                Author and article information

                Journal
                02 October 2010
                Article
                DOI: 10.1214/09-AOS691
                arXiv: 1010.0311
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                History
                Custom metadata
                IMS-AOS-AOS691
                Annals of Statistics 2010, Vol. 38, No. 3, 1287-1319
                Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/09-AOS691
                Subject classes: math.ST, stat.TH
