      Four lectures on probabilistic methods for data science

      Preprint

          Abstract

Methods of high-dimensional probability play a central role in applications for statistics, signal processing, theoretical computer science and related fields. These lectures present a sample of particularly useful tools of high-dimensional probability, focusing on the classical and matrix Bernstein's inequality and the uniform matrix deviation inequality. We illustrate these tools with applications for dimension reduction, network analysis, covariance estimation, matrix completion and sparse signal recovery. The lectures are geared towards beginning graduate students who have taken a rigorous course in probability but may not have any experience in data science applications.
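For orientation, here is the matrix Bernstein inequality in its standard form from the literature (e.g., Tropp's user-friendly tail bounds cited below); the exact formulation and constants used in the lectures themselves may differ slightly. For independent, mean-zero, $d \times d$ symmetric random matrices $X_1, \dots, X_N$ with $\|X_i\| \le K$ almost surely, and every $t \ge 0$,

\[
\mathbb{P}\left\{ \Big\| \sum_{i=1}^{N} X_i \Big\| \ge t \right\}
\le 2d \, \exp\!\left( \frac{-t^2/2}{\sigma^2 + Kt/3} \right),
\qquad
\sigma^2 = \Big\| \sum_{i=1}^{N} \mathbb{E}\, X_i^2 \Big\|.
\]

Setting $d = 1$ recovers the classical scalar Bernstein inequality, which is the sense in which the matrix result generalizes it.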

Most cited references (29)

• A Simple Proof of the Restricted Isometry Property for Random Matrices

• The Power of Convex Relaxation: Near-Optimal Matrix Completion

• User-friendly tail bounds for sums of random matrices
  Joel A. Tropp (2010)

  This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable.
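  The rectangular corollary mentioned in this abstract is commonly stated as follows (again a standard form from the literature rather than a verbatim quotation): for independent, mean-zero $d_1 \times d_2$ random matrices $S_1, \dots, S_N$ with $\|S_i\| \le K$ almost surely and every $t \ge 0$,

  \[
  \mathbb{P}\left\{ \Big\| \sum_{i=1}^{N} S_i \Big\| \ge t \right\}
  \le (d_1 + d_2) \, \exp\!\left( \frac{-t^2/2}{\sigma^2 + Kt/3} \right),
  \qquad
  \sigma^2 = \max\left\{ \Big\| \sum_{i=1}^{N} \mathbb{E}\, S_i S_i^{*} \Big\|,\ \Big\| \sum_{i=1}^{N} \mathbb{E}\, S_i^{*} S_i \Big\| \right\},
  \]

  obtained by applying the self-adjoint bound to the Hermitian dilation of each summand.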

                Author and article information

Published: 2016-12-20
Article: 1612.06661 (arXiv preprint)
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

Custom metadata
MSC classes: 60-01, 62-01, 65-01, 60B20, 65Cxx, 60E15, 62Fxx
Comments: Lectures given at 2016 PCMI Graduate Summer School in Mathematics of Data
arXiv subjects: math.PR, cs.DS, cs.IT, math.IT, math.ST, stat.TH
Subject areas: Data structures & Algorithms, Information systems & theory
