      A Big Data-as-a-Service Framework: State-of-the-Art and Perspectives

          Most cited references (127)

          Nonlinear dimensionality reduction by locally linear embedding.

          Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
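
            As a concrete illustration of the algorithm sketched in this abstract, the snippet below uses scikit-learn's LocallyLinearEmbedding; the library choice, the swiss-roll data, and the parameter values are illustrative assumptions, not the authors' original code.

            # Minimal LLE sketch, assuming scikit-learn's implementation
            # (not the authors' original code). Embeds a 3-D "swiss roll"
            # manifold into 2-D while preserving local neighborhood structure.
            from sklearn.datasets import make_swiss_roll
            from sklearn.manifold import LocallyLinearEmbedding

            X, _ = make_swiss_roll(n_samples=1000, random_state=0)  # 1000 points in 3-D

            # Each point is reconstructed from its 12 nearest neighbors; LLE then
            # solves for 2-D coordinates preserving those reconstruction weights.
            lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
            X_2d = lle.fit_transform(X)

            print(X_2d.shape)                 # (1000, 2): one global coordinate system
            print(lle.reconstruction_error_)  # residual of the reconstruction cost

            Because the embedding step reduces to a sparse eigenvalue problem over the reconstruction weights, the optimization involves no local minima, consistent with the claim in the abstract.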

            Learning the parts of objects by non-negative matrix factorization.

            Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.
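
            To make the additive-combination idea concrete, the snippet below uses scikit-learn's NMF as an assumed stand-in for the paper's algorithm; the matrix sizes and rank are illustrative. A non-negative matrix V is factored as V ≈ W H with every entry of W and H constrained to be non-negative.

            # Minimal NMF sketch, assuming scikit-learn's implementation (not the
            # paper's neural-network formulation). V ~ W @ H with W, H >= 0, so
            # each row of V is an additive-only combination of the parts in H.
            import numpy as np
            from sklearn.decomposition import NMF

            rng = np.random.default_rng(0)
            V = rng.random((100, 64))   # e.g. 100 samples, 64 non-negative features

            model = NMF(n_components=10, init="nndsvd", max_iter=500, random_state=0)
            W = model.fit_transform(V)  # per-sample activations, shape (100, 10)
            H = model.components_       # learned parts (basis), shape (10, 64)

            print(np.abs(V - W @ H).mean())  # mean absolute reconstruction error

            Unlike principal components, which can cancel one another through subtraction, these parts can only add, which is what drives the parts-based representations described above.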

              MapReduce

                Author and article information

                Journal: IEEE Transactions on Big Data (IEEE Trans. Big Data)
                Publisher: Institute of Electrical and Electronics Engineers (IEEE)
                ISSN: 2332-7790, 2372-2096
                Published: September 1, 2018
                Volume 4, Issue 3, pp. 325-340
                DOI: 10.1109/TBDATA.2017.2757942
                © 2018
