
      Geometrical Interpretation and Design of Multilayer Perceptrons


          Abstract

          The multilayer perceptron (MLP) neural network is interpreted from a geometrical viewpoint in this work; that is, an MLP partitions an input feature space into multiple nonoverlapping subspaces using a set of hyperplanes, where the great majority of samples in each subspace belong to one object class. Based on this high-level idea, we propose a three-layer feedforward MLP (FF-MLP) architecture for its implementation. In the first layer, the input feature space is split into multiple subspaces by a set of partitioning hyperplanes and rectified linear unit (ReLU) activation, which is implemented by classical two-class linear discriminant analysis (LDA). In the second layer, each neuron activates one of the subspaces formed by the partitioning hyperplanes using specially designed weights. In the third layer, all subspaces of the same class are connected to an output node that represents the object class. The proposed design determines all MLP parameters analytically in a single feedforward pass, without backpropagation. Experiments compare the traditional backpropagation-based MLP (BP-MLP) with the new FF-MLP. The FF-MLP outperforms the BP-MLP in design time, training time, and classification performance on several benchmark datasets. Our source code is available at https://colab.research.google.com/drive/1Gz0L8A-nT4ijrUchrhEXXsnaacrFdenn?usp=sharing.
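The three-layer recipe in the abstract can be sketched in NumPy on a toy XOR-style problem. This is an illustrative sketch only, not the authors' released code: the two partitioning hyperplanes are hand-set rather than derived by LDA, and the second layer uses hard side-indicators as a simplification of the paper's specially designed weights.

```python
import numpy as np

# XOR-style data in [0,1]^2: class depends on the quadrant around (0.5, 0.5).
X = np.array([[0.1, 0.1], [0.9, 0.9], [0.1, 0.9], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])

# Layer 1: two partitioning hyperplanes, x1 = 0.5 and x2 = 0.5, each
# represented by a pair of ReLU neurons (one per side), 4 neurons total.
# In the paper these hyperplanes come from two-class LDA; here they are
# hand-set for illustration.
W1 = np.array([[ 1.0,  0.0],
               [-1.0,  0.0],
               [ 0.0,  1.0],
               [ 0.0, -1.0]])
b1 = np.array([-0.5, 0.5, -0.5, 0.5])
h1 = np.maximum(0.0, X @ W1.T + b1)            # ReLU activations

# Layer 2: one neuron per subspace (quadrant). A subspace neuron fires only
# when both of its "side" neurons are active; the +/-1 weight pattern with a
# bias of -1 rewards matching sides and penalizes opposite ones.
side = (h1 > 0).astype(float)                  # which side of each hyperplane
patterns = np.array([[1, 0, 1, 0],             # x1 > 0.5 and x2 > 0.5
                     [0, 1, 0, 1],             # x1 < 0.5 and x2 < 0.5
                     [0, 1, 1, 0],             # x1 < 0.5 and x2 > 0.5
                     [1, 0, 0, 1]])            # x1 > 0.5 and x2 < 0.5
h2 = np.maximum(0.0, side @ (2 * patterns.T - 1) - 1)   # 1 iff pattern matches

# Layer 3: connect all subspaces of the same class to one output node.
W3 = np.array([[1, 1, 0, 0],                   # class-0 quadrants
               [0, 0, 1, 1]])                  # class-1 quadrants
pred = np.argmax(h2 @ W3.T, axis=1)
print(pred)                                    # → [0 0 1 1], matching y
```

All weights above were written down analytically, with no backpropagation, which is the point of the feedforward design; the paper's contribution is doing this systematically for general datasets via LDA-derived hyperplanes.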


          Author and article information

          Journal
          IEEE Transactions on Neural Networks and Learning Systems (IEEE Trans. Neural Netw. Learning Syst.)
          Institute of Electrical and Electronics Engineers (IEEE)
          ISSN: 2162-237X; eISSN: 2162-2388
          2022: 1-15

          Affiliations
          [1] Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
          [2] U.S. Army Research Laboratory, Adelphi, MD, USA

          Article
          DOI: 10.1109/TNNLS.2022.3190364
          PMID: 35862331
          © 2022
          License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
          Access policies: https://doi.org/10.15223/policy-029, https://doi.org/10.15223/policy-037
