Abstract
The multilayer perceptron (MLP) neural network is interpreted from a geometrical viewpoint in this work: an MLP partitions the input feature space into multiple non-overlapping subspaces using a set of hyperplanes, where the great majority of samples in each subspace belong to one object class. Based on this high-level idea, we propose a three-layer feedforward MLP (FF-MLP) architecture for its implementation. In the first layer, the input feature space is split into multiple subspaces by a set of partitioning hyperplanes and rectified linear unit (ReLU) activation, implemented with classical two-class linear discriminant analysis (LDA). In the second layer, each neuron activates one of the subspaces formed by the partitioning hyperplanes using specially designed weights. In the third layer, all subspaces of the same class are connected to an output node that represents the object class. The proposed design determines all MLP parameters analytically in a feedforward, one-pass fashion without backpropagation. Experiments are conducted to compare the performance of the traditional backpropagation-based MLP (BP-MLP) and the new FF-MLP. The FF-MLP outperforms the BP-MLP in design time, training time, and classification performance on several benchmark datasets. Our source code is available at https://colab.research.google.com/drive/1Gz0L8A-nT4ijrUchrhEXXsnaacrFdenn?usp=sharing.
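The following is a minimal sketch, not the authors' released code, of the pipeline the abstract describes: hyperplanes obtained from two-class LDA (here, one per class pair, an assumed pairing scheme), subspaces identified by the sign pattern of the hyperplane responses, and each subspace mapped to the majority class of its training samples. The function names build_ff_mlp and predict are hypothetical illustration only.

```python
# Sketch of an analytically constructed FF-MLP: LDA hyperplanes -> sign-pattern
# subspaces -> majority-class assignment, with no backpropagation involved.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_ff_mlp(X, y):
    """Set all parameters in one feedforward pass over the training data."""
    classes = np.unique(y)
    # Layer 1: one two-class LDA hyperplane per pair of classes (assumed scheme).
    W1, b1 = [], []
    for c1, c2 in combinations(classes, 2):
        mask = np.isin(y, [c1, c2])
        lda = LinearDiscriminantAnalysis().fit(X[mask], y[mask])
        W1.append(lda.coef_[0])
        b1.append(lda.intercept_[0])
    W1, b1 = np.array(W1), np.array(b1)

    # Layer 2: each subspace is a distinct sign pattern of the (ReLU-style)
    # hyperplane responses; enumerate the patterns seen in the training data.
    signs = (X @ W1.T + b1 > 0).astype(int)
    regions, region_idx = np.unique(signs, axis=0, return_inverse=True)

    # Layer 3: connect each subspace to the majority class of its samples.
    region_class = np.array([np.bincount(y[region_idx == r]).argmax()
                             for r in range(len(regions))])
    return W1, b1, regions, region_class

def predict(X, W1, b1, regions, region_class, default=0):
    """Classify samples by looking up the subspace their sign pattern falls in."""
    signs = (X @ W1.T + b1 > 0).astype(int)
    out = np.full(len(X), default)
    for r, pattern in enumerate(regions):
        out[(signs == pattern).all(axis=1)] = region_class[r]
    return out

if __name__ == "__main__":
    # Toy usage on synthetic blobs; integer labels assumed for np.bincount.
    from sklearn.datasets import make_blobs
    X, y = make_blobs(n_samples=300, centers=3, random_state=0)
    params = build_ff_mlp(X, y)
    print("training accuracy:", (predict(X, *params) == y).mean())
```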