
      Graph neural network based on brain inspired forward-forward mechanism for motor imagery classification in brain-computer interfaces

      research-article


          Abstract

          Introduction

In the development of brain-computer interface (BCI) systems, it is crucial to consider how brain network dynamics and neural signal transmission mechanisms affect electroencephalogram-based motor imagery (MI-EEG) tasks. However, conventional deep learning (DL) methods do not capture the topological relationships among electrodes, which hinders effective decoding of brain activity.

          Methods

Inspired by the forward-forward (F-F) mechanism of brain neurons, a novel DL framework that combines a graph neural network with the forward-forward mechanism (F-FGCN) is presented. The F-FGCN framework aims to improve EEG decoding performance by exploiting functional topological relationships and the signal propagation mechanism. The fusion process converts the multi-channel EEG into a sequence of signals and constructs a network based on the Pearson correlation coefficient, effectively representing the associations between channels. The model first pre-trains a Graph Convolutional Network (GCN) and fine-tunes its output layer to obtain the feature vector; the F-F model is then used for higher-level feature extraction and classification.
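To make the graph-construction step concrete, here is a minimal sketch (not the authors' code): it derives a channel adjacency matrix from Pearson correlations between EEG channels and shows the forward-forward "goodness" score (sum of squared activations, as in Hinton's F-F algorithm) that each layer is trained to raise for positive data and lower for negative data. The threshold value, function names, and toy dimensions are illustrative assumptions.

```python
# Minimal sketch, assuming a simple thresholding rule on the Pearson correlation
# matrix; not the authors' exact graph construction or training code.
import numpy as np

def build_adjacency(eeg, threshold=0.5):
    """eeg: array of shape (n_channels, n_samples). Returns a binary channel adjacency matrix."""
    corr = np.corrcoef(eeg)                    # Pearson correlation between channel pairs
    adj = (np.abs(corr) >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                 # drop self-correlations
    return adj

def goodness(activations):
    """Forward-forward layer 'goodness': sum of squared activations per sample."""
    return (activations ** 2).sum(axis=1)

# Toy usage: a 64-channel EEG segment (PhysioNet MI recordings use 64 electrodes).
rng = np.random.default_rng(0)
segment = rng.standard_normal((64, 640))
A = build_adjacency(segment)
print(A.shape)                                 # (64, 64)
```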

          Results and discussion

The performance of F-FGCN is assessed on the PhysioNet dataset in a four-class classification task and compared with various classical and state-of-the-art models. The features learned by F-FGCN substantially improve the performance of downstream classifiers, achieving the highest accuracies of 96.11% and 82.37% at the subject and group levels, respectively. The experimental results confirm the effectiveness of F-FGCN in enhancing EEG decoding performance, paving the way for BCI applications.

          Related collections

Most cited references: 52


          Deep learning with convolutional neural networks for EEG decoding and visualization

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.
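As a rough illustration of the ingredients this abstract highlights (temporal and spatial convolutions on raw EEG, batch normalization, ELU activations, and cropped training), the PyTorch sketch below assembles a tiny network. The layer sizes, names, and pooling choices are assumptions for illustration, not the published deep ConvNet architecture.

```python
# Illustrative sketch under assumed layer sizes; not the authors' deep ConvNet.
import torch
import torch.nn as nn

class TinyEEGConvNet(nn.Module):
    def __init__(self, n_channels=64, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=(1, 11)),           # temporal convolution
            nn.Conv2d(25, 25, kernel_size=(n_channels, 1)),  # spatial convolution across electrodes
            nn.BatchNorm2d(25),                              # batch normalization
            nn.ELU(),                                        # exponential linear unit
            nn.MaxPool2d(kernel_size=(1, 3)),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(25, n_classes)
        )

    def forward(self, x):                 # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self.features(x))

# Cropped training feeds many shifted time windows ("crops") of each trial:
crops = torch.randn(8, 1, 64, 400)        # 8 crops of 400 samples from one trial
logits = TinyEEGConvNet()(crops)          # (8, 4); crop predictions are averaged per trial
```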

            EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces

            Brain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible.

              Semi-Supervised Classification with Graph Convolutional Networks

              We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin. Published as a conference paper at ICLR 2017
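The layer-wise propagation rule of this GCN is H(l+1) = sigma(D^{-1/2} (A + I) D^{-1/2} H(l) W(l)), where D is the degree matrix of A + I (the "renormalization trick"). The NumPy sketch below applies one such layer to a toy graph; the ReLU nonlinearity and random weights are illustrative choices.

```python
# Minimal sketch of one GCN layer with the renormalization trick; toy graph and
# random weights are assumptions for illustration.
import numpy as np

def gcn_layer(A, H, W):
    A_tilde = A + np.eye(A.shape[0])              # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)         # ReLU nonlinearity

# Toy usage: 5-node cycle graph, 3 input features per node, 2 hidden units.
A = np.array([[0,1,0,0,1],
              [1,0,1,0,0],
              [0,1,0,1,0],
              [0,0,1,0,1],
              [1,0,0,1,0]], dtype=float)
H = np.random.randn(5, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)                   # (5, 2)
```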

                Author and article information

Contributors
URI: http://loop.frontiersin.org/people/2412182/overview
URI: http://loop.frontiersin.org/people/256382/overview
Journal
Frontiers in Neuroscience (Front. Neurosci.)
Publisher: Frontiers Media S.A.
ISSN: 1662-4548, 1662-453X
Published: 28 March 2024
Volume: 18
Article number: 1309594
                Affiliations
1. Institute of Plasma Physics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
2. University of Science and Technology of China, Hefei, China
3. Mechanical Department, School of Energy Systems, Lappeenranta University of Technology (LUT), Lappeenranta, Finland
                Author notes

                Edited by: S. Abdollah Mirbozorgi, University of Alabama at Birmingham, United States

                Reviewed by: Jiancai Leng, Qilu University of Technology, China

                Fan Gao, University of Kentucky, United States

*Correspondence: Yuntao Song songyt@ipp.ac.cn
Article
DOI: 10.3389/fnins.2024.1309594
PMCID: 11008472
PMID: 38606308
                Copyright © 2024 Xue, Song, Wu, Cheng and Pan.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 08 October 2023
Accepted: 04 March 2024
                Page count
                Figures: 8, Tables: 4, Equations: 16, References: 52, Pages: 12, Words: 7042
                Funding
                The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work is supported by the Comprehensive Research Facility for Fusion Technology Program of China under Contract no. 2018-000052-73-01-001228.
                Categories
                Neuroscience
                Original Research
                Custom metadata
                Neural Technology

                Neurosciences
brain-computer interface (BCI), electroencephalography (EEG), motor imagery (MI), forward-forward mechanism, graph convolutional network (GCN)
