
Brain–machine interface based on deep learning to control asynchronously a lower-limb robotic exoskeleton: a case of study


          Abstract

          Background

This research focused on the development of a motor imagery (MI)-based brain–machine interface (BMI) using deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol for evaluating the BMI was designed to be asynchronous, allowing subjects to perform mental tasks at will (a schematic of such a decoding loop is sketched below).
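In an asynchronous (self-paced) protocol, the decoder runs continuously over a sliding window of the EEG stream instead of waiting for an external cue, so the user decides when to issue a command. A minimal schematic of such a loop in Python follows; `acquire`, `decode`, and `send_command` are hypothetical placeholders for the acquisition hardware, the trained model, and the exoskeleton interface, and all constants are illustrative, not the study's parameters.

```python
import numpy as np

FS = 200                  # sampling rate in Hz (illustrative)
N_CH = 27                 # number of EEG channels (illustrative)
WIN = 2 * FS              # 2-second decoding window
STEP = FS // 2            # produce a decision every 0.5 s

def asynchronous_loop(acquire, decode, send_command, n_steps=100):
    """Self-paced decoding: classify the newest EEG window on a fixed cadence.

    acquire(n)        -> next n amplifier samples, shape (N_CH, n)
    decode(window)    -> mental state for a (N_CH, WIN) window, e.g. 'walk'/'rest'
    send_command(cmd) -> forwards the decoded state to the exoskeleton controller
    All three callables are hypothetical placeholders for the real interfaces.
    """
    window = np.zeros((N_CH, WIN))
    for _ in range(n_steps):
        chunk = acquire(STEP)                               # blocking read of new EEG
        window = np.concatenate([window[:, STEP:], chunk], axis=1)
        send_command(decode(window))                        # user decides when to act
```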

          Methods

A total of five healthy able-bodied subjects were enrolled to participate in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. This model was subsequently fine-tuned during the remaining sessions and evaluated. Three distinct deep learning approaches were compared: one without fine-tuning, one that fine-tuned all layers of the model, and one that fine-tuned only the last three layers. During the evaluation phase, participants controlled the exoskeleton in closed loop exclusively through their neural activity, with the second approach (fine-tuning all layers) used for decoding.
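The three variants differ only in which weights of the pre-trained generic model may be updated during subject-specific calibration. A minimal PyTorch sketch of the three strategies follows; the model and its layer structure are hypothetical stand-ins, not the authors' published architecture.

```python
import torch
import torch.nn as nn

def prepare_for_finetuning(model: nn.Module, strategy: str):
    """Select which parameters of a pre-trained generic model to update.

    'none'  -> use the generic model as-is (no weights updated)
    'all'   -> fine-tune every layer on the new subject/session
    'last3' -> fine-tune only the last three top-level layers
    """
    if strategy == "none":
        for p in model.parameters():
            p.requires_grad = False
        return []
    if strategy == "all":
        for p in model.parameters():
            p.requires_grad = True
        return list(model.parameters())
    if strategy == "last3":
        for p in model.parameters():
            p.requires_grad = False
        trainable = []
        for layer in list(model.children())[-3:]:   # unfreeze last three layers
            for p in layer.parameters():
                p.requires_grad = True
                trainable.append(p)
        return trainable
    raise ValueError(f"unknown strategy: {strategy!r}")

# Hypothetical usage with calibration trials from a new session:
# model = load_pretrained_generic_model()          # placeholder loader
# params = prepare_for_finetuning(model, "all")    # best-performing variant here
# optimizer = torch.optim.Adam(params, lr=1e-4)
```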

          Results

The three deep learning approaches were assessed against an approach based on spatial features that was trained for each subject and experimental session; the deep learning approaches showed superior performance. Interestingly, the deep learning approach without fine-tuning achieved performance comparable to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can be similarly effective. Among the three deep learning approaches, fine-tuning all layer weights yielded the highest performance.

          Conclusion

This research represents an initial stride toward future calibration-free methods. Although leveraging data from other subjects reduced calibration time, it could not be eliminated entirely. These findings are nonetheless significant for advancing calibration-free approaches, as they promise to minimize the number of training trials required. Furthermore, the experimental evaluation protocol was designed to replicate real-life scenarios, granting participants greater autonomy in deciding when to perform actions such as walking or stopping gait.

          Supplementary Information

          The online version contains supplementary material available at 10.1186/s12984-024-01342-9.

Most cited references (36)


          Event-related EEG/MEG synchronization and desynchronization: basic principles.

An internally or externally paced event results not only in the generation of an event-related potential (ERP) but also in a change in the ongoing EEG/MEG in the form of an event-related desynchronization (ERD) or event-related synchronization (ERS). The ERP on the one hand and the ERD/ERS on the other are different responses of neuronal structures in the brain: while the former is phase-locked to the event, the latter is not. The most important difference between the two phenomena is that the ERD/ERS is highly frequency-band specific, whereby either the same or different locations on the scalp can display ERD and ERS simultaneously. Quantification of ERD/ERS in time and space is demonstrated on data from a number of movement experiments.
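The standard band-power quantification of ERD/ERS compares instantaneous power against a reference (baseline) interval: ERD% = (A − R) / R × 100, where R is the mean band power in the reference interval and A is the trial-averaged power at each time point (negative values indicate desynchronization). A minimal NumPy/SciPy sketch under those definitions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(trials, fs, band, ref_window):
    """Classical band-power ERD/ERS estimate for one channel.

    trials     : array (n_trials, n_samples), event-locked EEG epochs
    fs         : sampling rate in Hz
    band       : (low, high) band of interest, e.g. (8, 12) for mu/alpha
    ref_window : (start, end) of the reference interval in seconds
    Returns ERD/ERS in percent per sample: negative values indicate
    desynchronization (power decrease), positive values synchronization.
    """
    # 1. Band-pass filter each trial in the chosen frequency band.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    # 2. Square the samples for instantaneous power, average over trials.
    power = (filtered ** 2).mean(axis=0)
    # 3. Reference power R from the baseline interval.
    i0, i1 = int(ref_window[0] * fs), int(ref_window[1] * fs)
    r = power[i0:i1].mean()
    # 4. ERD% = (A - R) / R * 100 at every time point A.
    return (power - r) / r * 100.0

# Example with synthetic data: 30 one-second epochs sampled at 250 Hz.
erd = erd_percent(np.random.randn(30, 250), fs=250, band=(8, 12), ref_window=(0.0, 0.2))
```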

            Deep learning with convolutional neural networks for EEG decoding and visualization

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc.
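The ingredients named in this abstract (a temporal convolution, a spatial convolution across electrodes, batch normalization, and exponential linear units) can be illustrated with a much-simplified ConvNet in PyTorch. This sketch is not the published deep/shallow architecture; all layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

class SimpleEEGConvNet(nn.Module):
    """Much-simplified ConvNet in the spirit of the architecture above:
    temporal convolution, spatial convolution across all electrodes,
    batch normalization, ELU, pooling, then a linear classifier.
    Layer sizes are illustrative, not the published ones."""

    def __init__(self, n_channels=64, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=(1, 11), padding=(0, 5)),   # temporal filters
            nn.Conv2d(25, 25, kernel_size=(n_channels, 1)),          # spatial mixing
            nn.BatchNorm2d(25),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 15), stride=(1, 15)),
        )
        self.classifier = nn.LazyLinear(n_classes)  # infers input size on first call

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) raw EEG epochs
        return self.classifier(self.features(x).flatten(start_dim=1))

# Decode a batch of eight 2-second, 64-channel epochs sampled at 250 Hz.
net = SimpleEEGConvNet()
logits = net(torch.randn(8, 1, 64, 500))
```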

              EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces

              Brain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible.
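EEGNet's compactness comes mainly from depthwise and separable convolutions, which require far fewer parameters than standard convolutions. A reduced EEGNet-style sketch in PyTorch follows, with illustrative hyperparameters rather than the published ones.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """EEGNet-style compact CNN: temporal conv, depthwise spatial conv
    (one spatial filter set per temporal filter), then a separable conv.
    Hyperparameters are illustrative, not the published values."""

    def __init__(self, n_channels=64, n_classes=4, f1=8, d=2):
        super().__init__()
        f2 = f1 * d
        self.block = nn.Sequential(
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),  # temporal filters
            nn.BatchNorm2d(f1),
            # Depthwise conv over electrodes: groups=f1 keeps it compact.
            nn.Conv2d(f1, f2, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            # Separable conv = depthwise temporal conv + 1x1 pointwise conv.
            nn.Conv2d(f2, f2, (1, 16), padding=(0, 8), groups=f2, bias=False),
            nn.Conv2d(f2, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        self.classifier = nn.LazyLinear(n_classes)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self.block(x).flatten(start_dim=1))

net = TinyEEGNet()
_ = net(torch.randn(2, 1, 64, 256))        # first forward initializes LazyLinear
print(sum(p.numel() for p in net.parameters()))  # small total parameter count
```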

                Author and article information

                Contributors
                lferrero@umh.es
                Journal
J Neuroeng Rehabil
                Journal of NeuroEngineering and Rehabilitation
BioMed Central (London)
                1743-0003
Published: 5 April 2024
Volume: 21
Article number: 48
                Affiliations
                [1 ]Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, ( https://ror.org/01azzms13) Elche, Spain
                [2 ]Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, ( https://ror.org/01azzms13) Elche, Spain
                [3 ]International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, ( https://ror.org/01azzms13) Elche, Spain
                [4 ]NSF IUCRC BRAIN, University of Houston, ( https://ror.org/048sx0r50) Houston, USA
                [5 ]International Affiliate NSF IUCRC BRAIN Site, Tecnológico de Monterrey, ( https://ror.org/03ayjn504) Monterrey, Mexico
                [6 ]Non-Invasive Brain Machine Interface Systems, University of Houston, ( https://ror.org/048sx0r50) Houston, TX USA
                [7 ]Valencian Graduate School and Research Network of Artificial Intelligence-valgrAI, Valencia, Spain
                Author information
                http://orcid.org/0000-0003-2256-757X
                http://orcid.org/0000-0002-4269-1554
                http://orcid.org/0000-0001-8057-5952
                http://orcid.org/0000-0001-5548-9657
                https://orcid.org/0000-0002-6499-1208
Article
Article ID: 1342
DOI: 10.1186/s12984-024-01342-9
PMCID: PMC10996198
PMID: 38581031
                © The Author(s) 2024

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

History
Received: 12 September 2023
Accepted: 15 March 2024
                Funding
Funded by: Ministry of Science, Innovation and Universities through the Aid for the Training of University Teachers
Award ID: FPU19/03165
Funded by: Valencian Graduate School and Research Network of Artificial Intelligence (ValgrAI), Generalitat Valenciana and European Union
Funded by: NSF IUCRC BRAIN Center (award #2137255) at the University of Houston
Funded by: MCIN/AEI/10.13039/501100011033 and by ERDF "A way of making Europe"
Award ID: PID2021-124111OB-C31
Funded by: Houston Methodist Research Institute (FundRef: http://dx.doi.org/10.13039/100015581)
                Categories
                Research
                Custom metadata
                © BioMed Central Ltd., part of Springer Nature 2024

                Neurosciences
brain–machine interface, EEG, exoskeleton, deep learning, transfer learning
