
      Efficient training of spiking neural networks with temporally-truncated local backpropagation through time


          Abstract

          Directly training spiking neural networks (SNNs) has remained challenging due to their complex neural dynamics and the intrinsic non-differentiability of spike firing functions. The well-known backpropagation through time (BPTT) algorithm used to train SNNs suffers from a large memory footprint and prohibits backward and update unlocking, making it impossible to exploit the potential of locally-supervised training methods. This work proposes an efficient and direct training algorithm for SNNs that integrates a locally-supervised training method with a temporally-truncated BPTT algorithm. The proposed algorithm exploits both temporal and spatial locality in BPTT and achieves a significant reduction in computational cost, including GPU memory utilization, main memory accesses, and arithmetic operations. We thoroughly explore the design space of temporal truncation length and local training block size and benchmark their impact on the classification accuracy of different networks across different types of tasks. The results reveal that temporal truncation reduces accuracy on frame-based datasets but improves it on event-based datasets. Despite the resulting information loss, local training is capable of alleviating overfitting. The combined effect of temporal truncation and local training can slow the accuracy drop and even improve accuracy. In addition, training a deep SNN model such as AlexNet on the CIFAR10-DVS dataset yields a 7.26% increase in accuracy, an 89.94% reduction in GPU memory, a 10.79% reduction in memory accesses, and a 99.64% reduction in MAC operations compared with standard end-to-end BPTT. The proposed method thus shows high potential to enable fast and energy-efficient on-chip training for real-time learning at the edge.


                Author and article information

                Journal
                Frontiers in Neuroscience (Front. Neurosci.)
                Publisher: Frontiers Media S.A.
                ISSN: 1662-4548 (print); 1662-453X (electronic)
                Published: 06 April 2023
                Volume: 17, Article: 1047008
                Affiliations
                1. Sensors Lab, Advanced Membranes and Porous Materials Center (AMPMC), Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
                2. Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
                3. Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
                Author notes

                Edited by: Yiran Chen, Duke University, United States

                Reviewed by: Fangxin Liu, Shanghai Jiao Tong University, China; Guoqi Li, Tsinghua University, China

                *Correspondence: Khaled Nabil Salama, khaled.salama@kaust.edu.sa

                This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience

                Article identifiers
                DOI: 10.3389/fnins.2023.1047008
                PMCID: 10117667
                PMID: 37090791
                Copyright © 2023 Guo, Fouda, Eltawil and Salama.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 17 September 2022
                Accepted: 20 March 2023
                Page count
                Figures: 15, Tables: 2, Equations: 19, References: 76, Pages: 19, Words: 12466
                Funding
                Funded by: King Abdullah University of Science and Technology (funder DOI: 10.13039/501100004052)
                This research was funded by the King Abdullah University of Science and Technology (KAUST) AI Initiative.
                Categories
                Neuroscience, Original Research

                Keywords
                backpropagation through time, deep learning, energy-efficient training, local learning, neuromorphic computing, spiking neural networks
