
      Optimized Dual Fire Attention Network and Medium-Scale Fire Classification Benchmark



Most cited references (66)


Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation


            Impact of Australia's catastrophic 2019/20 bushfire season on communities and environment. Retrospective analysis and current trends


              Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization, and Huffman coding, which work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency. Published as a conference paper at ICLR 2016 (oral).
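
The abstract above describes a pruning and weight-sharing pipeline. The following is a minimal NumPy sketch of those first two stages applied to a single weight matrix; it is an illustration, not the paper's implementation. The 90% sparsity target, the 32 shared values (5 bits), and the linear centroid initialization are assumptions chosen for the example, and the retraining and Huffman coding stages are omitted.

import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    # Zero out the smallest-magnitude weights until `sparsity` fraction are zero.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def quantize_shared(weights, mask, n_clusters=32):
    # Cluster the surviving weights into n_clusters shared values
    # (32 centroids correspond to 5-bit indices per connection).
    nonzero = weights[mask]
    # Linearly initialized centroids, refined with a few rounds of 1-D k-means.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(10):
        assign = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = nonzero[assign == k].mean()
    quantized = weights.copy()
    quantized[mask] = centroids[assign]
    return quantized, centroids

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)   # toy weight matrix
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
W_q, codebook = quantize_shared(W_pruned, mask, n_clusters=32)
print(f"nonzero fraction: {mask.mean():.1%}, shared values: {len(np.unique(W_q[mask]))}")

In the full pipeline the network would be retrained after each stage so that the remaining connections and the shared centroids recover the original accuracy, and the centroid indices would then be Huffman-coded for storage.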

                Author and article information

Journal
IEEE Transactions on Image Processing (IEEE Trans. on Image Process.)
Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1057-7149 (print); 1941-0042 (electronic)
Year: 2022
Volume: 31
Pages: 6331-6343
                Affiliations
                [1 ]Department of Software, Sejong University, Seoul, South Korea
                [2 ]School of Computer Science Engineering and Technology, Bennett University, Greater Noida, Uttar Pradesh, India
                Article
DOI: 10.1109/TIP.2022.3207006
                © 2022

                https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html

                https://doi.org/10.15223/policy-029

                https://doi.org/10.15223/policy-037

