
      Contextual Transformer Networks for Visual Recognition


          Abstract

Transformer with self-attention has revolutionized the field of NLP and has recently inspired the emergence of Transformer-style architecture designs with competitive results on numerous CV tasks. Nevertheless, most existing designs directly employ self-attention over a 2D feature map to obtain the attention matrix from isolated query-key pairs, leaving the rich contexts among neighboring keys under-exploited. Here we design a novel Transformer-style module, the Contextual Transformer (CoT) block, for visual recognition. It fully capitalizes on the contextual information among input keys to guide the learning of a dynamic attention matrix, thus strengthening the capacity of the visual representation. Technically, the CoT block first contextually encodes the input keys via a 3×3 convolution, yielding a static contextual representation. We then concatenate the encoded keys with the input queries and learn a dynamic multi-head attention matrix through two consecutive 1×1 convolutions. The learnt attention matrix is multiplied by the values to produce the dynamic contextual representation. The fusion of the static and dynamic contextual representations is finally taken as the output. Our CoT block can readily replace each 3×3 convolution in ResNet architectures, yielding a Transformer-style backbone named Contextual Transformer Networks (CoTNet). Through extensive experiments over a wide range of applications, we validate the superiority of CoTNet as a stronger backbone.
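
The abstract fully specifies the block's data flow, so it can be illustrated in a few lines. Below is a minimal PyTorch sketch of the CoT block as described above; the class name CoTBlock, the reduction factor, and the sigmoid-gated elementwise aggregation (a simplified stand-in for the paper's multi-head local attention) are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn as nn

    class CoTBlock(nn.Module):
        """Sketch of a Contextual Transformer (CoT) block (names and details assumed)."""

        def __init__(self, channels: int, kernel_size: int = 3, reduction: int = 4):
            super().__init__()
            # Static context: contextually encode the keys with a 3x3 convolution.
            self.key_embed = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size,
                          padding=kernel_size // 2, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            # Values via a 1x1 convolution.
            self.value_embed = nn.Sequential(
                nn.Conv2d(channels, channels, 1, bias=False),
                nn.BatchNorm2d(channels),
            )
            # Two consecutive 1x1 convolutions over [encoded keys, queries]
            # produce the dynamic attention weights.
            self.attention = nn.Sequential(
                nn.Conv2d(2 * channels, channels // reduction, 1, bias=False),
                nn.BatchNorm2d(channels // reduction),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            k_static = self.key_embed(x)          # static contextual representation
            v = self.value_embed(x)               # values
            qk = torch.cat([k_static, x], dim=1)  # the input itself serves as queries
            attn = self.attention(qk).sigmoid()   # dynamic attention (simplified gating)
            k_dynamic = attn * v                  # dynamic contextual representation
            return k_static + k_dynamic           # fuse static and dynamic contexts

    # The block preserves the input shape, so it can stand in for a 3x3
    # convolution inside a ResNet bottleneck, as the abstract describes.
    x = torch.randn(2, 64, 32, 32)
    assert CoTBlock(64)(x).shape == x.shape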


Author and article information

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE Trans. Pattern Anal. Mach. Intell.), Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 0162-8828, 2160-9292, 1939-3539
Publication date: February 1, 2023
Volume 45, Issue 2, Pages 1489-1500
Affiliations: [1] JD Explore Academy, Beijing, China
DOI: 10.1109/TPAMI.2022.3164083
PubMed ID: 35363608
Copyright: © 2023
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
Publisher policies: https://doi.org/10.15223/policy-029, https://doi.org/10.15223/policy-037
