
      Evaluation of College English Teaching Quality Based on Improved BT-SVM Algorithm

      research-article
      Computational Intelligence and Neuroscience
      Hindawi


          Abstract

          With the development of teaching evaluation programs, colleges and universities have carried out reforms according to their actual circumstances. As evaluation activities have spread, many universities are eager to establish their own teaching quality evaluation systems so as to assess teaching quality in advance. The SVM is one of the most widely used machine learning algorithms and enables efficient statistical learning from a very limited number of samples. Given this learning performance, the SVM is well suited to a teaching quality evaluation system. In this paper, we optimize the existing binary-tree multi-class classification algorithm and propose a new method. Drawing on the teaching quality evaluation systems commonly used in colleges and universities, we apply the binary tree support vector machine classification algorithm and design comparison experiments; the results show that the proposed evaluation model has strong generalization ability, higher classification accuracy, and better classification efficiency.
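          As a rough illustration of the binary-tree SVM (BT-SVM) idea referenced in the abstract, the sketch below builds a tree of binary SVMs over the class set and classifies by walking from root to leaf. This is a minimal sketch of the general BT-SVM scheme, not the paper's specific improvement; the centroid-distance grouping heuristic and all names below are illustrative assumptions.

          ```python
          # Minimal BT-SVM sketch (assumptions: sklearn's SVC as the base binary
          # classifier, classes grouped at each node by a centroid-norm heuristic).
          import numpy as np
          from sklearn.svm import SVC

          class BTSVMNode:
              def __init__(self, classes):
                  self.classes = classes   # class labels reachable from this node
                  self.svm = None          # binary SVM separating left vs. right group
                  self.left = None
                  self.right = None

          def _split_classes(X, y, classes):
              # Heuristic split: order classes by centroid norm and halve the list.
              centroids = {c: X[y == c].mean(axis=0) for c in classes}
              ordered = sorted(classes, key=lambda c: np.linalg.norm(centroids[c]))
              half = len(ordered) // 2
              return ordered[:half], ordered[half:]

          def build_bt_svm(X, y, classes):
              node = BTSVMNode(list(classes))
              if len(node.classes) == 1:
                  return node              # leaf: a single class remains
              left, right = _split_classes(X, y, node.classes)
              mask = np.isin(y, node.classes)
              target = np.isin(y[mask], right).astype(int)   # 0 = left group, 1 = right group
              node.svm = SVC(kernel="rbf", C=1.0).fit(X[mask], target)
              node.left = build_bt_svm(X, y, left)
              node.right = build_bt_svm(X, y, right)
              return node

          def predict_bt_svm(node, x):
              # Each internal node costs one SVM evaluation, so a K-class decision
              # needs at most ceil(log2 K) evaluations.
              while node.svm is not None:
                  node = node.right if node.svm.predict(x.reshape(1, -1))[0] == 1 else node.left
              return node.classes[0]
          ```

          The logarithmic number of SVM evaluations per prediction is what makes the binary-tree decomposition attractive for multi-class problems compared with one-vs-one or one-vs-rest schemes.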


          Most cited references (27)


          A Few Shot Classification Methods Based on Multiscale Relational Networks

              Learning information from a single sample or a few samples is called few-shot learning. This learning method addresses deep learning's dependence on large sample sizes. Deep learning achieves few-shot learning through meta-learning: "how to learn by using previous experience." Therefore, this paper considers how deep learning methods can use meta-learning to learn and generalize from a small sample size in image classification. The main contents are as follows. Practicing learning across a wide range of tasks enables deep learning methods to use previous empirical knowledge. However, this approach is limited by the quality of feature extraction and by the choice of measurement method between the support set and the target set. Therefore, this paper designs a multi-scale relational network (MSRN) to address these problems. The experimental results show that the simple design of the MSRN can achieve higher performance. Furthermore, it improves accuracy on the datasets with fewer samples and alleviates overfitting. However, to ensure that a uniform measurement applies to all tasks, few-shot classification based on metric learning must ensure that the task set is homologously distributed.

            Characterization inference based on joint-optimization of multi-layer semantics and deep fusion matching network

            The whole sentence representation reasoning process comprises both a sentence representation module and a semantic reasoning module. This paper combines a multi-layer semantic representation network with a deep fusion matching network to overcome the limitation of considering only a sentence representation module or only a reasoning model. It proposes a joint optimization method based on multi-layer semantics, called the Semantic Fusion Deep Matching Network (SCF-DMN), to explore the influence of sentence representation and reasoning models on reasoning performance. Experiments on text entailment recognition tasks show that the joint optimization representation reasoning method performs better than existing methods. The sentence representation optimization module and the improved optimization reasoning model each promote reasoning performance when used individually; however, optimization of the reasoning model has a more significant impact on the final reasoning results. Furthermore, a comparison of each module's performance shows a mutual constraint between the sentence representation module and the reasoning model. This constraint restricts overall performance, so reasoning performance does not add up linearly. Overall, by comparing the proposed methods with other existing methods tested on the same database, the proposed method addresses the lack of in-depth interactive information and interpretability in model design, which should be inspirational for future improvement and study of natural language reasoning.

              A Deep Fusion Matching Network Semantic Reasoning Model

              As a vital technology of natural language understanding, sentence representation reasoning technology mainly focuses on sentence representation methods and reasoning models. Although performance has improved, there are still problems such as incomplete sentence semantic expression, lack of depth in the reasoning model, and lack of interpretability of the reasoning process. Given the reasoning model's lack of reasoning depth and interpretability, a deep fusion matching network is designed in this paper, which mainly includes a coding layer, a matching layer, a dependency convolution layer, an information aggregation layer, and an inference prediction layer. Based on a deep matching network, the matching layer is improved: a heuristic matching algorithm replaces the bidirectional long short-term memory neural network to simplify the interactive fusion. As a result, it improves the reasoning depth and reduces the complexity of the model. The dependency convolution layer uses a tree-structured convolution network to extract sentence structure information along the sentence dependency tree, which improves the interpretability of the reasoning process. Finally, the performance of the model is verified on several datasets. The results show that the reasoning effect of the model is better than that of shallow reasoning models, and the accuracy on the SNLI test set reaches 89.0%. At the same time, the semantic correlation analysis results show that the dependency convolution layer helps improve the interpretability of the reasoning process.
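              The "heuristic matching" step mentioned above is commonly realized in the sentence-matching literature as a concatenation of the two sentence vectors with their difference and elementwise product; the sketch below shows that standard formulation as an assumption, not a reproduction of the cited paper's exact variant.

              ```python
              # Minimal sketch of a heuristic matching fusion step (common NLI formulation,
              # assumed here; the cited paper may use a different variant).
              import torch

              def heuristic_match(premise: torch.Tensor, hypothesis: torch.Tensor) -> torch.Tensor:
                  """Fuse two sentence vectors of shape (batch, d) into one (batch, 4*d) vector."""
                  return torch.cat(
                      [premise, hypothesis, premise - hypothesis, premise * hypothesis],
                      dim=-1,
                  )
              ```

              Because this fusion is a fixed, parameter-free operation, it is cheaper than an interaction BiLSTM, which matches the abstract's claim of reduced model complexity.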

                Author and article information

                Journal: Computational Intelligence and Neuroscience (Comput Intell Neurosci)
                Publisher: Hindawi
                ISSN: 1687-5265 (print); 1687-5273 (electronic)
                Published: 19 August 2022
                Volume: 2022
                Article ID: 2974813
                Affiliations
                Jinhua Advanced Research Institute, Jinhua 321000, China
                Author notes

                Academic Editor: Arpit Bhardwaj

                Author information
                ORCID: https://orcid.org/0000-0001-5657-7114
                Article
                DOI: 10.1155/2022/2974813
                PMCID: PMC9417759
                PMID: 36035833
                Copyright © 2022 Minsheng Lou.

                This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 22 June 2022
                Accepted: 22 July 2022
                Categories
                Research Article

                Neurosciences
