
      A novel approach based on combining deep learning models with statistical methods for COVID-19 time series forecasting


          Abstract

          The COVID-19 pandemic has disrupted the economy and businesses and impacted all facets of people's lives. It is critical to forecast the number of infected cases to make accurate decisions on the measures needed to control the outbreak. While deep learning models have proved effective in this context, time series augmentation can improve their performance. In this paper, we use time series augmentation techniques to create new time series that take into account the characteristics of the original series, which we then use to generate enough samples to fit deep learning models properly. The proposed method is applied to COVID-19 time series forecasting using three deep learning techniques: (1) long short-term memory (LSTM), (2) gated recurrent units (GRU), and (3) convolutional neural networks (CNN). In terms of the symmetric mean absolute percentage error and root mean square error measures, the proposed method significantly improves the performance of LSTM and CNN, while the improvement for GRU is moderate. Finally, we present a summary of the top augmentation model as well as a visual comparison of the actual and forecasted data for each country.
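          The abstract does not spell out the augmentation procedure or the error formulas, so the following Python sketch is illustrative only: jitter_augment is a hypothetical noise-based augmenter (one common scheme, not necessarily the paper's statistical method), while smape and rmse implement common definitions of the two reported measures (SMAPE in particular has several variants in the literature).

              import numpy as np

              def smape(actual, forecast):
                  """Symmetric mean absolute percentage error, in percent."""
                  actual = np.asarray(actual, dtype=float)
                  forecast = np.asarray(forecast, dtype=float)
                  denom = (np.abs(actual) + np.abs(forecast)) / 2.0
                  return 100.0 * np.mean(np.abs(forecast - actual) / denom)

              def rmse(actual, forecast):
                  """Root mean square error."""
                  actual = np.asarray(actual, dtype=float)
                  forecast = np.asarray(forecast, dtype=float)
                  return float(np.sqrt(np.mean((forecast - actual) ** 2)))

              def jitter_augment(series, n_copies=10, noise_scale=0.05, seed=0):
                  """Create new series by adding Gaussian noise scaled to the
                  original series' standard deviation -- a common augmentation
                  scheme, not necessarily the one used in the paper."""
                  rng = np.random.default_rng(seed)
                  series = np.asarray(series, dtype=float)
                  sigma = noise_scale * series.std()
                  return [series + rng.normal(0.0, sigma, size=series.shape)
                          for _ in range(n_copies)]

          Each augmented copy can then be windowed into input/target samples and pooled with the original series to fit the LSTM, GRU, or CNN models.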


          Most cited references (27)


          Long Short-Term Memory

          Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade-correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
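          As a concrete illustration of the multiplicative gates and the additive cell state (the "constant error carousel"), here is a minimal NumPy sketch of one step of an LSTM cell in its modern form; note that the forget gate postdates the 1997 paper, and the stacked parameter layout is an assumption of this sketch.

              import numpy as np

              def sigmoid(x):
                  return 1.0 / (1.0 + np.exp(-x))

              def lstm_step(x, h_prev, c_prev, W, U, b):
                  """One LSTM step. W (4n x d), U (4n x n), and b (4n,) stack
                  the input, forget, and output gate parameters plus the
                  candidate update; n is the hidden size, d the input size."""
                  n = h_prev.shape[0]
                  z = W @ x + U @ h_prev + b     # all four pre-activations at once
                  i = sigmoid(z[0 * n:1 * n])    # input gate: admit new information
                  f = sigmoid(z[1 * n:2 * n])    # forget gate (added after 1997)
                  o = sigmoid(z[2 * n:3 * n])    # output gate: expose the state
                  g = np.tanh(z[3 * n:4 * n])    # candidate cell update
                  c = f * c_prev + i * g         # additive state update: the
                                                 # "constant error carousel"
                  h = o * np.tanh(c)             # gated hidden output
                  return h, c

          Because the cell state c is updated additively rather than by repeated matrix multiplication, gradients flowing through it do not decay the way they do in plain recurrent backpropagation.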

            Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis

            As discussed in the previous statistical notes, although many statistical methods have been proposed to test normality of data in various ways, there is no current gold standard method. The eyeball test may be useful for medium to large sized (e.g., n > 50) samples, but may not be useful for small samples. The formal normality tests, including the Shapiro-Wilk test and the Kolmogorov-Smirnov test, may be used for small to medium sized samples (e.g., n < 300).

            1) Skewness and kurtosis. Skewness is a measure of the asymmetry of a distribution; West et al. (1996) proposed a reference of substantial departure from normality as an absolute skew value > 2. Kurtosis is a measure of the peakedness of a distribution. The original kurtosis value is sometimes called kurtosis (proper), and West et al. (1996) proposed a reference of substantial departure from normality as an absolute kurtosis (proper) value > 7. For some practical reasons, most statistical packages such as SPSS provide 'excess' kurtosis, obtained by subtracting 3 from the kurtosis (proper). The excess kurtosis should be zero for a perfectly normal distribution. Distributions with positive excess kurtosis are called leptokurtic, meaning a high peak, and distributions with negative excess kurtosis are called platykurtic, meaning a flat-topped curve.

            2) Normality test using skewness and kurtosis. A z-test is applied for a normality test using skewness and kurtosis: a z-score is obtained by dividing the skew value or excess kurtosis by its standard error. As the standard errors get smaller when the sample size increases, z-tests under the null hypothesis of normal distribution tend to be easily rejected in large samples whose distributions may not substantially differ from normality, while in small samples the null hypothesis of normality tends to be accepted more easily than necessary. Therefore, the critical values for rejecting the null hypothesis need to differ according to the sample size, as follows. For small samples (n < 50), if the absolute z-score for either skewness or kurtosis is larger than 1.96, which corresponds with an alpha level of 0.05, reject the null hypothesis and conclude that the distribution of the sample is non-normal. For medium-sized samples (50 < n < 300), reject the null hypothesis at an absolute z-value over 3.29, which corresponds with an alpha level of 0.05, and conclude that the distribution of the sample is non-normal. For sample sizes greater than 300, rely on the histograms and the absolute values of skewness and kurtosis without considering z-values: either an absolute skew value larger than 2 or an absolute kurtosis (proper) larger than 7 may be used as a reference for determining substantial non-normality. Referring to Table 1 and Figure 1, we could conclude that all the data seem to satisfy the assumption of normality, even though the histogram of the smallest sample does not appear as a symmetrical bell shape and the formal normality tests rejected the normality null hypothesis for the largest sample.

            3) How strict is the assumption of normality? Though the humble t-test (assuming equal variances) and analysis of variance (ANOVA) with balanced sample sizes are said to be 'robust' to moderate departures from normality, it is generally not preferable to rely on this feature and omit the data evaluation procedure. A combination of visual inspection, assessment using skewness and kurtosis, and formal normality tests can be used to assess whether the assumption of normality is acceptable. When the data show substantial departure from normality, we may either transform the data, e.g., by taking logarithms, or select a nonparametric method for which the normality assumption is not required.
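            As a sketch of the procedure just described, the z-scores and the sample-size dependent cutoffs might be computed as follows in Python; note that SciPy's default moment estimators differ slightly from the adjusted estimators that packages such as SPSS report.

                import numpy as np
                from scipy import stats

                def normality_by_moments(x):
                    """z-tests on skewness and excess kurtosis with the cutoffs
                    described above: 1.96 for n < 50, 3.29 for 50 <= n < 300, and
                    absolute reference values (skew > 2, kurtosis (proper) > 7)
                    for larger samples."""
                    x = np.asarray(x, dtype=float)
                    n = x.size
                    skew = stats.skew(x)        # sample skewness
                    exkurt = stats.kurtosis(x)  # excess kurtosis (proper minus 3)
                    # Standard errors of skewness and of excess kurtosis.
                    ses = np.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
                    sek = 2.0 * ses * np.sqrt((n ** 2 - 1) / ((n - 3) * (n + 5)))
                    z_skew, z_kurt = skew / ses, exkurt / sek
                    if n < 50:
                        nonnormal = abs(z_skew) > 1.96 or abs(z_kurt) > 1.96
                    elif n < 300:
                        nonnormal = abs(z_skew) > 3.29 or abs(z_kurt) > 3.29
                    else:
                        # Large samples: judge by absolute values, not z-scores.
                        nonnormal = abs(skew) > 2 or abs(exkurt + 3) > 7
                    return z_skew, z_kurt, nonnormal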

              Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review

              Convolutional neural networks (CNNs) have been applied to visual tasks since the late 1980s. However, despite a few scattered applications, they were dormant until the mid-2000s when developments in computing power and the advent of large amounts of labeled data, supplemented by improved algorithms, contributed to their advancement and brought them to the forefront of a neural network renaissance that has seen rapid progression since 2012. In this review, which focuses on the application of CNNs to image classification tasks, we cover their development, from their predecessors up to recent state-of-the-art deep learning systems. Along the way, we analyze (1) their early successes, (2) their role in the deep learning renaissance, (3) selected symbolic works that have contributed to their recent popularity, and (4) several improvement attempts by reviewing contributions and challenges of over 300 publications. We also introduce some of their current trends and remaining challenges.
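              To make the review's subject concrete, the following is a minimal NumPy sketch of the basic stage such classifiers stack (convolution, rectification, pooling); the kernel and array sizes are illustrative only.

                  import numpy as np

                  def conv2d(image, kernel):
                      """'Valid' 2-D cross-correlation -- the core CNN operation."""
                      H, W = image.shape
                      kh, kw = kernel.shape
                      out = np.empty((H - kh + 1, W - kw + 1))
                      for i in range(out.shape[0]):
                          for j in range(out.shape[1]):
                              out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
                      return out

                  def relu(x):
                      return np.maximum(x, 0.0)

                  def max_pool(x, size=2):
                      """Non-overlapping max pooling; any ragged border is dropped."""
                      H = (x.shape[0] // size) * size
                      W = (x.shape[1] // size) * size
                      return x[:H, :W].reshape(H // size, size,
                                               W // size, size).max(axis=(1, 3))

                  # One feature-extraction stage: convolve, rectify, downsample.
                  image = np.random.rand(28, 28)
                  kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # toy vertical-edge detector
                  features = max_pool(relu(conv2d(image, kernel)))  # shape (13, 13)

              Deep classifiers repeat this stage with many learned kernels and finish with fully connected layers and a softmax over the class labels.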

                Author and article information

                Contributors
                abbasimehr@azaruniv.ac.ir
                Journal
                Neural Comput Appl (Neural Computing & Applications)
                Springer London (London)
                ISSN: 0941-0643 (print); 1433-3058 (electronic)
                10 October 2021: 1-15
                Affiliations
                [1] Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz, Iran (GRID grid.411468.e; ISNI 0000 0004 0417 5692)
                [2] Department of Engineering Systems and Environment, University of Virginia, Charlottesville, Virginia, USA (GRID grid.27755.32; ISNI 0000 0000 9136 933X)
                [3] Present address: School of Industrial and Information Engineering, Politecnico di Milano University, Milano, Italy (GRID grid.4643.5; ISNI 0000 0004 1937 0327)
                Author information
                http://orcid.org/0000-0001-8615-5553
                Article
                Article no. 6548
                DOI: 10.1007/s00521-021-06548-9
                PMCID: PMC8502508
                PMID: 34658536
                © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2021

                This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.

                History
                Received: 1 December 2020
                Accepted: 14 September 2021
                Categories
                Original Article

                Neural & Evolutionary Computing
                Keywords: deep learning, time series forecasting, augmentation methods, COVID-19 pandemic
