
      Application of machine-learning methods in forest ecology: recent progress and future challenges


          Abstract

Machine learning (ML), an important branch of artificial intelligence, is increasingly being applied in sciences such as forest ecology. Here, we review three commonly used ML methods, decision-tree learning, artificial neural networks, and support vector machines, and discuss their applications in four aspects of forest ecology over the last decade: (i) species distribution models, (ii) carbon cycles, (iii) hazard assessment and prediction, and (iv) other applications in forest management. Although ML approaches are useful for classification, modeling, and prediction in forest ecology research, their further expansion is limited by the lack of suitable data and the relatively high threshold of applications. Moreover, the combined use of multiple algorithms and improved communication and cooperation between ecological researchers and ML developers remain major challenges for future ecological research. We suggest that ML will become an increasingly attractive tool for ecologists in the face of "big data" and that ecologists will gain access to more types of data, such as sound and video, in the near future, possibly opening new avenues of research in forest ecology.
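As a toy illustration of the decision-tree learning the review covers, the sketch below fits a single-split "stump" to synthetic species presence/absence records, the simplest building block of a species distribution model. The data, the single temperature feature, and the exhaustive threshold search are illustrative assumptions, not the methodology of any study reviewed.

```python
# Minimal sketch of decision-tree-style species distribution modelling:
# find the one threshold on a climate variable that best separates
# presence (1) from absence (0) records.

def best_stump(values, labels):
    """Exhaustively search midpoints between sorted feature values and
    return the (threshold, accuracy) pair with the best training accuracy."""
    pairs = sorted(zip(values, labels))
    candidates = [(pairs[i][0] + pairs[i + 1][0]) / 2
                  for i in range(len(pairs) - 1)]
    best = (None, -1.0)
    for t in candidates:
        preds = [1 if v > t else 0 for v in values]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best

# Synthetic presence/absence records keyed on mean annual temperature (degC).
temps   = [2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.5]
present = [0,   0,   0,   0,   1,   1,   1,   1]

threshold, accuracy = best_stump(temps, present)
print(f"split at {threshold:.2f} degC, training accuracy {accuracy:.2f}")
# -> split at 5.75 degC, training accuracy 1.00
```

A full decision tree applies this split search recursively to each resulting subset; ensemble variants (e.g., random forests) average many such trees.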


Most cited references (79)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
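The backpropagation mechanism this abstract describes, using the error at the output to tell each earlier layer how to change its parameters, can be sketched for a one-hidden-unit network. The network size, weights, and values here are illustrative assumptions; the chain-rule steps are the general technique.

```python
import math

# Network: h = tanh(w1 * x), y = w2 * h, loss L = (y - t)^2.

def forward(w1, w2, x):
    h = math.tanh(w1 * x)   # hidden representation
    y = w2 * h              # output layer
    return h, y

def loss(w1, w2, x, t):
    _, y = forward(w1, w2, x)
    return (y - t) ** 2

def backprop(w1, w2, x, t):
    """Propagate dL/dy backwards through each layer via the chain rule."""
    h, y = forward(w1, w2, x)
    dy = 2 * (y - t)              # dL/dy
    dw2 = dy * h                  # dL/dw2
    dh = dy * w2                  # dL/dh, passed back to the hidden layer
    dw1 = dh * (1 - h ** 2) * x   # dL/dw1, using tanh'(z) = 1 - tanh(z)^2
    return dw1, dw2

w1, w2, x, t = 0.5, -0.3, 1.2, 0.7
dw1, dw2 = backprop(w1, w2, x, t)

# Sanity check: the analytic gradient matches a finite-difference estimate.
eps = 1e-6
num_dw1 = (loss(w1 + eps, w2, x, t) - loss(w1 - eps, w2, x, t)) / (2 * eps)
print(abs(dw1 - num_dw1) < 1e-6)  # -> True
```

Gradient descent then updates each weight by a small step against its gradient, layer by layer, exactly as in deep networks with many more layers.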

            Deep learning in neural networks: An overview

            In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

              A fast learning algorithm for deep belief nets.

              We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
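The greedy layer-wise procedure this abstract describes trains each layer as a restricted Boltzmann machine (RBM) with contrastive divergence. The sketch below shows one deterministic CD-1 weight update for a tiny RBM; real CD samples binary states stochastically, so using mean-field probabilities here is a simplifying assumption for reproducibility, and all sizes, weights, and data are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(W, v):
    # p(h_j = 1 | v) for each hidden unit j (biases omitted for brevity)
    return [sigmoid(sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(len(W[0]))]

def visible_probs(W, h):
    # p(v_i = 1 | h) for each visible unit i
    return [sigmoid(sum(W[i][j] * h[j] for j in range(len(h))))
            for i in range(len(W))]

def cd1_update(W, v0, lr=0.1):
    """One contrastive-divergence step: compare data-driven and
    one-step-reconstruction statistics, and nudge W toward the data."""
    h0 = hidden_probs(W, v0)    # positive phase (data)
    v1 = visible_probs(W, h0)   # one-step reconstruction
    h1 = hidden_probs(W, v1)    # negative phase (reconstruction)
    return [[W[i][j] + lr * (v0[i] * h0[j] - v1[i] * h1[j])
             for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[0.1, -0.2], [0.0, 0.3], [-0.1, 0.2]]   # 3 visible x 2 hidden weights
v = [1.0, 0.0, 1.0]                          # one training vector
W_new = cd1_update(W, v)
```

In the paper's greedy scheme, once one RBM is trained its hidden activities become the "data" for the next RBM, stacking layers one at a time before fine-tuning.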

                Author and article information

Journal
Environmental Reviews (Environ. Rev.)
Canadian Science Publishing
ISSN: 1181-8700 (print), 1208-6053 (online)
December 2018
Volume 26, Issue 4, Pages 339-350
                Affiliations
                [1 ]Department of Biological Sciences, University of Quebec at Montreal, Montreal, QC H3C 3P8, Canada.
                [2 ]Great Lake Forestry Centre, Canadian Forest Service, Natural Resources Canada, Sault Ste. Marie, ON P6A 2E5, Canada.
                [3 ]Institut de Recherche sur les Forêts, Université du Québec en Abitibi-Témiscamingue, Rouyn-Noranda, QC J9T 2L8, Canada.
                Article
DOI: 10.1139/er-2018-0034
                © 2018

                http://www.nrcresearchpress.com/page/about/CorporateTextAndDataMining

