
      An explainable deep machine vision framework for plant stress phenotyping

      research-article


          Significance

          Plant stress identification based on visual symptoms has predominantly remained a manual exercise performed by trained pathologists, primarily due to the occurrence of confounding symptoms. However, manual rating is tedious and time-consuming and suffers from inter- and intrarater variability. Our work resolves these issues via explainable deep machine learning, automating the process of plant stress identification, classification, and quantification. We construct a highly accurate model that can not only deliver trained-pathologist-level performance but can also explain which visual symptoms are used to make predictions. We demonstrate that our method is applicable to a large variety of biotic and abiotic stresses and is transferable to other imaging conditions and plants.

          Abstract

          Current approaches for accurate identification, classification, and quantification of biotic and abiotic stresses in crop research and production are predominantly visual and require specialized training. However, such techniques are hindered by subjectivity resulting from inter- and intrarater cognitive variability. This translates to erroneous decisions and a significant waste of resources. Here, we demonstrate a machine learning framework’s ability to identify and classify a diverse set of foliar stresses in soybean [Glycine max (L.) Merr.] with remarkable accuracy. We also present an explanation mechanism, using the top-K high-resolution feature maps, that isolates the visual symptoms used to make predictions. This unsupervised identification of visual symptoms provides a quantitative measure of stress severity, allowing for identification (type of foliar stress), classification (low, medium, or high stress), and quantification (stress severity) in a single framework without detailed symptom annotation by experts. We reliably identified and classified several biotic (bacterial and fungal diseases) and abiotic (chemical injury and nutrient deficiency) stresses by learning from over 25,000 images. The learned model is robust to input image perturbations, demonstrating viability for high-throughput deployment. We also observed that the learned model appears to be species-agnostic, suggesting a capacity for transfer learning. The availability of an explainable model that can consistently, rapidly, and accurately identify and quantify foliar stresses would have significant implications in scientific research, plant breeding, and crop production. The trained model could be deployed in mobile platforms (e.g., unmanned air vehicles and automated ground scouts) for rapid, large-scale scouting or as a mobile application for real-time detection of stress by farmers and researchers.
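The top-K feature-map idea in the abstract — ranking convolutional feature maps by their contribution to the predicted class, merging the strongest few, and thresholding the result into a symptom mask whose area gives a severity score — can be sketched as follows. This is a minimal, hypothetical illustration in the style of class-activation weighting, not the authors' exact implementation; the function name and the simple mean-activation ranking are assumptions.

```python
import numpy as np

def topk_symptom_mask(feature_maps, class_weights, k=3, threshold=0.5):
    """Combine the top-k class-relevant feature maps into a binary
    symptom mask and report severity as the flagged-pixel fraction.

    feature_maps : (C, H, W) activations from the last conv layer
    class_weights: (C,) weights linking each map to the class score
    """
    # Rank each feature map by its weighted mean activation.
    contributions = class_weights * feature_maps.mean(axis=(1, 2))
    top = np.argsort(contributions)[::-1][:k]
    # Merge the top-k maps into one saliency map, normalized to [0, 1].
    combined = feature_maps[top].sum(axis=0)
    rng = combined.max() - combined.min()
    combined = (combined - combined.min()) / (rng + 1e-8)
    # Threshold into a binary mask; severity = fraction of pixels flagged.
    mask = combined > threshold
    return mask, float(mask.mean())
```

In this sketch the severity score falls out of the same mask used for the explanation, which mirrors how the framework obtains identification, classification, and quantification without separate symptom annotation.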


          Most cited references (9)


          Using Deep Learning for Image-Based Plant Disease Detection

          Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.
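The "14 crop species and 26 diseases (or absence thereof)" target described above is typically handled by collapsing each species–disease pair into a single class label for one classifier head, with accuracy then measured on a held-out test set. A small sketch with stand-in label sets (the names below are illustrative, not the actual dataset labels):

```python
import numpy as np

# Stand-in label sets; the real dataset has 14 species and 26 conditions.
crops = ["apple", "corn", "tomato"]
conditions = ["healthy", "rust", "blight"]

def joint_label(crop, condition):
    """Encode a (crop, condition) pair as one class index, so a single
    classifier head covers every species-disease combination."""
    return crops.index(crop) * len(conditions) + conditions.index(condition)

def held_out_accuracy(y_pred, y_true):
    """Top-1 accuracy on a held-out test set."""
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))
```

With the full label sets this encoding yields one softmax over all species–disease combinations, which is what the reported held-out accuracy is computed against.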

            Machine Learning for High-Throughput Stress Phenotyping in Plants.

            Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. The four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits.

              Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

              The latest generation of convolutional neural networks (CNNs) has achieved impressive results in image classification. This paper concerns a new approach to developing a plant disease recognition model, based on leaf image classification, using deep convolutional networks. The novel training approach and methodology facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases and distinguish them from healthy leaves, with the ability to separate plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required to implement this disease recognition model are fully described throughout the paper, starting from gathering images to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The developed model achieved precision between 91% and 98% for separate class tests, with an average of 96.3%.
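The reporting style above — precision for each class separately, plus an overall average — corresponds to per-class precision with a macro average. A minimal sketch of that metric (the function name is illustrative; this is not the paper's evaluation code):

```python
import numpy as np

def per_class_precision(y_true, y_pred, classes):
    """Precision per class plus the macro average, matching the
    'separate class tests, with an overall average' reporting style."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    precisions = {}
    for c in classes:
        predicted_c = y_pred == c  # samples the model assigned to class c
        if not predicted_c.any():
            precisions[c] = 0.0    # no predictions for c: define as zero
        else:
            precisions[c] = float((y_true[predicted_c] == c).mean())
    macro = float(np.mean(list(precisions.values())))
    return precisions, macro
```

The macro average weights every class equally, so rare disease classes count as much as common ones — one plausible reading of a 91–98% per-class range averaging to 96.3%.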

                Author and article information

                Journal
                Proceedings of the National Academy of Sciences of the United States of America (Proc. Natl. Acad. Sci. U.S.A.; PNAS)
                Publisher: National Academy of Sciences
                ISSN: 0027-8424 (print); 1091-6490 (online)
                Issue date: 1 May 2018 (published online 16 April 2018)
                Volume 115, Issue 18, Pages 4613-4618
                Affiliations
                [1] aDepartment of Mechanical Engineering, Iowa State University , Ames, IA 50011;
                [2] bDepartment of Agronomy, Iowa State University , Ames, IA 50011
                Author notes
                2To whom correspondence may be addressed. Email: soumiks@iastate.edu or arti@iastate.edu.

                Edited by Sheng Yang He, Department of Energy, Plant Research Laboratory, Michigan State University, East Lansing, MI, and approved March 21, 2018 (received for review September 29, 2017)

                Author contributions: A.K.S., B.G., A.S., and S.S. designed research; S.G., D.B., A.K.S., B.G., A.S., and S.S. performed research; S.G., B.G., and S.S. contributed new reagents/analytic tools; S.G., D.B., A.K.S., B.G., A.S., and S.S. analyzed data; and S.G., D.B., A.K.S., B.G., A.S., and S.S. wrote the paper.

                1S.G. and D.B. contributed equally to this work.

                Author information
                http://orcid.org/0000-0001-9424-5655
                http://orcid.org/0000-0002-7522-037X
                Article
                Publisher article ID: 201716999
                DOI: 10.1073/pnas.1716999115
                PMCID: PMC5939070
                PMID: 29666265
                Copyright © 2018 the Author(s). Published by PNAS.

                This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

                Page count
                Pages: 6
                Funding
                Funded by: USDA NIFA
                Award ID: 2017-67007-26151
                Funded by: USDA CRIS
                Award ID: IOW04403
                Categories
                Biological Sciences
                Agricultural Sciences
                Physical Sciences
                Computer Sciences

                Keywords: plant stress phenotyping, machine learning, explainable deep learning, resolving rater variabilities, precision agriculture
