
      Aerial Imagery Analysis – Quantifying Appearance and Number of Sorghum Heads for Applications in Breeding and Agronomy

      research-article


          Abstract

          Sorghum (Sorghum bicolor L. Moench) is a C4 tropical grass that plays an essential role in providing nutrition to humans and livestock, particularly in marginal rainfall environments. The timing of head development and the number of heads per unit area are key adaptation traits to consider in agronomy and breeding but are time consuming and labor intensive to measure. We propose a two-step machine-based image processing method to detect and count the number of heads from high-resolution images captured by unmanned aerial vehicles (UAVs) in a breeding trial. To demonstrate the performance of the proposed method, 52 images were manually labeled; the precision and recall of head detection were 0.87 and 0.98, respectively, and the coefficient of determination (R²) between the manual and new methods of counting was 0.84. To verify the utility of the method in breeding programs, a geolocation-based plot segmentation method was applied to pre-processed ortho-mosaic images to extract >1,000 plots from the original RGB images. Forty of these plots were randomly selected and labeled manually; the precision and recall of detection were 0.82 and 0.98, respectively, and the coefficient of determination between manual and algorithm counting was 0.56. The major source of error was plant morphology: heads could extend beyond the plot in which the plants were sown and thus be allocated to a neighboring plot. Finally, potential applications in yield estimation from UAV-based imagery in agronomy experiments and in scouting of production fields are also discussed.
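          To make the two-step detect-and-count idea concrete, below is a minimal sketch of a colour-segmentation-plus-blob-counting pipeline in Python with OpenCV. This is not the authors' implementation: the HSV thresholds and component-area bounds are hypothetical placeholders that would need tuning for real UAV imagery, and the R² helper simply squares the Pearson correlation between manual and algorithm counts.

```python
# Minimal illustrative sketch, NOT the authors' method: step 1 segments
# head-like pixels by colour, step 2 counts cleaned connected components.
import cv2
import numpy as np

def count_heads(bgr_image: np.ndarray) -> int:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Step 1: rough colour segmentation (placeholder HSV range).
    mask = cv2.inRange(hsv, (10, 40, 80), (30, 255, 255))
    # Remove speckle noise before counting.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    # Step 2: count blobs whose area is plausible for a sorghum head
    # (the 50-5000 px bounds are placeholders tied to image resolution).
    _, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]  # index 0 is the background
    return int(np.sum((areas > 50) & (areas < 5000)))

def r_squared(manual: np.ndarray, algorithm: np.ndarray) -> float:
    # Coefficient of determination between manual and algorithm counts.
    r = np.corrcoef(manual, algorithm)[0, 1]
    return float(r ** 2)
```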

          Related collections

          Most cited references (27)


          Machine Learning for High-Throughput Stress Phenotyping in Plants.

          Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits.
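            As a rough illustration of the "classification" stage of the ICQP cycle, the sketch below fits a generic scikit-learn classifier on pre-extracted image features; the feature matrix and the three stress classes are synthetic placeholders, not data from the paper.

```python
# Hedged sketch of ICQP stage (ii), classification, with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # 200 samples x 16 image-derived features
y = rng.integers(0, 3, size=200)    # 3 hypothetical stress classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.2f}")
```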

            A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition

            Plant diseases and pests are a major challenge in the agriculture sector. Accurate and fast detection of diseases and pests in plants could help to develop early treatment techniques while substantially reducing economic losses. Recent developments in deep neural networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in place by camera devices with various resolutions. Our goal is to find the most suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purposes of this work are called “deep learning meta-architectures”. We combine each of these meta-architectures with “deep feature extractors” such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant’s surrounding area.
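            For orientation, here is a minimal sketch of pairing one of the named meta-architectures (Faster R-CNN) with a ResNet feature extractor via torchvision's stock model; it is not the authors' training pipeline, and num_classes = 10 (nine disease/pest classes plus background) is an assumption inferred from the abstract.

```python
# Hedged sketch: Faster R-CNN meta-architecture + ResNet-50 extractor.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained
# Swap the box-predictor head: 9 foreground classes + 1 background.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=10)

model.eval()
with torch.no_grad():
    image = [torch.rand(3, 480, 640)]  # one RGB image scaled to [0, 1]
    detections = model(image)          # list of dicts: boxes, labels, scores
print(detections[0]["boxes"].shape)
```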

              DeepFruits: A Fruit Detection System Using Deep Neural Networks

              This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work, with the F1 score (which takes into account both precision and recall) improving from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding-box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit.
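              The early-fusion variant can be sketched as channel concatenation ahead of the network: the snippet below widens a stock ResNet's first convolution to accept a four-channel RGB+NIR input. This illustrates the fusion idea only and is not the DeepFruits code.

```python
# Hedged sketch of early fusion: RGB and NIR stacked into 4 channels.
import torch
import torch.nn as nn
from torchvision.models import resnet50

rgb = torch.rand(1, 3, 224, 224)       # colour modality
nir = torch.rand(1, 1, 224, 224)       # near-infrared modality
fused = torch.cat([rgb, nir], dim=1)   # early fusion -> (1, 4, H, W)

backbone = resnet50(weights=None)      # use weights="DEFAULT" for pretrained
old = backbone.conv1
# Rebuild conv1 with 4 input channels; when starting from pretrained
# weights, copying the RGB filters and averaging them into the NIR
# channel preserves the learned features.
backbone.conv1 = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                           stride=old.stride, padding=old.padding, bias=False)
with torch.no_grad():
    backbone.conv1.weight[:, :3] = old.weight
    backbone.conv1.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)

logits = backbone(fused)               # (1, 1000) from the stock classifier
print(logits.shape)
```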

                Author and article information

                Journal
                Frontiers in Plant Science (Front. Plant Sci.)
                Frontiers Media S.A.
                ISSN: 1664-462X
                Published: 23 October 2018
                Volume 9, Article 1544
                Affiliations
                [1] International Field Phenomics Research Laboratory, Institute for Sustainable Agro-ecosystem Services, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
                [2] Agriculture and Food, Commonwealth Scientific and Industrial Research Organisation, St Lucia, QLD, Australia
                [3] Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Toowoomba, QLD, Australia
                [4] Montpellier SupAgro, Montpellier, France
                [5] Laboratory of Biometry and Bioinformatics, Department of Agricultural and Environmental Biology, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
                [6] Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Warwick, QLD, Australia
                [7] School of Agriculture and Food Sciences, The University of Queensland, Gatton, QLD, Australia
                Author notes

                Edited by: Yann Guédon, Centre de Coopération Internationale en Recherche Agronomique pour le Développement (CIRAD), France

                Reviewed by: Zhanguo Xin, Agricultural Research Service (USDA), United States; Thiago Teixeira Santos, Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA), Brazil

                *Correspondence: Wei Guo, guowei@isas.a.u-tokyo.ac.jp

                This article was submitted to Technical Advances in Plant Science, a section of the journal Frontiers in Plant Science

                Article
                DOI: 10.3389/fpls.2018.01544
                PMCID: PMC6206408
                PMID: 30405675
                Copyright © 2018 Guo, Zheng, Potgieter, Diot, Watanabe, Noshita, Jordan, Wang, Watson, Ninomiya and Chapman.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 23 June 2018
                Accepted: 02 October 2018
                Page count
                Figures: 9, Tables: 1, Equations: 0, References: 33, Pages: 9
                Categories
                Plant Science
                Original Research

                Plant science & Botany
                Keywords: high-throughput phenotyping, UAV remote sensing, sorghum head detecting and counting, breeding field, image analysis
