Open Access

A Deep Learning-Based Phenotypic Analysis of Rice Root Distribution from Field Images

Research article, Plant Phenomics (AAAS)


          Abstract

          Root distribution in the soil determines a plant's capacity for nutrient and water uptake; it is therefore one of the most important factors in crop production. The trench profile method observes the root distribution underground through a rectangular hole dug close to the crop, yielding more informative images of the root distribution than other root phenotyping methods. However, considerable effort is required to segment the root area for quantification. In this study, we present a promising approach that employs a convolutional neural network for root segmentation in trench profile images. We defined two parameters, Depth50 and Width50, representing the vertical and horizontal centroid of the root distribution, respectively. Root distribution parameters for rice (Oryza sativa L.) predicted by the trained model were highly correlated with parameters calculated by manual tracing, indicating that this approach is useful for rapid quantification of the root distribution from trench profile images. Using the trained model, we quantified the root distribution parameters of 60 rice accessions, revealing the phenotypic diversity of root distributions. We conclude that combining the trench profile method with a convolutional neural network is reliable for root phenotyping and will further facilitate the study of crop roots in the field.
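The abstract describes Depth50 and Width50 only as the vertical and horizontal centroid of the root distribution. A minimal sketch of one plausible reading (the depth or width at which the cumulative root-pixel count reaches 50% of the total, computed from a binary segmentation mask) could look like the following. The function name and this exact definition are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def depth50_width50(mask):
    """Illustrative sketch: return the row (depth) and column (width)
    index at which the cumulative root-pixel count first reaches half
    of the total. `mask` is a 2-D binary array with rows = depth and
    columns = horizontal position in the trench profile image."""
    total = mask.sum()
    if total == 0:
        raise ValueError("mask contains no root pixels")
    # Cumulative root-pixel counts going down the profile (rows)
    # and across it (columns).
    row_cum = np.cumsum(mask.sum(axis=1))
    col_cum = np.cumsum(mask.sum(axis=0))
    depth50 = int(np.searchsorted(row_cum, total / 2))
    width50 = int(np.searchsorted(col_cum, total / 2))
    return depth50, width50

# Toy mask: root pixels concentrated in the upper-left region.
mask = np.zeros((10, 10), dtype=int)
mask[:4, :5] = 1
d50, w50 = depth50_width50(mask)
```

For the toy mask, half of the root pixels lie at or above row index 1 and at or left of column index 2, so the function returns (1, 2); in practice, pixel indices would be converted to centimeters using the image scale.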

          Related collections

          Most cited references (55)


          U-Net: Convolutional Networks for Biomedical Image Segmentation


            The NumPy Array: A Structure for Efficient Numerical Computation


              SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

              We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation, termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full-input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory-versus-accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient in both memory and computational time during inference. It also has significantly fewer trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road-scene and SUN RGB-D indoor-scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and the most efficient inference memory usage compared to other architectures.
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
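The pooling-index mechanism this abstract describes can be sketched in plain NumPy: max pooling that records the location of each maximum, then unpooling that scatters the pooled values back to those locations, leaving all other positions zero (the sparse maps later densified by convolution). This is a toy illustration of the mechanism, not the SegNet/Caffe implementation:

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k x k max pooling that also records the flat index of each
    maximum, mimicking the encoder step whose indices SegNet reuses."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    idx = np.zeros((h // k, w // k), dtype=int)
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            pooled[i, j] = win[r, c]
            idx[i, j] = (i * k + r) * w + (j * k + c)
    return pooled, idx

def max_unpool(pooled, idx, shape):
    """SegNet-style non-linear upsampling: place each pooled value
    back at its recorded location; everything else stays zero."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 6., 3., 1.],
              [2., 1., 0., 4.]])
p, i = max_pool_with_indices(x)
y = max_unpool(p, i, x.shape)
```

Because the decoder reuses the encoder's argmax locations instead of learning upsampling weights (as a transposed-convolution decoder would), boundary detail is preserved at no extra parameter cost, which is the memory/accuracy trade-off the abstract highlights.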

                Author and article information

                Journal: Plant Phenomics (AAAS)
                ISSN: 2643-6515
                Published: 16 October 2020
                Volume: 2020, Article ID: 3194308
                Affiliation: Institute of Crop Science, National Agriculture and Food Research Organization, 2-1-2 Kannondai, Tsukuba, Ibaraki 305-8518, Japan
                Author information
                https://orcid.org/0000-0001-6295-2208
                https://orcid.org/0000-0003-4006-954X
                Article
                DOI: 10.34133/2020/3194308
                PMC: 7706345
                PMID: 33313548
                Copyright © 2020 S. Teramoto and Y. Uga.

                Exclusive Licensee Nanjing Agricultural University. Distributed under a Creative Commons Attribution License (CC BY 4.0).

                History
                Received: 15 May 2020
                Accepted: 2 September 2020
                Funding
                Funded by: Japan Science and Technology Agency
                Award ID: JPMJCR17O1
                Categories
                Research Article
