      Is Open Access

      The Verification of Ecological Citizen Science Data: Current approaches and future possibilities


      Biodiversity Information Science and Standards

      Pensoft Publishers


Abstract

Citizen science schemes (projects) enable ecological data collection over very large spatial and temporal scales, producing datasets of high value for both pure and applied research. However, the accuracy of citizen science data is often questioned, owing to issues surrounding data quality and verification, the process by which records are checked for correctness after submission. Verification is critical for ensuring data quality and for increasing trust in such datasets, but verification approaches vary considerably among schemes. Here, we systematically review approaches to verification across ecological citizen science schemes that feature in published research, aiming to identify the options available for verification and to examine the factors that influence the approaches used (Baker et al. 2021). We reviewed 259 schemes and were able to locate verification information for 142 of them. Expert verification was the most widely used approach, especially among longer-running schemes. Community consensus was the second most common approach, used by schemes such as Snapshot Serengeti (Swanson et al. 2016) and MammalWeb (Hsing et al. 2018); it was more common among schemes with larger numbers of participants and among those requiring a photo or video with each record. Automated verification was not widely used among the schemes reviewed. Schemes that used automation, such as eBird (Kelling et al. 2011) and Project FeederWatch (Bonter and Cooper 2012), did so in conjunction with other methods such as expert verification. Expert verification has been the default approach in the past, but as the volume of data collected through citizen science schemes grows and the potential of automated approaches develops, many schemes might be able to verify data more efficiently. We propose a hierarchical approach in which the bulk of records are verified by automation or community consensus, and any flagged records then undergo additional levels of verification by experts. We present an idealised system for data verification along these lines, identifying schemes where this hierarchical system could be applied and the requirements for implementation.
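The hierarchical approach described in the abstract can be sketched as a simple triage: automated checks (or community consensus) handle the bulk of records, and only flagged records are escalated to experts. The sketch below is purely illustrative, assuming hypothetical record fields, species lists, and thresholds; the paper's actual system is not specified at this level of detail.

```python
# Illustrative sketch of hierarchical verification triage.
# All field names, the species list, and the count threshold are hypothetical.

def automated_check(record):
    """Toy automated filter: flag records with an unrecognised species
    or an implausibly high observation count."""
    known_species = {"blackbird", "robin", "wren"}
    if record["species"] not in known_species:
        return "flagged"
    if record["count"] > 100:
        return "flagged"
    return "accepted"

def verify(records):
    """Route each record: automation first, experts only for flagged records."""
    accepted, needs_expert = [], []
    for record in records:
        if automated_check(record) == "accepted":
            accepted.append(record)
        else:
            needs_expert.append(record)  # escalate to expert verification
    return accepted, needs_expert

records = [
    {"species": "robin", "count": 3},    # passes automated checks
    {"species": "dodo", "count": 1},     # unknown species -> expert review
    {"species": "wren", "count": 500},   # implausible count -> expert review
]
accepted, needs_expert = verify(records)
print(len(accepted), len(needs_expert))  # 1 accepted, 2 escalated
```

The design point is that expert effort scales with the number of flagged records, not with the total volume of submissions.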

          Related collections

          Most cited references 5


          A generalized approach for producing, quantifying, and validating citizen science data from wildlife images

Abstract

Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large‐scale camera‐trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics—level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported “nothing here” for an image that was ultimately classified as containing an animal (fraction blank)—to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert‐verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post‐hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large‐scale monitoring of African wildlife.
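The plurality aggregation and certainty metrics described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: fraction support and fraction blank follow directly from the definitions in the abstract, but the exact evenness formula is an assumption here (Pielou's Shannon evenness over the vote distribution), and the paper may define it differently.

```python
# Sketch of plurality aggregation with the three certainty metrics from the
# abstract. The evenness formula (Pielou's) is an assumption; the published
# method may differ in detail.
import math
from collections import Counter

def aggregate(classifications):
    """Return (plurality answer, fraction support, fraction blank, evenness)
    for one image's volunteer classifications."""
    counts = Counter(classifications)
    n = len(classifications)
    plurality, top = counts.most_common(1)[0]
    fraction_support = top / n                      # votes backing the winner
    fraction_blank = counts["nothing here"] / n     # "nothing here" votes
    # Assumed evenness: Shannon entropy of the vote distribution, normalised
    # by ln(number of distinct answers); higher values mean more disagreement.
    if len(counts) > 1:
        h = -sum((c / n) * math.log(c / n) for c in counts.values())
        evenness = h / math.log(len(counts))
    else:
        evenness = 0.0  # unanimous classification
    return plurality, fraction_support, fraction_blank, evenness

# Hypothetical image with 27 volunteer classifications:
votes = ["wildebeest"] * 24 + ["zebra"] * 2 + ["nothing here"]
label, support, blank, even = aggregate(votes)
print(label, round(support, 2), round(blank, 2), round(even, 2))
```

Images with low fraction support or high evenness would then be flagged for expert review, as the abstract suggests.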

            Data validation in citizen science: a case study from Project FeederWatch


              Economical crowdsourcing for camera trap image classification


Author and article information

Journal: Biodiversity Information Science and Standards (BISS)
Publisher: Pensoft Publishers
ISSN: 2535-0897
Published: September 20 2021
Volume: 5
DOI: 10.3897/biss.5.75506
© 2021
