
      Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants

Behavior Research Methods (Springer Science and Business Media LLC)


          Abstract

          Participant attentiveness is a concern for many researchers using Amazon's Mechanical Turk (MTurk). Although studies comparing the attentiveness of participants on MTurk versus traditional subject pool samples have provided mixed support for this concern, attention check questions and other methods of ensuring participant attention have proliferated in MTurk studies. Because MTurk is a population that learns, we hypothesized that MTurkers would be more attentive to instructions than are traditional subject pool samples. In three online studies, participants from MTurk and collegiate populations completed a task that included a measure of attentiveness to instructions (an instructional manipulation check, or IMC). In all studies, MTurkers were more attentive to the instructions than were college students, even on novel IMCs (Studies 2 and 3), and MTurkers showed larger effects in response to a minute text manipulation. These results have implications for the sustainable use of MTurk samples for social science research and for the conclusions drawn from research with MTurk and college subject pool samples.
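
          For readers unfamiliar with the measure, an IMC embeds an instruction that quietly overrides the apparent question, so only participants who actually read the instructions respond correctly. Here is a minimal scoring sketch in Python; the item wording, field names, and pass criterion are illustrative assumptions, not the authors' materials.

```python
# Minimal sketch of an instructional manipulation check (IMC).
# The instructions tell attentive readers to ignore the apparent
# question and give a specific, non-obvious response instead.
# Wording and pass criterion below are illustrative assumptions.

IMC_PROMPT = (  # the item a survey page would render
    "Most modern theories of decision making recognize that "
    "decisions do not take place in a vacuum. To show that you "
    "have read these instructions, please ignore the question "
    "below, click 'Other', and type 'I read the instructions'.\n\n"
    "Which of these activities do you engage in regularly?"
)

def passed_imc(response: dict) -> bool:
    """A participant passes only by following the hidden instruction."""
    return (
        response.get("choice") == "Other"
        and response.get("other_text", "").strip().lower()
        == "i read the instructions"
    )

# Example: compute an attentiveness rate for a sample of responses.
responses = [
    {"choice": "Other", "other_text": "I read the instructions"},
    {"choice": "Basketball"},  # fails: answered the surface question
]
rate = sum(passed_imc(r) for r in responses) / len(responses)
print(f"IMC pass rate: {rate:.0%}")
```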

          Most cited references (6)


          Conducting behavioral research on Amazon's Mechanical Turk.

          Amazon's Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.
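
            As a rough illustration of the workflow that abstract describes (posting a task, recruiting workers, and reviewing submissions), here is a minimal sketch using the AWS boto3 MTurk client; the reward, timing values, and form HTML are assumptions for illustration, not materials from the cited paper.

```python
# Minimal sketch of posting a task (HIT) on Mechanical Turk via the
# AWS boto3 SDK. Targets the requester sandbox, so no real payments
# occur; reward, timing values, and HTML are illustrative assumptions.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A HIT's interface is supplied as XML; HTMLQuestion wraps an HTML form.
QUESTION_XML = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <form action="https://www.mturk.com/mturk/externalSubmit" method="post">
      <p>Which of these activities do you engage in regularly?</p>
      <input type="text" name="answer"/>
      <input type="hidden" name="assignmentId" value=""/>
      <input type="submit"/>
    </form>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Short decision-making survey",
    Description="Answer a few questions about everyday choices.",
    Keywords="survey, research",
    Reward="0.50",                    # USD per approved assignment
    MaxAssignments=50,                # number of distinct workers
    LifetimeInSeconds=7 * 24 * 3600,  # how long the HIT stays listed
    AssignmentDurationInSeconds=1800, # time allotted per worker
    Question=QUESTION_XML,
)
print("HIT posted:", hit["HIT"]["HITId"])

# Later: review submitted work and approve it so workers are paid.
submitted = mturk.list_assignments_for_hit(
    HITId=hit["HIT"]["HITId"], AssignmentStatuses=["Submitted"]
)
for assignment in submitted["Assignments"]:
    mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```

            In practice one would also attach qualification requirements (e.g., approval rate, location) and switch the endpoint to the production host only after sandbox testing.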

            Investigating Variation in Replicability (open access)

            Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently. One effect (imagined contact reducing prejudice) showed weak support for replicability, and two effects (flag priming influencing conservatism and currency priming influencing system justification) did not replicate. We compared whether conditions such as lab versus online administration or US versus international samples predicted effect magnitudes; by and large, they did not. The results of this small sample of effects suggest that replicability depends more on the effect itself than on the sample and setting used to investigate it.

              Crowdsourcing user studies with Mechanical Turk


                Author and article information

                Journal: Behavior Research Methods (Behav Res)
                Publisher: Springer Science and Business Media LLC
                ISSN: 1554-3528
                Published: March 2016 (volume 48, issue 1, pages 400-407); first online March 12, 2015
                DOI: 10.3758/s13428-015-0578-z
                PMID: 25761395
                ScienceOpen record: e21c549e-2163-4430-b366-cd07191c50ca
                Copyright: © 2016
                License: http://www.springer.com/tdm
