
      THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior

      research-article


          Abstract

          Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative ( https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
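
          The behavioral dataset described above lends itself to a simple aggregation step: turning individual similarity judgments into a pairwise similarity matrix over object concepts. The sketch below is purely illustrative and assumes, hypothetically, that each judgment is stored as a triplet odd-one-out choice in a CSV with columns image_a and image_b (the pair judged more similar) and odd_one_out; the actual THINGS-data file layout and column names may differ.

          import numpy as np
          import pandas as pd

          N_CONCEPTS = 1854  # number of object concepts reported in the abstract

          def similarity_from_triplets(csv_path, n=N_CONCEPTS):
              # Each row (hypothetical schema): image_a, image_b = the pair judged
              # more similar, odd_one_out = the rejected third concept; all values
              # are 0-based concept indices.
              df = pd.read_csv(csv_path)
              chosen = np.zeros((n, n))  # times a pair was kept together
              shown = np.zeros((n, n))   # times a pair co-occurred in a triplet
              for a, b, odd in df[["image_a", "image_b", "odd_one_out"]].itertuples(index=False):
                  for i, j in ((a, b), (a, odd), (b, odd)):
                      shown[i, j] += 1
                      shown[j, i] += 1
                  chosen[a, b] += 1
                  chosen[b, a] += 1
              # Proportion of co-occurrences in which the pair was kept together;
              # NaN where a pair never appeared.
              with np.errstate(invalid="ignore"):
                  return np.where(shown > 0, chosen / shown, np.nan)

          # Hypothetical usage:
          # sim = similarity_from_triplets("triplet_judgments.csv")

          This pair-kept-together proportion is one common way to summarize triplet odd-one-out data; the released datasets may ship with their own formats and aggregation tools.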


          Most cited references (123)


          ImageNet Large Scale Visual Recognition Challenge


            LIBSVM: A library for support vector machines

            LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
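
            As a generic illustration of the workflow this library supports (not code from the article above), scikit-learn's SVC wraps LIBSVM and exposes the multiclass classification, probability estimates, and parameter selection mentioned in the abstract:

            from sklearn.datasets import load_iris
            from sklearn.model_selection import GridSearchCV, train_test_split
            from sklearn.svm import SVC  # scikit-learn's SVC is built on LIBSVM

            X, y = load_iris(return_X_y=True)
            X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

            # Cross-validated parameter selection over C and gamma for an RBF kernel,
            # with probability estimates enabled.
            grid = GridSearchCV(
                SVC(kernel="rbf", probability=True),
                param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
                cv=5,
            )
            grid.fit(X_train, y_train)
            print("best parameters:", grid.best_params_)
            print("held-out accuracy:", grid.score(X_test, y_test))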

              The Psychophysics Toolbox


                Author and article information

                Contributors
                Role: Reviewing Editor
                Role: Senior Editor
                Journal
                eLife
                eLife Sciences Publications, Ltd
                ISSN: 2050-084X
                27 February 2023
                Volume 12: e82580
                Affiliations
                [1] Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health (https://ror.org/01cwqze88), Bethesda, United States
                [2] Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences (https://ror.org/0387jng26), Leipzig, Germany
                [3] Department of Medicine, Justus Liebig University Giessen (https://ror.org/033eqas34), Giessen, Germany
                [4] Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences (https://ror.org/0387jng26), Leipzig, Germany
                [5] Machine Learning Core, National Institute of Mental Health, National Institutes of Health (https://ror.org/01cwqze88), Bethesda, United States
                University of Toronto (https://ror.org/03dbr7087), Canada
                Donders Institute for Brain, Cognition and Behaviour, Netherlands
                University of Toronto (https://ror.org/03dbr7087), Canada
                University of Toronto (https://ror.org/03dbr7087), Canada
                Harvard University (https://ror.org/03vek6s52), United States
                Author notes
                [†] These authors contributed equally to this work.

                Author information
                https://orcid.org/0000-0001-7257-428X
                https://orcid.org/0000-0002-2983-4709
                https://orcid.org/0000-0002-8040-5686
                https://orcid.org/0000-0002-2446-717X
                https://orcid.org/0000-0003-1830-2501
                https://orcid.org/0000-0001-6861-8964
                Article
                82580
                DOI: 10.7554/eLife.82580
                PMCID: PMC10038662
                PMID: 36847339
                678b6861-5578-4246-af3d-e3b2a9cb2b0e

                This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

                History
                Received: 09 August 2022
                Accepted: 25 February 2023
                Funding
                Funded by: FundRef http://dx.doi.org/10.13039/100000002, National Institutes of Health;
                Award ID: ZIA-MH-002909
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/100000002, National Institutes of Health;
                Award ID: ZIC-MH002968
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/501100004189, Max-Planck-Gesellschaft;
                Award ID: Max Planck Research Group M.TN.A.NEPF0009
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/501100000781, European Research Council;
                Award ID: Starting Grant StG-2021-101039712
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/501100003495, Hessisches Ministerium für Wissenschaft und Kunst;
                Award ID: LOEWE Start Professorship
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/100018668, Max Planck School of Cognition;
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/501100003495, Hessisches Ministerium für Wissenschaft und Kunst;
                Award ID: The Adaptive Mind
                Award Recipient :
                The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
                Categories
                Tools and Resources
                Neuroscience
                Custom metadata
                THINGS-data comprises three large-scale neuroimaging and behavioral datasets of object processing in humans: densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts.

                Life sciences
                fmri, meg, behavior, research data, objects, vision, human
