
      Cognitive penetrability of scene representations based on horizontal image disparities

      research-article


          Abstract

          The structure of natural scenes is signaled by many visual cues. Principal amongst them are the binocular disparities created by the laterally separated viewpoints of the two eyes. Disparity cues are believed to be processed hierarchically, first in terms of local measurements of absolute disparity and second in terms of more global measurements of relative disparity that allow extraction of the depth structure of a scene. Psychophysical and oculomotor studies have suggested that relative disparities are particularly relevant to perception, whilst absolute disparities are not. Here, we compare neural responses to stimuli that isolate the absolute disparity cue with stimuli that contain additional relative disparity cues, using the high temporal resolution of EEG to determine the temporal order of absolute and relative disparity processing. By varying the observers’ task, we assess the extent to which each cue is cognitively penetrable. We find that absolute disparity is extracted before relative disparity, and that task effects arise only at or after the extraction of relative disparity. Our results indicate a hierarchy of disparity processing stages leading to the formation of a proto-object representation upon which higher cognitive processes can act.
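The temporal-order question framed above (when each disparity cue is reflected in the EEG) is often approached with time-resolved decoding. The sketch below is a generic, hypothetical illustration of that idea, not the analysis reported in this article; the simulated data, condition labels, classifier, and onset threshold are all assumptions.

```python
# Hypothetical illustration only: time-resolved decoding of stimulus condition
# from EEG epochs. Array shapes, labels, classifier, and threshold are assumed
# for the sketch and do not reproduce the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated epochs: trials x channels x time samples.
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))
# 0 = absolute-disparity-only stimulus, 1 = stimulus that adds relative disparity.
y = rng.integers(0, 2, n_trials)

# Decode the condition separately at each time point; above-chance accuracy at
# time t means the disparity manipulation is reflected in the EEG by t.
scores = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# Crude onset estimate: first time point whose accuracy clears a fixed threshold.
above = np.flatnonzero(scores > 0.55)
print("estimated onset sample:", above[0] if above.size else None)
```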


Most cited references (68)

          On the interpretation of weight vectors of linear models in multivariate neuroimaging.

The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses.
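For the linear case this abstract refers to, the backward-to-forward transformation can be stated compactly: an activation pattern is proportional to the data covariance times the filter weights. The numpy sketch below illustrates that relationship on simulated data; the variable names, dimensions, and least-squares filter are assumptions for illustration, not code from the cited paper.

```python
# Minimal sketch of the linear backward-to-forward transformation described
# above: an activation pattern is obtained as Cov(X) @ w, scaled by the variance
# of the extracted component. Data are simulated; names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Simulated sensor data (samples x channels): one latent source projected through
# a true pattern, plus spatially correlated noise.
n_samples, n_channels = 5000, 8
source = rng.standard_normal(n_samples)
pattern_true = rng.standard_normal(n_channels)
mixing = rng.standard_normal((n_channels, n_channels))
X = np.outer(source, pattern_true) + rng.standard_normal((n_samples, n_channels)) @ mixing
X -= X.mean(axis=0)

# Backward model: least-squares spatial filter that recovers the source from X.
w, *_ = np.linalg.lstsq(X, source, rcond=None)
s_hat = X @ w

# Forward model: activation pattern for the single-component case.
a = np.cov(X, rowvar=False) @ w / s_hat.var()

# The pattern (not the filter) should align with the true projection of the source.
print("pattern vs. truth:", np.corrcoef(a, pattern_true)[0, 1])
print("filter  vs. truth:", np.corrcoef(w, pattern_true)[0, 1])
```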

            An alternative method for significance testing of waveform difference potentials.

Guthrie and Buchwald (1991) proposed an ad hoc procedure for assessing the statistical significance of waveform difference potentials that may arise in a variety of psychophysiology research contexts. In this paper, an alternative method is presented and demonstrated that has fewer underlying assumptions than does the Guthrie-Buchwald test and may, therefore, produce better results in some situations. In particular, the test proposed here (a) is distribution free, (b) requires no assumption of an underlying correlation structure (e.g., first-order autoregressive), (c) requires no estimate of the population autocorrelation coefficient, (d) is exact, (e) produces p values for any number of subjects and time points, and (f) is highly intuitive as well as theoretically justifiable. This procedure may be used to carry out multiple comparisons with exact specification of experiment-wise error; however, the test is based on permutation principles and may require large amounts of computer time for its implementation.
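The logic of such a test can be sketched concretely: form each subject's difference waveform, build a null distribution for the maximum |t| across time points by permuting condition labels (equivalently, sign-flipping the difference waves), and compare the observed t at each time point to that maximum-statistic distribution, which controls experiment-wise error. The sketch below is a hedged illustration of that idea, not the cited authors' code; the subject count, waveform length, and injected effect are assumptions, and it samples permutations where the exact test would enumerate them.

```python
# Sketch of a distribution-free permutation test for waveform difference
# potentials, in the spirit of the method described above: per-subject difference
# waves are sign-flipped (equivalent to permuting condition labels within
# subjects) and the maximum |t| across time points controls experiment-wise error.
# Subject count, waveform length, and the injected effect are illustrative; the
# exact procedure would enumerate all sign patterns rather than sample them.
import numpy as np

rng = np.random.default_rng(2)

n_subjects, n_times = 12, 200
# Difference waveforms (condition A minus condition B), one row per subject.
diff = rng.standard_normal((n_subjects, n_times))
diff[:, 80:120] += 0.8  # simulated effect in a mid-latency window

def t_waveform(d):
    """One-sample t statistic against zero at every time point."""
    return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(d.shape[0]))

observed_t = t_waveform(diff)

# Permutation distribution of the maximum |t| under random sign flips of subjects.
n_perm = 2000
null_max = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))
    null_max[i] = np.abs(t_waveform(diff * signs)).max()

# Experiment-wise p value per time point: fraction of permuted maxima that
# reach or exceed the observed |t| there.
p_ew = (null_max[None, :] >= np.abs(observed_t)[:, None]).mean(axis=1)
print("time points with p < .05:", np.flatnonzero(p_ew < 0.05))
```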

              Bayesian Cognitive Modeling: A Practical Course


                Author and article information

                Contributors
                milenak@stanford.edu
Journal
Sci Rep (Scientific Reports)
Nature Publishing Group UK (London)
ISSN: 2045-2322
Published: 25 October 2022
Volume: 12
Article number: 17902
                Affiliations
[1] Department of Psychology, Stanford University, 450 Jane Stanford Way, Stanford, CA, USA
[2] Wu-Tsai Neuroscience Institute, Stanford University, 290 Jane Stanford Way, Stanford, CA, USA
                Article
DOI: 10.1038/s41598-022-22670-7
PMCID: PMC9596438
PMID: 36284130
                © The Author(s) 2022

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
Received: 9 March 2022
Accepted: 18 October 2022
Funding
Funded by: National Eye Institute (FundRef: http://dx.doi.org/10.13039/100000053)
Award ID: EY018875
Categories
Article
Keywords
attention, visual system, object vision
