
      EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface

      research-article


          Abstract

          We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to the stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels, using 300-ms segments free from electrooculogram contamination (200–500 ms relative to fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained with the shrinkage Linear Discriminant Analysis (LDA) classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between intentional and spontaneous fixations.
If this direction of BCI development proves successful, such a multimodal interface may improve the fluency of interaction and could become the basis for a new input device for paralyzed and healthy users alike, the EBCI “Wish Mouse.”
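The shrinkage LDA classifier mentioned in the abstract can be illustrated with a minimal two-class sketch over per-channel amplitude features. This is not the authors' code: the shrinkage rule (pooled covariance shrunk toward a scaled identity), the fixed shrinkage parameter `lam`, and the feature layout (one amplitude value per each of 13 channels) are all simplifying assumptions for illustration.

```python
import numpy as np

def shrinkage_lda_fit(X0, X1, lam=0.1):
    """Fit a two-class LDA whose pooled covariance is shrunk toward a
    scaled identity, stabilizing the estimate when samples are scarce
    relative to the number of EEG features."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    X0c, X1c = X0 - mu0, X1 - mu1
    n = len(X0) + len(X1)
    S = (X0c.T @ X0c + X1c.T @ X1c) / (n - 2)            # pooled covariance
    p = S.shape[0]
    S_shrunk = (1 - lam) * S + lam * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(S_shrunk, mu1 - mu0)             # projection weights
    b = w @ (mu0 + mu1) / 2                              # midpoint threshold
    return w, b

def shrinkage_lda_predict(X, w, b):
    """Label 1 = 'intentional' class, 0 = 'spontaneous' class."""
    return (X @ w > b).astype(int)

# Synthetic stand-in for FRP amplitude features (13 channels per fixation):
rng = np.random.default_rng(0)
X_off = rng.normal(0.0, 1.0, (200, 13))   # control-off (spontaneous) epochs
X_on = rng.normal(0.8, 1.0, (200, 13))    # control-on (intentional) epochs
w, b = shrinkage_lda_fit(X_off, X_on)
```

In the actual study the features came from the 200–500 ms post-fixation window, and in practice the shrinkage intensity would typically be estimated from the data (e.g. via the Ledoit-Wolf rule) rather than fixed.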

          Most cited references (52)


          Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general.

          Cognitive monitoring is an approach utilizing real-time brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations, and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself, we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.

            The Hybrid BCI

            Nowadays, everybody knows what a hybrid car is. A hybrid car normally has two engines to enhance energy efficiency and reduce CO2 output. Similarly, a hybrid brain-computer interface (BCI) is composed of two BCIs, or at least one BCI and another system. A hybrid BCI, like any BCI, must fulfill the following four criteria: (i) the device must rely on signals recorded directly from the brain; (ii) there must be at least one recordable brain signal that the user can intentionally modulate to effect goal-directed behaviour; (iii) real time processing; and (iv) the user must obtain feedback. This paper introduces hybrid BCIs that have already been published or are in development. We also introduce concepts for future work. We describe BCIs that classify two EEG patterns: one is the event-related (de)synchronisation (ERD, ERS) of sensorimotor rhythms, and the other is the steady-state visual evoked potential (SSVEP). Hybrid BCIs can either process their inputs simultaneously, or operate two systems sequentially, where the first system can act as a “brain switch”. For example, we describe a hybrid BCI that simultaneously combines ERD and SSVEP BCIs. We also describe a sequential hybrid BCI, in which subjects could use a brain switch to control an SSVEP-based hand orthosis. Subjects who used this hybrid BCI exhibited about half the false positives encountered while using the SSVEP BCI alone. A brain switch can also rely on hemodynamic changes measured through near-infrared spectroscopy (NIRS). Hybrid BCIs can also use one brain signal and a different type of input. This additional input can be an electrophysiological signal such as the heart rate, or a signal from an external device such as an eye tracking system.
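The sequential "brain switch" arrangement described above can be sketched as a small gating state machine: one detector toggles the control state, and commands from a second decoder pass only while control is on. This is an illustrative reconstruction, not code from the paper; the toggle semantics and the epoch-wise detector/decoder interfaces are assumptions.

```python
class SequentialHybridBCI:
    """Sketch of a sequential hybrid BCI: a binary 'brain switch' (e.g. an
    ERD detector) gates the output of a command decoder (e.g. an SSVEP
    classifier), suppressing false positives while the user is idle."""

    def __init__(self, switch_detector, command_decoder):
        self.switch_detector = switch_detector  # epoch -> bool (switch event?)
        self.command_decoder = command_decoder  # epoch -> command label
        self.control_on = False

    def process_epoch(self, epoch):
        # A detected switch event toggles the control state and emits nothing.
        if self.switch_detector(epoch):
            self.control_on = not self.control_on
            return None
        # Commands are decoded only while control is on.
        return self.command_decoder(epoch) if self.control_on else None
```

With this arrangement, decoder false positives occurring while control is off never reach the output, which mirrors the roughly halved false-positive rate reported for the brain-switch-gated SSVEP system.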

              Eye-hand coordination in object manipulation.

              We analyzed the coordination between gaze behavior, fingertip movements, and movements of the manipulated object when subjects reached for and grasped a bar and moved it to press a target-switch. Subjects almost exclusively fixated certain landmarks critical for the control of the task. Landmarks at which contact events took place were obligatory gaze targets. These included the grasp site on the bar, the target, and the support surface where the bar was returned after target contact. Any obstacle in the direct movement path and the tip of the bar were optional landmarks. Subjects never fixated the hand or the moving bar. Gaze and hand/bar movements were linked concerning landmarks, with gaze leading. The instant that gaze exited a given landmark coincided with a kinematic event at that landmark in a manner suggesting that subjects monitored critical kinematic events for phasic verification of task progress and subgoal completion. For both the obstacle and target, subjects directed saccades and fixations to sites that were offset from the physical extension of the objects. Fixations related to an obstacle appeared to specify a location around which the extending tip of the bar should travel. We conclude that gaze supports hand movement planning by marking key positions to which the fingertips or grasped object are subsequently directed. The salience of gaze targets arises from the functional sensorimotor requirements of the task. We further suggest that gaze control contributes to the development and maintenance of sensorimotor correlation matrices that support predictive motor control in manipulation.

                Author and article information

                Journal
                Frontiers in Neuroscience (Front. Neurosci.)
                Frontiers Media S.A.
                ISSN: 1662-4548 (print), 1662-453X (electronic)
                Published: 18 November 2016
                Volume 10, Article 528
                Affiliations
                1. Department of Neurocognitive Technologies, Kurchatov Complex of NBICS Technologies, National Research Centre “Kurchatov Institute,” Moscow, Russia
                2. Department of Cybernetics, National Research Nuclear University MEPhI, Moscow, Russia
                3. Department of Computer Systems and Technologies, National Research Nuclear University MEPhI, Moscow, Russia
                4. Centre for Cognitive Programs and Technologies, Russian State University for Humanities, Moscow, Russia
                5. Department of Psychology, Technische Universität Dresden, Dresden, Germany
                Author notes

                Edited by: Mikhail Lebedev, Duke University, USA

                Reviewed by: George C. McConnell, Stevens Institute of Technology, USA; Witali Dunin-Barkowski, Scientific Research Institute of System Analysis (RAS), Russia

                *Correspondence: Sergei L. Shishkin sergshishkin@mail.ru

                This article was submitted to Neuroprosthetics, a section of the journal Frontiers in Neuroscience

                Article
                DOI: 10.3389/fnins.2016.00528
                PMCID: PMC5114310
                PMID: 27917105
                Copyright © 2016 Shishkin, Nuzhdin, Svirin, Trofimov, Fedorova, Kozyrskiy and Velichkovsky.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 06 September 2016; Accepted: 31 October 2016
                Page count
                Figures: 8, Tables: 6, Equations: 0, References: 75, Pages: 20, Words: 16926
                Funding
                Funded by: Russian Science Foundation (10.13039/501100006769), Award ID: 14-28-00234
                Funded by: Russian Foundation for Basic Research (10.13039/501100002261), Award ID: 15-29-01344
                Categories
                Neuroscience
                Original Research

                Neurosciences
                brain-computer interfaces, human-computer interfaces, gaze interaction, assistive technology, eye tracking, detection of intention, slow cortical potentials, stimulus-preceding negativity
