Is Open Access

      Tracking the Progression of Reading Through Eye-gaze Measurements

Preprint

          Abstract

In this paper, we consider the problem of tracking the progression of reading through eye-gaze measurements. Such an algorithm is novel and will ultimately help develop a method of analyzing eye-gaze data collected during reading in order to uncover crucial information about an individual's interest level and quality of experience while reading a passage of text or a book. Additionally, such an approach can serve as a "visual signature": a method of verifying whether an individual has indeed given adequate attention to critical text-based information. Further, an accurate "reading-progression tracker" has potential applications in educational institutions, e-readers, and parenting solutions. Tracking the progression of reading remains a challenging problem because eye-gaze movements are highly noisy and the eye-gaze is easily distracted in a limited space, such as an e-book. In prior work, we proposed an approach that analyzes eye-gaze fixation points collected while reading a page of text in order to assign each measurement to a line of text; that approach did not consider tracking the progression of reading along the line of text. In this paper, we extend the previous algorithm to accurately track the progression of reading along each line. The proposed approach employs least squares batch estimation to estimate three states of the horizontal saccade: position, velocity, and acceleration. First, the proposed approach is objectively evaluated on a simulated eye-gaze dataset. Then, the algorithm is demonstrated on real data collected by a Gazepoint eye-tracker while the subject reads several pages of an electronic book.
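The estimator described in the abstract can be illustrated with a minimal sketch. Assuming a constant-acceleration model for the horizontal gaze coordinate (the preprint publishes no code; the function and variable names below are hypothetical), least squares batch estimation of the three states reduces to a linear fit:

```python
import numpy as np

def batch_estimate_states(t, x):
    """Fit a constant-acceleration model x(t) = p + v*t + 0.5*a*t**2
    to horizontal gaze samples via linear least squares, returning the
    estimated batch states (position, velocity, acceleration)."""
    H = np.column_stack([np.ones_like(t), t, 0.5 * t**2])  # design matrix
    theta, *_ = np.linalg.lstsq(H, x, rcond=None)
    return theta  # [p, v, a]

# Simulated noisy horizontal saccade along a line of text
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 50)                  # time stamps in seconds
true_p, true_v, true_a = 10.0, 300.0, -40.0    # px, px/s, px/s^2
x = true_p + true_v * t + 0.5 * true_a * t**2 + rng.normal(0.0, 1.0, t.size)

p, v, a = batch_estimate_states(t, x)          # noisy estimates of the 3 states
```

Because the model is linear in the three states, the batch solution is a single pseudo-inverse rather than a recursive filter, which matches the "batch estimation" framing of the abstract.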

          Related collections

Most cited references (7)


          Faces and text attract gaze independent of the task: Experimental data and computer model.

Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? Here we investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more than similar regions normalized for size and position of the face and text. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer in making their initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity by adding high-level semantic information (i.e., the location of faces or text) and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation pattern of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
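The area-under-the-ROC-curve figure quoted above can be made concrete with a short sketch. As an illustration only (not the model from the cited paper; all names below are hypothetical), the AUC for predicting fixated locations from saliency scores can be computed with the rank-based Mann-Whitney identity:

```python
import numpy as np

def fixation_auc(saliency_scores, is_fixated):
    """AUC for predicting fixated locations from saliency scores,
    via the rank-based (Mann-Whitney U) identity; assumes no tied scores."""
    order = np.argsort(saliency_scores)
    ranks = np.empty(len(saliency_scores))
    ranks[order] = np.arange(1, len(saliency_scores) + 1)
    n_pos = int(is_fixated.sum())
    n_neg = len(is_fixated) - n_pos
    # AUC = P(score at a fixated location > score at a non-fixated location)
    return (ranks[is_fixated].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([0.10, 0.40, 0.35, 0.80])      # hypothetical saliency values
fixated = np.array([False, False, True, True])   # which locations drew fixations
auc = fixation_auc(scores, fixated)              # 0.75 for this toy example
```

An AUC of 0.5 corresponds to chance-level prediction, so the reported value over 84% indicates the enhanced saliency map ranks fixated locations well above non-fixated ones.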

            Using Think-Alouds to Examine Reader-Text Interest


              Snap clutch, a moded approach to solving the Midas touch problem


                Author and article information

                Journal
                07 May 2019
                Article
                1905.02823
                67de0a1b-6d85-4b71-9dc4-a2fdc8eb2a55

                http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                History
                Custom metadata
                cs.HC eess.SP

Electrical engineering, Human-computer-interaction
