
      Sonification of reference markers for auditory graphs: effects on non-visual point estimation tasks

      research-article


          Abstract

Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents the results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings: pitch-only, one-reference, and multiple-references, which we designed to provide information about distance from an origin. We assess the effects of these sonifications on users' performance when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users' accuracy with a trade-off in task completion times, and that the multiple-references mapping is particularly effective when dealing with points positioned at the midrange of a given axis.
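
The abstract describes the three mappings only at this high level. As a rough, hypothetical sketch (not the authors' implementation), the Python code below shows one way a pitch-only rendering and a reference-tone variant could be realised; the frequency range, tone durations and sample rate are assumptions.

```python
# Hypothetical sketch, not the authors' implementation: a linear value-to-pitch
# mapping plus an optional set of reference tones played before the data tone.
# Frequency range, durations and sample rate are assumed parameters.
import numpy as np

SAMPLE_RATE = 44100            # audio samples per second
F_MIN, F_MAX = 220.0, 880.0    # assumed pitch range spanning the data axis

def value_to_frequency(value, v_min, v_max):
    """Linearly map a data value in [v_min, v_max] to a frequency in [F_MIN, F_MAX]."""
    norm = (value - v_min) / (v_max - v_min)
    return F_MIN + norm * (F_MAX - F_MIN)

def tone(freq, duration=0.4, amplitude=0.5):
    """Synthesise a plain sine tone at the given frequency."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return amplitude * np.sin(2.0 * np.pi * freq * t)

def pitch_only(value, v_min, v_max):
    """Pitch-only mapping: the queried point is rendered as a single tone."""
    return tone(value_to_frequency(value, v_min, v_max))

def with_references(value, v_min, v_max, n_refs=1):
    """Reference mapping: marker tones (the origin alone for n_refs=1, several
    evenly spaced axis markers otherwise) precede the data tone, giving the
    listener anchors against which to estimate the point's position."""
    ref_values = np.linspace(v_min, v_max, n_refs)
    markers = [tone(value_to_frequency(v, v_min, v_max), duration=0.15, amplitude=0.3)
               for v in ref_values]
    return np.concatenate(markers + [pitch_only(value, v_min, v_max)])

# Example: sonify the value 42 on a 0-100 axis with three reference markers.
signal = with_references(42.0, 0.0, 100.0, n_refs=3)
```

Under these assumptions, the one-reference condition would play only the origin marker before the target tone, while the multiple-references condition plays several evenly spaced markers, which is the kind of distance context the experiment compares against visual and speech baselines.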

Most cited references (32)


          Why a Diagram is (Sometimes) Worth Ten Thousand Words


            An experimental system for auditory image representations.

            This paper presents an experimental system for the conversion of images into sound patterns. The system was designed to provide auditory image representations within some of the known limitations of the human hearing system, possibly as a step towards the development of a vision substitution device for the blind. The application of an invertible (1-to-1) image-to-sound mapping ensures the preservation of visual information. The system implementation involves a pipelined special purpose computer connected to a standard television camera. The time-multiplexed sound representations, resulting from a real-time image-to-sound conversion, represent images up to a resolution of 64 x 64 pixels with 16 gray-tones per pixel. A novel design and the use of standard components have made for a low-cost portable prototype conversion system having a power dissipation suitable for battery operation. Computerized sampling of the system output and subsequent calculation of the approximate inverse (sound-to-image) mapping provided the first convincing experimental evidence for the preservation of visual information in the sound representations of complicated images. However, the actual resolution obtainable with human perception of these sound representations remains to be evaluated.
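
As a rough illustration only (the cited system's actual design is not reproduced here), the following sketch shows a column-by-column, time-multiplexed image-to-sound mapping of the kind this abstract describes: each pixel row is assigned a fixed sinusoid frequency, and the gray level of each pixel weights that sinusoid while its column is being played. All concrete parameters are assumptions.

```python
# Illustrative sketch only, not the cited system: a time-multiplexed
# image-to-sound mapping in which columns are scanned left to right,
# each pixel row has a fixed frequency, and gray level sets amplitude.
# Frequency range, column duration and sample rate are assumed values.
import numpy as np

SAMPLE_RATE = 22050
F_LOW, F_HIGH = 500.0, 5000.0   # assumed frequency range across image rows
COLUMN_DURATION = 0.02          # seconds of sound per image column

def image_to_sound(image):
    """image: 2-D array of gray levels in [0, 1], e.g. a 64 x 64 picture."""
    n_rows, n_cols = image.shape
    # One fixed sinusoid per row; the top row gets the highest frequency.
    freqs = np.linspace(F_HIGH, F_LOW, n_rows)
    t = np.arange(int(SAMPLE_RATE * COLUMN_DURATION)) / SAMPLE_RATE
    basis = np.sin(2.0 * np.pi * np.outer(freqs, t))        # (n_rows, n_samples)
    # Time-multiplex the columns: each column's gray levels weight the basis
    # sinusoids, and the resulting snippets are played one after another.
    snippets = [image[:, c] @ basis for c in range(n_cols)]
    signal = np.concatenate(snippets)
    return signal / (np.max(np.abs(signal)) + 1e-9)         # normalise amplitude

# Example: a 64 x 64 image quantised to 16 gray levels, as in the description.
img = np.random.randint(0, 16, size=(64, 64)) / 15.0
audio = image_to_sound(img)
```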

              The Visual Display of Quantitative Information


                Author and article information

                Contributors
Journal
PeerJ Computer Science (PeerJ Comput. Sci.; journal ID: peerj-cs)
PeerJ Inc. (San Francisco, USA)
ISSN: 2376-5992
Published: 6 April 2016
Volume 2, article e51
Affiliations
[1] School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom
Article
cs-51
DOI: 10.7717/peerj-cs.51
                ©2016 Metatla et al.

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.

History
Received: 17 September 2015
Accepted: 29 February 2016
                Funding
                Funded by: EPSRC
                Award ID: EP/J017205/1
This work was funded by EPSRC grant number EP/J017205/1. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Human–Computer Interaction

                Computer science
Keywords: Sonification, Point estimation, Auditory graphs, Non-visual interaction, Reference markers
