
      The Interactive Visualization Gap in Initial Exploratory Data Analysis



Most cited references (34)


          D³: Data-Driven Documents.

          Data-Driven Documents (D3) is a novel representation-transparent approach to visualization for the web. Rather than hide the underlying scenegraph within a toolkit-specific abstraction, D3 enables direct inspection and manipulation of a native representation: the standard document object model (DOM). With D3, designers selectively bind input data to arbitrary document elements, applying dynamic transforms to both generate and modify content. We show how representational transparency improves expressiveness and better integrates with developer tools than prior approaches, while offering comparable notational efficiency and retaining powerful declarative components. Immediate evaluation of operators further simplifies debugging and allows iterative development. Additionally, we demonstrate how D3 transforms naturally enable animation and interaction with dramatic performance improvements over intermediate representations. © 2010 IEEE
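The abstract above centers on D3's core operation: selectively binding input data to document elements. As a rough illustration of that data-join idea, the sketch below computes the enter/update/exit sets that a data binding produces, in plain TypeScript and independent of the browser DOM; the names `dataJoin` and `Datum` are illustrative, not part of D3's actual API.

```typescript
// Illustrative sketch of the data-join concept behind D3's data binding.
// Given element keys already in the "document" and a new data array,
// partition the join into enter (data with no element), update (data
// matched to an element), and exit (elements with no remaining data).

interface Datum { id: string; value: number }

function dataJoin(existing: string[], data: Datum[]) {
  const dataIds = new Set(data.map(d => d.id));
  const existingIds = new Set(existing);
  return {
    enter: data.filter(d => !existingIds.has(d.id)),  // create new elements
    update: data.filter(d => existingIds.has(d.id)),  // refresh matched elements
    exit: existing.filter(id => !dataIds.has(id)),    // remove stale elements
  };
}

// Elements "a" and "b" exist; the new data keeps "b" and introduces "c",
// so "c" enters, "b" updates, and "a" exits.
const join = dataJoin(["a", "b"], [{ id: "b", value: 2 }, { id: "c", value: 3 }]);
```

In D3 itself this partitioning is what a selection's data binding yields, after which dynamic transforms are applied per set to generate and modify content.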

            A multi-level typology of abstract visualization tasks.

            The considerable previous work characterizing visualization usage has focused on low-level tasks or interactions and high-level tasks, leaving a gap between them that is not addressed. This gap leads to a lack of distinction between the ends and means of a task, limiting the potential for rigorous analysis. We contribute a multi-level typology of visualization tasks to address this gap, distinguishing why and how a visualization task is performed, as well as what the task inputs and outputs are. Our typology allows complex tasks to be expressed as sequences of interdependent simpler tasks, resulting in concise and flexible descriptions for tasks of varying complexity and scope. It provides abstract rather than domain-specific descriptions of tasks, so that useful comparisons can be made between visualization systems targeted at different application domains. This descriptive power supports a level of analysis required for the generation of new designs, by guiding the translation of domain-specific problems into abstract tasks, and for the qualitative evaluation of visualization usage. We demonstrate the benefits of our approach in a detailed case study, comparing task descriptions from our typology to those derived from related work. We also discuss the similarities and differences between our typology and over two dozen extant classification systems and theoretical frameworks from the literatures of visualization, human-computer interaction, information retrieval, communications, and cartography.

              Polaris: a system for query, analysis, and visualization of multidimensional relational databases


                Author and article information

Journal: IEEE Transactions on Visualization and Computer Graphics (IEEE Trans. Visual. Comput. Graphics)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1077-2626
Published: January 2018
Volume 24, Issue 1, pp. 278-287
DOI: 10.1109/TVCG.2017.2743990
PMID: 28866512
© 2018
