A Structured Review of Information Visualization Success Measurement

Nadine Amende

Information visualization research has been popular for nearly two decades, but a more widespread adoption of visualization tools is missing. We present the state of the art in measuring information visualization success by means of a structured literature review and a classification framework. This article identifies and classifies 30 empirical journal papers and consolidates their empirical findings. The review shows an absence of theoretical success models and influence factors as a basis for in-depth success analysis. The dominant research design is the laboratory experiment, which analyzes an individual perspective on success. The review results are discussed and a research agenda is proposed.


INTRODUCTION
As the information visualization discipline evolves, many information visualization techniques and systems exist, but a more widespread adoption is lacking. The first hype was followed by disillusion. On the one hand, organizations want to see a payoff for their investments in any information system (IS), but standardized instruments for comprehensively measuring information visualization performance are missing. On the other hand, many users are still struggling to use even simple visualizations [22]. Thus, it is crucial to analyze the performance of information visualization and to understand its critical influence factors. Here, the human factor plays the most important role. There is a need to analyze how information visualization supports individuals in their information tasks and how they conduct these information-related tasks, e.g. how they interact with information, how they perceive it visually, and how they search for information and solve problems [4], [14]. Therefore, it is important to undertake validated research and present evident results in performance, or rather success, measurement, which has been a permanent topic in IS research. The aim of this paper is to analyze and describe the current state of information visualization success measurement by means of scientific methods to ensure validity. The following questions are addressed:
• Which theoretical models or frameworks for assessing success can be found in the literature and were used in past empirical studies?
• Which influence factors were analyzed in past empirical studies?
• What information visualization techniques were analyzed in past empirical studies?
• Which research designs were used in past empirical studies?
To answer these questions we conducted a structured literature review. The results of this review can offer first starting points for understanding the lacking adoption. The remainder of this paper is structured as follows. Below, we give an overview of IS measurement in general and information visualization measurement in particular. We then introduce our structured literature review method and present the results. Finally, we deduce a research agenda and point out some limitations of our research.

INFORMATION VISUALIZATION SUCCESS MEASUREMENT
Measuring IS success has been popular for 25 years and is an ongoing issue in IS research, with a plethora of success definitions, e.g. individual or organizational performance, user acceptance or user satisfaction, and of models such as the technology acceptance model (TAM) by Davis [8], the task-technology fit (TTF) model by Goodhue [15], Rogers's diffusion of innovations theory [24] or the most famous DeLone and McLean (D&M) IS success model (updated 2003) [10]. Many researchers extended the established models with more dimensions and relationships, respecified them, examined the relationships or identified standardized measures to evaluate the specified dimensions. This has helped to better understand IS success [21]. In the field of information visualization, Cognitive Fit Theory by Vessey [26] explains higher task performance by a match of the visual representation of a task and the decision maker's mental representation of that task. Current evaluation practices in information visualization are experiments, case studies and technical usability studies [14]. To gain a better insight into these evaluation practices, adapting IS success measurement expertise to the information visualization domain can be useful.

RESEARCH METHODOLOGY
Following the rigor and relevance debate in IS research [18] on the one hand, and the call of Webster and Watson (2002) on the other [27], this paper describes a structured literature review method and thus a rigorous foundation for a research agenda in information visualization success measurement.

Structured Literature Review
A structured literature review facilitates theory development, closes areas where plenty of research already exists and discovers new research areas [27]. It summarizes and integrates prior research and elicits inconsistencies, relations and gaps in research to help pursue a research objective. Conducting a literature search in the IS domain is very complex and time consuming. This is caused by a constantly increasing number of IS journals and conferences; e.g. 693 active IS journals are listed in the Index of Information Systems Journals.¹ Therefore, a relatively comprehensive set of relevant, high-quality articles is crucial for an evident literature analysis. A structured literature review ensures that all relevant sources can be gathered and analyzed by means of a scientific method [12].
The review process and the exclusion of sources are documented. Because of this transparency and reliability of research, the results of these activities can be understood and reused.
¹ URL: http://lamp.infosys.deakin.edu.au/journals/ (last access: 08.03.2010)

Brocke and Riemer recommend a five-phase iterative literature review process (see figure 1), which is the basis for this paper and is described below. A review process has to be iterative because knowledge grows continuously, and review results have to be updated and extended regularly [2].

Definition of Review Scope
The review scope defines the degree of coverage of sources and the period covered. The degree of coverage is crucial for reviewing. Cooper distinguishes between an exhaustive review, an exhaustive review describing only a sample, a representative review typifying larger groups of articles and a pivotal review illustrating central articles [6]. In keeping with this paper's objective, the state of the art of success measurement in information visualization was established by means of an exhaustive review. The period covered can be restricted as well: a period can be selective, exhaustive or span an interval. The spread of information visualization as an area of research began with developments at the Xerox Palo Alto Research Center in the early 1990s and the first information visualization conferences in 1995. IS success measurement has been pushed since 1992, when DeLone and McLean published their IS success measurement model. Thus, an interval between 1992 and 2009 was chosen to cover a wide range of relevant publications.

Literature Search
A systematic selection of appropriate sources was carried out by means of IS journal rankings. Multiple ranking lists [1], [13], [28], [17] and [19] were consolidated to identify top IS journals. Thus, the identified top journals do not represent the perception of a single researcher or a single ranking method. Journals were selected if they were listed in three out of the five ranking lists. In total, 13 top IS journals were identified. Several specific information visualization journals were added to obtain more relevant articles, despite their lower quality compared to the top journals [28].
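The selection rule above, a journal qualifies if it appears in at least three of the five consolidated ranking lists, amounts to a simple membership count. The sketch below illustrates it; the journal abbreviations and list contents are illustrative placeholders, not the rankings actually used in the review:

```python
# Illustrative sketch of the journal selection rule: a journal is
# selected if it appears in at least 3 of the 5 ranking lists.
# The abbreviations below are placeholders, not the actual data.
ranking_lists = [
    {"MISQ", "ISR", "JMIS", "CACM"},
    {"MISQ", "ISR", "EJIS", "JMIS"},
    {"MISQ", "JMIS", "ISJ"},
    {"ISR", "MISQ", "JMIS"},
    {"MISQ", "ISR", "CACM"},
]

def select_top_journals(rankings, threshold=3):
    """Return journals appearing in at least `threshold` ranking lists."""
    counts = {}
    for ranking in rankings:
        for journal in ranking:
            counts[journal] = counts.get(journal, 0) + 1
    return {j for j, n in counts.items() if n >= threshold}

top_journals = select_top_journals(ranking_lists)
```

With the placeholder data, only the journals listed in three or more sets survive the cut; consolidating several lists this way is what keeps the selection independent of any single ranking method.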

The selection of conference papers was rejected because of their restricted length, and thus relatively unspecific statements, and a lower quality than journal articles [2]. Books were rejected as well because of partially lacking review processes. In a next step, appropriate key phrases were generated by combining keywords for information visualization, e.g. "map visualization" or "tree visualization", with keywords for measurement, e.g. "evaluation", "benefit" or "acceptance". These key phrases were used for literature searches of electronic databases and journal websites (see Table 1). The search was conducted in three steps. Initially, articles were retrieved by searching in titles, abstracts, denoted article keywords and partially in full text. This first step yielded 343 "initial hits". In a second step, the abstracts of these initial hits were analyzed in depth with respect to their relevance, that is, consistency with the defined time period and the review objectives: use of information visualization and its success measurement. Finally, a rough full-text analysis of the 142 "abstract hits" was conducted to select relevant literature for inclusion in the review. To ensure a certain quality, articles were rejected if they:
• address no empirical investigation, e.g. only a meta-analysis, literature review or analytical evaluation [25],
• address scientific visualization of physical data, e.g. ozone concentration in the atmosphere, instead of information visualization of non-physical data, e.g. financial and business data [3],
• do not describe a research design,
• do not define a success construct or exogenous and endogenous variables for measurement,
• are duplicates [21], or
• use data from another source (second hand).
The evaluation phases (steps 2 and 3) limit the amount of literature to relevant articles. The resulting list of "relevant hits" contains 30 articles, which became the basis for the literature review.
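The key-phrase construction described above, crossing visualization keywords with measurement keywords, amounts to a Cartesian product of the two term lists. A minimal sketch using the example terms from the text (the lists are only partial examples, and the AND syntax is a hypothetical query format):

```python
from itertools import product

# Combine visualization keywords with measurement keywords into
# search key phrases (Cartesian product), as described in the text.
# Only the example terms from the paper are listed here.
vis_terms = ["map visualization", "tree visualization"]
measure_terms = ["evaluation", "benefit", "acceptance"]

key_phrases = [f'"{v}" AND "{m}"' for v, m in product(vis_terms, measure_terms)]
```

The two example lists yield six key phrases; the full term lists used in the review would produce correspondingly more combinations, each of which was then run against the databases in the three-step search.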

Literature Analysis and Synthesis
The identified relevant articles were considered for an in-depth analysis and synthesis. For evident results, the analysis and synthesis process has to be conducted systematically. A framework was developed to categorize the relevant articles and to structure the analysis process. Theoretical foundations for IS success measurement were determined in order to analyze their use for information visualization evaluation. Therefore, common IS success reference models and frameworks were considered [21], e.g. the updated D&M model, TAM, TTF, the diffusion of innovations theory, End User Computing Satisfaction (EUCS) by Doll and Torkzadeh [11] and SERVQUAL by Parasuraman et al. [20].

Framework for Literature Analysis
The cognitive fit model, which emerged from information visualization research, was added. Articles which referenced another theory were categorized as "other"; articles which referenced no theory were categorized as "n/a" (not applicable). Endogenous and exogenous variables were determined to analyze the underlying success constructs in depth. That is, it is crucial to know and analyze potential factors which can influence success, and to clearly define success. Endogenous variables were deduced from the theoretical success models.
Exogenous variables describe influence factors for successful visualization and were deduced from IS success models and information visualization theory [22]. Articles which used no designated variables were categorized as "other". The two categories visualization type and objective of visualization were determined to present the object of analysis and show the focus of the investigated studies [25]. Both categories permit an analysis of the reliability and the internal and external validity of the empirical studies.
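The coding framework described above can be pictured as a record per article, with one field per classification dimension. The following is a hypothetical sketch of such a coding record, not the authors' actual instrument; the field names and the theory labels are assumptions derived from the dimensions named in the text:

```python
from dataclasses import dataclass, field

# Hypothetical coding record for the literature analysis framework.
# The theory categories mirror the reference models named in the text,
# plus "other" and "n/a" for articles with another or no theory.
THEORIES = {"D&M", "TAM", "TTF", "Diffusion", "EUCS", "SERVQUAL",
            "Cognitive Fit", "other", "n/a"}

@dataclass
class ArticleCoding:
    title: str
    theory: str                                     # reference model used
    endogenous: list = field(default_factory=list)  # success constructs
    exogenous: list = field(default_factory=list)   # influence factors
    visualization_type: str = ""                    # object of analysis
    objective: str = ""                             # focus of the study

    def __post_init__(self):
        # Reject codes outside the fixed category set to keep coding consistent.
        if self.theory not in THEORIES:
            raise ValueError(f"unknown theory category: {self.theory}")
```

Coding each of the 30 relevant articles into such a record is what makes the subsequent synthesis (counting theories used, tallying influence factors) a mechanical aggregation rather than an ad hoc reading.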

RESEARCH AGENDA
The results (see figures 2a and 2b) show that the use of theoretically consolidated success models is weak.
Only two studies used cognitive fit theory, and one the task-technology fit model, as a foundation. The use of theoretical models can improve the analysis of success and its influence factors. These models permit a structured and holistic analysis of influence factors and causal relationships. They depict influence factors and causal relationships that have already been investigated and validated. Due to their popularity, many studies exist presenting measures and approaches for evaluation. Measures from these common success models can be adapted for information visualization research. Thus, validated success models support a rigorous evaluation process. More research is needed that applies common success models, adapts IS models to the domain of information visualization, or develops new models.
A more in-depth analysis of influence factors reveals that most studies only analyze influences by varying visualization types (28) and, additionally, task complexities (11). Exogenous variables like user skills, task type and system quality, and other constructs contained in success models like performance expectation, voluntariness and subjective norm, have to be investigated for a deeper understanding of success [22]. Moreover, instead of artificial laboratory settings, research has to be done in real-life settings with real datasets and realistic tasks to ensure generalizability.

CONCLUSIONS
Information visualization research has been popular for nearly two decades, but a broad adoption of systems is missing. We presented the state of the art in measuring information visualization success by means of a scientific method and pinpointed future research. Limitations of the structured literature review can be seen in the definition of the review scope, the selection of relevant journals and key phrases, the rejection of conference articles and the omitted backward and forward search in article references. This could have excluded potentially relevant articles. The literature review was conducted by one person. Thus, subjective interpretations could result, but these can be mitigated by intercoder reliability tests. Success measurement is important for researchers and for practitioners who want to see a return on their investments. This literature review presented starting points for understanding the lacking adoption of information visualization systems. Future information visualization success research has to concentrate on a holistic, or rather multidimensional, and in-depth analysis of success. Concerning the former, different stakeholder perspectives have to be analyzed. Concerning the latter, there has to be more investigation of various influence factors and causal relationships. This can be ensured by means of theoretical IS success models. From a practical perspective, costs have to be analyzed as an influence factor too. In-depth analysis can be ensured by analyzing real human behavior in realistic experiment settings and by using qualitative research design strategies, e.g. action research, case studies or ethnographic studies.
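Intercoder reliability, mentioned above as a remedy for single-coder subjectivity, is commonly quantified with Cohen's kappa, which corrects raw agreement between two coders for agreement expected by chance. A minimal sketch (the category labels in the usage note are illustrative):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' categorical judgments on the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each coder's marginal frequencies.
    """
    assert len(codes_a) == len(codes_b) and codes_a, "need paired codings"
    n = len(codes_a)
    # Proportion of items on which both coders agree.
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from the coders' marginal category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)
```

For example, two coders assigning theory categories to the same four articles as ["TAM", "TTF", "n/a", "TAM"] and ["TAM", "TTF", "TAM", "TAM"] agree on 3 of 4 items, but the kappa value is lower because both assign "TAM" frequently by chance. (The sketch assumes imperfect chance agreement; a perfect-by-chance marginal distribution would divide by zero.)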

ACKNOWLEDGMENTS
Thanks to colleagues and reviewers for many fruitful discussions and valuable comments.
Nadine Amende, University of Passau, Innstrasse 43, 94032 Passau, Nadine.Amende@uni-passau.de, 0049-851-5092593

Table 1: Considered journals for literature review

The searches were conducted in the electronic databases ScienceDirect, ProQuest, Emerald, Springer and Wiley Science, or on journal websites. A precise key phrase excludes articles that are not necessarily relevant. Table 1 displays the applied search functionalities together with the databases and websites used.

Table 2: Results of literature search

Table 2 displays the results of the literature search.

Table 3: Literature analysis framework