Leveraging Visualization to Improve Sensemaking within a Computational RPD Model: A Military Perspective

Motivation – To explore the concept that knowledge visualization can improve sensemaking within the scope of an NDM model; specifically, the agent-based R-CAST system derived from Klein's Recognition-Primed Decision (RPD) model. Research approach – To evaluate the effectiveness of the visual sensemaking extension in gaining better situation awareness and team performance, we are taking an experimental approach. Research limitations/Implications – This is an ongoing effort, with results of the planned experiment expected in early 2009. Originality/Value – The knowledge visualization concept offered in this effort is based on a hybrid dimensionality reduction technique and the aggregated similarity measures of an RPD experience space. The resulting knowledge visualization will likely improve the perception, comprehension and projection associated with an RPD experience space. Take away message – Current processes that support a decision maker's sensemaking and situational awareness in solving problems are inadequate. Extending RPD with knowledge visualization may significantly improve the decision making process.


INTRODUCTION
Today's military environment epitomizes the challenges faced in naturalistic decision making (NDM), requiring commanders and staffs to routinely operate in dynamic environments characterized by uncertainty, time stress and high-stakes outcomes. The traditional attrition-based warfare of the past has given way to full-spectrum operations, requiring Soldiers to understand and operate within the continuum of stability operations and combat (US Army FM 3.0, 2001; Chiarelli & Michaelis, 2005). Historically this process has been supported by the time-honored military decision making process (MDMP). The MDMP is a deliberate, analytical seven-step process with over 100 sub-steps that has proven successful in conventional settings (Wade, 1999). The environments faced by today's military are anything but conventional. Our information-rich network-centric operations (NCO), combined with the asymmetric nature of our adversaries, are straining the old processes. What are required are processes and computational models that support the decision maker's experience and capitalize on sensemaking and situational awareness to solve problems. Toward this end, the NDM paradigm has shown great promise, with Klein's Recognition-Primed Decision (RPD) model gaining wide acceptance in the military community (Warrick, McIIwaine, & Hutton, 2003; Klein, 1998).
The RPD model relies on intuition and experience, along with mental simulation, to recognize the current situation and quickly formulate a satisficing solution based on past experience (Klein, 1998). At a high level of abstraction, the model has two phases: recognition and evaluation. During the recognition phase the decision maker attempts to match (recognize) the current situational environment to previous experiences. Utilizing a strategy called "feature matching", the decision maker uses the perceived cues in the environment to develop situation awareness, seeking further information as necessary. In the event that the situation is not typical, a secondary strategy used during the recognition phase is "story building". With story building the decision maker explores potential hypotheses and attempts to construct an explanation of the current situation and link observed information (Fan et al., 2006). In keeping with the NDM satisficing paradigm, actions are evaluated one at a time in the evaluation phase. The decision maker mentally imagines (mentally simulates) how the action will achieve the goal, while monitoring the expectancies. If the decision maker decides that the action will work, the action is accepted and implemented. If, during the mental simulation, the decision maker decides that the action will not work, a context-specific modification is made to the action; otherwise, the experience is rejected and the process is started over. Mental simulation provides the context-specific situational understanding the decision maker must have to adapt to dynamic environments.
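The recognize-evaluate-adapt loop described above can be expressed as a schematic sketch. This is not R-CAST's actual implementation; the `Experience` fields, the cue-overlap matching rule, and the 0.6 typicality threshold are illustrative assumptions standing in for the model's richer recognition machinery.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    """A codified past experience: the cues that characterize it, the
    course of action it suggests, and the expectancies to monitor."""
    cues: set
    action: str
    expectancies: list

def feature_match(situation_cues, experience_base, threshold=0.6):
    """Recognition phase: return the past experience whose cues best
    overlap the perceived cues, or None if nothing is typical enough
    (at which point story building would take over)."""
    def overlap(exp):
        return len(exp.cues & situation_cues) / len(exp.cues)
    best = max(experience_base, key=overlap, default=None)
    return best if best and overlap(best) >= threshold else None

def rpd_decide(situation_cues, experience_base, simulate):
    """Satisficing loop: evaluate one candidate action at a time via
    mental simulation; accept it, adapt it, or reject the experience
    and attempt recognition again."""
    exp = feature_match(situation_cues, experience_base)
    while exp is not None:
        ok, modification = simulate(exp)     # mental simulation of the action
        if ok:
            return exp.action                # action judged workable: implement
        if modification:
            return modification              # context-specific modification works
        # experience rejected: remove it and start recognition over
        experience_base = [e for e in experience_base if e is not exp]
        exp = feature_match(situation_cues, experience_base)
    return None  # no typical match: fall back to deliberate analysis
```

The key RPD property the sketch preserves is that candidate actions are considered serially, not compared side by side as in classical decision analysis.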
For complex military operations, challenges arise during the recognition phase when the current situation is difficult to distinguish or not fully developed. This research effort seeks to leverage the decision maker's knowledge visualization capability, augmenting and capitalizing on their sensemaking aptitude. Integrating knowledge visualization into the computational RPD process enhances the decision maker's understanding of the underlying decision space, supporting, not supplanting, the collaboration between the human operator and the codified decision model being exercised. The knowledge visualization is based on the aggregated similarity measures of the experience space and allows the decision maker to improve their sensemaking capabilities by adjusting contextual constraints, directly prioritizing missing information, and improving situational awareness with a holistic view of the decision space.

APPROACH
To accomplish this research we are extending R-CAST, an agent-based computational RPD model developed at Pennsylvania State University (PSU) (Fan & Yen, 2007), with the capability to interactively visualize the experience space through dimensionality reduction. While there are a number of dimensionality reduction visualization techniques available, the approach utilized for this effort is a hybrid multidimensional scaling (MDS) method. The intrinsic power of MDS is that it allows the dimensionality reduction of complex n-space to a human-understandable 2- or 3-dimensional space. To overcome the mixed scales of measure (quantitative and qualitative) associated with our experience space, we are using an adaptation of Gower's similarity coefficient calculation. To evaluate the effectiveness of the visual sensemaking extension to R-CAST, an experiment is planned at PSU for early 2009.

Sensemaking
The importance of sensemaking and situational awareness to the RPD model cannot be over-emphasized. At the heart of the RPD process is the ability to "make sense" of the situation to determine what course of action should be taken; this motivates the need to improve these activities through techniques such as knowledge visualization.
While situational awareness and sensemaking have occasionally been used interchangeably, they differ depending on one's perspective. Growing out of cognitive psychology, the most often cited work on situational awareness (SA) is that by Mica Endsley (Endsley, 1995). Endsley suggests SA is represented at three levels: perception (SA Level 1), comprehension (SA Level 2) and projection (SA Level 3). In simple terms, perception involves the identification of the critical elements within the decision maker's environment. Comprehension is attained when elements within the environment are combined to form an understanding of the current situation with respect to the decision maker's goals. The highest level of SA, projection, involves prognosticating the future state of the critical elements and their association with the decision maker's goals. Interestingly, Endsley's definition of SA closely matches what many military decision makers think of as sensemaking.
Klein and others have suggested that situational awareness is more a state derived out of sensemaking and that sensemaking is actually a larger process (Klein, Moon & Hoffman, 2006a; Klein, Moon & Hoffman, 2006b; Hutton, Klein & Wiggins, 2008). Hutton defines sensemaking as "the deliberate effort to understand events," which "is typically triggered by unexpected changes or surprises that make a decision maker doubt their prior understanding. Sensemaking is the active process of building, refining, questioning and recovering situation awareness" (Hutton et al., 2008). Paraphrasing Klein, sensemaking fulfills a number of functions: satisfying a need to comprehend, improving contextual plausibility, clarifying the past, anticipating difficulties, guiding information exploration, and providing a common ground for shared sensemaking.
From the military's perspective, sensemaking is a critical tenet of NCO and has been the subject of much review (Leedom, 2001; Leedom, 2004; Garstka & Alberts, 2004). The central hypothesis of NCO is that, operating within the context of robust and networked physical, information, cognitive and social domains, the warfighter of the future will be empowered to make better decisions. Here the physical domain is where effects take place and supporting infrastructure exists; the information domain is where information is created, manipulated and shared; the cognitive domain is where sensemaking occurs and decisions are made; and the social domain is where entities interact, sharing information and collaborating on decisions. In this context, sensemaking is defined by three interrelated activities: forming an awareness of key elements relevant to the situation, forming an understanding of the contextual environment, and making decisions (Garstka & Alberts, 2004).

Knowledge Visualization
The goal of introducing knowledge visualization to the RPD process is to enhance the decision maker's understanding of the underlying decision space, supporting, not supplanting, the collaboration between the human operator and the codified decision model being exercised. Visualization takes advantage of the human capacity for spatial reasoning and the development of mental or concept maps of complex relationships (Börner, Chen & Boyack, 2003; Heer & Landay, 2005). It is through this visual construct that the human is able to project relationships and associations between and among the visualized objects that they would not be able to otherwise.
On one level, visualization can be thought of as a dimension-reduction activity designed to summarize complexity and bring human visual perception into the decision making process (Card, Mackinlay & Shneiderman, 1999; Shneiderman, 2001). With that understanding, the complex environment of military decision making begs for the incorporation of visualization assistance. Complementing the computational RPD process with visualization affords the decision maker two critical features: first, the ability to improve sensemaking when recognition is slow to converge, and second, the ability to explore and exploit the experience space. Expert system developers realized early on that results without explanation were not sufficient; to instill trust in the process, the system had to allow interactive interrogation.
While there are a number of potential dimensionality reduction visualization techniques available, the approach invoked for this effort is a hybrid multidimensional scaling (MDS) method. Differing from other forms of multivariate statistics, specifically principal component analysis, MDS does not constrain the data to be normally distributed. Originating in the fields of mathematical psychology and the social sciences, MDS is a data analysis approach used to visually interrogate the similarity or dissimilarity between pair-wise "distances" among a given set of objects (Torgerson, 1953; Richardson, 1938; Young, 1985; Cox & Cox, 2001). The values of the distances, sometimes called proximity measures or similarity measures, can be obtained either as perceived subjective measures or, as in our R-CAST extension, calculated objectively from the pair-wise comparison of the given set of experience and expectation objects.
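To make the dimensionality reduction concrete, the classical (Torgerson) form of MDS cited above can be sketched in a few lines. This is a minimal sketch of the classical variant, not the hybrid method used in this effort: it double-centers the squared distance matrix and takes the top eigenvectors as low-dimensional coordinates.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Torgerson's classical MDS: embed n objects in `dims` dimensions so
    that pairwise Euclidean distances approximate the dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:dims]  # keep the largest eigenvalues
    scale = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * scale          # n x dims coordinates
```

When the input distances are genuinely Euclidean, this embedding recovers them exactly (up to rotation and reflection); for the non-metric similarities of an experience space, iterative stress-minimizing MDS variants are typically used instead.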
For calculated similarities, ideally, all of the defining attributes should be of the same data type (Heady, 2007). Unfortunately, most real-world problems do not lend themselves to this constraint. One of the first to confront the combination of quantitative and qualitative attributes (mixed scales of measure) was Gower (Gower, 1971). Given an array of objects with k attributes, the global similarity value S_ij between two objects is defined as the summation of the individual attribute similarities S_ijk, each multiplied by a possible weighting factor W_k, divided by the summation across all weights:

S_ij = (Σ_k W_k S_ijk) / (Σ_k W_k)

Here, S_ijk corresponds to the measure of local similarity assigned to the object pair (X_i, X_j), restricted to attribute k, and W_k is the weight assigned to attribute k. Gower's metric allows for the weighting of individual attributes and the possibility of missing data: for situations where the individual similarity cannot be computed for an attribute k, setting W_k = 0 removes it from both the numerator and the denominator. For a quantitative attribute, the individual similarity is

S_ijk = 1 − |X_ik − X_jk| / r(k)

where X_ik and X_jk are the k-th attribute values for objects X_i and X_j respectively, and r(k) is the range of that quantitative attribute; for a qualitative attribute, S_ijk = 1 if the two values match and 0 otherwise.
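Gower's coefficient as described above is straightforward to compute for a single object pair. The sketch below follows the definitions in the text; the `kinds`/`ranges` argument layout is our own illustrative convention, not taken from Gower's paper.

```python
import numpy as np

def gower_similarity(x, y, kinds, ranges, weights=None):
    """Gower's global similarity S_ij for one object pair with mixed
    attribute types. kinds[k] is 'num' or 'cat'; ranges[k] is r(k) for
    numeric attributes (ignored for categorical ones). A weight of 0 or
    a missing value drops attribute k from numerator and denominator."""
    k = len(kinds)
    w = np.ones(k) if weights is None else np.asarray(weights, dtype=float)
    num, den = 0.0, 0.0
    for i in range(k):
        if x[i] is None or y[i] is None or w[i] == 0:
            continue  # missing data: treated as W_k = 0
        if kinds[i] == 'num':
            s = 1.0 - abs(x[i] - y[i]) / ranges[i]   # quantitative S_ijk
        else:
            s = 1.0 if x[i] == y[i] else 0.0         # qualitative S_ijk
        num += w[i] * s
        den += w[i]
    return num / den if den else 0.0
```

Computing this for every pair of experiences yields the similarity matrix that a dimensionality reduction routine can then project into two or three dimensions.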

Illustrative Example
An illustrative example of this visualization technique follows. The goal of this example is to assist an analyst in identifying high-valued individuals (HVI). Although this example represents a pedagogical subset of the original, the features vector and associated output give an overview of how the process operates. The features vector, a collection of generic affiliations and simple social network calculations, can be seen atop Table 1. Note that this feature set is a collection of mixed data types (nominal, binary, and interval), with the person of interest highlighted. After processing the feature set through a Gower proximity routine, the results of a PERMAP MDS visualization are displayed in Figure 1. Quick interpretation of the results reveals that our person of interest, highlighted with a red circle, is most closely related to the SH group, which happens to be a sub-group associated with hostage taking. The takeaway from this example is that by adding visualization to the decision making process we are able to bring the analyst's visual perception into the sensemaking process. It is this same type of visual sensemaking we are adding to the R-CAST recognition / expectation monitoring routines, with the added capability to prioritize missing information, modify cues and expand "what-if" capabilities.

R-CAST
One of the computational RPD models available is the R-CAST system. R-CAST is an agent-based, RPD-enhanced framework that extends the CAST (Collaborative Agents for Simulating Teamwork) agent architecture. The R-CAST framework enables agents to collaborate with other members of the team (software or human) in sharing information relevant to their decision making based on the RPD paradigm. Leveraging the concept of shared mental models in team cognition, R-CAST proactively anticipates information needs and collaborates in seeking and monitoring relevant information effectively, allowing improved human-agent and agent-agent collaboration (Yen et al., 2005; Hanratty, Dumer, Yen & Fan, 2003; Fan, Sun, McNeese & Yen, 2005).

Overview
Major components of R-CAST are shown in Figure 2. To capture the recognition phase of the RPD model, the Decision Making module uses the information in the knowledge base, past experiences from the experience base, and the current situation recognition to determine whether a past experience matches the current situation. The evolution of decisions can involve inter-agent, intra-agent and human-agent activities, which are coordinated by the Teamwork Manager and the Taskwork Manager (Fan et al., 2006). The Expectancy Monitoring module monitors the current situational context for anticipated changes and informs the Decision Making module accordingly. Experiences that are adapted to a successful outcome are processed into the system as new experiences through the Experience Adaptation module. Experiences are codified as cues, goals, courses of action and expectancies. In R-CAST the cues, goals and expectancies are represented as predicates (Fan et al., 2005).
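A predicate-based experience of the kind described above might be codified as follows. This is a hypothetical sketch for illustration only; the predicate names, the dictionary layout, and the `expectancy_violated` helper are our own assumptions, and R-CAST's actual representation is richer.

```python
# Hypothetical codification of one experience: cues, goals, courses of
# action, and expectancies, with cues/goals/expectancies as predicates
# of the form (predicate_name, argument..., expected_value).
experience = {
    "cues":         [("ThreatLevel", "sector4", "high"),
                     ("CommsStatus", "sector4", "degraded")],
    "goals":        [("Secure", "sector4")],
    "actions":      ["dispatch_qrf(sector4)"],
    "expectancies": [("ThreatLevel", "sector4", "low")],
}

def expectancy_violated(world, expectancies):
    """Expectancy monitoring: return the expected predicates whose value
    in the current world state no longer matches. Predicates with no
    known value (None) are not yet counted as violations."""
    return [e for e in expectancies
            if world.get(e[:-1]) not in (None, e[-1])]
```

In this framing, a non-empty result from the monitoring step is what would prompt the Decision Making module to revisit recognition.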

Visualization Extension to R-CAST
The goal in extending R-CAST with visualization of the decision space is to improve the decision maker's sensemaking ability (see Figure 3). The ultimate objective is to satisfy some of the capabilities that sensemaking aims to provide, including: satisfying a need to comprehend; helping improve contextual plausibility and explain anomalies; clarifying the past; anticipating difficulties and concerns so as to marshal the correct resources; guiding information exploration and exploitation; and providing a common ground for shared sensemaking (Klein, Moon & Hoffman, 2006a; Wirek & Sutcliffe, 2005; Leedom, 2004). The following description details how the visualization sensemaking extensions are being developed.
Matching a human's cognitive decision-making cycle with an agent's decision-making cycle, or at least establishing a mutual awareness of what the agent is doing, is essential to support effective mixed-initiative behaviors in a decision-support system. We are developing a User-Interface (UI) Sensemaking Channel module such that an R-CAST agent can reveal its processing status to a human user, and allow the human user to initiate a wide array of interactions with the agent.
To reveal the situation dynamics on the UI channel, we are employing two levels of visualization: first, a high-level similarity-based view depicting the relation of the current state to past codified states (Figure 3a) and second, a layered structure for organizing situational information and knowledge. One benefit of this two-level approach is that it allows differing degrees of sensemaking. The high-level visualization allows for quick interpretation and exploitation, while the layered structure visualization permits a more detailed analysis of the situation, revealing the semantic relationships among state variables at different echelons.
To support human-agent collaboration, the visualizations will be interactive in the sense that (a) the views will refresh as the situation evolves; (b) a list of predicates with known values for the predicate arguments pops up when a user hovers the mouse pointer over a node; (c) clicking on a node allows a user to explore the truth value of a predicate with certain fixed argument values, and when this depends on missing information at the lower level, the corresponding nodes are highlighted; (d) clicking on a piece of missing information allows a user to assert a truth value based on his/her reasoning; (e) clicking on a piece of known information allows a user to share the truth value with selected team members; (f) clicking on a rule allows a user to adjust the encoded information relation; and (g) clicking on a layer allows a user to add new or remove existing rules.

In order to evaluate the effectiveness of the sensemaking extension in assisting human decision makers to gain better situation awareness and team performance, we plan to take an experimental approach. Sixty command and control (C2) teams will be recruited from the Pennsylvania State University's Reserve Officers' Training Corps (ROTC) program. Each team will have an S2 Intelligence Officer role (played by a human operator and an R-CAST agent) and an S3 Operations Officer role (played by an R-CAST agent). The S2 human operator needs to work together with the S2 agent to build situation awareness and recommend targets for the S3 to attack. For thirty of the teams, the S2 agent will have no display constructs shared with the human operator, while for the other thirty teams, the S2 agent will be equipped with such displays to help the human operators build better situation awareness and encourage them to actively revise/correct the agent's situational information. At planned intervals during each scenario, participants will complete SAGAT (Situational Awareness Global Assessment Technique) queries to record their measures of situation awareness.

CONCLUSION
NDM, and specifically RPD, have shown great promise in revolutionizing the way the military will develop future decision support systems. Challenges exist to ensure that individual sensemaking is not ignored, allowing the decision maker to adjust contextual constraints, directly prioritize missing information, and improve situational awareness with a holistic view of the experience space. From the military's perspective, sensemaking is a critical tenet of network-centric operations. Early prototypes of the visually-extended R-CAST system have shown the potential to improve the interrelated sensemaking activities of forming awareness, developing understanding and improving overall decisions.
In conjunction with the US Army's Advanced Decision Architectures (ADA) Collaborative Technology Alliance (CTA), the R-CAST visualization experiment is scheduled to occur in the first quarter of 2009 at Pennsylvania State University. Results of the experiment will be presented as part of the NDM poster session.

Figure 2: R-CAST Architecture