Technosocial Predictive Analytics in Support of Naturalistic Decision Making

Motivation – Anticipate outcomes through predictive and proactive reasoning across domains as diverse as energy, security, the environment, health and finance in order to maximize opportunities and counter adversities.
Research approach – New methods for anticipatory critical thinking have been developed that implement a multi-perspective approach to predictive modeling in support of Naturalistic Decision Making.
Research limitations/Implications – This is ongoing work. Work on assessing the strengths and limitations of the approach in terms of utility and usability has only recently started (Scholtz & Whiting, 2009).
Originality/Value – Integration of technosocial predictive modeling with knowledge management and analytic gaming to support collaborative decision making.
Take away message – The emerging approach is uniquely multidisciplinary in two main regards. First, it strives to create decision advantage through the integration of human and physical models. Second, it leverages knowledge management, visual analytics and gaming to facilitate the achievement of interoperable knowledge inputs and enhance human cognitive processes.


INTRODUCTION
The ability to estimate the occurrence of future events on the basis of expertise, observation and intuition is paramount to the human decision-making process. From a biophysical perspective, there is strong evidence that the neocortex provides a basic framework for memory and prediction in which human intelligence emerges as a process of pattern storage, recognition and projection rooted in our experience of the world and driven by perception and creativity (Hawkins, 2004). These findings corroborate the naturalistic view of human decision making as a situation-action matching process which is context-bound and driven by experiential knowledge and intuition (Lipshitz, Klein, Orasanu & Salas, 2001; Gigerenzer, 2007). The goal of this paper is to describe an integrated set of capabilities that address the insights from theories of human intelligence and cognitive processing to support Naturalistic Decision Making (NDM).
Analysts and policymakers are constantly challenged to make assessments about plausible outcomes across domains as diverse as energy, security, the environment, health and finance in order to maximize opportunities and counter adversities. Despite humans' natural disposition towards prediction, our ability to forecast, analyze and respond to plausible futures remains one of the greatest intelligence challenges due to inherent limitations on human cognition. For example, Heuer (1999) argues convincingly that a thorough analysis of competing hypotheses and supporting evidence can help analysts avoid premature commitment to a single expected outcome, with consequent neglect of relevant evidence relative to plausible alternative futures. Klein (1998:158-9) makes a related point in identifying metacognition as a process that enables experts to think more strategically for increased success in decision making. However, reasoning tasks such as competing-hypothesis analysis and metacognition are hard for humans to perform effectively and efficiently. Because of limitations on human memory and attentional focus (Miller, 1956; Heuer, 1999), most people are unable to retain several hypotheses/strategies and relevant supporting evidence in working memory, and are too easily influenced by biased judgment (Kahneman & Tversky, 1973; Janis, 1982; Heuer, 1999). Moreover, supporting evidence for competing scenario analysis needs to be distilled from very large document repositories. Without the help of machine-aided information extraction and content analysis, such a task would require an extravagant expenditure of human resources and would thus not result in timely action.
Qualities such as the ability to focus on what is perceived to be most important and the capacity to make quick decisions by insight and intuition make human judgment uniquely effective (Gigerenzer, 2007; Gladwell, 2005). However, the same qualities can also be responsible for fallacious reasoning when judgment is affected by lack of knowledge/expertise (Klein, 1998), "groupthink" (Janis, 1982; Surowiecki, 2004), and increased confidence in extreme judgments and highly correlated observables (Kahneman & Tversky, 1973). If we are to help analysts and policymakers provide better proactive analysis and response, processes and capabilities must be made available that enable NDM while countering such adverse influences on human judgment. We propose to achieve this objective through the development of a Technosocial Predictive Analytics environment that
• Relies on knowledge reach-back capabilities to inform analysis and response during decision making
• Supplements the expertise of the analyst and policymaker with simulated scenarios generated by integrated computational models
• Engages analysts and policymakers within a gaming environment that stimulates creative critical reasoning through visual analysis and collaborative/competitive work.
We begin with a review of background technologies relevant to NDM and an outline of the distinctive aspects of our approach. We proceed by describing a technosocial predictive analytic framework that aims at supporting NDM through the integration of three main components: knowledge encapsulation, technosocial modeling and analytic gaming. We then describe a prototype system which implements this approach and exemplify the main aspects of its functionality with specific reference to the three components.

BACKGROUND
Significant advances have been made in the area of predictive modeling, with specific reference to the inclusion of social and behavioral factors, both in agent/equation-based approaches (Gilbert & Troitzsch, 2005) and in probabilistic evidentiary reasoning approaches (Sticha, Buede & Rees, 2005; Peterson, Sanfilippo, Baddeley & Franklin, 2008; Unwin & Fecht, 2009; Whitney, Brothers, Coles, Young, Wolf, Thompson, Niesen, Madsen & Henderson, 2009). For example, Chaturvedi, Dolk, Chaturvedi, Mulpuri, Lengacher, Mellema, Poddar, Foong & Armstrong (2005) present an agent-based approach to modeling insurgency in terms of grievances, level of resources, and capacity to mobilize using insights from resource mobilization theory (McCarthy & Zald, 2001). Sticha et al. (2005) describe an application of Bayesian networks that enables the analyst to perform anticipatory reasoning on a subject's decision-making process using as indicators personality factors derived from Leadership Trait Analysis (Hermann, 2003) and the Neuroticism-Extroversion-Openness Inventory (Costa & McCrae, 1985). The approaches emerging from these and related works present novel ways of addressing anticipatory reasoning using established techniques such as agent-based modeling and Bayesian networks. However, they rely primarily on computer-based modeling and simulation, and ultimately fail to integrate human judgment into the reasoning process in a meaningful way.
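To make the evidentiary-reasoning style of these approaches concrete, the following minimal sketch applies Bayes' rule to update belief about a subject's state given an observed indicator. The variables, states, and probabilities are hypothetical illustrations, not values from any cited model.

```python
# Minimal sketch of Bayesian evidentiary reasoning in the style of the
# approaches cited above. All probabilities here are illustrative
# assumptions, not parameters from the cited models.

def posterior(p_hypothesis, p_obs_given_h, p_obs_given_not_h):
    """P(hypothesis | observation) via Bayes' rule."""
    p_obs = (p_obs_given_h * p_hypothesis
             + p_obs_given_not_h * (1.0 - p_hypothesis))
    return p_obs_given_h * p_hypothesis / p_obs

# Prior belief that a subject is motivated to act, and likelihoods of an
# observed indicator under each hypothesis (hypothetical numbers).
post = posterior(p_hypothesis=0.2,
                 p_obs_given_h=0.7,
                 p_obs_given_not_h=0.1)
print(round(post, 3))  # 0.636
```

A full Bayesian network generalizes this single update to many interlinked variables with conditional probability tables.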
More recently, a new generation of approaches has emerged where modeling and simulation are coupled with role playing within a gaming environment to stimulate collaborative decision making (Kuit, Mayer & De Jong, 2005; Valkering, Offermans, Tàbara & Wallman, 2007; Barreteau & Abrami, 2008). For example, Valkering et al. (2007) present an agent-based modeling framework for water management which integrates a gaming environment. The modeling component provides a representation of agents and their interactions with reference to water usage and the resulting impact on society and the environment. Game players are represented as policy actors operating within a society dealing with issues of water management. The goal for each player is to "survive in a sustainable world".
Analytic gaming has already been used by analysts as a process, rather than a computational system, for helping understand complex issues through the exploration of strategies and policies (Schwabe, 1994), and "remains a uniquely distinctive tool for assessing the interplay of competing strategies" (Harris, 2005). The integration of gaming and modeling techniques holds the promise of an ideal partnership between human and artificial intelligence. The early attempts mentioned in the previous paragraph corroborate such a promise. Our approach shares with these early attempts the insight that gaming can provide an ideal environment in which human and automated agents can work together in an engaging and fruitful way. At the same time, we differ by enforcing a stronger tie to knowledge management processes and by allowing a higher degree of independence between modeling and gaming processes.

TECHNOSOCIAL PREDICTIVE ANALYTICS
Our primary goal is to create decision advantage in support of NDM through a process of analytical transformation that integrates psychosocial and physical models by leveraging insights from both the social and natural sciences. There is now increased awareness among subject-matter experts, analysts, and policymakers that a combined understanding of interacting physical and human factors is essential in estimating plausible futures for real-world scenarios. The relevance of such a multi-perspective approach to predictive analysis is pervasive across domains, as evidenced by the acceptance of assessments such as the following by an increasing number of scientists and policymakers.
• Social Movement Theory: An integrated understanding of the infrastructural, social and ideational context in which contentious social movements operate is an essential analytical step in framing the emergence of violent behavior (Wiktorowicz, 2004)
• Disaster Recovery: Anticipating how the public will react to official rescue and recovery directives during a major catastrophe, and how the public's reaction will interact with infrastructural and logistic factors, is key to maximizing the effectiveness of emergency-response operations (Court, Pittman, Alexopoulos, Goldsman, Kim, Loper, Pritchett & Haddock, 2004)
• Climate Change: A combined understanding of anthropogenic effects (e.g., chemical waste) and natural processes (e.g., solar variation) is needed to predict the impact of global warming (Gore, 2006)

The adoption of an integrated multidisciplinary modeling approach implies that effective decision making requires a team effort. This is simply because it is nearly impossible for a single individual to function as an unbiased expert in all relevant knowledge domains. Moreover, there is strong evidence that the aggregated judgment of large groups of diverse and independent individuals systematically rivals the opinion of an elite few (Surowiecki, 2004).
Our multidisciplinary focus is further developed through the combination of integrated modeling with knowledge management, visual analytics and gaming. Knowledge management facilitates the achievement of interoperable knowledge inputs through the encapsulation of evidence from information streams and insights from subject domain experts in a time- and cost-effective fashion. Visual analytics reduces cognitive load on users by increasing the transparency of inference from data analyses and from computational models and simulations. Gaming enables the harnessing of social intelligence to improve decision making.
Consequently, our Technosocial Predictive Analytics approach consists of three main components: (1) knowledge encapsulation, comprising the acquisition, vetting and dissemination of expert knowledge and evidence; (2) technosocial modeling, focusing on the integration of human and physical models; and (3) analytic gaming, exploiting visual interactivity and collaborative workflows to stimulate creative and competitive thinking for decision-making.
These three components are integrated within a service-oriented architecture. A graphic rendition of the emerging platform is shown in Figure 1 with specific reference to the land management use case which will be discussed in the next section of this paper. The general workflow of this platform comprises the following processes. First, the outputs of the models are transformed into relational data and loaded into a knowledge base. This transformation makes it possible to reduce the output of any model to a set of alternative scenarios with associated user-configurable parameters. Model output transformed in such a way is utilized by the gaming component via knowledge base queries to enable a group of analysts/policymakers to work collaboratively and/or competitively. The endgame is to determine the most desirable (e.g., sustainable) outcome through a stepwise process of negotiation. At any time, the knowledge encapsulation component can be brought in to provide documentation and evidence on aspects of the scenarios under scrutiny and shed light on the inferential processes underlying such scenarios.
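The transformation of model output into queryable relational scenarios can be sketched as follows. The table schema, scenario names, and parameter values are assumptions for illustration only, not the platform's actual knowledge base design.

```python
import sqlite3

# Sketch of the workflow described above: model output is reduced to
# relational scenario rows, loaded into a knowledge base, and retrieved
# by the gaming component via a query. Schema and values are
# hypothetical.

kb = sqlite3.connect(":memory:")
kb.execute("""CREATE TABLE scenario_params (
                scenario TEXT, parameter TEXT, value REAL)""")

# Step 1: transform (hypothetical) model output into relational data.
model_output = [("baseline", "biofuel_share", 0.10),
                ("baseline", "food_price_index", 1.00),
                ("high_biofuel", "biofuel_share", 0.35),
                ("high_biofuel", "food_price_index", 1.40)]
kb.executemany("INSERT INTO scenario_params VALUES (?, ?, ?)", model_output)

# Step 2: the gaming component retrieves one scenario's user-configurable
# parameters by querying the knowledge base.
rows = kb.execute("""SELECT parameter, value FROM scenario_params
                     WHERE scenario = ?""", ("high_biofuel",)).fetchall()
print(dict(rows))
```

Reducing every model to this common scenario/parameter form is what lets the gaming component remain independent of any particular model's internals.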

Knowledge Encapsulation
The Knowledge Encapsulation Framework (KEF) is a suite of tools that enables the acquisition of knowledge inputs to support the modeling task and provides knowledge reach-back during decision making (Cowell, Gregory, Marshall & McGrath, 2009). KEF can be used to capture evidence from trusted material such as journal articles and government reports, and from social media such as blogs, wikis and forums. It enables discussions surrounding domain-specific topics and provides automatically generated semantic annotations for improved content investigation. The current KEF implementation is presented within a semantic wiki environment, providing a simple but powerful collaborative space for team members to harvest and annotate content, align evidence with the modeling framework, and discuss/review annotations and alignments. KEF can be configured to harvest information from individual sites, use search engines as proxies, or collect material from social media sites. Harvesting strategies include simple metadata extraction, topic identification, sentiment analysis and rhetorical analysis. All data in the KEF repository are automatically tagged with basic document metadata (source, author, date, etc.), as well as with semantic information extracted from the text during the ingestion routine. Using information extraction tools, KEF identifies and annotates entities of interest (people, locations, events, etc.) and user-identified key terms (e.g., climate terms in the case of a climate modeling scenario) in naturally occurring text. These annotations provide a means for content search, navigation and categorization. Importantly, users can correct existing annotations or create their own to match their individual needs and add notes. Finally, each document has a "talk page" where users can asynchronously discuss issues related to the document. A synchronous "chat" component is also available for online discussions. By storing content along with related discussions and annotation/vetting processes, KEF creates a living memory that enables users to move back in time and understand why certain choices were made.
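The ingestion-time tagging just described can be sketched as follows. The regex-based matcher and the key-term list are toy assumptions standing in for KEF's actual information-extraction tools.

```python
import re

# Sketch of KEF-style ingestion tagging: each document receives basic
# metadata plus simple key-term annotations with character spans. The
# matcher and term list are illustrative, not KEF's actual pipeline.

KEY_TERMS = ["monsoon", "drought", "crop yield"]   # user-identified terms

def ingest(text, source, author, date):
    annotations = [{"term": t, "span": m.span()}
                   for t in KEY_TERMS
                   for m in re.finditer(re.escape(t), text, re.IGNORECASE)]
    return {"metadata": {"source": source, "author": author, "date": date},
            "annotations": annotations,
            "text": text}

doc = ingest("Monsoon failure reduced crop yield across the region.",
             source="news", author="unknown", date="2009-03-01")
print([a["term"] for a in doc["annotations"]])  # ['monsoon', 'crop yield']
```

Because annotations are stored alongside the text rather than baked into it, users can later correct or extend them, as the framework allows.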
Figure 2 illustrates the KEF process from the user's perspective. Knowledge elicitation experts meet with modelers and subject-matter experts to reach an understanding of their problem. For example, in the case of a modeling group trying to understand the effects of climate change on the Indian subcontinent, this may lead to the creation of a context map showing all the elements of climate change that may apply (e.g., access to education, clean water, etc.) and a selection of documents currently used to create and parameterize their models. Documents collected in this first phase are used as part of the discovery phase. The documents are "virtually" dissected by a number of KEF text analysis components in order to understand their structure, content and relevance. Based on these elements, new material (e.g., documents, websites, blogs, forums, news articles, etc.) is discovered and pushed through an extraction pipeline prior to being ingested into the knowledge base. This process is cyclic, refined by the feedback provided by the user during the vetting/review phase.
As material is introduced to the knowledge base, it can be reviewed by the user through the KEF wiki using a four-stage process: review, relevance, evaluation and task alignment. The review process leverages tools such as automated summarization to ease cognitive load on the user, e.g., by reducing the amount of text a user needs to read to decide on document relevance. For each piece of evidence, the user is asked to make a judgment regarding its relevance to their current domain (i.e., relevant, irrelevant, not sure). Irrelevant material is moved to an archival namespace and is no longer included in system statistics. Relevant material is further rated by the user for strength and credibility. The final stage allows the user to align the information that has been rated directly to a specific model.
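The four-stage vetting workflow above can be sketched as a simple routing function. Field names and the rating scale are illustrative assumptions, not the actual KEF schema.

```python
# Sketch of the four-stage KEF vetting workflow (review, relevance,
# evaluation, task alignment). Field names and ratings are hypothetical.

def vet_evidence(item, relevance, strength=None, credibility=None,
                 model=None):
    """Route one piece of evidence through the vetting stages."""
    if relevance == "irrelevant":
        item["namespace"] = "archive"   # excluded from system statistics
        return item
    item["namespace"] = "active"
    if relevance == "relevant":
        item["strength"] = strength     # user rating, e.g. 1-5
        item["credibility"] = credibility
        item["aligned_model"] = model   # final task-alignment stage
    return item

doc = {"title": "Monsoon variability report"}
vetted = vet_evidence(doc, "relevant", strength=4, credibility=5,
                      model="climate-food-energy")
print(vetted["namespace"], vetted["aligned_model"])
```

Material judged "not sure" stays active but unrated, so it remains visible for later review without feeding the model-alignment stage.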

Technosocial Modeling
The technosocial modeling component currently consists of three independent models. The goal of the first model is to create viable future scenarios that address both the technical and social factors involved in assessing the impact of climate change on U.S. power grids and the wider implications for national security. The second model focuses on integrating social and technical factors for the characterization of dynamic scenarios for organizations in infrastructures within a Bayesian network environment, with specific reference to IED (improvised explosive device) threat scenarios. The third model addresses the vulnerability of food security and energy infrastructure to climate change and terrorism through the examination of tradeoffs in food and fuel security with reference to biofuel production.

Predicting the Impact of Climate Change on U.S. Power Grids and Its Wider Implications on National Security
One effect of climate change is increased atmospheric temperature, which in turn causes a surge in electricity consumption. The increased temperature also affects precipitation, which changes the natural hydrological process and thus hydroelectric generation; it also influences wind electricity generation. Together, these demands could adversely affect the power grids and cause a widespread outage. If such an outage persisted, it would impair the ability of our entire critical infrastructure to perform and could potentially cripple our society. Our model addresses the potential impact of these changes on society over the next 50 years from both a technical and a social perspective.
This interdisciplinary R&D effort extends the latest modeling theories and practices derived from atmospheric physics, electrical engineering, building engineering, the social sciences, economics, and public policy to form a tightly coupled technosocial predictive analytics system. A recurring challenge in our work is the granularity differences, in terms of both data and methodology, among the domain models. One solution is to provide a highly interactive visual analytics layer on top of the domain components to facilitate the integration of evidence and arguments required by and generated from the different models. The integrated system creates viable future scenarios that address both the technical and social factors involved in all model domains. These scenarios enable policymakers and stakeholders to formulate a coherent, unified strategy towards building a safe and secure society.
Our model involves four major components that address problems arising in the 1) climate, 2) social, 3) buildings energy use and power grids, and 4) security and infrastructure analytics domains. Figure 3 provides an overview of the overall system and the relations across the main model components. While the climate component accepts input mainly from external sources, the other three accept input from each other as well as from external sources. On top of these components is a thin visual analytics layer that facilitates the integration of evidence and arguments required by and generated from the model components.

Dynamic Scenarios for Organizations in Infrastructures
The Dynamic Scenarios for Organizations in Infrastructures modeling approach (Whitney, Brothers, Coles, Young, Wolf, Thompson, Niesen, Madsen & Henderson, 2009) focuses on integrating social and technical factors within Bayesian network models, along with the associated information (threat-related data and evidence) needed to drive the model. Bayesian networks (BNs) are graphical representations of causal relationships among variables; they provide a generalized, quantitative modeling capability with established methods for integrating data, and they compactly represent causal interactions in a complex environment where uncertainty predominates (Jensen & Nielsen, 2007).
BNs can incorporate social factors that influence threat, such as capability, opportunity, and motivation. Integrating social/behavioral modeling within technical process models should improve threat likelihood modeling and traditional decision-making processes by enabling decision makers to account for the variety of uncertainty associated with the various model components. We are currently focusing on modeling the IED threat, detailing the physical steps leading up to an IED event, as a way of testing the integration of technical and social components and processes, as well as the motivational aspects that drive human activities, in support of likelihood assessments.
The IED process model shown in Figure 4 represents an IED attack as a number of explicit steps. These steps include obtaining funding and bomb materials, recruiting people, constructing the device, selecting the target, delivering the device to its target, carrying out the attack, and escaping. These steps can be probabilistically linked, so that the graph in Figure 4 also represents a BN of the process. The BN formalism allows the probabilistic representation of interventions and error in the overall outcome. All of the steps of the process have logical connections to other steps, and the ultimate degree and likelihood of success is based on the outcome of each step. Distinct groups with distinct motivations can potentially play across the process shown in Figure 4. A group with a political agenda might be driving the planning and/or finance (steps 1 and 2). A group (or individuals) with financial motivations might be engaged in placing the IEDs (step 12). At each step, there must be willing and capable individuals to carry the process forward.
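In much simplified form, the step-wise, probabilistically linked process described above can be sketched as a linear chain in which overall likelihood is the product of per-step success probabilities. The real model in Figure 4 is a richer Bayesian network; the step names and probabilities below are purely illustrative.

```python
# Simplified sketch of a step-wise threat process as a probability
# chain: each step succeeds with some probability given the previous
# step succeeded. Steps and probabilities are hypothetical.

steps = [("obtain funding", 0.9),
         ("acquire materials", 0.8),
         ("recruit people", 0.7),
         ("construct device", 0.85),
         ("deliver to target", 0.6),
         ("carry out attack", 0.9)]

def chain_success(steps):
    """Overall success likelihood for a linear chain of steps."""
    p = 1.0
    for _name, p_step in steps:
        p *= p_step
    return p

# An intervention (e.g., disrupting finance) lowers one step's
# probability and thus the likelihood of the whole process.
baseline = chain_success(steps)
disrupted = chain_success([("obtain funding", 0.3)] + steps[1:])
print(round(baseline, 3), round(disrupted, 3))
```

This is what makes the BN formalism useful for representing interventions: changing one node's probability propagates to the overall outcome.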
VRIM (Brenkert & Malone, 2005) is an indicators model that produces comparative analyses of social and environmental resilience to climate change. Governance is being modeled as rules of behavior. The integrated model relies on the identification of historical data and expert knowledge harvested, analyzed and vetted in the KEF component to address interrelated questions about climate change, food/energy vulnerability and national security. The STELLA® application (http://www.iseesystems.com) provides the computational environment for model integration. Details of the model integration approach developed are provided in Malone, Izaurralde, Thomson & Morgan (2009) with reference to land management questions about the tradeoffs in biofuel production for the Indian subcontinent. Figure 5 shows the interface to such a model. The interface allows the user to change parameters controlling the production of food and biofuel crops. As food production goes down, the ability to reach the critical level of daily protein needed to support the population in the region may be compromised; this is signaled by the "food panic indicator".
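The logic of the "food panic indicator" can be sketched as a threshold on per-capita protein supply under a given land-use split. The critical level, the linear yield relation, and all parameter values below are illustrative assumptions, not figures from the integrated model.

```python
# Sketch of a "food panic indicator": as land shifts from food to
# biofuel crops, per-capita protein supply falls and the indicator
# trips below a critical threshold. All values are hypothetical.

CRITICAL_PROTEIN_G_PER_DAY = 50.0   # assumed critical daily protein level

def food_panic(food_share, total_land_ha, protein_yield_g_per_ha_day,
               population):
    """Return (protein per capita per day, panic flag) for a land split."""
    protein = food_share * total_land_ha * protein_yield_g_per_ha_day
    per_capita = protein / population
    return per_capita, per_capita < CRITICAL_PROTEIN_G_PER_DAY

# Shifting land from food (80%) to biofuel (60%) trips the indicator.
ok_level, panic_ok = food_panic(0.8, 1e8, 1.0, 1.2e6)
low_level, panic_low = food_panic(0.4, 1e8, 1.0, 1.2e6)
print(panic_ok, panic_low)  # False True
```

In the actual STELLA®-integrated model this tradeoff is of course dynamic and multi-factor; the sketch only captures the signaling behavior a player sees in the interface.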

Analytic Gaming
The Analytic Gaming (AG) component provides an abstract way of describing the interaction across a team of players, whose decision-making behavior is informed by alternative scenarios generated by computational models (e.g., those described in the previous section) within a scripted environment. Rather than prescribing a specific gaming paradigm, the AG architecture offers tools and processes to support, through game configurations, the definition of player roles and of interactions for such roles that are appropriate to the selected model outputs. A game configuration specifies a set of game parameters, a set of domain models, a set of roles, a set of game elements and a set of handles. The game parameters specify the game space, and their values determine the current state of the game. Each parameter is associated with a description of its meaning and a data type. The domain models are simulations that are external to the game; they are specified by describing their input and output parameters, each of which has a description and a data type. The game elements interact with the gaming environment, the controls and displays used by the players, and the game parameters. A game element may display the value of one or more game parameters within the gaming environment; game elements also provide a way for the values of game parameters to be manipulated by players. Finally, the handles are the logical abstraction of the things that the players are able to influence. Handles drive changes to the game elements when manipulated by the players. Figure 6 provides an example of the Analytic Gaming launch screen for the land management model described in the previous section. Using the launch screen, a user can select the role s/he wishes to play (e.g., farmer, minister of agriculture, minister of energy). Through a dialogue window such as the one shown in Figure 7, each player can set boundary conditions for the handles which define the functionality of the player's role and propose specific values for such handles to achieve her/his objectives. The game may be played for a predetermined number of turns (e.g., each turn represents one year), or played indefinitely until a desired state of consensus is reached. In either case, we note the importance of capturing each action taken during game-play to build a decision-making record that can be analyzed after the game is done. Built up over time and across many game-play instances, this record will provide a rich data source for analytic use in answering questions regarding player and model behavior under varying circumstances.
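The configuration abstractions described above (parameters, roles, handles) and the per-turn decision record can be sketched with simple data structures. Class and field names here are illustrative assumptions, not the AG component's actual API.

```python
from dataclasses import dataclass, field

# Sketch of game-configuration abstractions: parameters with a
# description and data type, handles with boundary conditions, and a
# log capturing every proposal as a decision-making record. All names
# are hypothetical.

@dataclass
class GameParameter:
    name: str
    description: str
    dtype: type
    value: object = None

@dataclass
class Handle:
    """A thing a player can influence; manipulating it drives elements."""
    name: str
    lower: float
    upper: float

@dataclass
class GameConfig:
    parameters: dict = field(default_factory=dict)
    roles: dict = field(default_factory=dict)   # role -> list of Handles
    log: list = field(default_factory=list)     # decision-making record

    def propose(self, role, handle_name, value, turn):
        handle = next(h for h in self.roles[role] if h.name == handle_name)
        if not handle.lower <= value <= handle.upper:
            raise ValueError("proposal outside boundary conditions")
        self.log.append((turn, role, handle_name, value))

cfg = GameConfig(
    parameters={"biofuel_share": GameParameter(
        "biofuel_share", "fraction of cropland used for biofuel", float)},
    roles={"farmer": [Handle("biofuel_share", 0.0, 0.5)]})
cfg.propose("farmer", "biofuel_share", 0.3, turn=1)
print(cfg.log)  # [(1, 'farmer', 'biofuel_share', 0.3)]
```

Keeping the log as an append-only record is one way to support the post-game analysis of player and model behavior noted above.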

CONCLUSIONS
The achievement of sustainable growth requires a decision-making process capable of capturing dependencies across domains as diverse as energy, security, the environment, health and finance. NDM provides an ideal paradigm in which to articulate such a decision-making process, because of the human ability to discern complex patterns through perception and creativity. However, successful adoption of the NDM paradigm in this context is undermined by the occurrence of biased and uninformed judgment. In this paper, we have argued that such adversities can be countered by supporting NDM with computational tools and processes capable of supplementing the knowledge and expertise of analysts and policymakers and of stimulating creative critical reasoning. The resulting environment emerges as a "technosocial" predictive analytics software platform which creates decision advantage through the integration of human and physical models, leveraging knowledge management to support the achievement of interoperable knowledge inputs and analytic gaming to foster social intelligence through collaborative/competitive work.
• Nuclear Risk Prevention: Most nuclear accidents can be anticipated through a predictable interaction of technology and human performance failures (NRC, 2005)
• Fuel Efficiency Standards: EPA fuel-efficiency tests have overstated performance because human factors such as faster speeds and acceleration and air-conditioner use have been neglected in use-case forecasting (EPA, 2006)
• Behavioral Economics: Insights on human cognitive and emotional biases improve our understanding of economic decisions and of the effect of economic decisions on market prices, returns, and the allocation of resources (Mullainathan & Thaler, 2001)
• Human Health: Multilevel studies that consider a broad range of biological, family, community, socio-cultural, environmental, policy, and macro-level economic factors are necessary to prevent and mitigate the emergence of health threats such as childhood obesity (NIH RFA-HD-08-023, 2008) and drug addiction (NIDA, 2007).

Figure 3: An overview of the model system.
Vulnerability of Food Security and Energy Infrastructure to Climate Change and Terrorism
This model focuses on the Indian subcontinent (India, Pakistan, and Bangladesh) as a developing region with essential roles in the global issues of climate change, terrorism/national security, and overall economic development and well-being. It draws on three existing models developed at the Pacific Northwest National Laboratory (MiniCAM, EPIC, VRIM) and is adding governance, civil society and cultural models. MiniCAM (http://www.globalchange.umd.edu/models/minicam) is an integrated assessment model designed to examine greenhouse gas emissions, climate change, and mitigation scenarios. EPIC (http://epicapex.brc.tamus.edu) is a watershed-scale biophysical model that is used to examine agriculture (management as well as productivity) and ecosystems.

Figure 5: Interface to biofuel production model.

Figure 6: The Analytical Gaming launch screen

Figure 7: A player's dialogue box in the Analytic Game component.