Introduction
Like art and religion, technology is an integrated part of a culture. (Rivers, 2005, p.567)
Disputes that may appear to be a matter of semantics are, at a deeper level, disputes about the meaning of technology in our lives. (Thompson, 1991, p.37)
In order to develop concrete courses of action, measures of effectiveness and sound government policies for research and development priorities, it is necessary to approach technology as a factor that transforms work activities and introduces social, cultural and organisational changes. This paper aims to develop a conceptual framework for understanding the role of technology in intelligence. The focus is on technological capabilities that support an analysis of sociocultural processes related to so‐called ‘new threats’.
It seems to be widely acknowledged that information and communications technologies (ICTs) play a key role in shaping the character of contemporary society (Bijker and Law, 1992; Castells, 1996, 2001; Poster, 2001). This has been reflected in the names given to the techno‐economic system that has emerged in the second half of the twentieth century – the Information Age, the Network Society, and so on. Nevertheless, social scholars of technology argue that the impact of science and technology on different aspects of society needs to be made more visible and is yet to be properly explored. Technology is still outside mainstream research on sociocultural changes in many social disciplines. For example, Charles Weiss (2005) argues that science and technology have had a fundamental and pervasive influence on various aspects of international affairs, changing the structure of the international system, relations among its actors, diplomacy, war, administration, trade and the gathering of intelligence. Nevertheless, it is difficult for international relations analysts and practitioners to consider science and technology as a factor affecting international affairs. This is because the science and technology topic is largely beyond consideration within disciplines that traditionally have informed the international relations area, such as political economy, history, political science and sociology. Weiss suggests that, in order to be taken into account, science and technology need to be conceptualised from the international relations perspective.
Similarly, technology has not been conceptualised from the intelligence perspective. Rather, there is a powerful trend within contemporary research on intelligence and technology to represent technology as having universal applicability and meaning, regardless of the context in which it is used:
The shape of tomorrow’s intelligence architecture is already discernible. Secure intranets, adapted to commercially available software and Web‐systems, which find, organise, filter and analyse information are now in common use. Classified and unclassified material, including imagery, Sigint, Elint and Masint, are available literally at the touch of a key. Small, hand‐held personal digital assistants that give soldiers in the field access to huge databases using web‐enabled, wireless communications, are transforming the use of intelligence at the operational level of war. Advanced search engines and text analysis tools like Pathfinder are having a similar effect in the strategic domain, allowing analysts to swiftly extract useable intelligence from large amounts of data. (Dupont, 2003, p.28)
This paper argues that, in order to consider technology as a factor of change within intelligence practice, technology needs to be conceptualised from the perspective of that practice. Such a concept should reflect the nature of intelligence practice and be shaped by practitioners’ needs, while being addressed to technology developers.
Theoretical background
This paper draws upon three interrelated approaches. First, it draws upon the concept of a tool as a mediator developed within activity theory (Leontiev, 1978; Vygotsky, 1986; Engeström, 1987; Wertsch, 1991). This concept highlights the sociocultural nature of tools and the active role of instrumentalities in shaping the subject’s ways of acting upon objects and interacting with other social actors (Bødker, 1990; Nardi, 1997; Shchedrovitsky, 2005–2007; Resnyansky, 2008, 2010a).
Second, this study draws upon the concept of a sociotechnical system developed within the areas of organisational development and workplace studies (Luff et al., 2000; Garcia et al., 2006). According to this concept, it is essential that the design and evaluation of systems (tools) be informed by an analysis of the particular context in which a system will be deployed (Jirotka and Wallen, 2000; Woodward et al., 2001; Bennett and Resnyansky, 2006; Bedny and Karwowski, 2007). The heuristic significance of the concept of a sociotechnical system is that it enables a piece of technology to be approached as an active participant in a system of social activity (Resnyansky, 2010a).
Finally, this paper draws upon a concept of technology as a sociocultural construct that embodies particular groups’ values and interests. According to the social constructivist approach, the properties and effects of technologies can be attributed to the social biases and politics built into them (MacKenzie and Wajcman, 1985; Bijker et al., 1987). Technology allows for different interpretations of its functional, sociocultural and even technical properties, and what technology is depends upon its interpretation as constructed by relevant social groups (Winner, 1991; Woolgar, 1991; Cawson et al., 1995; Kling, 1996).
The social constructivist paradigm provides a good theoretical and methodological foundation for an exploration of technology as an integral part of specific practices. It implies that the main research task lies in understanding how technological tools can be interpreted by different groups within specific contexts, and in using this understanding to inform the development of technological capabilities. It is not, however, easy to implement this research programme. As Introna (2009, p.28) maintains, ‘so many potentially important scripts are increasingly difficult to understand, even for the experts’. It may not always be clear what the relevant groups are. The naturalisation of the technologist and marketing discourses on ICTs within different areas of practice makes it difficult for particular groups of practitioners to develop their own vision of technology. Rather, together with the majority of society, they are likely to become subject to technology’s coercive power, and appropriate the technological paradigm embodied in the tools (Roszak, 1986; Rushkoff, 1999; Burbules and Callister, 2000; Introna and Nissenbaum, 2000). This, in turn, makes it difficult for technology developers to understand practitioner needs and to assess the transformative potential of technology.
Linking technology and intelligence
The nature of the problems emerging in relation to a changing technology‐security landscape stimulates interdisciplinary research and dialogue. Forums are organised and papers are written highlighting the importance of bringing together the efforts of engineers, computational scientists, social scientists and practitioners, such as intelligence analysts, defence managers, politicians and government officials (Patil et al., 2005; Threat Anticipation: Social Science Methods and Model, 2005; Nau and Wilkenfeld, 2007; Michael and Michael, 2007, 2008; Berkowitz, 2008; Subrahmanian and Kruglanski, 2008; Turnley and Perls, 2008). The forums focus on new technological applications and computational methods, practitioners’ needs and concerns, and new possibilities and challenges that technology may bring to practice. There is, however, a significant difference between the interacting areas (research, development and practice) in the foci of consideration and the issues they are concerned with.
Within the research and development area, thinking about technology is grounded within the epistemological and methodological principles of natural and computational sciences. Discussion of capabilities enabling intelligence and national security analysis is conducted in terms of general epistemological principles, classes of problems, and specific mathematical and computational methods and techniques (Chen and Xu, 2006; Cioffi‐Revilla and O’Brien, 2007; Turnley and Perls, 2008). For example, Chen and Xu (2006) focus on knowledge discovery in databases (KDD) techniques, arguing that these techniques ‘can play a central role in improving the counter‐terrorism and crime‐fighting capabilities of intelligence and security agencies by reducing cognitive and information overload’ (p.235). This argument is grounded in the claim that these techniques have been successfully applied in areas such as marketing, finance, manufacturing and biology, enabling the extraction of useful knowledge from large collections of raw data: ‘Knowledge discovery usually consists of multiple stages, including data selection, data preprocessing, data transformation, data mining, and the interpretation and evaluation of patterns’ (p.235). Having suggested that intelligence and security informatics (ISI) can be based on KDD technologies, Chen and Xu distinguish between the following classes of ISI technologies: information sharing and collaboration, crime association mining, spatial and temporal crime pattern mining, and criminal network mining.
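The multi‐stage KDD pipeline that Chen and Xu describe can be made concrete with a minimal sketch. The example below is purely illustrative and is not drawn from the cited work: the sample documents, the choice of TF‐IDF features and k‐means clustering, and the cluster count are assumptions introduced here only to show how selection, preprocessing, transformation, mining and interpretation follow one another, and how the final interpretive step remains with the analyst.

```python
# A minimal, illustrative sketch of the KDD stages named above
# (selection, preprocessing, transformation, mining, interpretation).
# The documents and parameters are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Data selection: choose the raw documents to analyse.
documents = [
    "report on financial transactions between two organisations",
    "open-source news item describing a protest event",
    "forum post discussing logistics and travel routes",
    "news item on a public demonstration and arrests",
]

# Preprocessing and transformation: normalise the text and map it to numeric features.
vectoriser = TfidfVectorizer(lowercase=True, stop_words="english")
features = vectoriser.fit_transform(documents)

# Data mining: group the documents into candidate patterns (here, simple clustering).
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(features)

# Interpretation and evaluation: the analyst, not the algorithm, decides
# whether the discovered groupings are meaningful.
for label, doc in zip(labels, documents):
    print(label, doc)
```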
Within the area of intelligence research, the conceptualisation of practice is shaped by an insider perspective and is grounded within the model of the intelligence cycle:
Any theory of strategic intelligence must take into account the so‐called intelligence cycle, a model that describes the sequence of activities that carries intelligence from the initial planning stages all the way to a finished product ready for the consideration of decision‐makers at the highest councils of government. The cycle consists of five phases: planning and direction, collection, processing, production and analysis, and dissemination. Each phase involves behaviour that must be taken into account by intelligence theorists. In reality, the intelligence ‘cycle’ is less a series of smoothly integrated phases, one leading to another, than a complex matrix of back‐and‐forth interactions among intelligence officers (the ‘producers’ of intelligence) and the policy officials they serve (the ‘consumers’). This matrix – a composite of intricate human and bureaucratic relationships – is characterized by interruptions, midcourse corrections, and multiple feedback loops. Even though reductionist, the concept of a cycle remains analytically useful, drawing attention to the process of intelligence. Conceptually, the cycle provides at least a rough approximation of how intelligence professionals think in their work. (Johnson, 2009, p.34)
The role of technology is discussed mainly in relation to data collection and processing. This discussion is also shaped by ‘what specialists call INTs, or intelligence disciplines’ (Berkowitz, 2008, p.38), such as SIGINT (signals intelligence), IMINT (imagery intelligence), GEOINT (geospatial intelligence), MASINT (measurement and signatures intelligence), HUMINT (intelligence collected by human beings), and OSINT (open source intelligence).
The intelligence‐specific conceptualisations of practice are brought together with the categorisations of technology borrowed from such fields as electronic engineering, telecommunications, and computational science. For example, Bruce Berkowitz (2008) maintains that intelligence technologies fit into four basic categories: sensors that collect data (optical, electronic, etc.); platforms that carry sensors (ships, satellites, etc.); information and communication technologies that process and transfer both data and finished intelligence; and enabling devices (cameras, covert communications, etc.).
In general, the technologies that the intelligence community uses are not that much different from what is understood in the outside world, and the intelligence community depends more than ever on the R&D base that everyone else draws from. The differences lie in their specialized features and how quickly they are delivered into operation relative to the usual pace of technology development. (Berkowitz, 2008, p.38)
There is a good understanding within intelligence research that ‘[O]rganizational structure is not neutral. Without question structure favors certain interests and facilitates specific kinds of communication and control’ (Hastedt and Skelley, 2009, p.115). Technology, however, is perceived as a neutral instrument rather than as a factor that may also contribute to the balance of interests and power, and shape the ways in which practitioners interact and communicate: ‘Modern writing on the information revolution suggests that the real problems in exploiting it to the full are managerial and human rather than technical’ (Herman, 2003, p.57).
Intelligence analysis as an activity of knowledge production
Researchers and practitioners argue that intelligence is undergoing a paradigmatic change (Gill and Phythian, 2006; Treverton and Gabbard, 2008). This is partly attributable to the changing nature of threats and threatening actors, and partly to the changing information landscape. ICTs have launched an era of transparency, creating new opportunities and challenges for security and intelligence: ‘While some might believe that intelligence is immune to such developments, it is actually in many ways driven by transparency’ (O’Connell, 2005, p.143). In today’s world, argues Treverton (2003), the intelligence business should be less about collection and secrets and more about information ‘defined as a high‐quality understanding of the world using all sources, where secrets matter much less and where selection is the critical challenge’ (p.98). Because of the changing nature of threats, intelligence data gathering and analysis need to draw upon diverse sources of data: research literature, media, computerised databases, websites, and so on (Pillar, 2004). The previously sharp distinction between collection and analysis is blurring, particularly when the Internet is used as a source of information. Schmitt (2005) argues that it is necessary to change the traditional intelligence mindset based on a positivist understanding of data as objective facts existing independent of the observer. Initially, he says, the intelligence community acquired this mindset in order to break away from the intelligence–policy maker nexus. The positivist mindset, however, may significantly restrict analysts’ understanding of what is useful or relevant information. For example, the Internet needs to be explored as a social practice of identity construction and community formation (Whine, 1999a,b; Bailey and Grimaila, 2006; Weimann, 2006). In this kind of analysis, technological aspects of the Internet, such as hyperlinks and multimodality, need to be interpreted as manifestations of social practices, such as the naturalisation of selected ideas and the legitimisation of certain sources of authority and truth. It is important for intelligence practitioners to adopt the view of the Internet as a locus of the reproduction of social subjects and structures (Atran, 2006; Resnyansky, 2009a). Can technological tools enabling information extraction and analysis allow for an acquisition of this view? Or do they hinder the proliferation of this view by encouraging the intelligence community to approach the Internet merely as a source of information?
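By way of illustration only, the sketch below gives a deliberately simplified technical form to the interpretive stance described above: reading hyperlink structure not merely as navigable data but as a trace of social practice, in which the pages a community repeatedly links to are being positioned as sources of authority and truth. The pages, links and the in‐link heuristic are assumptions introduced here for illustration; they are not drawn from the cited studies.

```python
# A minimal, hypothetical sketch: treating hyperlinks as traces of a social
# practice of legitimisation. Pages that attract many in-links from a
# community are being positioned by that community as authoritative.
# The pages and links below are invented placeholders, not real sites.
from collections import Counter

# Hypothetical crawl output: (linking_page, linked_page) pairs.
links = [
    ("forum/thread-1", "doctrine-site/manifesto"),
    ("forum/thread-2", "doctrine-site/manifesto"),
    ("blog/post-a", "doctrine-site/manifesto"),
    ("forum/thread-1", "news-site/article"),
    ("blog/post-b", "forum/thread-1"),
]

# In-link counts as a crude indicator of which sources a community legitimises;
# interpreting why they are legitimised remains an analytical, not a
# computational, task.
in_links = Counter(target for _, target in links)
for page, count in in_links.most_common():
    print(f"{page}: linked to by {count} pages")
```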
In order to deal effectively with new threats, it is not enough to analyse indications and warnings related to the activity of concrete actors (individuals and organisations). It is also necessary to understand the social, economic, political, cultural and ideological causes and factors that can contribute to the emergence of threatening actors and, most importantly, what can prevent these causes and factors from emerging (Crelinsten, 2009; Resnyansky, 2009b). Without understanding these causes and factors, democratic societies may not be able to solve the problem of political violence and social instability. Hence, they may always have to deal with the consequences in the form of violent events and disintegrated social actors. The activity of intelligence analysis needs, therefore, to be enhanced by social science methodology, i.e. a methodology grounded within the principle of understanding.
The literature on intelligence research is crowded with case studies of intelligence failures where failure is defined as the inability to predict certain events. Such studies provide retrospective analyses of specific cases with the purpose of learning lessons. This stream of writing on intelligence can also provide a useful insight into the nature of intelligence analysis as a kind of activity unable to ‘predict’ events in the same way as natural sciences can predict the behaviour of physical systems. According to theoreticians of intelligence, intelligence failures are inevitable (Betts, 2009). Understanding and exploratory analysis are becoming particularly important because of the changing nature of threats. This traditional epistemological culture of intelligence may be affected by tools that introduce the natural science methodological paradigm, with such requirements as validation, verification and prediction. It is beyond the scope of this paper to discuss whether this change would be for better or worse – its purpose is to argue that the role of technology can be crucial in this process.
Intelligence practice as a communicative interaction
Organisational dynamics and politics play an important role in the intelligence business. There are different opinions regarding the vertical (centralised) and the horizontal (network) models of intelligence (Berkowitz and Goodman, 2000; Herman, 2003; Hastedt and Skelley, 2009). The proponents of a centralised model of intelligence argue that in modern conditions, an understanding of the implications of ICTs cannot be informed by a concept of intelligence which has more to do ‘with the short‐term, rapid turnaround varieties of tactical and operational intelligence in the battlefield than the work and product of national intelligence communities’ (Davies, 2002, p.315). From the tactical commander perspective, however, the centralised intelligence architecture may look inefficient, mainly because it does not enable a productive dialogue between analysts and consumers:
The distant analyst often has little visibility or understanding of exactly why the tactical consumer is asking for the information, the impact of the data, or how to package information so it is actionable for the ground commander. For example, if the tactical consumer in his formalized collections request asks for information regarding the presence of armoured vehicles at a given set of coordinates, the analyst looks for and reports on that particular informational request at the specific place – not on the implied request for trafficability, presence of an artillery battery 10 kilometers away, or the presence or absence of a bridge or tactical fortifications. The communications connectivity and permissions rarely exist for a direct and timely dialogue between the tactical commander and the distant analyst to define and refine the evolving needs of the consumer. (Howcroft, 2007, p.21)
The model of vertically integrated intelligence collection and analysis has also been criticised in the context of new threats to national security. New models are being proposed, such as distributed intelligence networks supporting the exchange of information between decentralised groups of intelligence practitioners and subject matter experts (Atran, 2006).
In order to understand how different kinds of technological capabilities may affect the proposed profound change of intelligence practice, this change may also be interpreted as a choice of a model of social (communicative) interaction (Resnyansky, 2010b). Communicative interaction can be described according to an information‐cybernetic (parcel‐post) model. Having been initially suggested as a model of signal transmission in telecommunication systems, the parcel‐post model has also been used as a basic model for understanding human communication. However, in order to reflect the specificity of human communication as a contextualised interaction of social subjects mediated by socioculturally‐specific representation systems, alternative – interactionist – models of human communication have been proposed (Riva and Galimberti, 2001). The interactionist models aim to emphasise that both the sender and the receiver can actively participate in the construction of meaning, and that the receiver’s role is even more important, since the final result of communication depends on the receiver’s interpretation of the message. It has been argued, however, that the information‐cybernetic model can serve as a metaphor reflecting patterns of routine communication within stable hierarchical structures, military institutions, classrooms, and so on. Within this kind of structure, one of the participants is positioned as a source of information, instructions and control, and the other as a passive receiver. In other words, this model affirms the supremacy of the information sender. The naturalisation of the information‐cybernetic metaphor of communication can, therefore, contribute to the reproduction of the relationships of inequity, power hierarchy, and control. This model does not encourage innovative dialogue and creative perception of information; on the contrary, it creates a greater chance of misunderstanding and a considerable loss of information. In the context of this paper, this model’s ability to serve as a matrix enabling certain kinds of social relationships and information behaviour is particularly relevant. It is important to assess technological capabilities as tools that may contribute to the proliferation of the information‐cybernetic rather than a dialogical metaphor of social and communicative interaction (Resnyansky, 2002, 2010b).
Representation and consumption of knowledge
Knowledge representation and exchange are always mediated by technologies, the most basic being human language and the word processor (Lakoff and Johnson, 1980; Halliday and Martin, 1993; Kress, 2003). The results of intelligence analysis are shaped, among other things, by patterns of dealing with knowledge that are offered together with the technologies. Intelligence practitioners require knowledge that is compact and easy to use, but which retains its scientific rigor and epistemological significance. They need knowledge that enables the development of better situation awareness and an understanding of the effects of their own and others’ actions. They need tools that enable them to process information in an effective and purposeful way, without more and more time being spent on information gathering. Intelligence practitioners need methodologies that help restore the whole picture from fragmented data. They need technological capabilities which enable a systematic and comprehensive application of rigorous scientific knowledge within the intelligence area. They need tools which help transform raw information into a high‐quality intelligence product.
Information extraction tools assist the user in finding huge amounts of data. In the process of consumption by analysts and decision makers, the data have to be transformed, increasing the chances of their being distorted, lost, over‐generalised, pushed beyond their limits, and applied uncritically and incorrectly. Scientifically rigorous knowledge, once transferred to the area of practice, may lose its rigor and meaning. Knowledge produced within different disciplinary areas needs to be linked and integrated. It often needs to be compressed, in order to be displayed in dot points on one screen of a PowerPoint presentation – a format that many of us are most willing, or able, to perceive. Deficiencies may occur during the course of processing knowledge into a form suitable for those who have no time or background for deeper exploration of issues (Tufte, 2003). There is, of course, a human and political dimension to this process, as the selection of versions of reality and the interpretation of existing information are strongly affected by narrow political interests, personal ambitions and power games within organisations. However, these issues are beyond the scope of this paper. The purpose of this paper is to highlight the role of technology as one more player that can contribute to the interplay of organisational, psychological and cultural factors.
The mainstream explanation of why the intelligence business requires technological support refers to the large amount, diversity and complexity of data. It is argued that practitioners require tools that use mathematical techniques to find patterns of behaviour in large datasets. They also need tools that enable them to analyse that behaviour. In the literature, one can find comprehensive outlines of two kinds of technological tools that can facilitate these kinds of activity: information extraction and modelling tools. Information extraction tools can be used to find data presented in open sources, such as news websites, blogs, newsgroups, social network sites, virtual worlds, online games and videogames. These tools are suggested as aids for analysts to obtain data on media and public opinion, obtain information about specific groups in different parts of the world, or research violent events (Albanese and Subrahmanian, 2007; Fayzullin et al., 2007). Modelling is the application of computational methods in order to facilitate an analysis of multiple and diverse data, and to understand what kinds of data may be missing (Sliva et al., 2007; Ozik et al., 2008). Modelling tools aim to let analysts and decision makers experiment with and explore different possible scenarios (Epstein, 2006; Johnson, 2008).
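To indicate what is meant here by a modelling tool, the sketch below shows a deliberately simple, parameterised simulation that an analyst or decision maker could rerun under different assumptions in order to compare scenarios. The model itself (a toy contagion of mobilisation through random contacts), its parameters and its outputs are assumptions introduced purely for illustration; they are not taken from the cited literature.

```python
# A minimal, hypothetical sketch of scenario exploration with a modelling tool.
# The toy model spreads a 'mobilised' state through random contacts; varying a
# parameter and rerunning the model is the scenario-exploration step.
import random

def run_scenario(population=1000, contact_rate=4, adoption_prob=0.05,
                 steps=20, seed=0):
    """Return how many agents are 'mobilised' after a number of rounds."""
    rng = random.Random(seed)
    mobilised = {0}  # one initially mobilised agent
    for _ in range(steps):
        newly = set()
        for _agent in mobilised:
            # each mobilised agent contacts a few others at random
            for _ in range(contact_rate):
                other = rng.randrange(population)
                if other not in mobilised and rng.random() < adoption_prob:
                    newly.add(other)
        mobilised |= newly
    return len(mobilised)

# Scenario exploration: vary one assumption and compare the outcomes.
for prob in (0.01, 0.05, 0.10):
    print(f"adoption probability {prob}: {run_scenario(adoption_prob=prob)} mobilised")
```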
This distinction makes sense from the development perspective. However, it is not helpful from the perspective of understanding how technology can affect intelligence practice. In this respect, it is important that both kinds of tool are based on conceptual models of social phenomena. In the case of modelling tools, it is acknowledged that their key component is a conceptual (social science) model of the modelled processes (Miller et al., 2008; Resnyansky, 2008; Turnley and Perls, 2008). Similarly, the information extraction tools are based on conceptual models of both social phenomena and information sources. For example, Chen and Xu (2006) propose that the development of tools for gathering, processing and analysing data on terrorism can be grounded within a concept of terrorism as a form of organised crime. Nevertheless, the perception of information extraction tools as ‘pure’ technologies enabling access to data is quite common. This perception may contribute to the belief that the main thing about data collection is the number of information sources processed. Also, the perception of these tools as neutral tools may contribute to a proliferation of naïve notions of social phenomena.
In the knowledge society, practice needs to be turned into scientifically saturated activity. However, scientific information exists in forms that make its consumption difficult (Epstein, 2006). Technological capabilities – modelling in particular – may help create a bridge between the areas of information production (research) and information application (intelligence practice). It is necessary to use social science information to deal with the kinds of threats that have emerged in the last two decades. However, this information cannot be applied as easily when theoretical models and case studies are represented in traditional forms, such as hundred‐page volumes, specialised journal papers, and extended reports. Finding relevant information, assessing its heuristic significance, and applying it rigorously to a specific case is a complex task that often requires interdisciplinary efforts. Modelling tools may help integrate rigorous social scientific information into intelligence and political decision making.
Turning intelligence practice into a scientifically‐saturated activity implies the adoption of a critical reflection stance towards the information and data used by analysts, as well as towards their assumptions and pre‐conceptions. Intelligence analysts need to reassess critically the assumptions that shape representations of events and actors within various discursive practices – from social scientific research to media, from statistical data to politicians’ public speeches. In particular, analysts need to re‐examine the heuristic significance of fundamental concepts that have been naturalised within the mainstream sociopolitical and ideological discourse. For example, it is useful to assess critically the relevance and heuristic significance of such concepts as ethnicity, culture and civilisation that are used in order to explain processes and behaviour at different levels – societal, group and individual (Resnyansky, 2009b).
Practices are sociocultural, historically specific activities conducted within concrete institutional settings and affected by current political and ideological situations, individual biases, preferences, tacit assumptions and the availability of resources (Schatzki et al., 2001). Intelligence practice is no exception (George and Bruce, 2008; Phythian, 2008). As several case studies have demonstrated, ‘leaders, policy makers and other consumers of intelligence may choose to use, abuse or ignore it, depending upon their own predilection, prejudices, biases, or political agendas, and sometimes altering the original intent of the intelligence’ (Poteat, 2000, p.1). Intelligence practitioners need instruments that facilitate critical reflection on their assumptions and the conceptual models made available by organisational traditions, political conjuncture or ideological fashion. Collective production of knowledge is a problem when the participants in interaction belong to different organisational and epistemological cultures, such as intelligence practitioners, policy makers, social researchers and computational scientists. It is therefore necessary to develop effective ways of compressing knowledge. Modelling tools may be a good solution – if they offer theoretically sound interpretational frameworks. In this way, they can help reduce the ‘noise’ (e.g. analysts’ subjective opinions, ideological biases or organisational views) that might otherwise be introduced into the process of knowledge interpretation.
Conclusion
Technology affects intelligence analysts’ understanding of problems, formulation of questions, identification of data sources and gaps, and the communication of intelligence analysis results. Therefore, the development of technological capabilities needs to be shaped by the intelligence practitioners’ needs. However, it is not easy to understand what vision of the practitioners’ needs is more relevant, nor to incorporate this knowledge into the development, assessment and implementation processes. Within intelligence research, the discussion of information technology is often shaped by the need to advocate a particular way of doing things. The categorisations of technology proposed for intelligence analysis and decision making indicate that the intelligence researchers’ thinking is shaped mainly by the technologist discourse. The uncritical acceptance of this discourse does not allow for an adequate understanding of the practitioners’ needs. This may result in ineffective tools being offered to practitioners on the pretext that they help address new threats and challenges.
In order to understand the implications of the technologisation of intelligence practice, both the intelligence practice and the technology need to be linked by an intermediate conceptual framework. This paper suggests that the development of such a framework can be grounded within the view of intelligence practice as an activity of knowledge production, a process of social (communicative) interaction, and knowledge representation and consumption. Accordingly, technologies have to be approached as tools embodying particular epistemological cultures and imposing particular models of communication and social interaction, and as mediators between knowledge producers and knowledge consumers.
This framework can be used to enable collaborative interaction between intelligence practitioners and interdisciplinary research teams aiming to develop technological tools. The proposed framework can help formulate requirements for technology developers in terms that reflect the intelligence community’s perspective, but are not too specific. In particular, the assessment of the impact of technology may be enhanced by the adoption of the concept of technology as a mediator between the areas of knowledge production and knowledge consumption (intelligence analysis and communication of its results to decision makers).
Modelling tools are a promising means for an integration of social science knowledge into intelligence practice and political decision making. Modelling tools can represent knowledge in a compact yet theoretically and methodologically rigorous way. Because of the conceptual frameworks embodied in the models, these tools can enable practitioners to approach the chaotic world of information with more rigor, as well as to reflect on their own assumptions and problem statements. In order to develop such tools, practitioners and social scientists both need to be actively involved in the process of technological development.