
      Artificial intelligence and knowledge management: questioning the tacit dimension

Research article
Prometheus (Pluto Journals)

            Abstract

Knowledge management (KM) has matured to the point that many organisations either believe they have such practices in place or at least understand they are relevant to the knowledge work commonly undertaken in many industries. What is lacking from the literature, however, is a solid foundation for the philosophies underpinning KM and particularly for how tacit knowledge informs the KM space. Research over decades shows tacit knowledge underpins all other forms of knowledge, enabling the interpretation and judicious application of knowledge, leading (at its highest levels) to the concept of wisdom. As an academic discipline, artificial intelligence (AI) was established before KM, has been grounded in the computing discipline for many decades, and is applied broadly in many domains. This paper explores how AI can inform the KM debate. Rather than simply provide examples of AI success stories as applied to KM in practice, it explores the theoretical and practical limitations of AI and KM in unison, providing at the same time a strong epistemological understanding of both disciplines as a means of furthering the knowledge debate, with particular emphasis on the role of tacit knowledge within this domain.


            Introduction

The possibility of developing artificial intelligence (AI) to the point of matching its capabilities to the human mind is a work in progress (e.g. Morrison, 1984; Pratt, 1987; Bryant, 1988). There is open debate on the feasibility of achieving such a goal. Ridley Scott’s film, Prometheus (2012),1 illustrates the controversy clearly. Set in 2093, spaceship Prometheus carries David, a very advanced robot capable of performing physical and mental tasks faster and better than any human being. Nevertheless, David lacks the fundamental characteristics of an ordinary human being: it is incapable of having feelings. The differences between David and the human crew are immediately apparent. David’s will and intentions appear to be fuelled by external stimuli only (it does not engage in politics or power games), from which it acquires new information. Because David is incapable of synthesising this information into knowledge of any depth, it lacks the wisdom to apply information judiciously. David is unable to perform creative activities encompassing tacit, innate forms of self-awareness and deep knowledge of society.

            Concurrently, there has been significant discussion concerning the viability of codifying tacit knowledge and the role of technology in KM. On the one hand, there are strong criticisms of the artificial separation between tacit and explicit knowledge and the possibility of converting tacit into explicit knowledge (e.g. Nonaka and Takeuchi, 1995). Explicit and tacit knowledge blend so that it is not possible to use one without the other (Ray and Clegg, 2007; Cohendet, 2014). Additionally, tacit knowledge per se is not exclusively articulable or in-articulable (truly tacit), but lies along a continuum with street smarts and workplace skills being articulable, and sense-making, meaning, wisdom, emotions and feelings being in-articulable (Busch, 2008). Venters and Fernley (2009) note the disagreement in the literature over the role of technology, either to codify knowledge or to help in human collaboration (e.g. Hansen et al., 1999; Alavi and Tiwana, 2003). Yet, there is still the question of how ‘smart’ technologies might close the gap between codification and collaboration. This point is directly connected to the tacit knowledge debate; that is, the drive toward codification and commodification of knowledge implicitly embedded in the development of new smart technologies yet neglecting the knowledge-related limitations of this endeavour (Roberts, 2001).

            This paper extends the above debate. Its aim is to explore how KM can benefit from the AI experience and how KM can help to address some of the limitations of AI, considering the (ontological, epistemological, and practical) restrictions associated with tacit forms of knowledge. We argue that the strengths and limitations of AI and KM are closely related to the nature of knowledge – the idea of wisdom, the way the concept of mechanism is applied – and to power-related issues. We conceptualise AI as a branch of computing that endeavours to construct devices analogous to biological agents so as to understand their essence and capture their aptitude (Russell and Norvig, 2009; Poole and Mackworth, 2010). KM is approached as a method used by organisations to manage (i.e. gather, diffuse, exploit and create) codified and – where possible – tacit knowledge assets, to their competitive advantage (Davenport and Prusak, 1998; Hansen, 1999; Ng and Li, 2003; Rowley, 2003; Busch, 2008). AI is used as a case study in this instance as its developers and users make overreaching assumptions about its goals, roles, and processes, as well as about the nature of human beings. 

We posit that AI and KM are closely related to the nature of knowledge (see Figure 1). While recognising that knowledge is simultaneously tacit and explicit (Polanyi, 1983), the knowledge domain can be represented as a continuum comprising three slightly overlapping forms of tacit knowledge, together with explicit knowledge (EK), which can be codified (Collins, 2010). The tacit forms are: collective tacit knowledge (CTK), located in society and acquired within the social milieu; relational tacit knowledge (RTK), stemming from the contingencies of human interaction; and somatic tacit knowledge (STK), embodied within the self and manifesting through forms of physical energy coupled with tacit knowledge.

Figure 1. Knowledge types versus AI development.

Current techno-centric approaches rooted in the AI community limit the AI/KM partnership as they rely on the ambitious assumption that all tasks can eventually be performed algorithmically through codified explicit and tacit forms of knowledge. Most successful AI developments have evolved at the top-right end and middle of Figure 1, while AI technologies successfully dealing with collective forms of tacit knowledge are non-existent or remain several levels of abstraction above direct engagement. We posit that current developmental trends in AI show little promise of furthering the concept of KM as it stands, and that at present, and for the foreseeable future, we are gaining only incremental improvements in what are really sophisticated forms of information management. Very recent examples of AI research (e.g. Vo et al., 2016; Hidayati et al., 2016; Lee et al., 2016) reinforce this point in that current AI is still focused on automating what is largely verbally articulated decision-making. We know that machines possessing a limited behaviouristic sense, in that they react to stimuli, are able to deal with some forms of STK. Other machines, such as the semantic web assigning meaning to relationships among objects (e.g. the ‘knowledge vault’ (Dong et al., 2014)), are able to articulate some types of RTK, since each of these relationships potentially holds different meanings for each observer. Yet, despite our best efforts, more sophisticated computer models of biological agents are obtained by following a reductionist process that fails altogether to capture the essence of cognition and how to apply it wisely; wisdom is an inherently human characteristic that can only be socially learned and thus cannot be explained or codified. Ultimately, what cannot be understood cannot, at the very least, be modelled (Pinker, 1997, 2005). Furthermore, because of the predominantly social nature of tacit knowledge, power relations permeate its mobilisation during AI/KM developments.

            Ontological and epistemological paradigms

            Placing AI and the self in an ontological frame from which we may draw inferences on their ‘adopted’ epistemologies appears a good place to start. Without overcomplicating the issue, drawing inferences on how an entity interacts with the environment over time allows us to gradually build more epistemologically accurate models. This process should answer questions about:

• (1) the entity’s view of reality;

• (2) the relationship between the knower and the known;

• (3) what the process of knowing is; and

• (4) how such knowing is passed on (Vasilachis de Gialdino, 2009).

Let us first consider the AI entity’s view of reality (1). AI needs a pre-established, well-defined context in which to function purposefully and meaningfully. Ask and Reza (2016) examine a number of computational models of neuroscience; though some are complex, they nevertheless operate in pre-determined domains. They are placed in a definable objective reality and react towards it in a predictable fashion. Expert recommender systems provide one case in point, where ‘directories of employee expertise’ are made available to staff in the wider organisation (Lee et al., 2016) (see EK in Figure 1). Question (2), the relationship between the knower and the known, concerns an artificial agent’s meta-knowledge structure. AI’s awareness of its own knowledge needs to be codified in some manner so that the agent knows how to handle or control inferencing procedures, and what to do with the outputs of these inferences. The idea is old, but still holds: for an AI to function, it has to know how to use and apply its knowledge base (rule-based, algorithmic, stochastic). Since this is codified in some form, it is predetermined at some level, even though it may be adaptive. In AI, the process of knowing relies on algorithmic inferences (3). Even if these are adaptive, they are quite limited in scope. They have to be, as computers have no intrinsic value systems (or extremely limited ones) on which to base the use of outputs pertaining to unknown or unexpected stimuli. Thus, AI’s ability is limited, in terms of its acquired knowledge, by its algorithmic scope. Finally, AI’s ‘knowledge’ is passed on as information to either an artificial or a biological agent, or both (4). If it is passed on to another AI, then (3) can be reiterated.
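To make the point concrete, the sketch below is a minimal, purely illustrative forward-chaining rule engine in Python. It is not drawn from any system cited above; the facts and rules are invented. It shows how an agent’s ‘process of knowing’ (3) is fixed in advance by its codified rule base and control loop (2): the closure of derivable facts is determined entirely by the code.

```python
# Illustrative only: the fact names and rules below are invented.
facts = {"temperature_high", "pressure_rising"}

# Each rule: (set of premises, conclusion). Everything the agent can
# ever 'know' is pre-determined by this codified structure.
rules = [
    ({"temperature_high", "pressure_rising"}, "storm_likely"),
    ({"storm_likely"}, "issue_warning"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts emerge; the closure is fixed in advance."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['issue_warning', 'pressure_rising', 'storm_likely', 'temperature_high']
```

However adaptive such a loop is made, its outputs remain bounded by the codified rule base, which is the limitation the paragraph above describes.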

In summary, artificial agents operate in what is referred to as an ‘objective reality’, every aspect of which should, at least in principle, be verifiable. If this were not the case, we would not be able to model an artificial agent’s behaviour. Indeed, there are schools of thought which propose that, since knowledge is part of the natural world, it is reproducible or can be modelled; that it is an observable physiological phenomenon not requiring many of the ‘things we associate most deeply with being human’, such as self-reflection. All that is needed is a ‘single integrated agent with a large repertoire of states’ (Koch, 2004). While acknowledging the unfeasibility of cognitive awareness in computers, and recognising, for example, methodological disagreements between rule-based and neural computational models, some commentators (e.g. Aleksander, 2004; Pinker, 2005) are open to the possibility of cognitively-aware machines in the future. Tallis and Aleksander (2008) argue that the language of the cognitive sciences is usually misused in discussions of cognitive computational modelling. Specifically, ‘information’ has been used freely to describe both human and computer-based activities, implying that they perform similar activities in similar ways. To clarify this issue, this paper discusses epistemologies associated with ontological realism, relativism, and critical realism, arguing that knowledge has different meanings and roles depending on the perspective taken.

Approaches to knowledge can be categorised as objectivist, interpretive, and practice-based. The objectivist approach conceptualises knowledge as an objective entity that can be defined, measured, articulated, codified, stored and transferred (Davenport and Prusak, 1998). Not surprisingly, this approach is widely applied in both the IT (e.g. Alavi and Tiwana, 2003) and resource-based views of the firm (e.g. Zollo and Winter, 2002). While acknowledging the tacit dimension of knowledge, this view sets it aside, focusing on the development of IT-based repositories (Argote and Ingram, 2000) or organisational memory systems to ‘manage’ knowledge (Olivera, 2000). The approach has been criticised because it neglects the tacit dimension of knowledge (Gallupe, 2001), overlooks the impossibility of measuring ‘intellectual capital’ (Bontis, 2001), equates knowledge with information, and perpetuates misplaced epistemological assumptions about the nature of knowledge (Currie and Kerrin, 2004). It aligns with what Hirschheim et al. (1996) call ‘control oriented’ research, which regards cognitive computational models as information-based, treats symbols as formalisable, and sees social systems as mechanistic.

Conversely, the interpretive approach stresses the interpretation of meaning within a specific context, focusing on tacit and intangible aspects of knowledge. This strand of research argues that tacit knowledge is personal, relational, socially constructed, situated and emergent (Polanyi, 1983; Orlikowski, 2002; Tsoukas, 2005), recognising that uncertainty, diverse interpretations and ambiguity are inevitable. Furthermore, tacit knowledge is not only composed of both intellect and intuition (Styhre, 2004), but also has different degrees of tacitness (Ambrosini and Bowman, 2001). The interpretive school believes knowledge can be codified to different extents, but it is very difficult, perhaps impossible, to codify tacit knowledge completely. This means there are different forms of tacit knowledge (Collins, 2010). While some forms are difficult but feasible to codify, such as the articulable tacit knowledge mentioned above (Sternberg et al., 2000; Busch, 2008), other forms, such as in-articulable tacit knowledge, cannot be codified because of their intuitive, sensorial, emotional, sense-making and situational features (Anderson, 1983; Weick, 1995; Busch, 2008), for ‘much of it is not introspectable or verbally articulable (relevant examples of the latter would include our tacit knowledge of grammatical or logical rules, or even of most social conventions)’ (Pylyshyn, 1981, p.603). In the latter case, tacit knowledge can be demonstrated in actions of practice and doing (Tsoukas, 2005), a crucial point that takes us to the third approach to knowledge.

The practice-based approach (Gherardi, 2012), rather than seeing knowledge solely in people’s minds, considers knowledge as embedded in practice. From this perspective, the most important feature of knowledge is the idea that it is embodied, since behaviour or human motor activity is deployed in order to perform a specific action (Collins and Kusch, 1998). Still, individuals might be unable to explain what they are consciously doing, or they might simply be unaware of something they know (Collins, 1990). Knowledge is also seen as relational, since it is mediated by artefacts that might have diverse logics of action and history (Gherardi, 2012; Guzman, 2013). Because knowledge focuses on ongoing actions deployed in a specific context and time, it is emergent and situated (Tsoukas, 2005). This is why individuals sometimes need to break rules in order to adapt their actions to local operating conditions (Collins and Kusch, 1998). As with the interpretive perspective, knowledge in this approach is also personal since it considers feelings, intuition and social identity (Handley et al., 2006). Because practice can be planned or unplanned, habitual or frequent, causal or unexpected (Collins and Kusch, 1998; Spender, 2005; Schatzki, 2001), it can be ‘learned’ only during action (Raelin, 2008). The above means that interpretive and practice-based approaches help in appreciating the limitations of state-of-the-art AI and, by extension, KM, since there is recognition of the complex and multifaceted nature of tacit knowledge, which is the focus of the next section.

            Three forms of tacit knowledge and implications for the KM function

Some forms of tacit knowledge are feasible to codify; others are not. Relational and somatic tacit knowledge can be codified and mechanised to some extent. Collective tacit knowledge can be neither codified nor mechanised (Collins, 2010). This has significant implications for the KM discourse.

Relational (or weak) tacit knowledge, despite its potential for explication, is deliberately or non-deliberately kept hidden. RTK is tacit because of the myriad contingencies of human relations, rather than because of the intrinsic nature or location of knowledge (Richards et al., 2007). RTK is kept hidden because of the way societies are organised: via secrets, logistical (space and time) constraints, power and politics. Trade secrets – not telling your boss in advance that you plan to leave the company, or not asking for a pay rise when the boss seems unfriendly – all constitute examples of RTK. RTK can be made explicit only if everybody agrees not to hide their knowledge and if all logistical contingencies are eliminated (Collins, 2010, pp.91–8).

Somatic tacit knowledge is tacit because of our inability to explain rationally how our intellect directs certain complex physical movements, even though those movements can, in principle, be described. Polanyi’s (1958) example of bicycle riding is a good illustration of somatic tacit knowledge. Somatic tacit knowledge can therefore be codified and automated to the extent that human action can be imitated. It is currently possible to describe the procedures of most human actions, such as riding a bicycle, dancing, playing a musical instrument and playing chess.

            Collective tacit knowledge is located in society and can be learned only when it is embedded in a social milieu. Dancing in a social setting, speaking a natural language, and riding a bicycle while negotiating traffic on a busy street are examples of CTK. This is a unique human characteristic constituting the ‘ability to absorb ways of going on from the surrounding society without being able to articulate rules in detail’ (Collins, 2010, p.125). The human body plays a crucial role in making sense of the world. Humans learn common sense (i.e. combinations of STK, RTK, and CTK) for, through and by their senses/bodies (Dreyfus, 2009). Thus, because the social milieu cannot be reduced to a set of rules, and only humans have the capacity to learn CTK, it is not possible to codify and automate CTK.

Some, but not all, relational tacit knowledge can be explicated and therefore automated, using, for example, expert systems and/or neural networks to extract corporate social responsibility values from company documents and match these values to the financial outcomes of a company. Somatic tacit knowledge can also be explicated and mechanised using neural networks (Hidayati et al., 2016) and robotics: for example, through conversational agents that provide interactive advice to patients on a range of issues, such as dealing with drug dependency, phobias and bed-wetting; or mechanistically, as seen with bicycling robots. Collective tacit knowledge, however, cannot be codified, modelled, or mechanised. Machines cannot socialise or be meaningfully embedded in a social milieu, since humans and machines are different in kind and materially. Society is part of the self, and rules are articulated by humans in controlling positions; rules require a certain degree of acceptance by those not in such positions, so power relations represent yet another form of tacit understanding. This means that AI can support the objectivist perspective of KM in terms of facilitating the circulation of relational tacit knowledge and the performance of some forms of somatic tacit knowledge. However, AI is unable to support KM in terms of enabling the circulation of collective tacit knowledge. All AI can do in practice is store articulated rules and apply them to increasingly complicated situations. To deal with collective forms of tacit knowledge, the subjectivist approach of KM uses social mechanisms, such as communities of practice (CoP), social networks and action learning (Wenger et al., 2002; Raelin, 2008).
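As a concrete illustration of the kind of explication just described, the sketch below is a deliberately crude, hypothetical Python example: the CSR term list, documents and financial figures are all invented, and this is not any system from the cited literature. It scores company documents against an explicit lexicon so that the scores can be set against financial outcomes, showing how far a purely codified, rule-like proxy can go.

```python
# Hypothetical lexicon: these terms are invented for illustration.
CSR_TERMS = {"sustainability", "community", "emissions", "welfare"}

def csr_score(document):
    """Crude explicit-rule proxy: fraction of words drawn from a CSR lexicon."""
    words = [w.strip(".,") for w in document.lower().split()]
    return sum(w in CSR_TERMS for w in words) / max(len(words), 1)

# Invented documents and financial outcomes, purely for demonstration.
docs = {
    "FirmA": "We cut emissions and invest in community welfare programmes.",
    "FirmB": "Quarterly revenue grew on strong product demand.",
}
returns = {"FirmA": 0.04, "FirmB": 0.07}

for firm, text in docs.items():
    print(firm, round(csr_score(text), 2), returns[firm])
```

Everything the program ‘knows’ about CSR is the explicit lexicon it was given; the relational judgment of what counts as a CSR value remains with the humans who wrote the list.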

            The crux of the epistemology imbroglio, therefore, seems to reside in the multiple interpretations of the roles and limitations of different forms of knowledge combined with users’ pluralistic goals and intentions. The point is that knowledge is neither explicit nor tacit, but both (Polanyi, 1983). Thus, from an ontological perspective, it is not possible, at this stage at least, to argue for the existence of cognitive computational models. The interchangeable use of knowledge and information by some in the IT and operations research communities, as well as the impossibility of AI coping with collective forms of tacit knowledge, cannot be attributed solely to misunderstandings of the nature of knowledge. Higher order philosophical reasons, including concepts of wisdom, mechanisms, and power relations, help to explain the belief as well as the impossibility (for the time being at least) of developing AI capable of cognitive awareness in the biological agent sense. Power relations help to explain the strengths and limitations of KM when dealing with tacit forms of knowledge.

            Independent action: cognition, cognitive computational models, and wisdom

            Codification of the essential ingredients needed to model CTK (that is, how human cognition works) has so far proved elusive.

            Human cognition

Natural objects are assumed to exist. The human person may find intelligible the essential nature of an object by a process of abstracting (non-exhaustively, of course) from appearances (cf. Kretzmann and Stump, 1993, p.142). This essential nature contains those features involved in a definition that cannot be changed without giving rise to a different kind of object. The result of this abstraction is the concept or idea, which is separate from, but finds expression in, words or speech. The possession of ideas is proposed as a prerequisite for intelligence, and is what is meant by ‘knowing’. Reasoning is then the process of drawing conclusions from propositions whose meaning is understood, because the idea corresponding to each word is possessed, and the context in which the idea is developed and applied is mostly known. This means the very nature of human cognition is tacit. Such thinking also ties in with the tacit knowledge literature, where all human understanding has, as its basis, combinations of relational, somatic, and collective tacit knowledge (Busch, 2008). Human cognition can make abstractions and reflections in order to develop ideas. AI so far lacks this ability.

            Cognitive awareness

Cognitive awareness (Meek and Jeste, 2009) is central to the human self. This elusive concept has been described as the subjective character of experience (Nagel, 1974). It includes the sum total of all knowing in terms of ideas possessed (particularly the experience of knowing that one knows), including feelings and emotions. To develop computational models that reflect this, there must be a reduction from multiple viewpoints to a single viewpoint, with some aspects left aside. However, there is only a single point of view in cognition, the subjective character of the experience (what it is like to be a bat, rather than to behave as a bat, to use Nagel’s analogy), so any sort of reduction is impossible. The phenomenological features of experience cannot be excluded in a reduction (as is usually the case in deriving models) because they are all there is to experience. An analysis in terms of functional and intentional states is not possible: machines have such states, but since they are not cognitively aware, they do not experience states or events.

            Ultimately, we argue that it is not possible to produce a physical theory of the mind (to be distinguished from physical processes in the brain).

            If the facts of experience, facts about what it is like for the experiencing organism are accessible from only one point of view, then it is a mystery how the true character of experience could be revealed in the physical operation of the organism (Nagel, 1974, p.385).

It follows that it cannot be possible, by using cognitive computational models, to produce or even emulate intelligent human behaviour, and so produce a fully functioning ‘human’ cognitive system. That which cannot be modelled cannot be symbolically represented in a computer program.

AI-based artefacts are unable to deal with this form of knowledge because of its intrinsically tacit and practice-based nature. The limitations of cognitive computational models, therefore, do not seem to reside in the processing capacity of IT or in the many shortcomings of software development. The root of the limitations resides in systems developers’ mistaken attempts to perform with computers tasks that entail the use of self-awareness, wisdom, and human cognition: tasks performed only by humans.

            To explore further the limits of AI’s contribution to KM, we debate the essence of human wisdom, a key form of CTK. Meek and Jeste (2009, p.355) propose six distinct components to wisdom, namely:

            • prosocial attitudes and behaviours (rising above self-interest);

            • social decision-making/pragmatic knowledge of life (dealing effectively with constant complex social situations);

• emotional homeostasis (effective control of emotional and cognitive processes);

            • reflection/self-understanding (prerequisites for insight);

            • value relativism/tolerance (tolerance of value systems);

            • acknowledgement of, and dealing effectively with, uncertainty and ambiguity.

Using neuro-imaging, they find complex interactions among the parts of the brain involved in each of these characteristics when measured. Aside from evidence involving genetic predisposition and particular cross-sections of neurotransmitters, these forms of behaviour were found at times to be tied to reward neuro-circuitries. Their work clearly shows the limitations in our understanding of the human cognitive system. We simply do not know how cognition comes about, and what cannot be understood cannot be modelled (Pinker, 1997, 2005). Despite our best efforts, the most sophisticated computer models of biological agents remain mere models, obtained by a reductionist process that fails altogether to capture the essence of cognition because the dynamics of cognition remain a mystery. This also means that wisdom, an idea closely associated with concepts of human cognition and cognitive awareness, cannot be modelled or automated, since it is an inherently human, socially acquired characteristic that cannot be explained or codified. Objectivist views of knowledge thus not only ignore wisdom, but also overlook the impossibility of codifying, storing, transferring or applying this type of knowledge. However, the idea of reward neuro-circuitries cannot be dismissed, as it constitutes a connection between the physiology of the brain and the power relations that emerge in the social milieu.

Concepts of human wisdom, cognition and self-awareness are useful for explaining the potential roles of humans and technological artefacts. While AI-based artefacts are developed to perform highly routine tasks (e.g. expert recommender systems, natural language processing) in stable and controlled environments, cognitive awareness can be applied to generate new ideas and reflections through wisdom; the roles are complementary rather than competing. The problem emerges when technology developers attempt to build computer models (mechanisms) to substitute for skilled, creative, reflective human action in AI-based artefacts. Unfortunately, such beliefs have gone a long way towards elevating these mechanisms, grounded in a materialist ideology, to a metaphysics.

            Mechanisms: limitations and implications for the AI/KM debate

In order to understand the elevation of mechanism to a metaphysics, it is necessary to start with the origins of science. The core of modern science originated in the sixteenth and seventeenth centuries. Bacon (1561–1626) proposes the scientific method, Hobbes (1588–1679) publishes Leviathan and becomes one of the founding fathers of materialism, Galileo (1564–1642) explains natural phenomena (using efficient causes and matter in motion), Descartes (1596–1650) employs the method of mathematics (requiring clear and simple ideas as axioms), and Newton (1642–1727) provides a comprehensive system of mechanics based on mathematical laws governing the behaviour of conceptual models (particles with mass concentrated at a point).

Newton discovered that the movements of natural bodies approximated those predicted by these mathematical laws and mechanistic models, making them intelligible, discoverable and useful for prediction. In such a modelling process, complex natural bodies are reduced from a single viewpoint: the quantitative or mathematical. The natural physical body (what is the case), which cannot be fully comprehended in its multidimensional richness, is reduced to the relatively familiar and comprehensible point-object particle in motion (what seems to be the case). The driving force of mechanism, with its accompanying use of efficient causality, eventually led to the spectacular success enjoyed by the Industrial Revolution.

            In his foundational search for the same absolute clarity and definiteness in the physical world as was found in mathematics, Descartes gained an apparently certain basis for knowledge from the existence of his mind (Watling, 1985). Distinguishing his mind or thinking self from his body, he placed the essence of bodies in their extension (composed of integral parts) with local motion as the only motion considered. This mind–body problem became a mind–matter problem when, as explained earlier, abstract models were assumed to be existing physical entities, a process suited to the rationalist thinking of Descartes as it provided clear and simple ideas upon which a system of knowledge could be built. It was then an easy step for Hobbes to deny mind as a separate substance, and to make mind and matter equivalent. Mind, then, considered to be fully explicable in terms of mathematical models, takes on physical attributes and is explained in terms of particles and motion.

Hobbes equated quantitative models with the whole of reality, ignoring the reductive process involved in their origin, and so created a metaphysics of mechanism (Flew, 1985). To identify models (e.g. Newtonian particles) with natural bodies, or to regard them as equivalent, is to commit a category mistake of a type identified by Wittgenstein (1889–1951) (Dreyfus et al., 2000). Although they can appear in sentences of the same logical form, natural bodies and mathematical models do not enjoy the same form of existence. Mechanism, therefore, becomes a metaphysics when it is assumed that all phenomena, including natural bodies, are adequately explained by intrinsically immutable quantity and local motion – the basic characteristics of machines. Upon such a metaphysics, Hobbes developed an epistemology as well as natural, moral and civil philosophies that continue to influence thinking to this day. Thus, mechanism has become the common-sense method of understanding all physical phenomena, including human systems such as politics, economics and organisations. This objectified view of the world is ideal for the application of deterministic models, easily applicable (once defined) through various forms of automata.

            Practical limitations of AI and implications for KM

Building on the above conception of the human, the second argument concerns the application of the idea of mechanism to AI, and hence its implications for KM. Contemporary attempts to construct AI ‘maintain that suitably programmed computers can literally be said to engage in processes of thought and reasoning’ (Lowe, 2000, p.193), thus emulating high-level functions of the rational human person. In AI, rather than mathematical models of mechanics, the primary data are formal symbols embodied in an electronic memory device. Physical laws are replaced by the syntax of coded logical rules that manipulate formal symbols with the power of a processing unit, according to procedures built into these rules, in order to simulate or model computational processes of the human mind. In this understanding of AI (termed weak AI by Searle, 1990), the objectives include both the development of more powerful mind-simulation programs and an improved understanding of the workings of the human mind.
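The following is a toy, hypothetical Python sketch of this purely syntactic symbol manipulation, in the spirit of Searle’s Chinese Room; the symbols and rule table are invented and do not represent any system from the literature. Outputs are produced by rule lookup alone, with no meaning attached to the tokens.

```python
# Invented rule table: input symbol tuples map to output symbols.
# The program 'answers' correctly without understanding anything.
RULES = {
    ("GREETING",): "GREETING_REPLY",
    ("QUESTION", "WEATHER"): "WEATHER_REPORT",
}

def respond(symbols):
    """Map an input symbol sequence to an output symbol by lookup alone."""
    return RULES.get(tuple(symbols), "UNKNOWN_SYMBOL")

print(respond(["GREETING"]))             # GREETING_REPLY
print(respond(["QUESTION", "WEATHER"]))  # WEATHER_REPORT
print(respond(["JOKE"]))                 # UNKNOWN_SYMBOL
```

The tokens could be swapped for arbitrary strings without changing the program’s behaviour, which is precisely the sense in which the manipulation is syntactic rather than semantic.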

            The reductionist process of mechanism, again making a category mistake and identifying the model (here symbols and the processing unit) with the natural body (mind of the human person), leads to the claim (termed strong AI by Searle, 1990) that it will eventually be possible to create a mind, equivalent to a human mind, simply by designing a sufficiently complex computer program with the right inputs, logical procedures, and outputs. Further, the philosophy of logical mechanism holds as a central thesis that a finite deterministic automaton can perform all human functions (Burks, 1990, p.409).

In summary, the ultimate goal of AI currently appears unreachable, as computers lack not only the self-awareness and reflection characteristic of human intelligence, but also those properties acquired by humans through being embedded in a social milieu, such as double-loop learning (Argyris and Schön, 1978). Computers are unable to reflect on their own performance, to know that they know, as a human person can. Computers can never step outside the code, reflect on the code, and contribute their own observations. All choices are determined by the driving code, even if this code includes the generation of data from a probability distribution, as in Monte Carlo simulations. Instruction manuals explain how to operate computer systems, but such questions as how, when, why and for what purpose various systems should be used cannot be answered without looking at a dimension that permeates all human activity: power relations.
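The Monte Carlo point can be made concrete. In the minimal sketch below (a standard pi estimate, written for illustration only), even the ‘random’ draws are fixed entirely by the driving code and its seed, so the program makes no choice that was not already determined.

```python
import random

def estimate_pi(n_samples, seed):
    """Monte Carlo estimate of pi; every 'random' draw is fixed by the seed."""
    rng = random.Random(seed)  # the code, including the seed, determines all draws
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

# Two runs with the same seed make identical 'choices': the program
# cannot step outside its driving code.
print(estimate_pi(100_000, seed=42) == estimate_pi(100_000, seed=42))  # True
```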

            Artificial intelligence, knowledge management, and power relations

The strengths and weaknesses of AI and KM cannot be credited solely to the nature of knowledge and to ideas of wisdom and mechanism. Power relations are crucial to understanding the AI/KM relationship because, during AI and KM utilisation, people continuously negotiate identities, rules, and destinies in support of projects of either dominance or resistance (Courpasson et al., 2012). In situations where tacit knowledge prevails, soft KM issues become relevant as they affect AI/KM utilisation (bottom-left end of Figure 1). In this case, a power/political perspective helps in understanding the workings of soft KM mechanisms dealing with diverse forms of tacit knowledge (Stacey, 2007). Soft KM issues relate to such parameters as status, commitment, recognition, idealistic attitudes, and underlying assumptions about the role of AI/KM emerging during social interaction (Guzman and Wilson, 2005; Busch et al., 2008). Soft KM mechanisms are people-based and technology-supported. We focus on four representative mechanisms: organisational memory systems (such as intranets and electronic boards); people-based memory systems (such as social networks, F2F and online) and knowledge centres; CoPs; and action-learning (Olivera, 2000; Wenger et al., 2002; Raelin, 2008).

            Organisational memory systems are technology and people-based means by which organisations collect, store, and provide access to their experiential explicit knowledge, knowledge that can be communicated verbally or via documents (Olivera, 2000). CoPs are formal or informal self-organised groups in which hierarchical relations are unimportant and in which members are mutually engaged, and share a repertoire of routines, concepts, tools and language, and goals – usually related to learning/mastering a practice (Wenger et al., 2002). Despite their limitations, CoPs appear suitable for mobilising tacit forms of knowledge (Roberts, 2006). Finally, action-learning involves learning to become a practitioner from the experience of participating in the solution of real-world problems, including learning with others by working on, and then reflecting on, actual actions occurring in real work settings. Individuals accumulate experience by devising workable solutions in messy, interdependent and dynamic situations (Raelin, 2008).

Building on Casey and Olivera (2011) and Roberts (2006), we posit that the extent to which AI/KM works is related to the prevailing type of knowledge required, the KM mechanism used, and how power-related mechanisms are applied and to what end. Briefly, the literature distinguishes episodic from systemic power relations (Fleming and Spicer, 2014). Episodic power refers to a set of discrete actions performed by self-interested actors. Conversely, systemic power works through established routines, rules, ideologies, and traditions that favour particular groups. While the episodic dimension of power, working through coercion and manipulation, is related to explicit knowledge, systemic forms of power (domination and subjectification) are related to tacit forms of knowledge.

In stable contexts with shared organisational goals and repetitive tasks, explicit knowledge is especially relevant (Stacey, 2007). We posit that KM mechanisms dealing with EK, such as intranets, electronic boards, blogs and wikis, work efficiently in this situation as they are supported by power-related mechanisms such as formal authority, access to resources and (mostly) normative mechanisms (see row 1 in Table 1). The use of IT-based KM mechanisms allows not only access to knowledge, but also the emergence of risks associated with individuals creating counter-knowledge: the creation and diffusion of incorrect interpretations of events and facts, unsupported explanations and false beliefs, compromising security and privacy (Soto-Acosta and Cegarra-Navarro, 2016). This is possible because, during knowledge collection, storage, and distribution, there are people who might gain or lose, as information can be circulated unequally and selectively, yielding status, legitimacy, and privilege to some groups but not others (Wexler, 2002; Busch et al., 2008).

Table 1. Relationships among knowledge types, soft KM mechanisms, and power mechanisms

Types of knowledge | Knowledge creation | Knowledge acquisition | Knowledge diffusion | Knowledge application | Power mechanisms

1. Explicit knowledge | – | Intranets; electronic boards, blogs, wikis | Intranets; electronic boards, blogs, wikis | Intranets; electronic boards, blogs, wikis | Formal authority; access to resources; normative

2. Tacit knowledge:
• STK | Social networks | Social networks | Social networks | Social networks | Normative
• RTK | Knowledge centre | Knowledge centre | Knowledge centre | Knowledge centre | Social skills (agenda setting, rule interpretation)
• CTK | CoP | CoP | CoP | CoP | Ideology and symbol manipulation

            Conversely, in open-ended situations where issues are unclear, preferences are volatile and rational arguments lacking (e.g. unstable context and/or unshared organisational goals), diverse forms of tacit knowledge are likely to be crucial for AI/KM (Stacey, 2007). Because tacit knowledge is subjective, relational, situated, emergent, and open to interpretation (Tsoukas, 2005), episodic political processes (negotiation and the building of political coalitions) together with systemic power mechanisms, such as normative and ideologically driven control mechanisms based on conformity, are highly related to how soft KM mechanisms are used. Knowledge and power, therefore, are tightly tangled (Clegg et al., 2006). Power helps to realise (or block) benefits (or risks) of AI/KM, and adequately dealing with power issues in KM processes involves mobilising RTK, STK, and CTK. Therefore, in open-ended situations, people-based soft KM mechanisms seem to be more relevant than IT-based organisational memory systems. Action-learning, CoPs, as well as (F2F and online) social networks and knowledge centres are common soft KM mechanisms. Row 2 in Table 1 illustrates these situations.

The mobilisation of tacit forms of knowledge necessarily involves the use of power mechanisms. For example, situations dealing with STK call for action-learning mechanisms (e.g. learning by doing) that demand trust and collaboration. In turn, trust and collaboration inside organisations are usually built through ideological framing and the use of political language to give meaning to specific events (Edelman, 1985). Tenure and promotion are two examples: both depend on knowledge of power structures, trust, and collaboration, drawing on a tacit understanding of how the organisation operates and rewards its employees.

            In situations dealing with RTK, social networks (F2F and online), supported by IT-based organisational memory systems, enable individuals to disperse and hide information. The use of KM mechanisms to deal with RTK is closely related to power and politics, as RTK is hidden not simply because of the constraints of space and time, but mainly for power and political reasons. Individuals deliberately do not disclose all their knowledge as ongoing political games in organisations demand negotiation and legitimisation that affect future career options, reputation, and the power positions of groups and individuals. A characteristic case here is of lawyers withholding their tacit (and explicit) knowledge in law firms (Terrett, 1998). Mobilising RTK involves manipulation and reinterpretation of rules, as well as personal skills, to convince people to set (or block) agendas.

            In open-ended situations, CoPs, social networks, and knowledge centres support CTK creation and sharing. Because CTK is highly tacit, its mobilisation usually involves the use of domination and subjectification power mechanisms. Organisational ideologies of efficiency, and normative mechanisms to convince people and construct consent (e.g. trust) are always at play during CTK mobilisation. Because IT-based tools do not enable trust-building, face-to-face social interaction (enacted through CoP and social networks) is crucial to create mutual understanding and build trust. Yet, the development of CoP is permeated by political processes in that pluralistic actors do not necessarily agree on means and ends and power positions are usually unevenly distributed inside groups. Therefore, power relations shape social interaction and perceptions during negotiation, trust-building, and mutual understanding processes (Roberts, 2006). This suggests the extent to which CoP members acquire, share, apply, and create knowledge is closely related to the mechanisms of power these members deploy.

            Discussion

The root limitations of AI stem from the objectivist view of knowledge and the equation of knowledge with information, combined with the application of the idea of mechanism to human behaviour and thinking, which is incompatible with the idea of knowing. The latter involves human cognition and consciousness, constituted by tacit and experiential activities that cannot be codified, as they remain collective forms of tacit knowledge, learnt and exercised only by human beings.

            The relationships between AI and KM have implications for wider society as to how humans design and use technology, and how power relations are continuously redistributed. A key aspect is the realisation of the limitations of both AI and KM in relation to the extent to which diverse forms of tacit knowledge can be effectively created, acquired, diffused, and applied. AI can adequately support KM only when dealing with explicit knowledge and with some forms of STK and RTK. To date, there is no AI machine able to ‘learn’ collective tacit knowledge (bottom-left side of Figure 1).

            We should try to envisage just how KM can help address some of the limitations of AI. Because KM is partly IT-based and partly based on social interactions, KM is able to deal with some forms of relational, somatic, and collective tacit knowledge. Provided AI technology developers enable adequate human–computer interaction, AI technologies can gain some access to the bottom-left side of Figure 1. For example, the use of new interactive/sensorial/spatial interconnected (internet) machinery, blended with F2F contacts, enables relational connectivity (Amin and Roberts, 2008), opening the possibility of sharing some forms of RTK and CTK. This is an item for the research agenda.

            The way AI and KM evolve is related to how society is organised and technological development trends (Collins, 1990). Power relations permeating AI and KM advancements help explain how and why these technologies are developed in some ways but not in others. AI/KM developments are a function of technology developers’ goals and assumptions about how the world works and their views on the role of technological artefacts (e.g. Noble, 1984). For the moment, two distinct developmental paths can be identified (Spector, 2006; Winograd, 2006).

The first path refers to the development of machines that complement human actions and skills. In this approach, humans perform tasks that require wisdom, cognitive awareness and CTK, while machines are assigned tasks requiring explicit knowledge and some STK and RTK. To ensure individuals are fully committed to company goals, and that they use and share their CTK, formal authority and access to resources, combined with domination and subjectification power mechanisms, are necessary. While the human–machine collaboration idea is not new (e.g. Rosenbrock, 1990; Badham, 1991), technologies following this approach are rare. For example, the world’s largest robot manufacturer, FANUC (2017), launched its ‘collaborative robot’ (COBOT) only in 2015, promising that ‘robots would execute all strenuous tasks, enabling humans to dedicate their precious time to lighter, more skilled or demanding tasks’. Whether this is really happening, to what extent, and who is collaborating with whom are further items for the future research agenda.

With the second path, the drive towards the codification and commodification of knowledge, machines compete with humans (Roberts, 2001). Adhering to the idea of mechanism, technology designers approach knowledge as an object, assume all knowledge can be codified, and thereby turn knowledge into a commodity. On this premise, they develop technologies that attempt either to substitute for human actions or to turn humans into prostheses of AI/KM. While the substitution of human actions by machines brings unemployment, humans who become prostheses of AI/KM may be led to modify their behaviour to complement the machine’s capabilities. This path thus addresses explicit knowledge and some forms of RTK and STK, but does not address CTK.

As with the previous path, a combination of formal authority with domination and subjectification power mechanisms is necessary. Because this combination is deeply rooted in western society, it goes unnoticed. Similarly, our own beliefs can facilitate subjectification, as individuals are not always aware of the institutional and legacy soft issues already embedded in societal rules, practices, and technologies. To illustrate this path, we use Czarniawska’s (2011) study of news production in news agencies. News production represents the ultimate application of AI and KM, as it entails gathering, codifying, classifying, and diffusing news. On the one hand, the way technology is developed and applied contributes to knowledge commodification. Technology is acquiring a leading role in news production at the cost of human actions. AI technology (software) allocates codes to incoming data, and the codes are the basis of classification into regional, national, and global news. Yet this AI technology is created by IT specialists, not by news producers, and this shapes news production, since the algorithms that rule machine behaviour also rule user actions. For example, the software limits the maximum number of words in a text or a heading; the procedures to edit and send news are all set by software (Czarniawska, 2011).
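To illustrate how such software-imposed rules can govern user actions, here is a minimal, hypothetical Python sketch; the code table, word limit and items are invented, and this is not the system Czarniawska describes. Classification follows a fixed keyword table set by developers, and submissions violating the built-in limits are simply rejected.

```python
# Hypothetical limit hard-coded by developers, not negotiated by editors.
MAX_HEADING_WORDS = 8

# Hypothetical code table assigned by IT specialists, not journalists.
REGION_CODES = {
    "council": "regional",
    "election": "national",
    "summit": "global",
}

def classify(body):
    """Assign a scope code from the first matching keyword; default 'regional'."""
    for keyword, scope in REGION_CODES.items():
        if keyword in body.lower():
            return scope
    return "regional"

def submit(heading, body):
    """Reject any item that violates the limits built into the software."""
    if len(heading.split()) > MAX_HEADING_WORDS:
        raise ValueError("heading exceeds the software-imposed word limit")
    return {"scope": classify(body), "heading": heading, "body": body}

print(submit("Leaders meet", "A global summit opened in Geneva today."))
```

In a sketch like this, a journalist who wants an item classified differently, or a longer heading, has no recourse within the system: the developers’ rules have become the users’ rules.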

Further, there is a deliberate push towards the commodification path in this industry. Kirsti Suutari, global business manager of algorithmic trading at Reuters, believes ‘there is a real interest in moving the process of interpreting news from humans to the machines’ (Czarniawska, 2011, p.191). On the other hand, the application of domination and subjectification power mechanisms enables computerised control of news production. This can happen only if journalists and news suppliers give implicit consent to comply with the software’s rules. News agencies educate and train staff to ensure compliance and subjectification. News sources are educated to send information in a predetermined format dictated by software. News agency journalists are trained to classify and package news in a format dictated by the editorial software, while company culture promotes the idea that, by achieving organisational goals, self-fulfilment is achieved. Additionally, two contextual aspects contribute to subjectification and domination: first, the high levels of redundancy experienced worldwide by journalists from newspapers and TV stations; second, the western view that content is independent of form, which obscures the leading role of the algorithms embedded in AI technology for news production (Czarniawska, 2011, pp.202–3).

            Conclusion

This paper underscores the mutually complementary roles of AI and KM by noting the differences between human and technological traits. AI can extend socially based conceptual KM tools by supporting the human agents who manage most forms of knowledge, acting as a complementary prosthesis of human capabilities. Nevertheless, current developments seem prone to the second technological development path: AI-led development of autonomous intelligent machines attempting to substitute for human behaviour.

There are ongoing implications for research; we highlight four. First, the discussion of the ethical implications of the objectivist view of knowledge (embraced by those following a mechanistic view of human beings) remains open. The ethical critique should therefore identify the limitations of mechanism as a materialist philosophy that, whatever its value, is adopted or imposed by computer science (specifically AI) but does not originate within it. The ethical implications of applying the idea of mechanism to human knowledge seem an important avenue for future research.

Second, knowledge sharing is a constant challenge in the KM field. Because expert knowledge is a combination of RTK, STK, and CTK, it can be acquired only through practical experience (i.e. on-the-job learning). The research challenge is to determine what combinations of AI and socially based conceptual KM mechanisms are able to deal with expertise. We suggest practice-based approaches (Gherardi, 2012) can be useful for examining the interaction of AI technologies with socially based KM tools. Open questions include the supporting roles that AI-based technologies can play in assisting the mobilisation of CTK, and the limitations of AI technologies in supporting CTK learning (e.g. in online higher education). The implication is that further joint KM/AI research is required. This is a challenging task, as the AI and KM fields have evolved largely in parallel, with little interaction; they need to overcome political barriers that are themselves a particular form of relational and collective tacit knowledge.

            Third, as most work activities are performed by/with/through IT-based equipment, it is relevant to investigate relationships between humans and AI-based technologies in performing KM-related tasks. Some of the open questions regarding this topic include how AI-based technologies that are designed to substitute human actions affect the realisation (by humans) of KM-related tasks; how AI-based technologies designed to complement human actions affect the realisation (by humans) of KM-related tasks; and the design principles AI developers need to follow to support humans dealing with relational and somatic forms of tacit knowledge.

Finally, AI/KM relationships trigger technology design implications that in turn affect power relations in wider society. On the one hand, AI/KM relationships are strongly shaped by AI’s limited ability to deal with some forms of RTK and STK, and by its inability to access CTK. Some of these limitations can be partially addressed provided AI developers introduce new design principles that differentiate the three types of tacit knowledge. This is another item for the research agenda.

On the other hand, AI-driven sorting and packaging of information (e.g. Google or news production at news agencies) involves delegating to AI tasks that have political implications. Mindless AI, completely unaware of the consequences of its actions, now determines what information the general public has access to, what they read, and how they read it (as form shapes content). More than a simple technological development trend, this can be interpreted as another domination and subjectification mechanism, since AI did not itself decide to proceed in this way: small groups of people, aligned to market logic and with access to knowledge and resources, have already made these decisions. Quite a few research questions await empirical investigation. To what extent can AI’s embedded algorithms be considered domination mechanisms? How can technology developers consider AI’s political implications while promoting its positive aspects? What trade-offs are involved in AI’s further development? What resistance mechanisms should technology users apply to counter the domination mechanisms embedded in new technologies?

            Disclosure statement

            No potential conflict of interest was reported by the authors.

            Note

            References

1. and (2003) ‘Knowledge management: the information technology dimension’ in and (eds) Blackwell Handbook of Organizational Learning and Knowledge Management, Blackwell, Oxford, pp. 104–20.

2. (2004) ‘Advances in intelligent information technology: re-branding or progress towards conscious machines?’, Journal of Information Technology, 19, 1, pp. 21–7.

3. and (2001) ‘Tacit knowledge: some suggestions for operationalization’, Journal of Management Studies, 38, 6, pp. 811–29.

4. and (2008) ‘Knowing in action: beyond communities of practice’, Research Policy, 37, 2, pp. 353–69.

5. (1983) The Architecture of Cognition, Harvard University Press, Cambridge MA.

6. and (2000) ‘Knowledge transfer: a basis for competitive advantage’, Organizational Behavior and Human Decision Processes, 82, 1, pp. 150–69.

7. and (1978) Organizational Learning: A Theory of Action Perspective, Addison-Wesley, Reading MA.

8. and (2016) ‘Computational models in neuroscience: how real are they? A critical review of status and suggestions’, Austin Neurology & Neurosciences, 1, 2, available from austinpublishinggroup.com/neurology-neurosciences/download.php?file=fulltext/ [accessed July 2017].

9. (1991) ‘Technology, work and culture’, AI & Society, 5, 4, pp. 263–76.

10. (2001) ‘Assessing knowledge assets: a review of the models used to measure intellectual capital’, International Journal of Management Reviews, 3, 1, pp. 41–60.

11. (1988) ‘The information society: computopia, dystopia, myopia’, Prometheus, 6, 1, pp. 61–77.

12. (1990) ‘The philosophy of logical mechanism’ in (ed.) The Philosophy of Logical Mechanism, Kluwer Academic, London, pp. 349–524.

13. (2008) Tacit Knowledge in Organizational Learning, IGI-Global, Hershey PA.

14. , and (2008) ‘Generational differences in soft knowledge situations: status, need for recognition, workplace commitment and idealism’, Knowledge and Process Management, 15, 1, pp. 45–58.

15. and (2011) ‘Reflections on organizational memory and forgetting’, Journal of Management Inquiry, 20, 3, pp. 305–10.

16. , and (2006) Power and Organizations, SAGE, London.

17. (2014) ‘Interaction between tacit and explicit knowledge in socio-spatial context’, Prometheus, 32, 1, pp. 101–4.

18. (1990) Artificial Experts: Social Knowledge and Intelligent Machines, MIT Press, Cambridge MA.

19. (2010) Tacit and Explicit Knowledge, University of Chicago Press, Chicago IL.

20. and (1998) The Shape of Actions: What Humans and Machines Can Do, MIT Press, Cambridge MA.

21. , and (2012) ‘Rethinking power in organizations, institutions, and markets: classical perspectives, current research and the future agenda’, Research in Sociology of Organizations, 34, pp. 1–20.

22. and (2004) ‘The limits of a technological fix to knowledge management’, Management Learning, 35, 1, pp. 9–29.

23. (2011) Cyberfactories: How News Agencies Produce News, Edward Elgar, Cheltenham UK.

24. and (1998) Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston MA.

25. , , , , , , , and (2014) ‘Knowledge vault: a web-scale approach to probabilistic knowledge fusion’ in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, New York, pp. 601–10.

26. (2009) On the Internet, Routledge, London.

27. , and (2000) Mind over Machine, Simon and Schuster, New York.

28. (1985) ‘Political language and political reality’, Political Science & Politics, 18, 1, pp. 10–9.

29. FANUC (2017) Collaborative Robots, available from https://www.fanuc.eu/uk/en/robots/robot-filter-page/collaborative-robots [accessed July 2017].

30. and (2014) ‘Power in management and organization science’, Academy of Management Annals, 8, pp. 237–98.

31. (1985) ‘Hobbes’ in (ed.) A Critical History of Western Philosophy, Free Press, New York, pp. 153–69.

32. (2001) ‘Knowledge management systems: surveying the landscape’, International Journal of Management Reviews, 3, 1, pp. 61–77.

33. (2012) How to Conduct a Practice-based Study: Problems and Methods, Edward Elgar, Cheltenham UK.

34. (2013) ‘The grey textures of practice and knowledge: review and framework’, European Business Review, 25, 5, pp. 429–52.

35. and (2005) ‘The ‘soft’ dimension of organizational knowledge transfer’, Journal of Knowledge Management, 9, 2, pp. 59–74.

36. , , and (2006) ‘Within and beyond communities of practice: making sense of learning through participation, identity and practice’, Journal of Management Studies, 43, 3, pp. 641–53.

37. (1999) ‘The search-transfer problem: the role of weak ties in sharing knowledge across organizational subunits’, Administrative Science Quarterly, 44, 1, pp. 82–111.

38. , and (1999) ‘What’s the strategy for managing knowledge?’, Harvard Business Review, 77, 2, pp. 106–16.

39. , , and (2016) ‘Combining feature selection with decision tree criteria and neural network for corporate value classification’, Knowledge Management and Acquisition for Intelligent Systems/Lecture Notes in Artificial Intelligence (LNAI)/14th Pacific Rim Knowledge Acquisition Workshop, Phuket, Thailand, pp. 31–42.

40. , and (1996) ‘Exploring the intellectual structures of information systems development: a social action theoretic analysis’, Accounting, Management & Information Technology, 6, 1–2, pp. 1–64.

41. (2004) The Quest for Consciousness: A Neurobiological Approach, Roberts & Company, Englewood CO.

42. and (1993) The Cambridge Companion to Aquinas, Cambridge University Press, Cambridge.

43. , and (2016) ‘Amalgamating social media data and movie recommendation’, Knowledge Management and Acquisition for Intelligent Systems/Lecture Notes in Artificial Intelligence (LNAI)/14th Pacific Rim Knowledge Acquisition Workshop, Phuket, Thailand, pp. 141–52.

44. (2000) An Introduction to the Philosophy of Mind, Cambridge University Press, Cambridge.

45. and (2009) ‘Neurobiology of wisdom’, JAMA Psychiatry, 66, 4, pp. 355–65.

46. (1984) ‘An absence of malice: computers and Armageddons’, Prometheus, 2, 2, pp. 190–200.

47. (1974) ‘What is it like to be a bat?’ in and (eds) Introduction to Philosophy, Classical and Contemporary Readings, Oxford University Press, New York, pp. 382–9.

48. and (2003) ‘Implications of ICT for knowledge management in globalization’, Information Management & Computer Security, 11, 4, pp. 167–74.

49. (1984) Forces of Production: A Social History of Industrial Automation, Knopf, New York.

50. and (1995) The Knowledge-creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, Oxford.

51. (2000) ‘Memory systems in organizations: an empirical investigation of mechanisms for knowledge collection, storage and access’, Journal of Management Studies, 37, 6, pp. 811–32.

52. (2002) ‘Knowing in practice: enacting a collective capability in distributed organizing’, Organization Science, 13, 3, pp. 249–73.

53. (1997) How the Mind Works, Penguin, London.

54. (2005) ‘So how does the mind work?’, Mind and Language, 20, 1, pp. 1–24.

55. (1958) Personal Knowledge: Towards a Post-Critical Philosophy, Routledge & Kegan Paul, London.

56. (1983) The Tacit Dimension, Peter Smith, Gloucester MA.

57. and (2010) Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, Cambridge.

58. (1987) Thinking Machines – The Evolution of Artificial Intelligence, Basil Blackwell, Oxford.

59. (1981) ‘The imagery debate: analogue media versus tacit knowledge’ in and (eds) Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence, Morgan Kaufmann, San Mateo CA, pp. 600–14.

60. (2008) Work-based Learning: Bridging Knowledge and Action in the Workplace, Jossey-Bass, San Francisco CA.

61. and (2007) ‘Can we make sense of knowledge management’s tangible rainbow? A radical constructivist alternative’, Prometheus, 25, 2, pp. 161–85.

62. , and (2007) ‘Ethnicity-based cultural differences in implicit managerial knowledge usage in three Australian organizations’, Knowledge Management Research and Practice, 5, 3, pp. 173–85.

63. (2001) ‘The drive to codify: implications for the knowledge-based economy’, Prometheus, 19, 2, pp. 99–116.

64. (2006) ‘Limits to communities of practice’, Journal of Management Studies, 43, 3, pp. 623–39.

65. (1990) Machines with a Purpose, Oxford University Press, Oxford.

66. (2003) ‘Knowledge management – the new librarianship? From custodians of history to gatekeepers to the future’, Library Management, 24, 8/9, pp. 433–40.

67. and (2009) Artificial Intelligence: A Modern Approach, Prentice Hall, Upper Saddle River NJ.

68. (2001) ‘Introduction’ in , and (eds) The Practice Turn in Contemporary Theory, Routledge, London, pp. 10–23.

69. (1990) ‘Is the brain’s mind a computer program?’, Scientific American, 262, 1, pp. 20–31.

70. and (2016) ‘Guest editorial. New ICT for knowledge management in organizations’, Journal of Knowledge Management, 20, 3, pp. 417–22.

71. (2006) ‘Evolution of artificial intelligence’, Artificial Intelligence, 170, 18, pp. 1251–3.

72. (2005) ‘An overview: what’s new and important about knowledge management? Building new bridges between managers and academics’ in and (eds) Managing Knowledge: An Essential Reader, Sage, London, pp. 127–54.

73. (2007) Strategic Management and Organisational Dynamics: The Challenge of Complexity to Ways of Thinking about Organisations, Pearson Education, London.

74. , , , et al. (2000) Practical Intelligence in Everyday Life, Cambridge University Press, Cambridge.

75. (2004) ‘Rethinking knowledge: a Bergsonian critique of the notion of tacit knowledge’, British Journal of Management, 15, 2, pp. 177–88.

76. and (2008) ‘Computer models of the mind are invalid’, Journal of Information Technology, 23, 1, pp. 55–62.

77. (1998) ‘Knowledge management and the law firm’, Journal of Knowledge Management, 2, 1, pp. 67–76.

78. (2005) Complex Knowledge: Studies in Organizational Epistemology, Oxford University Press, Oxford.

79. (2009) ‘Ontological and epistemological foundations of qualitative research’, Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 10, 2, available from https://www.qualitative-research.net/index.php/fqs/article/view/1299/3163 [accessed August 2016].

80. and (2009) ‘To codify or collaborate – introduction to the special issue on Knowledge Management and e-Research Technologies’, Knowledge Management Research & Practice, 7, 3, pp. 192–5.

81. , and (2016) ‘Abbreviation identification in clinical notes with level-wise feature engineering and supervised learning’, Knowledge Management and Acquisition for Intelligent Systems/Lecture Notes in Artificial Intelligence (LNAI)/14th Pacific Rim Knowledge Acquisition Workshop, Phuket, Thailand, pp. 3–17.

82. (1985) ‘Descartes’ in (ed.) A Critical History of Western Philosophy, Free Press, New York, pp. 170–86.

83. (1995) Sensemaking in Organizations, Sage, Thousand Oaks CA.

84. , and (2002) Cultivating Communities of Practice: A Guide to Managing Knowledge, Harvard Business Press, Boston MA.

85. (2002) ‘Organizational memory and intellectual capital’, Journal of Intellectual Capital, 3, 4, pp. 393–414.

86. (2006) ‘Shifting viewpoints: artificial intelligence and human–computer interaction’, Artificial Intelligence, 170, 18, pp. 1256–8.

87. and (2002) ‘Deliberate learning and the evolution of dynamic capabilities’, Organization Science, 13, 3, pp. 339–52.

Author and article information

Journal: Prometheus: Critical Studies in Innovation, Pluto Journals
ISSN: 0810-9028 (print); 1470-1030 (online)
March 2017, 35, 1, pp. 37–56

Affiliations
[a] Department of International Business and Asian Studies, Griffith University, Brisbane, Australia
[b] Department of Computing, Macquarie University, Sydney, Australia

Author notes
Accepting editor: Joanne Roberts
[*] Corresponding author. Email: l.sanzogni@griffith.edu.au

DOI: 10.1080/08109028.2017.1364547

© 2017 Louis Sanzogni, Gustavo Guzman and Peter Busch

All content is freely available without charge to users or their institutions. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission of the publisher or the author. Articles published in the journal are distributed under a Creative Commons Attribution 4.0 licence (http://creativecommons.org/licenses/by/4.0/).

Page count: Figures: 1; Tables: 1; Equations: 0; References: 87; Pages: 20

Categories: Article; Research Paper

Computer science, Arts, Social & Behavioral Sciences, Law, History, Economics
