Artificial intelligence as a challenge for sociology (of work)
Even though sociology was reluctant to deal with the ‘products and problems’ (Rammert, 1995; own translation) of artificial intelligence (AI) until the 1990s, 1 this technology has increasingly become the focus of sociological research in recent years. A look at this research shows that newer forms of AI, in particular, seem to challenge the sociological discussion: how the development and use of technologies that can be classified as connectionist or neuro-symbolic AI (based on deep learning and artificial neural networks) can be understood and evaluated with regard to their effects on social contexts is still an open question. Nevertheless, the fact that the use of AI is accompanied by new possibilities of (not always desirable) knowledge and action seems to be a basic consensus in sociological research. The concrete questions, perspectives and findings are − as is to be expected from a multi-paradigmatic science − diverse and difficult to bring down to a common denominator, which may also be due to the fact that AI is continuously being further developed with partly latent, as yet hardly assessable consequences (cf. the contributions in Heinlein & Huchler, 2023). However, the starting point of most considerations relating to the impact of AI in society is the observation that AI is becoming quantitatively more important and is contributing to a qualitative change of social structures and processes (such as relations of power and dominance, political participation, social inequality and so forth). Various developments are responsible for this: the increasing penetration of social structures with data sources, the continuous generation of data in social processes, the advancing digital networking of organisations and social areas and the ever-increasing computing power available for processing and using the generated data. From a historical perspective, AI development is thus linked to processes of computerisation, digitalisation and datafication of society, but without being fully absorbed in them (Seising, 2021).
Furthermore, sociological research shows that the eclectic and generic attempt to define AI in terms of individual processes or technological foundations is necessarily incomplete. In contrast, the definition of AI as a technical solution for processes for which intelligent action was previously considered necessary is much too broad and general. It also implies an anthropological fallacy (Salles et al., 2020; Watson, 2019). In the following discussion, we try to understand AI ‘in action’ and at the same time to identify the inner logic of recent AI that sets it apart from other technologies. Our aim is to trace the socio-technical dynamics of AI and to propose how the simultaneous standardisation and increase in complexity of working practices can be conceptualised. We argue that recent AI is gaining ground in the world of work, because it can (at least to some extent) maintain and make productive the complexity and uncertainty of the work practices into which it is integrated.
Technological innovations that have emerged under the paradigm of connectionist or sub-symbolic AI are crucial to this. 2 In the last decade, these forms of ‘new’ AI have been analysed primarily in the context of social action and communication, societal narratives, cultural patterns and social structures (for example, Amoore, 2013, 2020; Cave et al., 2020; Crawford, 2022; Nassehi, 2019; Nowotny, 2021; Seyfert & Roberge, 2016; Zuboff, 2015). On the one hand, (social) robotics (Breazeal, 2002) as a materialised or embodied form of AI has been brought into focus primarily in praxeological terms, and questions have been asked about what constitutes the sociality of technical artefacts, how interaction with ‘intelligent’ machines can be understood, and how robot development in the laboratory can be analysed as a social practice (for example, Alač, 2009; Bischof, 2017; Koolwaay, 2018; Muhle, 2023). Here, the tangible and visible machine form of AI as a part of social processes tends to be in the foreground. On the other hand, in a perspective more oriented towards general mechanisms, logics and principles, AI is understood as an algorithmic process. As a rule, these perspectives do without recourse to the concrete material form of AI or ignore it in favour of a concentration on the inherent functionalities of ‘intelligent’ algorithms. Such an approach can be found, among others, in systems theory, which engaged with communication with computers at an early stage (for example, Baecker, 2011; Esposito, 2014, 2017; Harth & Lorenz, 2017), or in works on the ‘intelligent’ algorithmisation of the public sphere, power and control, such as in Zuboff’s concept of a new economic principle of ‘data, extraction, analysis’ (Zuboff, 2015), the ‘threat of algocracy’ (Danaher, 2016), an (in)visible ‘algorithmic life’ permeated by data (Amoore & Piotukh, 2016), or a narrowing of decision-relevant perspectives and future narratives through AI which is problematic in political and ethical terms (Amoore, 2020). 3
From a technological point of view, the development of connectionist AI aims to discover patterns in large, usually unstructured, data sets with the help of probabilistic methods. The term ‘connectionist’ refers to the fact that software systems programmed under this paradigm ‘de-emphasise the explicit use of symbols in problem-solving. Instead, they hold that intelligence arises in systems of simple, interacting components (biological or artificial neurons) through a process of learning or adaptation by which the connections between components are adjusted. Processing in these systems is distributed across collections or layers of neurons’ (Luger, 2005:453). Having gained relevance in AI research from the end of the 1980s onwards (cf. Smolensky, 1988; Fodor & Pylyshyn, 1988), connectionist AI is therefore not about symbolic representations (for example, those found in human language and forming the subject matter of symbolic AI, such as expert systems). Rather, the decisive factor is the structure of the algorithms: the algorithms used are functionally grouped (‘neurons’) and interconnected in such a way that their operations – modelled on biological neural networks – run variably in a networked manner. Symbols play no direct role in this networked architecture and become visible at best as a result: connectionist AI systems ‘still follow rules, but the rules are well below the semantic level. It is hoped that as a consequence of following rules at this low level, semantic properties will emerge – that is, manifest themselves in the processing and behaviour of the program – without having been explicitly programmed in. Consequently, when viewed at the semantic level, such systems often do not appear to be engaged in rule-following behaviour, as the rules that govern these systems lie at a deeper level’ (Chalmers, 1992:26; see also Smolensky, 2012). The advantage of this way of building AI is that the relations between the neurons can be changed depending on environmental stimuli and optimised with a view to the task to be performed. The quality of the algorithmic network increases with the use of the system.
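To make this architectural principle more tangible, the following minimal sketch (a toy example in Python with invented values, not drawn from any of the systems cited here) builds a small network of simple units arranged in layers; its behaviour is not programmed through explicit symbolic rules but emerges from repeatedly adjusting the connections between the units on the basis of examples.

```python
# Illustrative toy example: a tiny two-layer 'connectionist' network.
# No symbolic rules are coded; behaviour emerges from adjusting the
# weights between simple units.
import numpy as np

rng = np.random.default_rng(0)

# Example data: the XOR problem, which no single linear rule can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised connections ('weights') between layers of units.
W1 = rng.normal(size=(2, 4))   # input layer  -> hidden layer (4 'neurons')
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: activity is distributed across layers of units.
    h = sigmoid(X @ W1)          # hidden-layer activations
    out = sigmoid(h @ W2)        # network output

    # Learning: adjust the connections in proportion to the error,
    # i.e. the 'rules' operate below the semantic level.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_h

print(np.round(out, 2))  # typically approximates [0, 1, 1, 0] after training
```

Even in this miniature case, the ‘rules’ that produce the final behaviour reside in the adjusted weights rather than in any semantically interpretable instruction.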
The way in which the algorithmic complexity of connectionist AI is generated can be described as a ‘bottom-up’ approach:
One of the most attractive features of connectionist learning is that most models are data- or example-driven. That is, even though their architectures are explicitly designed, they learn by example, generalising from data in a particular problem domain. (Luger, 2005:845)
Connectionist AI is thus inscribed with degrees of freedom that serve to build its internal structures. These degrees of freedom become effective in the (always selective, due to programming) confrontation with a practice, by means of which the algorithmic structures prove themselves or are further developed. Accordingly, connectionist AI can be found in very different applications – for example, in digital tools with a user interface, running in the background with certain programs, or as a part of robotics – and with a very wide range of applications. The key point about the ‘intelligence’ of the new technologies that go by the name of AI is that the algorithms are capable of structuring unknown data independently and further developing their structuring methods self-referentially, that is, along internal selectivities gained from dealing with non-algorithmic environments. That is why, as Elena Esposito (2017) puts it pointedly (although too narrowly focused on unsupervised learning), an ‘intelligent’ algorithm is successful
if it learns to learn by itself, that is, to develop a practice of unsupervised learning, in which the algorithm does not learn what others teach. Instead, it decides autonomously what to learn and what to communicate. (Esposito, 2017:261)
It should be added that this can only apply to the very specific purposes for which an AI application has been programmed. Yet this ‘learning’ can take place in different contexts and be harnessed for different purposes – for example, to support complex decision-making, to move vehicles in unpredictable environments, or to interact and communicate with erratic people.
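A familiar illustration of such independent structuring of unknown data is unsupervised clustering. The following sketch (illustrative Python using scikit-learn; the data points are invented) groups unlabelled data into categories that no one specified in advance, although, in line with the caveat just made, the number of groups and the features to be considered are still fixed by the designer.

```python
# Illustrative sketch of unsupervised learning: the algorithm is not told
# what the categories are; it derives its own structuring from the data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Unlabelled data: e.g. usage records described by two measured features.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(100, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(100, 2)),
])

# The algorithm partitions the data into groups of its own making,
# though the number of groups is still a design decision.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)   # structures discovered, not taught
```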
If we consider connectionist AI as an empirical object of research on work, we can see that it is a technology that can be used and interpreted in very different ways in different fields and contexts (Brynjolfsson & McAfee, 2014; Ford, 2016). Broadly speaking, discussions on AI in the sociology of work focus on how it can replace human work (substitution), complement human work (complementarity) or empower working subjects (augmentation) (Huchler, 2019, 2022). This raises further questions concerning possible uses of AI in the world of work: 4 does AI accelerate automation by itself, or does it act as a tool? Does it serve to empower workers or to control them? Is it used to control work or to make it more flexible, to recentralise or decentralise it? What effect does it have on occupations, qualifications, skills and knowledge and, not least, on employment? AI thus appears as a new, yet influential part of digital transformation.
Since the beginning of automation and computerisation, digital technologies have increasingly been used, modifying the requirements and conditions of work in many ways (DeSanctis & Poole, 1994; Leonardi & Treem, 2020). Digital tools and systems bring with them their own (programmed) logics, which actively restructure work activities and must be dealt with and managed within the work process. This applies, for example, to questions of autonomy and control on the shopfloor (Mazmanian et al., 2013; Bader & Kaiser, 2017), changing relationships among professional groups (Barrett et al., 2012) and new forms of ‘behavioural visibility’ in organisational contexts (Leonardi & Treem, 2020).
With regard to work, AI can be understood as a new method of informatisation or computerisation of work and society (Baukrowitz et al., 2006), as automation (Huchler, 2022) or as a new driver of productivity and rationalisation, but also in relation to other trends such as globalisation, financialisation and networking (Schmiede, 2006). AI can serve to couple more tightly or more loosely the control of work and concrete work practices, as well as the formal and informal structures of organisations (Weltz, 1988). In studies on AI in the world of work, there has been little conceptual analysis of AI as a ‘general-purpose technology’ (Brynjolfsson et al., 2019; Crafts, 2021), separate from other change processes (for example, digitalisation in general, Industry 4.0, platforms). AI can function as an instrument of work rationalisation (in companies or on platforms), through control, deskilling, a narrowing of the scope of action and alienation (in the sense of a digital Taylorism; Butollo et al., 2017), and as a technology enabling monopolisation and proprietarisation in digital capitalism (Staab, 2019). AI potentially allows for an intensification of management driven by Key Performance Indicators (KPI), real-time optimisation, the anonymisation and automation of performance monitoring and the expansion of indirect control into the private sphere. But AI can also be used as a support system or tool (for example, broadening the scope of action or boosting the potential for learning and experience). Here, too, elements of work are initially replaced, productivity is usually increased and output generally intensified, but the emphasis is on the empowerment of workers. Such worker-oriented operational strategies must be distinguished not only from Taylorism but also from systemic rationalisation focused on value creation processes (Nies, 2021), as well as from strategies for the use of new technologies focused mainly on exploitation (Pfeiffer, 2020). This aspect will be discussed below.
From a sociological point of view, one of the essential achievements of AI seems to be to insert itself as a dynamic element with a ‘relative autonomy’ (Rammert, 2007:82; own translation) into practical contexts. This autonomy is ‘relative’ for two reasons. First, AI is always a matter of programmed algorithms tailored to very specific purposes, that is, systems with ‘none or very little ability to do anything beyond their particular domain of functionality’ (Dyer-Witheford et al., 2019:10). The strength of current AI thus lies in its specificity for certain tasks and contexts; hence it is far from being a ‘general purpose technology’ (Crafts, 2021; Brynjolfsson et al., 2019). Second, AI is consistently dependent on data and all the associated prerequisites, such as data quality. This includes training data, used to increase and optimise the accuracy of the algorithms designed for specific purposes in advance, but also large data sets accessed by algorithms in real time to observe and structure what is happening within a usage practice. Despite these relativisations, recent AI exhibits a new quality of unpredictability, turning technical action into something other than a fixed coupling of processes or pure repetition of procedures, as is the case with conventional technologies: ‘The future behaviour of a technical agent, be it a robot or an avatar, cannot be predicted or calculated with certainty’ (Rammert, 2003:7; own translation). Contingency thus becomes a crucial feature of AI’s operation.
Hence, understanding AI solely as a further phase in the standardisation and formalisation of work or solely as reproducing existing power structures is too simplistic. It is at this point that we identify a relationship between AI and indirect organisational control of work. Through new organisational concepts (for example, flat hierarchies and project-based work instead of bureaucratic control), work is opened up to a certain extent and contingency in work is made possible (for example, by extending the scope of action). However, this does not happen in an arbitrary way but is embedded in new indirect control methods restructuring and framing work in a new way. In a similar sense, AI can be understood as a flexible method of structuring that permits contingency but at the same time establishes new structures. AI is programmed to retain links to a work practice, thus forming a new logic of work through reciprocal connections of ‘intelligent’ algorithms 5 and work activities. Far from being an aimless process, however, this is embedded in the context of use. This embedding is linked to standardisation effects, but these are accompanied by new opportunities for action and knowledge. Instead of seeing complexity and standardisation as opposites, we must think of them as intertwined aspects of work and organisation.
In the rest of this article, we therefore outline a perspective that is sensitive to the link between social practices and AI technologies. We want to show how the technical principle of AI simultaneously provides both for more contingency and more standardisation in work contexts. In the first step, this argument is spelled out theoretically by analysing the technical principle of connectionist AI both with regard to socio-technically generated contingencies and with regard to existing restrictions and selectivities inherent in the technology itself. Against this background, it will then be shown how the principles of contingency and selectivity become effective in the context of management and control of work (which is addressed in the following section). Finally, we conclude with a summary of our argument and identify starting points for further research on artificial intelligence in the sociology of work.
Artificial intelligence: enabling a situational approach to complexity and uncertainty
One of the main promises of AI and machine learning is that it will offer greater adaptivity and flexibility in the use of technology. In this sense, AI is understood as a technology that can handle the challenges of a world characterised by volatility (such as rapid market fluctuations), uncertainty and unplannability, complexity (partly as a result of increasingly networked processes) and ambiguity (not either-or but both-and, making clear categorisation difficult), referred to by Barber (1992) as VUCA. There is therefore an increasing need for situational and adaptive responses to these challenges, both in organisations and in other areas. Against this background, traditional forms of formal or technical work and process control reach their limits (Huchler, 2018). It becomes obvious that formal planning in advance combined with stable, linear technical or formal value creation processes lacks flexibility. This does not make these forms of coordination obsolete. On their own, however, they no longer offer competitive solutions for the flexibility needed in the modern world of work. The strategy of controlling, and thereby reducing, complexity no longer covers all requirements. It must be supplemented with strategies for dealing productively with complexity (Heinlein & Huchler, 2021). The supplementary new principle involves decentralised control based on the process and the object of work and, connected to this, a correspondingly high level of adaptivity.
The idea of flexible, networked and decentralised coordination of work is not new. Flat hierarchies, group and project work, agreements on objectives and agile project management, self-organisation and trust are long-established organisational methods. Renegotiated under the terms ‘Work 4.0’ and ‘New Work’, they aim to boost productivity, flexibility and adaptivity by means of self-coordinated processes. These are, however, human-oriented approaches, which focus on work activity as the central unit of coordination. Technology-centred approaches are the opposite of human-centred ones. The ‘Internet of Things’ (IoT) (Li et al., 2015) and the ‘Industry 4.0’ model, introduced in Germany in 2013, presented the principle of the decentralised self-coordination of work in purely technology-driven terms: ‘intelligent’ learning AI systems facilitate the situational, autonomous self-regulation of processes of production, work and value creation ‘from below’, that is, based on the process and on the extensive data now available, allowing a digital representation (‘digital twin’) of all relevant processes in real time. Usually, human-oriented and technology-centred approaches are isolated from one another and tend to be in competition rather than being consistently considered together (Heinlein & Huchler, 2021).
AI-based technologies promise an alternative since they create a new paradigm for dealing with uncertainty. The aim is not to reduce complexity and uncertainty ex ante as much as possible, keeping them out of the work process. Instead, AI enables a situational approach to complexity and uncertainty in the work process. Learning algorithms allow for a partial transfer of uncertainty and indeterminacy into the AI system itself. Thus, AI can be seen not just as an information technology artifact, but also as a data processing method that even transcends the computer as a ‘universal machine’ (Schmiede, 2006:461). Therefore, the new expectations on AI (for a critical account see Brödner, 2019 and Heinlein & Huchler, 2022) are not only linked to its potential to represent relevant work processes in data (‘digital twin’) and make them calculable or objectifiable. Rather, the integration of AI into work processes means that complexity and uncertainty no longer have to be reduced to linear and formal digital processes based on expert knowledge. Thanks to AI, complexity and uncertainty can be productively maintained and strategically managed as calculable risks, taking into account the situation or context and using adaptive formal processes.
The combination of probabilistic or statistical inferences (or big data), Artificial Neural Networks and Deep Learning boosts the adaptivity and flexibility of technical systems for use in the world of work. In contrast to linear ex-ante programs, the contingency of new AI systems is not only due to the data basis (including the possibility of data bias), but also depends on the procedures applied. Here, the ‘function approximation’ (Brödner, 2022:34; own translation) underlying connectionist AI and its potential procedural bias prove to be central. These sources of contingency are complemented by other strategies of adaptivity such as weighted and goal-oriented learning methods (for example, reinforcement learning) enabling AI systems to form and test new hypotheses and to derive strategies from them. There are also attempts to make existing artificial neural networks usable for new situations that are similar to those for which they were trained. For example, robots trained via machine learning should not learn each task from scratch but build on the algorithmic architectures of similar activities already learned. Here, there is a danger that an error or bias, once inscribed, will be reproduced. A similar problem exists when AI systems are trained using simulations.
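The reuse of already trained networks mentioned here can be pictured with a short sketch (Python/PyTorch; the layers and data are placeholders invented for the example): the earlier layers of a pre-trained model are ‘frozen’ and only a new final layer is adapted to the new task, so that whatever selectivity or bias the frozen part has acquired is carried over unexamined.

```python
# Illustrative sketch of reusing an already trained network for a similar
# new task ('transfer learning'). The pretrained part stands in for a
# network trained elsewhere; its learned structure is not revised here.
import torch
import torch.nn as nn

pretrained = nn.Sequential(          # placeholder for a network trained elsewhere
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
)
for p in pretrained.parameters():
    p.requires_grad = False          # frozen: inherited selectivity is carried over

new_head = nn.Linear(32, 3)          # only this part is trained for the new task
model = nn.Sequential(pretrained, new_head)

optimizer = torch.optim.Adam(new_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One (dummy) training step on data from the new context.
x = torch.randn(8, 16)
target = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()
```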
In light of the above, we can understand why new AI technologies have been interpreted as an evolutionary step from automation by technology towards the automation or even autonomisation of technology. With this step, AI is leaving the sphere of technology as a mere tool (Rammert, 1999), which was the underlying concept in previous phases of digitalisation. Sociologists have observed that new ‘hybrid work systems’ (Weyer, 2006:8) are evolving, in which agency is divided between humans and technology (Rammert & Schulz-Schaeffer, 2002). The ensuing questions about the actor status of AI, the transparency of deep learning (the ‘black box’), the attribution of responsibility to technical systems, and the scope for designing and regulating AI, form the horizon for a new range of research activities in the social sciences (Heinlein & Huchler, 2023). Yet describing AI as a new phase in the technology-centred approach to complexity and uncertainty within work processes does not tell us how AI should be conceptualised in practice, what opportunities it generates in the world of work, or what limitations it is subject to. To answer these questions, we develop below an approach that describes AI as a technology that introduces contingency into social practices. However, these contingencies are subject to certain restrictions, which we next describe using the concept of selectivity. We propose to use the interplay of contingency and selectivity as an analytical perspective for the practice of AI. This perspective is tested in the following section using the example of the (indirect) control of work.
Artificial intelligence and contingency
To better understand AI in light of the simultaneity of complexity and standardisation, we need a perspective sensitive to the situational integration and the relational impact of AI in sociotechnical practices. This is in line with perspectives that see the practical situatedness of sociotechnical arrangements as a condition and as an enabling and limiting factor for interconnected human and technological action (Latour, 2005; Schatzki, 2002; Suchman, 2007). On an analytical level, this means that an active, sometimes an acting, role is attributed to technical artifacts. All human and technical ‘entities that do things’ (Latour, 1988:303; emphasis in original) come into view. Since the discussion on AI quickly falls into the trap of anthropomorphism, two things must be taken into account here. On the one hand, it must not be conceptually overlooked and empirically underestimated that AI technology becomes more and more (inter)active qua programming – in other words, something non-human that, although inscribed by human hands, nevertheless does possess quasi-human potential in concrete practical contexts:
Voice outputs, agent-oriented programming and intelligence embodied in robots give technical artifacts a wider range of action, a larger radius of action and a finer ability to act and interact. The question of technology must be posed anew in view of these artifacts that are becoming more active and more closely connected to human units of action. (Rammert & Schulz-Schaeffer, 2002:12; own translation)
On the other hand, this does not imply a blanket attribution of agency and intentionality to technical artifacts, an attribution that has often been criticised in the literature (for example, Bloor, 1998; Collins & Yearley, 1992; Ropohl, 2005:399). Rather, it is a matter of determining more precisely in what way AI inserts itself into contexts of practice and how these contexts are thereby changed as spaces of reality and possibility within which action and communication take place. Our thesis is that AI is actively involved in the creation of chains of action and scope for action in constantly changing working practices. However, this ‘co-agency’ of technology can only develop and manifest itself in concrete work practices (Orlikowski, 2000, 2007), which in turn has a limiting effect on the contingent dynamics of AI. Talking about technology as an ‘acting’ entity is therefore not a normative demand, but methodologically opens up new research perspectives that are appropriate to the character of digital technologies (Faraj & Azad, 2012).
In public and scientific discourse, AI is typically described in practice-related terms: AI learns, thinks, acts, perceives, analyses, sorts, decides, observes and so forth. Even if one is well advised to treat humanising interpretations of AI with caution, they can be read as an indication that AI is associated with a specific effect that goes beyond the role often attributed to technology within sociology as ‘neutral means’ (Rammert, 1999:168). This is all the more evident because AI is able to establish relations within a socio-technical practice in line with its programmed purposes. Generally speaking, an algorithm can be understood as a ‘state transition system’ that
starts in an initial state and transits from one state to the next until, if ever, it stops or breaks. […] In particular, a sequential-time interactive algorithm […] is a state transition system where a state transition may be accompanied by sending and receiving messages. (Gurevich, 2012:37)
Interactive algorithms, as they are found in AI applications, have an inscribed openness that can be described as a sequence of transitions: internal state transitions are based on an exchange in a practice that leads to different levels of information between the starting point and the end point of what are, in principle, an infinite number of exchange processes. However, this applies not only to the algorithm, but to the practice itself, as it changes with the operation of the algorithm, that is, it undergoes ‘state transitions’ of its own logic. If one takes into account the ‘intelligence’ of an algorithm or system of algorithms described above, these transitions of practice are not always predictable. The programmed openness of interactive algorithms acquires a different, contingency-generating quality through the ability of ‘intelligent’ algorithms to self-referentially structure themselves: practice must reckon with the possibility of unpredictable information, the emergence of which it itself has influenced.
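Gurevich’s description can be pictured with a minimal sketch (Python; the names and the update rule are purely illustrative): an interactive algorithm holds an internal state, each transition receives a message from its environment and sends one back, and the environment in turn changes its own state in response, so that both sides undergo ‘state transitions’.

```python
# Minimal sketch of a sequential-time interactive algorithm in the sense
# quoted above: a state transition system whose transitions are accompanied
# by receiving and sending messages. Names and update rule are illustrative.

class InteractiveAlgorithm:
    def __init__(self):
        self.state = {"seen": 0, "estimate": 0.0}   # internal state

    def step(self, incoming: float) -> float:
        """One state transition: receive a message, update state, send a reply."""
        n = self.state["seen"] + 1
        # Internal structure is adjusted by the exchange itself (running mean).
        estimate = self.state["estimate"] + (incoming - self.state["estimate"]) / n
        self.state = {"seen": n, "estimate": estimate}
        return estimate                               # outgoing message


# The 'practice' side also changes state: each reply alters the next input.
algorithm = InteractiveAlgorithm()
signal = 10.0
for _ in range(5):
    reply = algorithm.step(signal)
    signal = 0.5 * signal + 0.5 * reply   # practice reacts to the algorithm's output
    print(round(reply, 3), round(signal, 3))
```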
In the case of connectionist or neuro-symbolic AI, the planned and purposeful ‘material delegation’ (Law, 2001), that is, the concrete transfer of tasks to and inscription of courses of action in a technology, does not merely refer to the close coupling of technical action and predictable result (for example, when a calculator performs the arithmetic operation ‘two times two’). Rather, the focus is on a loose coupling of input (cause) and output (effect), which depends on the variability of the algorithmic connections (for example, when an AI system analyses a very large and unstructured data set). This argument can be sharpened by means of a distinction, one side of which is provided by Ingo Schulz-Schaeffer (2008) when he speaks of attributions of meaning to technology as ‘generated selectivity’:
The respective shape of the interlocking of the mechanical components of a technical artifact or the algorithms of its computer-technical control programme is an expression of generated selectivity. Its technical processes are meaningful processes – from the perspective of its designer, who has set them up to give the artifact a certain functionality, as well as from the perspective of the user, who accesses this functionality. This feature of generated selectivity distinguishes authoritative causation by technical artifacts from cause-effect relationships attributed to nature. (Schulz-Schaeffer, 2008:3142; own translation)
As a category of meaning, generated selectivity has its correlate in the planned inscription of courses of action in technical artifacts that repeatedly come to bear in the practice of use and produce expected consequences – for example, when a pocket calculator repeatedly performs the arithmetic operation ‘two times two’ correctly. In the case of connectionist AI, however, this generated selectivity must be complemented by a generating selectivity that consists of breaking out of the idea and practice of reproducible repeatability of processes on the part of the technology and providing its own selectivities for subsequent meaning-making processes.
It is therefore not enough to simply ask for attributions to AI that make its work appear meaningful. Rather, ‘intelligent’ technical processes are capable of setting certain meaningful processes in motion and channelling them along technically generated selectivities:
The algorithm does not become more informed or more intelligent; it just learns to work better. But thereby it can produce increasingly complex communication with its users, who can learn unknown things about the world and about themselves. Even and especially if the algorithm is not an alter ego, does not work with a strategy, and does not understand its counterpart, in interaction with machines human users can learn something that no one knew before or could have imagined, which changes their way of observing. (Esposito, 2017:262)
This is made possible by the amount of contingency inscribed in AI as an adaptive and approximating problem-solving technology:
In AI technology, the detachment from hard-wired or clearly prescribed processes begins with the shift from master-slave architecture to agent-oriented programming and society-oriented architectures of distributed intelligent activities. (Rammert, 2003:7; own translation)
Generated and generating selectivity thus become intertwined: AI is endowed qua programming with the ability to generate contingent selectivities in a concrete practice, which transform this practice and cannot necessarily be generated repeatedly in other contexts. What an AI system ‘perceives’ thus depends on the socio-technical contexts into which it is introduced; how an AI system works in that context depends on the generating selectivity of the technology and the practical connections to this selectivity. The structural openness of AI thus becomes a decisive feature making it possible to establish contingent relations, which in turn lead to contingent dynamics of socio-technical practices. This is what is meant by ‘learning’ AI.
This form of ‘intelligent’ technical delegation, where the path of task solution is not completely predetermined, can be further differentiated. AI carries out tasks ‘in relative autonomy, equipped with the capacity for reactivity, oriented towards activities (‘pro-activeness’) and with reference to other agents (‘sociability’)’ (Rammert, 2003:8; own translation). Hence, AI has certain developmental predispositions providing a practical openness to non-technical contexts and possibilities for integrating itself into different practical contexts and transforming them – in the sense of a co-production of humans and machines. It is important to realise that these predispositions only become effective in practice. Therefore, the ‘relative autonomy’ of AI does not exist in general terms but is always relative to the practices in which it is produced. It is observable as relational autonomy of the technical and the human. This also applies to the other characteristics mentioned, which always refer to the socio-technical relations within which they become possible and effective and produce effects. AI can only be ‘reactive’, ‘proactive’ and ‘social’ in certain specific relations to humans and other technologies. It is not only dependent on these relations, but also creates, enables and changes them. In other words: AI ‘is’ and ‘does’ nothing beyond the socio-technical contexts into which it is introduced and which it changes. It is dependent on a dynamic process that continuously provides it with data and into which it can continuously feed data – with the consequence that socio-technical practice and its human and technical elements transform each other. As AI continuously obtains data from practical contexts, processes these data according to its own rules and protocols which evolve in practice, and feeds the result back into practice, a dynamic emerges that creates specific spaces of possibility. Due to fixed programming and functionalities on the technical side, however, this cannot be random.
Thus, by elevating AI technology to a technical principle, a contingency of delegation emerges that is linked to the concrete situation of use in a two-fold way. On the one hand, the contingency exists in the use itself, as is true for other technical artifacts. On the other hand, however, AI systems also react to changes in the situation, adapt and thus generate an independent dynamic. As Elena Esposito puts it pointedly:
The user receives a contingent response that reacts to his or her contingency and does not just reflect his or her indeterminacy. The algorithm makes selections and choices based on criteria that are not random, but that the user does not know and need not know. The algorithm reflects and elaborates the indeterminacy of all participants, and each user faces the contingency of all the others, which is infinitely surprising and informative. It is still virtual contingency, but reflected in a mirror in which everyone sees not him- or herself but the other observers communicating – generating a kind of ‘virtual double contingency’. (Esposito, 2017:260)
Neither technology nor the social situation remains constant; they change continuously in a mutually responsive way. This argument can also be found in the discussion about the concept of affordance (Gibson, 1979; Leonardi, 2011) and the critique of the ‘dichotomy of constraint versus possibility’ (Pentzold & Bischof, 2019) reproduced in this discussion.
These potentials of a contingency-creating AI cannot be translated into random spaces of possibility. This obviously has to do with the limited potentialities of algorithmic operations themselves. The sociologically relevant issue here is the specific way that ‘intelligent’ algorithmic elements interconnect with social processes in practice and, relationally, produce effects that can be described as both contingent and standardising. In the next section we will investigate the phenomenon of the preservation and reduction of complexity with regard to selectivity. While selectivity was understood in this section as an expression of algorithmic contingency, the limiting aspects of selectivity come to the fore in the next section. Particularly relevant here are the relations of the functionally determined structure of the ‘intelligent’ algorithm, programmed in the light of specific interests, and the non-random contexts of appropriation of materially formed AI, which are subject to a specific evolution or path dependency. The inherent logics of AI inscribed into the algorithmic architecture offer new possibilities for delegating control to the technical system as a means of indirect control. Thus, the operations of AI do not take place in a vacuum, but encounter well-established contexts of practice or use, pervaded by structures of power, which thwart randomness and excessive contingency from the start. Contingency and selectivity must therefore always be analysed simultaneously in their interplay.
Artificial intelligence and selectivity
In order to better understand how AI works in practice, it is crucial to consider it as a technology with distinctive characteristics and, at the same time, to examine how it becomes embedded in organisational and work-related practices. In the context of work, this means classifying AI as an interest-driven instrument under the primacy of capitalist exploitation. To put it more pointedly: AI is productive in and highly relevant for working worlds because, on the one hand, it can maintain contingency and complexity, and, on the other hand, it creates new ways to restructure working worlds. However, the possibilities of AI are subject to certain material and social restrictions that are both inherent in the technology itself and determined by the dynamics of the use of the technology. We now discuss four restrictions that can be formulated as a specific set of selectivities: first, social selectivities in the embedding of AI; second, selectivities in the mastering of social complexity by digital technologies; third, selectivities inherent in the logic of AI; and finally, latent selectivities through the anticipatory adaptation of the social environment to the conditions or requirements of AI.
Social selectivities in the embedding of AI
The social selectivity of AI begins with the design of the AI system and the decisions related to it (Brödner, 2020; Mittelstadt et al., 2015; Friedman & Nissenbaum, 1996). These include, among others, the following five questions: What goals should the system serve and what expectations are associated with it? Which forms of data should it (be able to) collect? Which interfaces (input and output) are planned? In which form should the output appear? How is the system to be integrated into its social context of use? AI is brought into operation primarily based on interests and expectations, in order to shape the framework of action that is inscribed in it by its objectives and functions. Furthermore, AI systems are based on models that link learning algorithms with formalised objectives and functions (Brödner, 2020). The data output is interpreted, that is, transformed into information, and thus made compatible with its social practice of use. In some cases, AI systems are based on very sophisticated and reliable learning algorithms and, at the same time, on very simplified impact models (Brödner, 2019). Informational modelling of work can easily fail (Rohde et al., 2017), especially because it is built on the formalisation of social practices (Schmiede, 2006; Brödner, 2019; Huchler, 2022). For example, AI systems for personnel selection that promise to be able to draw conclusions about a person’s suitability for certain jobs on the basis of elaborate written, voice or video analyses can at the same time rely on simple models borrowed from psychology (such as ‘red, green, blue people’). 6 When looking at the social selectivities and bias of AI, the model assumptions behind the respective AI systems should be given more attention. Of course, it is also essential to consider possible data bias, through which AI becomes selectively effective in social contexts (Mittelstadt et al., 2015; Friedman & Nissenbaum, 1996). The data basis of AI systems is specifically selected; it is always limited and often contains (especially in socio-technical applications) socially generated and thus multiply biased data (ibid.). Social data are always based on incomplete and partly distorted operationalisations, thus representing only a small section of social reality. In this way, AI reproduces the problems both of data collection and of the reality on which it is based. Furthermore, AI tends to perpetuate the past and exaggerate the tendencies inherent in the data. Beyond this, the various underlying learning processes must also be taken into account. The effectiveness of AI is also influenced by its training, either through the assessment of results (supervised learning) or through appropriate weighting (reinforcement learning). Biases arising from factors such as incompleteness, intentions and interpretations may also be inscribed in AI systems through training (Diakopoulos, 2015). If AI systems are trained using simulations, these problems can be reinforced automatically. Last but not least, AI is used in practice in a very concrete way that provides for some specific forms of use and excludes many others. Particularly in relation to work and AI, it becomes apparent that the effects of AI systems depend decisively on their concrete uses: whether they expand or close spaces for action, qualify or de-qualify, or have a burdening or relieving effect depends on how AI is used in work, with which (capitalist) goals and using which resources.
Thus, the social selectivities of AI systems are not only a result of their design but also of the way in which they are used, thereby extending far beyond the widely discussed issues of data bias and data-based control.
Selectivities in the mastering of social complexity through digital technologies
Not only are selectivities socially inscribed in AI, but AI is also subject to technology-immanent selectivities. As a technology of information processing, AI is first of all subject to the limits of the data-based processing of complex socio-technical challenges. This includes the selectivity of AI, which is related to the naturally selective capturing of complex reality via sensors and representation of it via data. This is further complicated by the fact that socio-technical systems are continuously changing. For example, new work is constantly being created around technical automation processes (Huchler 2022). Last but not least, there is the problem of explicating and objectifying knowledge regarding the limits of the technical translation of data into information and knowledge (Schmiede, 2006). In practice, the selectivity of AI can be seen at points where AI systematically reaches its limits, has deficits, or causes conflicts − or even when AI’s promises (even including utopias and dystopias) do not come true. A prominent example is the predicted substitution effects of work by AI, a substitution that has yet to be estimated quantitatively with any degree of accuracy (cf. the comprehensive presentation in Spencer et al., 2021). Qualitative and quantitative studies seem to demonstrate that human work is remarkably resilient, with an ongoing need for experiential knowledge for dealing with uncertainty and complexity (for example, Nisser & Malanowski, 2019; Krzywdzinski, 2019). Contextual knowledge, experience and work ability are all necessary to integrate AI into work practice (Pfeiffer, 2020). This points directly to the limitations of digital automation. Three types of automation limits can be distinguished (Huchler, 2018, 2022):
Firstly, there are Socio-material limits. Algorithmic control is confronted with the potentially infinite complexity (in terms of sets of factors, interrelations, contingencies, ambivalences, dynamics and so forth) of the physical world of hardware (from physical processes to limited resources) and of permanently changing socio-technical systems in social practice. For example, the use of AI in the sociotechnical system of work (Trist & Bamforth, 1951; Sydow, 1985) systematically encounters limits because of the complex constitution of work and working subjects. This also includes conflicts of interests and goals as well as complex dynamics of competing forms of coordination, guiding ideas and conflicts between technology-centred and work-centred, formal and informal, or objectification and subjectification (Böhle, 2009), as well as contradictory work requirements (Moldaschl, 2015). In a complex, changing and recursive system, AI solutions are therefore always selective in the sense that they are necessarily fragmentary, or never comprehensive, and also because they become outdated. As a result, work persists.
Secondly, there are Recursive limits. In order to understand the selectivity of AI, the dilemmas and side effects of automation processes must also be considered. Following the dilemmas of rationalisation (Berger & Offe, 1980) and the ironies of automation (Bainbridge, 1983), it can be shown that automation is always accompanied by prior, parallel and post-processing work, which has enabling, ensuring/maintaining and follow-up effects. Automation dynamics are thus characterised by the permanent re-creation of work. This is systematically linked with the ways that AI is limited and selective in dealing with (socio-technical) complexity.
Finally, there are Limits of the non-formalisability of social action. Implicit, experiential or embodied knowledge, competencies, subjectifying work action and so forth are considered to be only partially or selectively transformable into data (Polanyi, 1985; Rammert, 2003; Schmiede, 2006; Böhle, 2009). This points to the necessity of a complementary interaction between technology and human work. Nevertheless, the idea that reality can be represented in data and thus made computable remains at the core of AI development. Obviously, human work still creates the prerequisites for the transformation of data into information (for example, by developing AI systems and providing them with data and so forth), but more and more this can also be done by intelligent knowledge management systems. But ‘[t]urning information into knowledge and connecting knowledge with practice remains an intellectual task that cannot be separated from the subject’ (Schmiede, 2006:473). It can, however, be supported by means of production such as AI. Nevertheless, the fact that relevant parts of practice cannot be replicated or are lost in the process is not systematically taken into account. Moreover, the reduction of knowledge (bound to meanings and contexts) to (objectified explicit) information comes up against the limits of the non-formalisability of tacit knowledge (Polanyi, 1985) and experiential knowledge (Böhle, 2009). Knowledge and non-knowledge are dialectically related. According to Schmiede (2006:473), social progress is accompanied by an increase in both knowledge and non-knowledge. That is, AI can be used to process contingency, to provide new information and to generate new knowledge from it. At the same time, AI is associated with an increase in complexity, which opens up new areas of non-knowledge. Of course, AI can also be used to fully automate comprehensive processes. Still, AI is not only surrounded by work, as just argued, but must also be embedded in the existing socio-technical context in order to become productive. Failure to consider these limits has AI-specific social consequences (Huchler, 2019; Heinlein & Huchler, 2023). These consequences include intensified work and an increase in contradictions, pressure and friction between formal and informal but necessary work. The disparity between ‘AI activity’ and work (Huchler, 2022) can take various forms in practice, for example, automation (substitution), division of labour (complementarity) or empowerment (augmentation). The inherent selectivity of AI facilitates automation. But it also enables – as an aspect of the difference between humans and technology – a productive and mutually beneficial ‘division of labour’ between humans and AI. Thus, understanding the selectivity of AI opens new perspectives both for human empowerment and for technological development. The parallel between new possibilities and the associated limitations is reflected in the high expectations and deep disappointments in the history of AI (Seising, 2021).
Selectivities inherent in AI
As mentioned above, the combination of probabilistic and statistical inference (big data) and deep machine learning aims to increase the adaptivity and flexibility or context sensitivity of technical systems. Consequently, some of the handling of complexity and uncertainty is transferred into the AI systems themselves – beyond management and work process design, but also beyond ‘if-then’ programming towards a more goal-oriented ‘in order to’. The situational adaptivity associated with this can be described as ‘assimilating adaptivity’ (Huchler, 2019), which differs from a ‘complementary adaptivity’ (ibid.) in that it is based on translating contingency in the system environment into the inherent logic of the AI system (that is, data that can be gathered and processed by AI). This is associated with greater openness (regarding the scope of the system), but also with a new selectivity in relation to perception and processing, and social connectivity. With respect to the social selectivities of AI, it is important not only to look at the problem of reproducing data biases, but also to focus on the biases associated with the AI methods themselves. This is because selectivity also exists in the actual mathematical methods that are based on probabilities and classifications. In contrast to symbolic AI, the inner workings of sub-symbolic or connectionist AI are no longer based on model assumptions and expert knowledge but, at their core, on correlations and a ‘function approximation’ (Brödner, 2022). With this approach, AI takes on all the problems and limitations of social statistics – that is, statistics that attempt to capture complex socio-technical relationships (rather than just complicated abstract processes). These problems result from a diverse range of issues, including spurious correlations, statistical biases, self-amplification effects and problems associated with incomplete data. Various criteria and methods have been developed to deal with quality problems in (social) statistics. In addition, there are rules of application in quantitative social science, such as the rule to first form hypotheses and then look for significant correlations, and not vice versa. This norm excludes both a permanent simple variation of existing assumptions and the mass random checking of possible correlations, since both are done free of theory. Learning AI systems, in contrast, are based on a ‘theory-free’ mass search for correlations (as automatic ‘hypothesis generation and checking’), on the basis of which categories are formed (Brödner, 2022). In this way, they often arrive at very stable results. Nevertheless, what takes place here is a mass linkage of selective captures of reality, which may average out in the end or may make crucial differences. As to the structuring effects emanating from these new selectivities of sub-symbolic AI methods (which add to those of digital automation), there is still a need for research.
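The point about a ‘theory-free’ mass search for correlations can be illustrated with a small numerical sketch (Python; the data are pure noise and the threshold is arbitrary): screening all pairwise correlations in random data still yields seemingly strong associations, which is precisely what the hypothesis-first norm of quantitative social science is meant to guard against.

```python
# Illustrative sketch: a mass search for correlations in purely random data
# still turns up seemingly strong associations. Sizes and threshold are
# arbitrary choices for the example.
import numpy as np

rng = np.random.default_rng(42)
n_cases, n_variables = 50, 200
data = rng.normal(size=(n_cases, n_variables))   # pure noise, no real structure

corr = np.corrcoef(data, rowvar=False)           # all pairwise correlations
upper = corr[np.triu_indices(n_variables, k=1)]  # each pair counted once

spurious = np.sum(np.abs(upper) > 0.35)          # 'findings' despite random data
print(f"{spurious} of {upper.size} correlations exceed |r| = 0.35")
```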
Latent selectivities through sociotechnical adaptation
AI selectivities also become latently relevant in that they manifest themselves in AI use in subtle, low-threshold ways. AI systems are designed according to social and individual needs. But conversely, the system environment and also individual actions always adapt (more or less latently) to the needs of the technical systems so that their effects can unfold. These standardisation aspects need to be considered with regard to the ‘transformative effects’ (Mittelstadt et al., 2015) of AI. A typical effect of the quest for technological controllability of complexity and uncertainty is the standardisation of the social environment or practice (for example, the Taylorisation of work) (Böhle & Busch, 2012). Yet making the practical environment compatible, adaptable and controllable for formal systems can be at the expense of diversity, quality and individual freedom (Huchler, 2019). This also applies beyond the context of paid work: for example, autonomous road transport can more easily be implemented in a highly regulated environment. And the best customers are users who correspond to the profiles anticipated and planned during the development and implementation of the technology. The aim of some companies is therefore not only to assess and anticipate user behaviour (including user requirements), but also to imperceptibly influence and guide this behaviour (user education). Processes of formation and standardisation of social practice by AI are seldom based on explicit decisions, however. Instead, they tend to slip into the use process unnoticed. If, for example, emotion-sensitive AI based on speech and facial recognition is used for recruitment tests, if AI measures learning outcomes in lessons, or reads customers’ faces to detect individual wishes as they get into a driverless taxi, this changes the way individuals and societies perceive and deal with emotions. AI systems could then condition us to produce easily recognised or positively sanctioned speech and facial expressions, thus reducing diversity, alienating us from authentic emotionality and pushing us towards an instrumental approach. Advisors might prepare people for important recruitment processes by training them for the relevant AI systems; school pupils might feign attentiveness; exaggerated gestures, used to make the taxi lock its doors or to drive faster, might become customary at the AI interface. The diverse ways in which people express emotions such as joy, grief, happiness, anger and so forth contrast with the limited number of categories or compartments that learning systems work with, on the basis of probabilities. AI is based on a (high but) limited number of distinct classifications and does not allow for ‘grey areas’ (Brödner, 2020; 2022). At the same time, such AI systems reward (social) compatibility with functionality. By orienting to past data and to (equally distributed) frequencies, AI also tends to reinforce tendencies – whether of monopolisation or polarisation. It is therefore necessary to find ways of making socially interactive AI systems compatible with humans. But it is also important to understand how humans make themselves permanently compatible with the technical systems and what implications this has for people and society. This presents typical dangers of self-discipline or externally imposed discipline, and of a standardising adaptation to the selective requirements and limitations of technical systems – especially in the purpose-oriented context of work.
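The absence of ‘grey areas’ can be made concrete with a minimal sketch (Python; the emotion labels and scores are hypothetical): a classifier distributes probability over a fixed set of categories and then commits to a single one, however ambiguous the underlying expression may be.

```python
# Illustrative sketch of the 'no grey areas' point: a classifier spreads
# probability over a fixed category scheme and then collapses it to one label.
import numpy as np

labels = ["joy", "anger", "sadness", "neutral"]        # fixed category scheme
scores = np.array([1.2, 1.1, 0.2, 1.0])                # raw model outputs (made up)

probabilities = np.exp(scores) / np.exp(scores).sum()  # softmax
decision = labels[int(np.argmax(probabilities))]

print(dict(zip(labels, np.round(probabilities, 2))))   # the ambiguity is still visible here
print(decision)                                        # the output collapses it to one label
```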
Therefore, AI systems run the risk of narrowing the scope for thought and action in order to reduce complexity and uncertainty – be it in anticipatory compliance prior to use, or, imperceptibly, in the process of use. In this way, AI selectivity has latent social effects (Heinlein & Huchler, 2023).
Between complexity and standardisation: artificial intelligence as an instrument of management and control
Building on the considerations of contingency and selectivity discussed above, we now focus on questions of management and control of work. AI can be used as a means of digital Taylorism by integrating AI systems into work in order to limit the scope of action and knowledge of working subjects through more direct control (Butollo et al., 2017). Depending on the implementation of the AI system and organisational strategies (Nies, 2021), AI can control work either directly and rigidly or indirectly, through the enacted objectivity and factuality of its output. This applies to activities that take place before, in parallel with, and after the AI process. Digital Taylorism, on the one hand, uses algorithms and digital technologies to rigidly control work, master complexity and reduce contingency. On the other hand, AI is also being used as a means of indirect control. This is mainly because the ongoing dynamics of the capitalist economy have to be met. In this context, AI promises more flexibility, innovative capacity and, not least, an increase in productivity. In research on indirect control, questions of increased scopes of action (for example, flat hierarchies, project work, agreements on objectives, trust-based working hours, mobile work and so forth), extended self-organisation (Böhle & Stadelbacher, 2016) and the blurring of the boundaries between work and leisure (Kratzer, 2003) are addressed. Moreover, indirect control is associated with a concentration of performance control on the crucial framing factors of the operational context. These are, first and foremost, objectives and resources (time and money). But they are also indirectly and informally combined with external demands (‘marketisation’) and internalised social aspects (Moldaschl & Sauer, 2000), such as corporate culture, group pressure and demands on one’s own work performance (producer pride, professional identity, quality and customer orientation and so forth). Indirect control leads to a subjectification of work (Foucault, 1993; Moldaschl & Voß, 2003; Huchler et al., 2007), that is, a reshaping of the subject according to the needs of work and an increased access to potentials within the working subjects which could not be directly addressed or activated by direct control methods (cf. the discussion on labour vs. work capacity [Pfeiffer, 2004]). Indirect control is based on externalising the imponderabilities associated with dealing with complex challenges at work from the management sphere, so that they become the worker’s responsibility. Employees are entrusted with this task according to the motto ‘We don’t care how you do it, only the result has to be right’ (cf. Moldaschl & Sauer, 2000; Moldaschl & Voß, 2003; Huchler et al., 2007). Complexity and uncertainty are thus maintained as far as possible and externalised into the employees’ sphere. The focus of the management is on the parameters of the framework in which this coping activity occurs.
What follows from this affinity between the logic of indirect control and the immanent logic of the practical operation of connectionist AI? Our thesis is that these are not coincidental parallels between a technology-centric and a human-centric approach to the decentralised control of work. The introduction of new technology is always embedded in a political and cultural structure that is situated between a control approach and an empowerment approach (cf. Brödner, 2015; Grote, 2015). Looking at the task of organising and controlling socio-technical work systems (such as Industry 4.0, the Internet of Things and Services, Cyber-Physical-Human Systems and so forth), the use of AI can be understood as a catch-up, technology-based imitation of the post-Fordist work organisation concepts of the last 30 years. But lessons can also be learnt for AI as a whole from the comparison with the indirect control of work, in terms of both productive and problematic aspects. In general, connectionist AI systems involve a shift of command and control from direct personnel management to system design and higher-level control, as well as to the concrete integration into the sociotechnical work context. This places increased importance on issues such as goal setting, data input, interfaces, and the interpretation and integration of output into subsequent processes, while the concrete algorithmic processes remain ‘in the dark’. Consequently, new forms of work demand new forms of quality and performance control, assessment, reward systems and support, as well as trust and confidence. These are all topics currently being discussed in relation to AI (explainability, reliability and trust, among others), for example in the context of the EU AI Act (EU, 2021).
Furthermore, AI also externalises parts of the responsibility of management (and even of development) to the functionality and adequacy of the learning algorithms. The handling of entrepreneurial risks is now, to a certain extent, shifted to the technical system. Combined with the impenetrability of neural networks and the supposed objectivity of data-based processing, a new operational regime of technical facticity emerges in this way. However, a specific selectivity accompanies the data processing of AI systems, for several reasons: they can only process what is available as data; they have to deal with these data in a selective way; they are specifically designed for certain contexts, with corresponding goals inscribed in them; and they have no understanding of meaning, embodied knowledge or identity. This makes AI systems suitable only for solving specific sociotechnical challenges. If this selectivity and specific suitability are not considered when applying AI systems in work, the result can be quality problems and processing gaps, as well as contradictions and conflicts with work practice. These then have to be worked on again ‒ often through unrewarded and informal additional work. The consequence is an intensification of work (Brödner, 2020).
In addition, AI opens up new questions about the organisation of the interplay between work and technology. AI is a new, extended form of technically handling the contingency of work. As a specific form of rationalisation through automation, AI can both replace human labour (substituting automation) and complement it (complementary automation) (Huchler, 2022). By processing complexity in real time, AI promises more flexible and productive solutions than previous linear or ex-ante programmed systems. In particular, in view of the similarity between the forms of modern work and new AI solutions, a crucial question will be how both ways of organising and processing can be brought into a fruitful relationship. For a successful and sustainable use of AI, the interfaces to and the integration into the socio-technical system of work will be decisive. This requires new organisational concepts that are systematically conceived in terms of a hybrid interplay of AI and (human) work (ibid.).
AI also raises new questions about how knowledge, experience and skills are handled in the world of work. The introduction of AI is always accompanied by both an opening and a closing of the scope for knowledge and action; the decisive factors are the proportion between the two and the particular context. AI can be understood as a technique of objectification, formalisation and informatisation (the translation of data into information that can be connected to (contextual) knowledge; cf. Schmiede, 2006). Although AI, as a highly adaptive technology, can keep contingency open to a much greater extent than previous technologies, it is also associated with a high degree of selectivity. Because of its dependence on data, but especially because of the processing methods used, the application of AI inherently involves an explication of knowledge (or its reduction to data and, depending on the method, to information) and abstraction ‒ for example, a focus on probabilities and correlations, with the risk of recursive, self-confirming processes. With respect to the future interaction of AI and human work, the question thus arises as to how this (explicative, reductive) form of dealing with knowledge can be brought together with other forms of knowledge (such as tacit knowing [Polanyi, 1985] and experiential knowledge [Böhle, 2009]).
Closely related are questions about how the use of AI systems affects society: whether, for example, technology-induced formation processes increase. The specific processing form of AI is accompanied by a structurally channelled access to reality, which latently shapes social interaction and practice with AI and, in use, leads to an implicit normalisation and standardisation of this social practice. AI is thus always accompanied by an adaptation of the environment to the processing logic of the AI systems. What is crucial, however, are the quality and quantity as well as the social implications of these mutual adaptations (Huchler, 2019).
The goal of using AI in work is to open up potential new areas for automation that were previously considered inaccessible due to their cognitive and/or manual complexity. This is described as a step from automation to autonomisation, based on the idea of not only replacing work in a formal-technical way, but also comprehensively mimicking human work capacity (Pfeiffer, 2004) (for example with regard to flexibility and creativity) and inscribing it into the technical systems. This explains the euphoria of the recent AI boom, the human-AI comparisons and debates about the status of AI as an actor, and the utopian and dystopian scenarios derived from them ‒ as well as the associated effects of enactment and imitation. This new quality of AI as a method of making activities previously assigned exclusively to human work or work capacity accessible to machine processing opens up a new range of rationalisation objects ‒ from unskilled and skilled labour to highly qualified knowledge work, with the corresponding skills, abilities, competencies, expertise and experience. This is associated with changes in the relationships between work and technology, between work and the means of production, and between different activities ‒ their perceived value, recognition and position in the production process. Questions therefore arise about the appropriation of new technologies in work activities and work practices, about occupational and professional change, and about organisational, workplace, institutional and societal integration.
From the perspective of the control of work, the dynamics of AI can be understood as a social constellation with several starting points: the externalisation of the processing of entrepreneurial complexity to an impenetrable system, the anonymised and indirect control of work via AI outputs, the facticity and ascribed objectivity of data-based processes, and the latent shaping of the social environment according to the needs and logics of the system. Overall, an essential social functionality of AI lies in the fact that it brings together two possibilities in a very flexible and productive way: on the one hand, AI can (conditionally or selectively) maintain contingency and thus contribute to the digital subjectification of work; on the other hand, it can be used to reduce contingency and exert a rigid Tayloristic form of work control. This also points to the fact that AI is negotiated and designed in a sociotechnical context rather than a technologically determinist one (Lutz, 1987).
Studying artificial intelligence in the sociology of work
This paper has proposed that an analytical perspective on AI can be fruitful for addressing questions in the sociology of work. The practical logic of connectionist AI has been described as an interplay of social and technical processes of opening and closing possibilities of knowledge and action. To develop this argument, it was first shown in which sense and in what way AI can be understood as a contingency-generating technology (Heinlein, 2023) in socio-technical contexts. The architecture based on neural networks was elaborated as a decisive feature of AI that not only opens up technical possibilities but can also shape social processes and structures in a selective manner. However, this shaping does not occur solely on the part of the AI; it only becomes apparent in the interplay with specific restrictions that lie both in the social context of use and in the algorithmic architecture of AI itself. For research in the sociology of work, this means that contingency-theoretical approaches must be linked with approaches that emphasise the limits of (‘intelligent’) digitalisation. The yield of such a perspective was outlined in this article in relation to the question of the control and management of work with AI. This, however, only opens up starting points for further research, which can be pursued at three levels.
At the micro level of work action and work practice, research can examine which possibilities AI offers for concrete work processes and how these possibilities are at the same time guided and limited in certain ways. This double perspective places the technical and social impact of AI between the poles of action and structure and makes it possible to look at the dynamics of digital transformation through AI at both levels of work processes. AI is thus not the sole trigger of transformation processes; only the practical interplay of algorithmic logics and well-rehearsed social practices enables a change in work that is both contingent and limited (in other words, selective) in specific ways. An open question is how, in this process, actions and knowledge are redistributed in socio-technical relations and which new work practices are established that provide the framework for a lasting use of AI.
At the meso level of processes in and between organisations, research should focus not only on the restructuring of activities and organisational methods, but also on the ‘intelligent’ digitalisation of organisational control. In this sense, AI is not only a tool of organisational change that delegates certain operational tasks to digital technology. Rather, AI is additionally entrusted with tasks of controlling organisational processes, which – according to our thesis – can likewise be analysed with a view to the interplay of contingent and limiting processes, that is, with respect to selectivity. Accordingly, the question must be addressed as to how organisational structures and processes are rearranged through the embedding of AI, by opening and closing strategic spaces. These spaces become all the more complex when the use of AI extends to value chains and value creation systems. The embedding of AI at these levels cannot be equated with the establishment of a technically neutral information space as a social space of action (cf. Boes et al., 2020). Rather, as a digital technology that generates contingencies and is subject to specific restrictions, AI co-shapes trans- and inter-organisational spaces along its inscribed logic and mediates new forms of organisational knowledge and action.
At the macro level of professions and overall societal discourses, the (undeniably major) question remains open as to what profound, partly latent and insidious consequences for people, their work and social coexistence are associated with the development and use of AI (Heinlein & Huchler, 2023). AI stands for great technical progress. But how is the relationship between humans and technology changing with AI, and how is this change to be evaluated when one systematically looks at the interplay of contingent and selectively limiting realities and possibilities? What opportunities, but also what risks, does the use and development of AI open up for people and society? What are the medium- and long-term social implications of the new possibilities of use? What are the limits and (selective) patterns of AI-driven change, and what are the (contingent) opportunities for shaping it? And, last but not least, who and what determine the development paths that AI takes − with what consequences for whom?
© Michael Heinlein and Norbert Huchler, 2023.