Value change through information exchange in human–machine interaction

An essential component of human–machine interaction (HMI) is the information exchanged between humans and machines to achieve specific effects in the world or in the interacting machines and/or humans. However, such information exchange in HMI may also shape the beliefs, norms and values of involved humans. Thus, ultimately, it may shape not only individual values, but also societal ones. This article describes some lines of development in HMI, where significant value changes are already emerging. For this purpose, we introduce the general notion of eValuation, which serves as a starting point for elaborating three specific forms of value change, namely deValuation, reValuation and xValuation. We explain these along with examples of self-tracking practices and the use of social robots.


Introduction
Human-human interaction (HHI) and human-machine interaction (HMI) are inextricably linked to the collection and exchange of information to achieve certain effects in the world or in the interacting partners. However, while information exchange in HHI mainly involves emotional and semantic information (such as linguistic content, facial expressions, body posture, gaze, or tone of voice), at least certain forms of information exchange in HMI raise novel questions.
In the early stages of informatics and computer science, the primary concern was the successful direct and explicit exchange of predefined data or information between humans and computers. In recent years, however, the focus of HMI research has shifted toward developing artificial intelligence (AI) applications, with a particular view to neural networks and machine learning (ML) approaches. Medical decision support systems, automated image recognition applications, social media and internet search algorithms, and autonomous vehicles have benefited from this development. It has resulted in new forms of acquiring, processing, analysing and using data and information. In particular, the exchange of data and information is no longer necessarily direct and explicit.
Moreover, novel human-machine interfaces, such as haptic or gesture-driven interfaces, brain-computer interfaces (BCIs) or virtual reality applications, have changed the way information is exchanged in HMI. Consequently, issues of responsiveness, adaptivity and the embodiment of information exchange processes play a decisive role in recent technological developments. BCIs, for instance, can be used to generate information from brain data about movement intentions or affective states, to control specific devices or to give informational feedback to human users. Self-tracking technologies (STT) measure vital, mental and affective signals to provide users with novel information about their inner states or certain behaviours. Social robots collect and process information about their users' behaviour and preferences, and provide information to human users in ways that increasingly resemble human reactions (sound, facial expressions, etc.).
Indeed, such applications may benefit humans. On the other hand, however, information exchange in HMI may also shape the beliefs, norms and values of involved humans in a way that we can often assess only retrospectively. Changes in values also occur during HHI, of course, but the nature of the change through HMI is different. Against this background, we describe some lines of development in HMI in which significant as well as specific ways of change in norms and values can be expected. We also aim to discuss the importance of reflecting and evaluating these changes. For this purpose, we first provide a brief overview of some terms relevant in this context, such as data, information and value. Second, we present two exemplary areas of technology development, i.e., STT and social robotics, to analyse potential value changes and discuss possible implications.

Data, information and value
In the context of HMI, it is important first to distinguish data acquisition from interpretation of information. Users usually only provide data to machines they interact with, which are subsequently converted into information by signal processing programs or ML-based processes. Thus, the initial data are processed, structured, organized and interpreted to obtain information and, hence, meaning (for someone or something).
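As a minimal illustration of this distinction, consider a hypothetical step counter: raw accelerometer readings are mere data, and only processing turns them into information ('you took n steps'). The function name, threshold and sample values below are invented for illustration and do not reflect any real product.

```python
def count_steps(accel_magnitudes, threshold=1.2):
    """Convert raw accelerometer magnitudes (data) into a step count
    (information) by counting upward crossings of a threshold."""
    steps = 0
    above = False
    for m in accel_magnitudes:
        if m > threshold and not above:
            steps += 1       # a new peak: interpreted as one step
            above = True
        elif m <= threshold:
            above = False
    return steps

# The raw numbers carry no meaning by themselves ...
raw = [1.0, 1.3, 1.0, 1.4, 1.1, 1.5, 1.0]
# ... meaning ('3 steps') arises only through processing and interpretation.
print(count_steps(raw))  # prints 3
```

The point of the sketch is that the same list of numbers could equally be interpreted as something else entirely; meaning is imposed by the processing pipeline, not contained in the data.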
Second, regarding how value changes take place in and through HMI and how to assess such changes, the concept of value has to be clarified. The term 'value' is a complex concept in the history of philosophy and is not confined to ethics. In general, value can be understood as providing validity, meaning and orientation. Thus, the reference to values (e.g., truth, the good, the beautiful, etc.) represents a condition of giving meaning and provides (conscious or unconscious) points of orientation in the world. Humans shape themselves and the world according to these points insofar as values are of a normative (i.e., action-guiding) character. Thus, and this is of particular importance for our argument, values may give validity and meaning to information and therefore guide people in how to deal with information gained through HMI as well as how to act and decide accordingly.
In addition to this general understanding of values, another distinction is important: that between intrinsic and extrinsic values. This distinction focuses on the question of whether something is non-derivatively good, i.e., valuable for its own sake, or valuable for the sake of something else (Zimmerman and Bradley, 2019). Given this, it can be asked whether the exchange of information in HMI is valuable for its own sake or serves to achieve an external goal, such as providing the basis for a relationship. We will argue throughout the article that information and information processes in HMI are primarily valuable for the sake of something else and hence exemplify extrinsic values. Moreover, a crucial question in this context concerns what kinds of things can have intrinsic value (properties, states of affairs, facts) (Zimmerman and Bradley, 2019). We ascribe inherent value to phenomena such as self-awareness or relationship, which, as the article will show, are subject to change in current HMI.
To analyse value change in HMI, we introduce a new umbrella term, 'eValuation'. This term indicates novel ways of evaluating information and giving meaning to information in HMI, resulting in value changes. We believe it is useful to introduce a new term for the context of novel HMIs, since at least certain processes of gaining and exchanging information that can be observed here, as well as their implications for individual and societal values, differ decisively from informational processes in HHI or in interactions between humans and traditional artefacts. To differentiate these novel ways of giving meaning to information in HMI more precisely, we introduce additional sub-terms of eValuation. First, 'reValuation', in which a regression of individual or societal values to earlier, now-superseded values can be observed as a result of novel evaluation processes. Second, 'deValuation' denotes a depreciation or loss of values. Third, 'xValuation': here, machine-generated information is presented in such a way that it shows similarities in presentation to information in HHI, thus being evaluated differently from ordinary machine information and leading to the emergence of novel HMI-specific values (e.g., concerning the value 'trust', which was previously exclusive to HHI). These suggestions for differentiation do not claim to be exhaustive. Instead, they are intended to serve as an initial orientation for the HMIs considered in this article.

Self-tracking: the general shift from evaluation to eValuation
The general population increasingly uses STTs. The most common form is smartphone apps for counting steps, keeping a cycle diary and collecting other health information. While these self-tracking methods and techniques are nowadays still used mainly at an individual level and ostensibly voluntarily, their presence increasingly permeates various central areas of social life. This applies, for example, to the area of mHealth (or eHealth), where pay-as-you-live (PAYL) tariffs lure health insurance members with tariff bonuses for which they have to record their own physical data (Wiegard and Breitner, 2019). Self-tracking in these areas is characterized mainly by the fact that a technically mediated objectivity is generated by data and information (exchange) that would otherwise not be available. How many steps we take, how many calories we consume or what we interpret as a healthy lifestyle becomes supposedly transparent and objective only through the recording, processing, interpretation and visualization or representation of digital information (Lupton, 2013; Wajcman, 2015). Here, we have a typical case of eValuation, insofar as the collection of information is accompanied by the generation of numbers, graphs and symbols, which users must always interpret, thus leading to a novel meaning of the given information. The novel interpretation and evaluation processes in the context of self-tracking therefore result in changed values concerning a healthy life and a state of well-being. This can be seen, for example, in the transformation of the (former) evaluation of one's body perception: it is replaced by data, and health and well-being are now evaluated according to HMI-specific categories (de Boer, 2020).
Considered at first glance at the individual level, the effects of such processes in STT do not appear to be a cause for concern, since they produce supposedly scientifically neutral data. However, a closer look reveals that something changes in the perception of oneself and others because of this additional information: people who practise self-tracking daily, for instance, are much quicker to trust allegedly objective facts gained through STT than their own intuition, or even their own judgement or self-perception (e.g., Lupton, 2013). At the same time, self-tracking can also appear as a new path to more self-knowledge. It thus increases awareness and enables a critical distance from previously unquestioned habits or passions (Gerlek et al., 2018; Friedrich et al., 2021). In both cases, it can be said that a relationship with the collected data is established which is valued in itself, insofar as the data are interpreted and therefore constitute information from the beginning. New knowledge is gained with the aim of changing one's behaviour and often comes along with a changed self-understanding and a new interpretation of former values (e.g., habits in stressful situations) (see Gerlek et al., 2018).
With objectively measured data, a further value change can be inferred indirectly, as the example of a simple heart rate variability (HRV) tracker shows. It measures the interval between two heartbeats, calculates a regeneration level and recommends a training design based on this. For athletes, this means that, although they can collect information about themselves, the use of this information leads to a lack of trust in their own body, resulting in a deValuation of body trust. Lived body experience has less meaning and validity to the person after using STT than before. This is because the body knowledge that athletes acquire through training, everyday life and throughout their career is devalued by the creation of a data double (i.e., the measured data that come to stand in for their bodies) (Hogle, 2005; Lupton, 2021). The difference from value changes in HHI is the novel form of information acquisition and mediation. Novel informational feedback loops, for example, lead to a devaluation of the traditional sources of meaning in embodied experiences.
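The HRV-tracker logic just described can be caricatured in a few lines: successive heartbeat intervals are reduced to a single recovery score (here RMSSD, a standard HRV statistic) that then drives a training recommendation. The RMSSD formula is standard; the function names and the decision thresholds are invented for illustration and do not reflect any real product.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeat
    intervals (in ms): a common summary statistic for HRV."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def training_recommendation(rr_intervals_ms):
    """Reduce the athlete's felt state to a number, then to an order."""
    score = rmssd(rr_intervals_ms)
    if score < 20:       # low variability: poor recovery (assumed cut-off)
        return "rest day"
    elif score < 50:     # moderate recovery (assumed cut-off)
        return "light session"
    return "full training"
```

The sketch makes the deValuation point concrete: whatever the athlete feels, the recommendation is derived entirely from the data double, and it is this number, not the lived body, that is trusted.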
Another aspect of eValuation emerges with the predefined design of the app or device; for example, when one has to choose between different emoticons in STT for measuring happiness in order to evaluate one's own feelings (Havens, 2015; de Boer, 2021). At this level, it becomes clear that self-tracking is accompanied by a new or changed evaluation of technologically provided information and thus by eValuation from the outset. In this context, the novel and HMI-specific way information is presented, displayed and prepared for exchange must also be considered, as has been happening for some time in the field of interaction design (Verbeek, 2015). Thus, it can be observed that the design of STT is gaining importance based on the evaluation of user behaviour as well as on an assessment of the general purposes for which these apps are used. This ultimately means that the individual reasons for using a self-tracking application have long since been placed in a broader, socially relevant context. Thus, we understand the changes in the social framework as the basis of individual action and value setting, here as a change in social values, that is, as a change in the meaning or validity of information at the societal level (i.e., as a form of eValuation) (see also Barrett et al., 2016).
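How the predefined design constrains evaluation can be shown in a toy mood check-in: a free-form feeling must be forced into one of a few emoticons, each pre-mapped to a numeric 'happiness' value. The options and scores are our own invention, not any real app's scale.

```python
# A fixed, designer-chosen vocabulary of feelings and their scores.
MOOD_OPTIONS = {
    "sad": 1,
    "neutral": 2,
    "content": 3,
    "happy": 4,
}

def log_mood(choice):
    """The user can only report what the design allows: anything that
    falls outside the predefined options is simply not representable."""
    if choice not in MOOD_OPTIONS:
        raise ValueError("feeling not representable in this app")
    return MOOD_OPTIONS[choice]
```

The evaluation grid, including what counts as a feeling at all, is fixed before the user ever opens the app; this is the sense in which eValuation is built in 'from the outset'.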
Furthermore, not only does the view of one's own activities, habits and achievements change in the light of STT, but so does the view of one's performance in comparison with others. This is especially true for STT with which you can compare your progress with others (eHealth, schooling apps, etc.). Thus, on the one hand, competition is increased; on the other hand, attention is drawn to a novel kind of information that is produced. Without the collected and processed data, one's performance would not be available in this form, for example as curves, ratings or other statistical representations. Further, these statistics and the resulting feedback mechanisms (e.g., emoticons) now become the basis for competition with others. Thus, one might now behave in such a way that one becomes 'better' in terms of the information that one has gained about oneself and others, and in relation to the expectations that come along with the collected, presented and compared data. In this way, one aligns oneself with the expected results and focuses on activities and behaviours depending on the generated information. What were previously values in themselves are now directed towards external goals (e.g., success in competition) through the generated information. We see this as another aspect of eValuation, resulting from the transformation of what people know about social expectations in specific situations and about their performance compared with others. This form of eValuation testifies to a new form of integrating HMI-specific information into one's own practice.
Further, eValuation occurs as a result of the novel evaluation of information in HMI, as former values (such as health care, including sportive or healthy activities without a competitive claim) are transformed into the value of competitive self-improvement. In the same way, the value of one's diligence also changes because of visibility and the resulting increased competition between students or pupils. The importance of information is thus shifting more and more towards an orientation to external or competitive goals. Information with a competitive character is uncommon in HHI. However, the constant availability in HMI of a source of information on competitive conditions, adapting to each other's circumstances in real time, changes the importance and meaning of information compared with HHI. These expectations of ourselves, of others, and of others towards us change through the use of STTs. But of what, precisely, do these expectations consist? Or, to put it differently, do these expectations represent new values?
This question can be tentatively answered from a cultural-philosophical perspective: individual self-measurement, which can be understood as a form of self-observation, is embedded in the context of a cultural practice that is not only reproduced, but also provoked by capitalist incentives, such as lifestyle marketing, and thus advances from a cultural practice to a societal phenomenon. In this process, different values are established, which initially still emanate from the evaluation systems offered by STT. Happiness measurement, for instance, refers to values of a good life through predetermined areas of life in which the 'good' means something specific (such as more resilience, less stress, etc.). Living a good life is a value that most people would consider intrinsic. The information that STT should provide for a good life or happiness, on the other hand, is of extrinsic value. It becomes clear here that these pieces of data do not have an intrinsic meaning, but only acquire a specific sense through one's practice and interpretation of self-measurement (Duttweiler, 2018).
In this way, the predefined measurement system of STTs is linked to specific social roles. For example, it is considered good or right if a biological woman embodies socially traditional female attributes, just as certain values are tabooed. Thus, menstrual cycle apps, for instance, suggest certain images of femininity, which always go hand in hand with idealizations and can hide other grievances (Hendl et al., 2019). The intention of having one's own ovulation predicted leads to a new value appearing next to the intended goal (a conscious handling of one's own body or, in this case, of one's own cycle): that of a female ideal which stands for a problematic re-traditionalization. This can be seen as a specific form of eValuation, namely reValuation: the information gained through STT leads to a regression of individual or societal values to former conceptions of female self-understanding. A similar observation can be found in studies that highlight the reValuation of gender roles that can occur when using health apps in general (Lupton, 2013, 2015). In such cases, the information generated by STT shifts meaning backwards in a rather subtle way. The value change in the examples described does not represent a progressive development but affects intrinsic values, such as justice, by endangering them. In this way, using certain forms of health apps and STT can reinforce injustices, counteract emancipatory attempts or enable discrimination in the first place.

Social robots: effects of HMI on HHI
The development and application of robots cover a wide field. For example, robots are used to help with everyday tasks (e.g., as automatic vacuum cleaners), in the therapy of children with autism, and as companions in hospitals and care facilities (e.g., Paro, Lovot, Milo, Nao, AIBO, Pepper). A crucial factor in the development of social robots is demographic change, which is expected to result in a growing proportion of older people in the near future. Against this background, many have already pointed to the necessity of social or assistive robots in the healthcare system and have addressed the needs, expectations and preferences of older adults that should be met in the design, development and use of such robots (Goher et al., 2017). Furthermore, recent developments in AI and ML and in sensor and microtechnologies allow social robots to imitate human qualities, such as being intelligent, autonomous, empathic or emotional (see Robinson et al., 2014; Royakkers and van Est, 2015; Jones, 2017).
To illustrate how this has been realized so far, we describe some exemplary features of Pepper, a widely described robot. Pepper is a humanoid service robot that has already been used in care facilities, hospitals, hotels, banks and education. Pepper has a human-like appearance and can communicate through spoken words as well as through a tablet installed in its chest and LEDs behind its eyes. The LEDs allow Pepper to show whether it is listening (blue), processing information (green) or not listening because it is speaking or has not detected any human presence (white, pale blue). Additional sounds signal when Pepper starts and stops listening.
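The signalling and adaptation pattern described here might be caricatured as follows. The class, the state names and the per-user profile logic are our own illustration of the general design, not SoftBank's actual API.

```python
# Internal states are signalled to the human via LED colour,
# mirroring the description above.
LED_COLOURS = {
    "listening": "blue",
    "processing": "green",
    "speaking": "white",
    "idle": "pale blue",
}

class SocialRobot:
    """Toy model of a Pepper-like robot: signal internal state via
    LEDs and adapt replies to a growing per-user data profile."""

    def __init__(self):
        self.profiles = {}   # user id -> learned interaction history
        self.state = "idle"

    def led(self):
        return LED_COLOURS[self.state]

    def interact(self, user_id, utterance):
        self.state = "processing"
        profile = self.profiles.setdefault(user_id, {"interactions": 0})
        profile["interactions"] += 1   # each interaction refines the profile
        # Adapt tone to familiarity: the robot grows 'friendlier' per user.
        if profile["interactions"] > 1:
            reply = f"Nice to see you again! You said: {utterance}"
        else:
            reply = f"Hello, I am listening. You said: {utterance}"
        self.state = "speaking"
        return reply
```

Even this toy version shows the structural point: every exchange enlarges the data profile, and the profile in turn shapes how human-like and accommodating the next exchange feels.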
Further, Pepper can interpret or decode facial expressions, gestures and the tonality of humans. It can therefore identify the interacting person and adapt the interaction according to that person's data profile. It reacts to the detected information with corresponding gestures, communication or movements. In addition, Pepper is able to continuously improve this adaptation through real-time analysis and embedded (as well as cloud-based) cognitive computing services. As described on the homepage of SoftBank Robotics, the humanoid shape and the perception of the robot as friendly and non-judgemental allow the robot to motivate patients to exercise, to be more acceptable to humans and to give humans more confidence in answering questions. In a nutshell, Pepper can adapt its interactivity to its user's data profile and thence to their preferences and needs. It learns from every human-robot interaction through new data, which enable it to improve its adaptive, reactive and interactive skills. Before discussing the underlying values and eValuations that come with such robot design, we will briefly mention some psychological studies that address the design and use of social robots.
Several studies show positive impacts of social robots on human well-being by causing positive emotions, such as hope or love, and by affecting relationships, engagement or stress (Prescott and Robillard, 2020). Further, studies show that people increasingly trust social robots and are prone to build up feelings of attachment towards the robot (Kahn et al., 2006; Hancock et al., 2011; Prescott and Robillard, 2020). Of course, such feelings and effects must be preceded by a general acceptance of robots. Acceptance is, however, influenced by various factors, such as adults' uneasiness with technology, age, the feeling of stigmatization and ethical/societal issues associated with robot use (Louie et al., 2014; Wu et al., 2014). Another trait that increases both acceptance of social robots and feelings of connectedness or trust is resemblance to an animal or human. Thus, studies show that older people are, for instance, less affected by the 'uncanny valley effect' (the preference for robots that are human-like, but not too human-like) and prefer human-like over non-human-like robots (Chu et al., 2019; Tu et al., 2020). It seems evident that human tendencies to anthropomorphize technical systems (attributing humanness or human likeness to a non-human object) will increase as robots become more human-like.
We are not interested in evaluating whether the psychological effects of changed trust, new feelings of attachment to machines or tendencies of anthropomorphism are good or bad (Złotowski et al., 2015; Damiano and Dumouchel, 2018). Instead, we want to look at the correlation between changes in the evaluation of objects of trust, attachment and anthropomorphism, and value changes through eValuation. Thus, it can be noted that the information we gain about the robot due to its design induces a change in the evaluation of the relationship with the robot and thus of core human values, such as trust, friendship and intimacy. Therefore, a particular form of eValuation can be identified in social robotics, namely xValuation. Accordingly, it can be observed that information conveyed by the design of the machine is presented in such a way that it shows similarities to information and features in HHI. This leads to new possibilities for interpreting machine information that can reinforce the human tendency to relate to machines as one does to humans.
As has already been mentioned, social robots have a user-friendly design. Thus, they appear friendly, non-judgemental and cooperative by meeting the preferences and needs of the users, based on the data profiles they obtain. Here, machine-generated information about the users and the information about the robots conveyed via the design leads to a novel way of interacting with an artefact and changes in the evaluation of the object. We find here a striking form of xValuation, insofar as technological developments show a tendency towards an ever more perfected form of adaptation to people's needs, thus changing evaluation processes of interaction information and the relation to the artefact. This implies a change of values and the meaning of the information in the interaction, which does not occur in HHI and is clearly new here.
This, on the other hand, has consequences for human-human interaction. Interpersonal relationships and HHI are usually characterized by the fact that the persons involved act according to their own needs, ideas and wishes and those of the other person. Such forms of interpersonal interaction are full of contradictions, resistance on the other person's part, negotiation processes, balancing of needs, values, etc. Developmental psychology shows impressively how meaningful such experiences with a counterpart are for the development of children (e.g., Zelazo, 2013). The same applies to interactions among adults. The design of social robots, as we have demonstrated for Pepper, however, tends to create robots that adjust to the needs of their users with little or no resistance. The interactions between humans and robots thus established are likely to have a negative effect on the willingness to deal with unruly, non-adaptive human counterparts. A massive change in interpersonal values could accompany this. Taking one's own needs seriously in relationships is just as valuable as the ability to deal with conflicts in relationships and the appreciation of difference, which always includes non-adaptive behaviour on the other person's part. If the information about the human, in combination with the user-friendly adaptive design of the robot, results in a general decrease in these abilities and in the willingness to rely on these values in HHI, we can speak of a deeply concerning form of xValuation. Some criticism of social robots points in a similar direction, concerned that relationships with robots could damage our capacity for emotional investment in others (Sharkey and Sharkey, 2010; Prescott and Robillard, 2020).
Sherry Turkle addresses this point by stating that our normal desire to engage in human-human relationships could be undermined by machines' easy, convenient, non-challenging design (Turkle, 2011;Prescott and Robillard, 2020).
Thus, if the robot counterpart with which we enter into an emotional relationship is built to adapt constantly to our needs and habits, and behaves with as little resistance as possible, the human unwillingness to engage in complex human interactions could grow. Should one bother with relationships that mirror one's own weaknesses, in which there are conflicts, and in which the relationship partner does not always act with foresight or carry out exactly what one wants, when at the same time one can have one's wishes and needs satisfied by a robot that is always friendly and cooperative? If we are willing to answer this ironic question negatively, we enter into processes involving a significant shift in values. The valuable, good life should then be as free of conflict as possible, always corresponding to one's own interests, without being boring and without confronting divergent interests, needs, wishes, etc.

Conclusion
The examples of STT and social robots show that new ways of collecting data about people and generating new forms of information in HMI can lead to new evaluations or meanings of HMI information. For this, we have proposed the term eValuation. One such change is a situation in which lived and embodied experience is no longer trusted and technologically obtained or conveyed information about bodily or mental states is preferred (deValuation). As a result, certain aspects of the self-relationship can fall prey to objectification. Similarly, the new types of information in HMI can contribute to social expectations shaping the behaviour of individuals more strongly, as well as transforming intrinsic values, such as virtue, into instrumental ones, for example by comparing performance within a peer group. Overall, the new information technologies discussed in this article can lead to a stronger competitive ideal in society, and thus to a change in values. On the other hand, we have also seen that some HMI can lead to the reintroduction of superseded values and discriminatory determinations, mainly through HMI information (reValuation). This is often tied to implicit values in the design. Processes in the context of social robots, which we have called xValuation, are also influenced by information design in HMI. Here, it is crucial not only that the robot has a large amount of information about its human counterpart but also that this information is conveyed to the human in such a way that the impression is created that the robot is a human-like or at least animal-like being.
Moreover, the robot's programming presents it to its human counterpart as friendly, adaptive and minimally resistant. This can lead to a significant change in values. Trust towards artefacts changes, as does the willingness to relate to robots as relationship partners. However, what seems most crucial to us is the possibility that the user-friendly design of robots as social interaction partners could change the interpersonal willingness to interact, insofar as HHI may come to be perceived as troublesome, unpleasant and unaccommodating. In summary, we can say that, though value change through developments in the field of STT and social robotics has already taken place, it is still ongoing. Against this background, our proposal to capture these changes in value with the term eValuation and to differentiate them by further distinctions (deValuation, reValuation and xValuation) is an attempt to sharpen the view of these further-evolving phenomena.