Hy-breed: Growing a responsive organo-mechanical agent

A key consideration in the use of AI in recent years is its progression towards deeper, more opaque handling of data, often hidden beneath increasingly smooth and ‘user-friendly’ interfaces. With its growing use as a tool for automated decision making at the level of the consumer, and with sites of intense vulnerability such as detention centres and refugee camps becoming testbeds for these new technologies (Molnar 2020), the pervasive reach of Artificial Intelligence within our socio-political infrastructures is raising our consciousness of the ways machines digitally represent our lives, activities, contours and movements, and the ways these are translated computationally as intent. Information theorist Philip Agre (1994) considered the way these activities form a language in themselves through “representation schemes” that use “linguistic metaphors and formal languages for representing... activities” (Anderson & Pold 2018). Examples from motion tracking, migration monitoring and the collection of micro-movements of the head, torso and hands through VR headsets encourage the inference of limited amounts of movement as intent. They reframe the reverberations of activity as a form of utterance, and movement as digital conversation.


INTRODUCTION
In this work I explore the co-existence of living microbes (Euglena gracilis) and a rudimentary machine learning model, asking how designing a system for reading and understanding motion, in particular non-human motion, might expose the inherent contradictions in the digitization of our bodies and selves.

HY-BREED
In its practical development, the project involves the cultivation of a hybrid machine-biological agent entitled Euglena: a fine-tuned GPT-2 text engine controlled by a live dish of the freshwater alga Euglena gracilis. Beyond vital maintenance of the system, the two are "trained" to perform together as a hybrid conversational AI system. For Euglena, a container of the single-celled alga was purchased from Carolina Biological; instructions for keeping the community healthy included regular exposure to sun (it is photosynthetic) and ensuring that the cap of the container was opened for aeration. E. gracilis is a robust, very independent organism with a high tolerance for environmental stressors. Its variegated responses to its environment are also well documented: in addition to moving towards light, it exhibits extensive contortions of its body (metaboly) that, while methods of self-preservation, could also be framed as performative and expressive gestures. It was fascinating in this project because of its rapid, almost predictable responses to stimulus, at a time scale available to us as humans. Circumstances such as the evaporation of its liquid medium on the slides or cloudy weather would cause it to change its shape and roll into a dormant ball. Similarly, in crowded and/or darkened dishes, it would spiral around, seemingly looking for a way out. Using OpenCV through a variety of platforms, I was able to track the movement of blobs across the screen and gain a rudimentary understanding of the shapes of the E. gracilis (whether the contours tended towards a circle or were elongated), and from there to infer its emotional state.
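The shape inference described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the `DORMANT_THRESHOLD` value and the state labels are assumptions, and in practice each blob's area and perimeter would come from OpenCV's `cv2.contourArea` and `cv2.arcLength`.

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Circularity of a closed contour: 4*pi*area / perimeter^2.
    A perfect circle scores 1.0; an elongated cell scores much lower."""
    return 4 * math.pi * area / (perimeter ** 2)

# Illustrative threshold, not the project's tuned value.
DORMANT_THRESHOLD = 0.8

def infer_state(area: float, perimeter: float) -> str:
    """Map one blob's contour measurements (e.g. from cv2.contourArea /
    cv2.arcLength) to a crude 'emotional state'."""
    if circularity(area, perimeter) >= DORMANT_THRESHOLD:
        return "dormant"   # rolled into a ball (dry slide, cloudy weather)
    return "active"        # elongated or spiralling

def aggregate_state(blobs: list[tuple[float, float]]) -> str:
    """Majority vote over all (area, perimeter) blobs in the dish."""
    states = [infer_state(a, p) for a, p in blobs]
    return max(set(states), key=states.count)
```

For instance, a circular blob of radius 10 (area ≈ 314.16, perimeter ≈ 62.83) scores a circularity of 1.0 and reads as "dormant", while an elongated 10×1 blob (area 10, perimeter 22) scores roughly 0.26 and reads as "active".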

Figure 2: Euglena gracilis in spiral (left), 400x, and elongated (right) in response to a lowering amount of fluid medium in the dish
In breeding the mechanical-computational side of the project, my goal was to create a rudimentary conversational AI that would be able to respond to simple text prompts with comprehensible outputs. I opted to fine-tune an existing model instead of training one from scratch for this version. Starting with a 'gpt-2' model from Hugging Face's open source repository, I proceeded to fine-tune it using the "Transformers" module and the instructions available on the website (https://huggingface.co/docs/transformers/training). As an early test, and to generate content of a poetic or abstract nature, I chose the writings of Samuel Beckett as my training dataset. (Of particular significance are the philosophical and temporal resonances of "Waiting for Godot" and the way this project could be seen as an existentialist conversation.) The trained model eventually yielded 5 possible responses to a short prompt (this number was chosen to maximise its potential for a 'live' conversation and to lower the latency from input to output), and depending on the inferred and aggregated emotional state of the E. gracilis community, one of the outputs would be chosen.
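The selection step can be sketched as below, assuming the five candidate texts have already been sampled (e.g. via Transformers' `model.generate(..., num_return_sequences=5, do_sample=True)`). The state-to-index mapping here is a hypothetical illustration, not the project's actual scheme.

```python
# Hypothetical mapping from an aggregated E. gracilis state to one of the
# five pre-generated GPT-2 candidates. In the real pipeline the candidates
# would come from transformers:
#   model.generate(inputs, num_return_sequences=5, do_sample=True)
STATE_TO_INDEX = {
    "dormant": 0,     # rolled-up, low activity
    "drifting": 1,
    "spiralling": 2,
    "elongated": 3,
    "crowded": 4,
}

def choose_response(candidates: list[str], state: str) -> str:
    """Pick one of the five candidate outputs based on the dish's state.
    Unknown states fall back to the first candidate."""
    if len(candidates) != 5:
        raise ValueError("expected exactly 5 candidate responses")
    return candidates[STATE_TO_INDEX.get(state, 0)]
```

Keeping the generation step (five samples per prompt) separate from this cheap lookup is what holds the input-to-output latency down during a live exchange.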
This project was performed as a live conversation/panel with two other AI and art researchers at the POM 2021 conference in Berlin, and presented as a stand-alone visualization at IEEE VISAP 2021. Further work will be discussed in the paper presentation.

CONCLUSION
I approach this research-creation practice through two lenses: (i) the metaorganism: the body recontextualised as an object of knowledge, or better a 'resource of information' (Thacker, 2004); and (ii) the multispecies assemblages within the emerging paradigm of the computational planet (Gabrys, 2016): a cybernetic vision of feedback and control, increasingly validated through expansive networks of sensing technologies and the mobilization of animals and machines in hyperlocal environmental monitoring at unprecedented scale.
Given the meteoric rise in AI and machine learning capabilities, a project such as this also allows a process of slow scholarship and an expanded aesthetic of care to take priority. A more drawn out process of learning, listening and co-inhabiting with these agents is currently underway and will be described in the paper presentation.

Gabrys, Jennifer. 2016. Program Earth: Environmental Sensing Technology and the Making of a Computational Planet. University of Minnesota Press.