Concept


Keywords: AI, Machine Learning (ML), Reinforcement Learning (RL), ecology, interspecies communication, octopus, first contact, the unknown, interspecies collaboration


What

An AI that learns from an octopus in its environment; an experiment in aesthetic communication between octopus and human, mediated by a reinforcement learning (RL) algorithm.

The aim is to evolve an AI system that has not learnt from a humanly modelled environment, but from a fellow distributed consciousness (an octopus) that operates within and through a highly uncertain and fluid environment (the sea). As the Encounter Diagram above proposes, this may give agency to the unknown and change the terms of engagement. We suggest that by developing an AI in this way we might extend the boundaries of knowability in the area of interspecies communication, through the use of multisensory emitters in the form of artworks and feedback from multisensory detectors. Rather than positioning ourselves as the observers, we will put the AI in the place of observer-learner, enabling the octopus's responses to determine the AI’s learning. We believe this kind of role reversal is necessary for rethinking communication with other life forms, whether organic, synthetic or combinations of both. Conceptually and philosophically, this is underpinned by the understanding that the animal we are attempting to communicate with (in this case the octopus, but it could equally be a fish, bird or human) is a part of and inseparable from their environment; that the medium for communication is the whole environment.

In order to extend this invitation, a ‘mesocosm’ is created at the edge of the ocean where the octopus lives. The mesocosm is the whole environment, a boundaried area that includes the octopus, within which our experiment takes place. It also includes an installation of underwater holographic and flat-screen emitters, haptic play objects and a sensor array (detectors). The emitters will intermittently stream audiovisual content that attempts to interpret octopus patterning behaviours and ways of seeing, drawing on extensive desk research into what is understood about octopus visual cognition in the field of Marine Biology, and on work with a traditionally trained Interspecies Communicator. The machine learning algorithm will learn about the octopus through the medium of its environment, as inseparable from its environment. It will learn about all of the environmental stimuli that can be measured by the sensor array, and all of the octopus’s behaviours and responses.
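To make this concrete, the sketch below shows one way such a learner might receive the whole environment as a single observation, with the octopus treated as part of the sensed scene rather than an isolated subject. The sensor names, data types and preprocessing are illustrative assumptions, not the project's actual pipeline.

```python
# A minimal sketch (not the project's actual pipeline) of flattening readings
# from a heterogeneous sensor array into one observation vector for a learner.
# All sensor names and shapes below are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorFrame:
    video_embedding: np.ndarray   # features extracted from the underwater camera feed
    hydrophone_rms: float         # ambient sound level
    water_temp_c: float           # temperature from an environmental probe
    chemo_signal: np.ndarray      # readings from the bespoke chemoreception devices
    haptic_contacts: np.ndarray   # contact/pressure readings from the play objects

def to_observation(frame: SensorFrame) -> np.ndarray:
    """Concatenate all sensed aspects of the mesocosm, octopus included,
    into a single vector: the environment as the medium of communication."""
    return np.concatenate([
        frame.video_embedding,
        [frame.hydrophone_rms, frame.water_temp_c],
        frame.chemo_signal,
        frame.haptic_contacts,
    ]).astype(np.float32)
```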

By communication, we mean a response to our aesthetic provocations that could be interpreted as such. We do not wish to make any assumptions about the octopus’s desire to communicate, about how they may communicate, or about whether we can produce communication through audiovisual and haptic object artworks from which the octopus can extract meaning. We invite the octopus to respond by offering a feedback mechanism. Octopuses are known for their curiosity and capacity for play; if they wish to, they can edit (play with) the video through their engagement with it.

Prototyping

The video is intermittently streamed over ten months. The streamed aesthetic content will metamorphose, mediated by the RL algorithm, according to the changes detected by the sensor array. This array consists of a wide range of sensors, from video and audio recording to bespoke chemoreception devices and haptic play objects. It is designed to detect responses mediated through the whole environment, including the octopus, and to learn from them. Thus the octopus may have some control over their environment, or be able to communicate through their interaction with it. We, changed by trying to imagine our way into octopus cognition, will facilitate a feedback loop, attempting to produce more of any kind of image the octopus does respond to.
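One way to read this feedback loop in RL terms is as a bandit over classes of aesthetic content, with detected response as the reward, so that content the octopus engages with is streamed more often while other content is still occasionally offered. The sketch below illustrates that reading only; the content classes, the engagement measure and the epsilon-greedy update are our assumptions, not the project's algorithm.

```python
# A minimal bandit-style sketch of the feedback loop, under assumed content
# classes and an assumed engagement measure (both hypothetical).
import numpy as np

rng = np.random.default_rng(0)
content_classes = ["pattern_flow", "prey_silhouette", "colour_pulse", "abstract_texture"]
values = np.zeros(len(content_classes))   # running estimate of response per class
counts = np.zeros(len(content_classes))
epsilon = 0.2                             # keep exploring: we cannot assume what is meaningful

def engagement_score() -> float:
    """Placeholder for a real response measure (approach, touching a play
    object, patterning change) computed from the detector array."""
    return rng.random()

for step in range(1000):
    # Mostly stream what has drawn responses so far; sometimes try something new.
    arm = rng.integers(len(content_classes)) if rng.random() < epsilon else int(np.argmax(values))
    reward = engagement_score()            # stream content_classes[arm], then observe
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update
```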

We do not know how an audiovisual video stream and our play objects might be perceived, understood, or interpreted by an octopus, if at all. What our system can do is show where these limitations exist; it can determine whether our questions are ones we can ask. ISCRI assumes that directly communicating with an octopus is best undertaken by a third (vastly limited) intelligence, the AI artefact. The role of the humans in this approach is limited to devising the situation and engaging with the process and results. Research suggests that if the AI can successfully learn to communicate with an octopus, it will be able to learn to communicate with other animals too, and to learn more quickly having done so once, as demonstrated in previous research using reinforcement learning (Silver et al. 2017; Carr, Chli and Vogiatzis 2019).
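As a rough illustration of why a second run could be faster, the sketch below warm-starts a new policy from weights learned in the first mesocosm and fine-tunes only its final layer. The framework (PyTorch), the network shape, the file name and the freezing strategy are our assumptions for illustration, not details taken from the cited work.

```python
# A minimal transfer sketch: reuse a policy trained with the octopus to
# initialise learning in a new setting, rather than starting from scratch.
import torch
import torch.nn as nn

def make_policy(obs_dim: int, n_actions: int) -> nn.Module:
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))

octopus_policy = make_policy(obs_dim=64, n_actions=4)
# torch.save(octopus_policy.state_dict(), "octopus_policy.pt")   # after the first mesocosm

new_policy = make_policy(obs_dim=64, n_actions=4)
# new_policy.load_state_dict(torch.load("octopus_policy.pt"))    # warm start, not random init
for layer in list(new_policy.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False           # keep earlier representations, fine-tune only the head
optimiser = torch.optim.Adam(
    [p for p in new_policy.parameters() if p.requires_grad], lr=1e-4
)  # training then proceeds in the new environment with far fewer free parameters
```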

AI Ethics

Bateson researcher Stephen Nachmanovitch said about ISCRI:

“AI has mostly been used to attempt to model a cutout caricature of Aristotelian-Newtonian “man”. But if it (still imperfectly) is used to model cephalopods, we might learn something quite new. The issue is how to get away from digitally defined methods and rules and get to the analog reality of living organisms.”

The idea of the digital learning from the analogue, directly from other animals’ behaviour, points to a central implication of this project for AI ethics: it addresses the well-known problems of bias in the datasets that AI learns from, the biases of the engineers programming it, and the anthropocentric perspectives inherent in much AI development.

The creation of the AI and the whole process – the double experiment, the participatory programming in an art environment – will help us to think about applications for this new form of AI and how to put it out into the wider world. Our intended art gallery / mixed reality / XR experiment (‘Mesocosm 2’) will shift the experiment to a new, human environment: an immersive installation. The AI will now be responding to changes in an environment that involves a high degree of human agency. It will learn from the collective analogue behaviour of humans in this new environment, as detected by a wide array of appropriate sensors, and in the process, over time, become less like the distributed cognition of one individual octopus and more like a hybrid octopus-human aggregate. In treating the human participants as an octopus in a fluid lifeworld, it may also, reciprocally, affect human responses.

We want to present imaginative alternatives, new ways of (collectively) thinking about AI ethics, such as offering new possibilities for adapting AI to environments rather than adapting environments to AI. We are also interested in creating new kinds of VR and video game experiences, including VR for teaching about octopuses and ecology. Speculative applications range from outer space and underwater robotics to wider interspecies communication.


References

Carr, Chli and Vogiatzis. Domain Adaptation for Reinforcement Learning on the Atari (2019). https://publications.aston.ac.uk/id/eprint/39572/1/DAAAMAS.pdf

Silver, D., Schrittwieser, J., Simonyan, K. et al. Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017). https://doi.org/10.1038/nature24270