The Interfaces of AI Art Practices
If machine learning is a new paradigm in computing, then why is there no new interface? The Creative AI Lab’s second public panel event invited theorists and practitioners Christian Ulrik Andersen, Agnes Cameron and Rebecca Fiebrink to find out how art and programming practices can answer this question.
Following the success of the Lab’s first online panel event, Christian Ulrik Andersen, Agnes Cameron and Rebecca Fiebrink joined hosts Eva Jäger (Associate Curator of Serpentine Galleries Arts Technologies) and Dr Mercedes Bunz (Senior Lecturer in Digital Society, KCL) for a deep dive into the purposes and possibilities of interfacing and interfaces. The event was accompanied by a reader of texts written by, and influential for, the panelists, which can be downloaded via the link.
The event forms part of a research strand for the Lab into contemporary approaches to ML (machine learning) and AI (artificial intelligence) interfaces, described by Jäger as a “critical site” in art making. The Lab have simultaneously issued an open call for survey contributions: if you are an artist who works with machine learning and would like to participate by sharing photo or video documentation of your virtual workspace, you can find more details here.
Machine learning is a relatively new approach to programming: instead of having their rules written by a human, machine learning systems derive rules of their own. This is generally done by feeding data into a deep neural network, which infers the rules itself. Bunz introduced this central problematic, posing the question: why, given this new approach and computational architecture, is there no new interface? From here she delineated three research questions:
- Can the ways machine learning systems evaluate data be reflected in an interface?
- Could users become more involved in granular decisions, in an intuitive way, through art making?
- Can there be a way to playfully manipulate machine learning at the level of the material it analyses and transforms – text, images, audio and video?
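The contrast Bunz draws – rules inferred from data rather than written by a human – can be sketched in a few lines of Python. This toy example is not from the event: a hypothetical task of labelling greyscale pixels as “light” or “dark”, where one rule is hand-written and the other is derived from labelled examples.

```python
# A hand-written rule versus a rule inferred from data (toy illustration).
# Hypothetical task: decide whether a greyscale pixel value is "light" or "dark".

def hand_written_rule(value):
    # A human fixes the threshold explicitly.
    return "light" if value > 127 else "dark"

def learn_rule(examples):
    # The system infers its own threshold from labelled data:
    # the midpoint between the mean "dark" value and the mean "light" value.
    lights = [v for v, label in examples if label == "light"]
    darks = [v for v, label in examples if label == "dark"]
    threshold = (sum(lights) / len(lights) + sum(darks) / len(darks)) / 2
    return lambda value: "light" if value > threshold else "dark"

examples = [(30, "dark"), (50, "dark"), (200, "light"), (220, "light")]
learned_rule = learn_rule(examples)
print(learned_rule(180))  # learned threshold is 125, so this prints "light"
```

Real systems replace the midpoint heuristic with a neural network and millions of examples, but the inversion is the same: the rule is an output of the data, not an input from the programmer.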
Jäger proposed an approach: to ‘see how various practitioners tinker with the operations and representations of the systems that their interfaces connect us to’. The question of a new interface ultimately leads to an impetus for “access”, as articulated by Jäger’s proposal to understand ‘how the internal toolings of artists allow us to interact with machine learning’s meaning-making’.
‘How as an artist do you address the interfaces that hide? How do you critique the implications of wider AI interfaces that are not there, so to speak?’ Christian Ulrik Andersen, Associate Professor in Digital Design at Aarhus University and co-author of The Metainterface (MIT Press, 2018, alongside Søren Bro Pold), provided an initial clarification: interfaces are not just the representations that appear to us on the screen; rather, ‘computers can be considered layered constructions of interfaces’. Acknowledging the divergent definitions of interfaces as a ‘threshold, zone, or path between spaces, or worlds’, or ‘allegorically, as a technical term that captures a human condition’, he leaned towards a more technical understanding: a site where ‘language meets computation’. ‘When interfaces are embedded into an intelligent environment, interactions tend to become less driven by standard graphical user interfaces, and more by interfaces that escape our attention, reading our behaviour in the background’. Citing Wendy Chun, Andersen proposed that although computation, including machine learning, operates via the abstraction of a bureaucracy – one that always mirrors the bureaucratic conditions of its development – art practice can bring these operations of power into view.
Agnes Cameron is a hardware and software developer with an interest in complex systems and simulation, and currently a resident at Somerset House Studios. Cameron used her presentation to get ‘close to the metal’ by presenting her work in her computer’s terminal. Through her projects The First 10,000 Years and Permaculture Network (both collaborations with Gary Zhexi Zhang), Cameron showed the possibilities of artists’ interfaces. While “feeding” her project Slime Moulds, a ‘simulated analogue for this artificial lifeform’, as it sat in the terminal, Cameron explained that the ‘project was thinking specifically about ideas of morphological intelligence – things that are smart because of the way that they are embedded in their environment’, a conceptual framework that informs her engineering work more broadly, and resonated with the Rolf Pfeifer text she selected for the panel’s reader. When she demonstrated the artificial life simulation for Conversions by Agnieszka Kurant, which Cameron produced alongside Owen Trueblood, the terminal became host to the emergent behaviour of simulated agents, the initial analogue reference giving way to the abstracting interface as the process was scaled up. Outlining the limitations of Google’s Dialogflow, Cameron’s demonstration of the alternative she constructed in the terminal for BOT OR NOT gave insight into the critically reflective process of making machine learning interfaces for deployment in a creative user-interface context, a problem which for Cameron was, in part, one of workflow payoff. She suggested that the benefit of working in the terminal is, for some users, a matter of having ‘agency over your computer’, but noted that there is also an element of machismo, and even of entertainment. Fundamentally, the terminal ‘allows you to talk to a computer on its own terms’, since terminals ‘expose you to the affordances, quite immediately, of a computer’.
Rebecca Fiebrink is a Reader at the Creative Computing Institute at University of the Arts London. Her research develops new technologies to enable new forms of human expression, creativity, and embodied interaction. In her talk, Fiebrink demonstrated the Wekinator: a free, open-source, interactive machine learning software that she developed in 2009. The Wekinator allows external input data to be trained into an operable model and deployed to make music, interactive games, visual art and more. Fiebrink’s demonstration of the interface provided insight into the experience of working with machine learning, but also underlined that such operations, when built into simple and versatile tools like the Wekinator, can be highly accessible and highly malleable. Here machine learning offers an alternative to coding: ‘using example data is the way that you’re communicating your design decisions’ and thus becomes the interface, allowing for a more intuitive transferal of ‘tacit knowledge and embodied practices’. For Fiebrink, we don’t need access to deep learning or GANs to generate interesting outcomes: simpler supervised learning algorithms are often sufficient for effective interfacing between computer, user and world, and for facilitating ‘surprise and discovery’.
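The workflow Fiebrink describes – communicating design decisions through example data rather than code – can be illustrated with a minimal sketch. This is not the Wekinator’s actual implementation: it is a hypothetical mapping from a 2-D controller position to a sound parameter, interpolated by k-nearest-neighbour regression over a handful of user-supplied demonstrations.

```python
# A sketch of an example-data-as-interface workflow: the user records
# (input, output) pairs, and a simple supervised model interpolates between
# them. Hypothetical mapping: 2-D controller position -> pitch in Hz.

def train(examples):
    # "Training" here just stores the demonstrations; prediction averages
    # the outputs of the k nearest inputs (k-nearest-neighbour regression).
    def predict(x, y, k=2):
        by_distance = sorted(
            examples,
            key=lambda e: (e[0] - x) ** 2 + (e[1] - y) ** 2,
        )
        nearest = by_distance[:k]
        return sum(out for _, _, out in nearest) / k
    return predict

# The artist records a few demonstrations instead of writing mapping code.
examples = [
    (0.0, 0.0, 220.0),   # bottom-left  -> low pitch
    (1.0, 0.0, 440.0),   # bottom-right -> middle pitch
    (1.0, 1.0, 880.0),   # top-right    -> high pitch
]
mapping = train(examples)
print(mapping(0.9, 0.1))  # near bottom-right: blends the two closest examples
```

Changing the mapping means recording different examples, not rewriting the function – which is the sense in which the example data itself becomes the interface.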
Following Bunz’s question about users’ technical literacy and the interface, two strands of thinking emerged in the discussion: the responsibility of those developing the interfaces to make certain processes visible, and the responsibility of the user to be literate in interpreting what occurs. For Fiebrink, working with machine learning always requires trade-offs and is a ‘constant negotiation’, since it doesn’t always lend itself to easy interpretability. She viewed confronting this as part of her work: ‘what can I open up for people, what can I make legible?’ and ‘conversely, what are the sorts of things people want to do that I can pull into the machine and make legible to the machine’.
Cameron viewed ‘how visible different processes should be made’ as a nonlinear historical debate. Agreeing with Andersen, she pointed to the work of Ted Nelson who, before graphical user interfaces, argued that we should be able to see these processes, a line of inquiry also pursued by Rob Pike of Bell Labs. Andersen pointed out that Nelson’s thinking has been widely taken up not just by artists but by system developers: ‘just as it’s important for us to understand computers, it’s important for computer developers to think of what they do as cultural production […] we are coming close to the computer in completely different ways today, so perhaps we need to develop different kinds of literacies to reflect this closeness’. For Jäger, the question became ‘where does the computer on its own terms start, and the cybercrud begin – and can you wedge yourself in there?’. For Andersen, artists like Ben Grosser develop ways of engaging with the interface that reveal some of its workings, without recourse to code. Cameron posited (via Emma R Norton) that getting closer to the metal – ‘even if you go all the way uptrain and you’re talking with the electrons’ – doesn’t necessarily mean more authentic engagement.
The Creative AI Lab is a collaboration between the R&D Platform at Serpentine and the King’s College London’s Department of Digital Humanities. It follows the premise that collectively, we are at the early stages of understanding the aesthetics of ‘AI’: locating a new poetics, investigating what it means to work with systems that are able to calculate meaning, and practicing art-making in the so-called ‘black box’ of machine learning. More Creative AI Lab events are in the pipeline – head to https://creative-ai.org/ to sign up to the newsletter and explore a wealth of commissioned and compiled resources.
Text by Alasdair Milne, recipient of the LAHP/AHRC-funded Collaborative Doctoral Award at King’s College London Department of Digital Humanities in collaboration with Serpentine’s R&D Platform. His work is concerned with collaboration – how to theorise practices of thinking and making which incorporate both the human and the nonhuman. His PhD will examine creative AI as a medium in artistic and curatorial practices.