Still from forthcoming ML/AI Interfaces Tutorial Series, 2020. Image courtesy of Trust, Berlin

A Different Research Agenda: Julia Kaganskiy Introduces the Creative AI Lab

artists and art institutions have the potential to pursue a different kind of research agenda and a different set of methodologies with which to comprehend these tools

Julia Kaganskiy

Over the past few years, artificial intelligence (AI) has captivated the imagination of the art world. In 2019 alone there were group exhibitions on the subject at the Barbican Centre in London, the Museum of Applied Arts (MAK) in Vienna, the House of Electronic Arts (HeK) in Basel, Kunstverein Hannover, and the de Young Museum in San Francisco, as well as dozens of solo presentations of AI artworks by artists such as Hito Steyerl, Trevor Paglen, Ian Cheng, Refik Anadol and Pierre Huyghe. Artists of all media, both emerging and established, have begun exploring AI’s potential as a creative instrument, as a non-human collaborator and as a subject of technocratic inquiry and critique.

We can attribute this recent flurry of interest, at least in part, to the growing role and importance of AI in society. In particular, over the past few years we have witnessed the rapid rise of machine learning (ML), a sub-discipline of AI in which algorithms trained on vast quantities of data autonomously ‘learn’ to solve problems by extrapolating patterns from that data. As more and more of our material reality comes to be represented and interpreted as data, this kind of algorithmic pattern recognition is increasingly being used to organise and govern our lives, functioning as an invisible substrate embedded in everything from advertising to financial trading to policing to education to healthcare.
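
By way of illustration – this is a toy sketch, not drawn from any system or artwork discussed here – the following few lines show what ‘learning’ through pattern extrapolation looks like in practice: a model is given labelled examples rather than explicit rules, and infers a rule it can then apply to unseen data. (Assumes Python with scikit-learn installed.)

```python
# A toy illustration of machine learning as pattern extrapolation:
# no rule is ever written down; the model infers one from labelled examples.
from sklearn.linear_model import LogisticRegression

# Training data: pairs of numbers, labelled 1 when the first exceeds the second.
X = [[2, 1], [5, 3], [1, 4], [0, 2], [7, 2], [3, 8]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)  # the 'rule' is inferred, not coded
print(model.predict([[6, 1], [1, 9]]))  # extrapolates to unseen pairs: [1 0]
```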

Yet what is often cited as particularly troubling about AI and ML is the way that human values, beliefs and biases are encoded and reproduced within these technologies through the use of data that reflects historical social bias or presents a partial or incomplete picture. Matteo Pasquinelli and Vladan Joler describe ML as ‘an instrument of knowledge magnification’ and diffraction, comparing it to optical instruments of perception such as the telescope or the microscope, which have the power to extend human insight but always come with inbuilt aberrations that distort how we ‘see’ through them. Algorithmically derived conjectures about the present and the future that are based on flawed and incomplete representations of what has come before thus inadvertently produce a skewed worldview, and they have the potential to radically circumscribe the space of future possibility. However, theorists such as Yuk Hui and Ramon Amaro point to an epistemological break that would open up possible futures in which the technics of machine learning, such as recursion (a non-linear approach to learning) and individuation, could reposition technology in its relationship to the human.

Still, the normative ideals generated through ML’s statistical methods of inference are typically presented as empirical, objective and scientific, even while the underlying rationale for how an algorithm arrived at a particular decision is often hidden away within a ‘black box’ of inscrutable computational reasoning and proprietary software. Artists who challenge data practices directly, such as Karen Palmer and Adam Harvey, surface the biased nature of these ‘ideals’. All of these factors combined make AI and machine learning crucial sites of critical inquiry and political contestation.

The question of artificial intelligence also opens up a fascinating discourse on the philosophy of mind – what, exactly, constitutes ‘intelligence’? Or, for that matter, ‘consciousness’? And how do we know when or if a machine has achieved either? The lack of precise definitions has made the history of AI, a quest to produce ‘intelligent machines’ that can reason and perform tasks typically requiring human intellect, one of endlessly shifting guideposts. Each time a previously inconceivable milestone is achieved (such as a computer beating a human at chess), it is no longer seen as a reliable measure of ‘intelligence’. While some proponents of Artificial General Intelligence believe that it will result in the creation of super-intelligent AI whose intellectual powers will supersede our own, many computer scientists remain sceptical, especially since much of the AI technology that exists today still relies heavily on invisible human labour. Some critics have noted that perhaps ‘intelligence’ itself is a misnomer here, preferring a more precise description like ‘nonconscious cognition’ (N. Katherine Hayles) or the more irreverent ‘artificial stupidity’ (Hito Steyerl) in order to more accurately discuss the way AI actually functions.

The question of whether an AI can be ‘creative’ has similarly been at the centre of discussions around the potential and limitations of AI ever since the field was established as a discipline of study in the 1950s. Perhaps because ‘creativity’, like intelligence and consciousness, has historically been viewed as a uniquely human attribute (a belief that has been challenged and dismantled in recent years by the humanities and sciences alike), it has been the source of much speculation, debate, anxiety and hand-wringing amongst creative professionals of all stripes. With AIs writing poetry and screenplays, producing artworks that are fetching nearly half a million dollars at auction, composing musical scores deemed more Bach-like than Bach, and soon to be curating biennials, it seems that creative jobs are no less safe from automation than those of factory workers or truck drivers. Still, as generative artist Casey Reas notes, even these examples require human input and intervention in the form of selecting training data, ‘coaxing’ the algorithm to produce the desired aesthetic output through a process of trial and error, and finally choosing and presenting the finished work. Furthermore, while ML processes can imitate artistic styles, often combining elements in new and surprising ways, they currently lack the ability to generate new meaning. To quote Umberto Eco: ‘no algorithm exists for the metaphor, nor can a metaphor be produced by means of a computer’s precise instructions, no matter what the volume of organized information to be fed in.’

Perhaps a more interesting question than whether AI can be creative is what it means to think with and through these tools. What does it mean to shift the centre of agency from the subjective to the inter-subjective? What kind of knowledge do these practices produce or obscure? What kind of action do they enable or prevent? While much of the research happening around AI and ML occurs in academia or, more often, within the sequestered halls of technology corporations, artists and art institutions have the potential to pursue a different kind of research agenda and a different set of methodologies with which to comprehend these tools. This is precisely why collaborative research initiatives like the Serpentine and King’s College London’s Creative AI Lab are so timely in their efforts to bring together knowledge and expertise from different cultural institutions and practitioners towards the exploration of how the arts can contribute to this emerging field.

Julia Kaganskiy is an independent curator and cultural producer working across art, design, and technology. She is currently curating the forthcoming ‘AI, Ethics & Health’ season at Science Gallery London. Previously, she was the founding director of NEW INC, the first museum-led cultural incubator, an initiative of the New Museum in New York. She was also the founding Editor-in-Chief of The Creators Project, an international cultural platform dedicated to art and technology co-created by VICE Media and Intel.

Julia Kaganskiy in Conversation with the Creative AI Lab’s principal investigators, Eva Jäger (Assistant Digital Curator, Serpentine Galleries) and Mercedes Bunz (Senior Lecturer at the Department of Digital Humanities, King’s College London)

Julia Kaganskiy: Artificial intelligence is notoriously difficult to define, since the benchmark for what constitutes ‘intelligence’ keeps shifting with every milestone achieved. Can you tell us a bit about how you’re framing AI within the context of the Creative AI lab? What kinds of technological practices or theoretical conceptions do you hope to encompass?

Mercedes Bunz: We are indeed interested in technological practice: the research of the Creative AI Lab focuses on exploring the actual skills that computer systems have acquired through what’s usually dubbed ‘deep learning’ or ‘machine learning’, in particular the capacity to analyse and process language and images. Both forms of communication have so far been fields in which computer systems more or less failed. Thanks to machine learning, they’re now, for the first time, able to calculate meaning, which is an achievement we haven’t yet fully come to terms with. So this is where we position the project: we’re interested in the grey area between the human understanding of meaning and the non-human calculation of meaning.
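
For readers who want a concrete handle on this ‘non-human calculation of meaning’, the sketch below shows the basic move made by language-processing systems: words become vectors, and ‘meaning’ becomes measurable proximity between them. The three vectors here are invented for the example; real systems learn them from large text corpora (word2vec and GloVe are well-known instances).

```python
# Toy word embeddings: 'meaning' calculated as geometric proximity.
# The vectors are hand-invented for illustration, not learned from data.
import numpy as np

embeddings = {
    "painting":  np.array([0.9, 0.1, 0.3]),
    "sculpture": np.array([0.8, 0.2, 0.4]),
    "invoice":   np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 = pointing the same way, 0.0 = unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["painting"], embeddings["sculpture"]))  # high: ~0.98
print(cosine(embeddings["painting"], embeddings["invoice"]))    # lower: ~0.27
```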

One could maybe say that the intelligence of culture has always been a technological practice in which skills and tools are used. The Creative AI Lab is interested in exploring the tool of machine learning and its qualities, maybe even its agency. But we definitely avoid assumptions that position AI as a super-intelligence. Quite the opposite: we aim to show AI as a process, including all the human labour and decision-making, but also the computational logic. We’re interested in looking into the ‘black box’ that AI is too often wrongly considered to be.

Eva Jäger: Cultural institutions play a pivotal role in offering a space for societal critique and reflection. We can see that machine learning has started to feature in the production of new artworks, and the Serpentine itself has produced some of these exhibitions, such as Pierre Huyghe’s UUmwelt, Ian Cheng’s B.O.B, Hito Steyerl’s Power Plants, and I Magma by Jenna Sutela, Memo Akten and Allison Parrish. Overall, however, the cultural sector still needs to acquire adequate media literacy to engage with this technology. The Creative AI Lab is for us a platform to explore these questions more methodically, beyond our temporary exhibitions.

JK: How did the relationship between the Serpentine and King’s College London come about? What kind of similarities or differences in your interests and approach made you think that a joint programme would be mutually beneficial?

MB: Excellent question! We’re fascinated by the same technology, but we come to it from different sides. My research approach at the Department of Digital Humanities at King’s College London comes in part from internet studies. Originally, I was pondering the question: why has machine learning brought a new programming paradigm but no new interface? Maybe I should explain why it is a new programming paradigm: instead of computer engineers writing code, in machine learning an engineer sets up a system that learns to write the right code itself by analysing thousands of examples. Interestingly, aspects of this learning are usually not reflected in the mainstream interfaces we see or read about. Most contemporary AI systems operate as black boxes, from automated decision-making in justice systems to self-driving cars or the analysis of medical images.
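
Bunz’s point about the paradigm shift can be compressed into a few lines of code. The example below is schematic rather than descriptive of any production system: the same behaviour is first written as an explicit rule, then derived by a learning system from labelled examples alone. (Assumes Python with scikit-learn installed.)

```python
# Classical paradigm vs machine learning paradigm, side by side.
from sklearn.tree import DecisionTreeClassifier

def over_threshold(x: float) -> int:
    # Classical programming: an engineer writes the rule by hand.
    return 1 if x > 10 else 0

# Machine learning: the engineer supplies examples and a learning procedure;
# the system derives an equivalent rule from the data itself.
X = [[x] for x in range(21)]
y = [over_threshold(x) for x in range(21)]
learned_rule = DecisionTreeClassifier().fit(X, y)

print(learned_rule.predict([[3], [15]]))  # [0 1], matching the hand-written rule
```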

Together with Sumitra Upham from the Design Museum London, we explored how those interfaces could be opened up in a series of workshops, thereby shifting the focus from automation to collaboration. We invited designers, computer scientists, artists and digital curators such as Eva, who was at that time designer-in-residence at the Design Museum. In one of our conversations, Eva took out her phone and showed me that these interfaces actually exist and are being used in art productions … and my eyes widened. I immediately knew that we could all learn a lot about opening the ‘black box’ of AI by researching cultural productions that use machine learning.

EJ: Totally. Advanced approaches to AI interface design, development and deployment already exist in art-making; this is work being done by artists, engineers, producers and others. However, when institutions present works that utilise AI, those interfaces become secondary to the work itself and are rarely shared with the general public. I showed Mercedes a video-editing interface that Jules LaPlace made with Hito Steyerl and Damien Henry, which allowed them to easily change the parameters of a neural network able to forecast video footage 0.04 seconds into the future. This was in the lead-up to her Serpentine show, Power Plants, and around the same time that Mercedes was conducting research on the missing interfaces in service products that deploy machine learning. It struck us that work like this, not only in its conceptual framework but also in its production, could inform research into how we make sense of AI’s role.
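
To give non-specialist readers a feel for what ‘forecasting footage into the future’ involves computationally, here is a drastically simplified stand-in. The tool described above used a neural network over real video; this toy instead fits a per-pixel linear transition on a synthetic sequence and extrapolates a single frame ahead. Everything in it is illustrative.

```python
# A toy 'next-frame forecaster': learn the frame-to-frame transition from a
# short synthetic sequence, then extrapolate one step into the future.
import numpy as np

rng = np.random.default_rng(0)
frames = [rng.random((4, 4))]                  # a tiny 4x4 'video'
for _ in range(9):                             # each frame drifts from the last
    frames.append(frames[-1] * 0.95 + 0.02)

# Fit the per-pixel transition y = a*x + b by least squares over past pairs.
xs = np.concatenate([f.ravel() for f in frames[:-1]])
ys = np.concatenate([f.ravel() for f in frames[1:]])
a, b = np.polyfit(xs, ys, 1)

forecast = a * frames[-1] + b                  # guess at the next, unseen frame
truth = frames[-1] * 0.95 + 0.02
print(np.abs(forecast - truth).max())          # near-zero error on this toy
```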

There are also examples in which artists embed the interface in the work itself – Refik Anadol, for instance, shows a unique interface for each work – but this is rare, and it is not always relevant for artists to embed their tools in the work itself.

JK: That’s really interesting. Can you say a bit more about why interfaces might be an important area of study, both from the perspective of human-computer interaction and from the perspective of cultural production? What do they reveal about these technologies and how we relate to them?

MB: As a researcher who’s always embraced digital interface studies, I’m tremendously interested in getting a better understanding of the role of the interface for machine learning systems and the algorithms they produce. Cultural producers rarely get the chance to talk about the technical side of the work, which is often looked down upon as mere support for the artwork. But the two are connected: it’s the interface that reflects the technical aspects. The interface decides how we position machine learning in a human environment. Does it become a black box automating human tasks, or is it set up as a collaboration with the human environment?

EJ: I feel it’s important to build institutional capacity in order to understand the technical aspects of these works better and to take them more seriously from a curatorial and institutional perspective. Hito Steyerl’s practice is a good example of an artist working against institutional norms to surface these ‘back-end’ processes, by including full team credits and demonstrating the technology she’s built with collaborators (see Steyerl and Jules LaPlace at Castello di Rivoli), but as a general rule, the research and development that go into the creation of artworks with emerging technology remain in the background. More than anything, we’re interested in the people we can connect with on this topic and who make up the complex teams needed to construct such projects. In October 2019, when we held our first Lab roundtable at Trust in Berlin, we were able to convene researchers, engineers, artists, designers and people from the world of gaming and simulation around the interfaces they use, build and research. Artists could talk freely through the technical aspects of their work, and we could all link theoretical practice to infrastructure design. We know that many of the people who attended are still in touch or even working together now.

JK: What do you see as the key role(s) of artists and arts institutions with regards to AI? What can we contribute to the discourse and practice? What do you see as the potential impact of our contribution to the broader development and implementation of AI in society at large?

EJ: While advanced technologies have had a transformational impact on the corporate world, the cultural sector, including contemporary art institutions, has been slower to acquire an adequate media literacy to address technologies such as AI at both theoretical and practical levels. With AI/ML becoming a medium for contemporary artists, the established knowledge and skill base of most curators and cultural institutions working in contemporary culture has been deeply challenged. While contemporary art institutions are expected to translate the societal impact of this technology into a critical but also creative response, they often struggle to understand the functioning of the technology as well as its creative capacity. Often the solution is to talk about the technology in opaque terms that keep it at a distance. This has in turn reinforced ‘black-box’ narratives around AI/ML when the work is communicated to art audiences or in the media.

MB: This is counterproductive, particularly since public cultural institutions could offer a much-needed societal space to critically engage with the infrastructure of advanced/deep machine learning technologies.

EJ: We see this Lab as a place to grow that internal capacity, not just at the Serpentine but also more broadly among our peers. At a recent roundtable at King’s, we brought together a group of London-based curators who work on these kinds of projects for a full-day workshop in which we dove into the technical intricacies of neural networks with Leo Impett, who researches iconographic computer vision.

JK: Eva, you mentioned earlier that the Serpentine has played the unique and interesting role of commissioner and co-producer of several AI projects in collaboration with artists like Ian Cheng and Hito Steyerl. What kind of insights has this produced with regards to the changing nature of artistic production, institutional infrastructures, and the relationship between artist and institution?

EJ: The disembodied nature of this technology triggers all kinds of questions around authorship and agency in the creative process, something that Ian Cheng touches on in Emissaries Guide to Worlding. Equally, it confronts conservatism in contemporary art – for instance, the binary between engineering/computation and ‘artistry’. If we sideline team members involved in what we call ‘technical production’, we risk disengaging from the technical as a source of meaning-making that’s equal to the ‘front-end’ of an artwork.

In terms of what the Serpentine can bring to the table: with the Creative AI lab we have an opportunity to consolidate and share some of the thinking and behind-the-scenes work that’s gone into producing commissions that use machine learning and/or artificial intelligence. Working with artists and programmers, our team is typically deeply embedded in the production and interpretation of this work.

From our work commissioning and producing projects at the Serpentine, we’re well aware that practitioners, including the headlining artist, have a diverse set of skills that usually includes technical knowledge that was either self-taught or learned in other industries. Take Ian Cheng, for example, who comes from an animation background, or Allison Parrish, a computer programmer, poet, educator and game designer who collaborated on Jenna Sutela’s I Magma app. Troy Duguid, who worked as the Unreal Engine developer on Jakob Kudsk Steensen’s The Deep Listener, is also part of AAA Berlin, an artist collective that makes tools for artists and game-makers who work within game engines. We are in a moment of transition in which both artists and those who work on artists’ projects want to shift toward a model of accreditation that acknowledges collective effort and extends creative attribution to technical roles. Hopefully, this platform can support a growing interest in the ‘back-end’ of artworks that employ advanced technologies – AI in this case – as well as the changing nature of artists’ skills.

JK: What are your hopes and expectations for the Creative AI lab? What kinds of questions do you want the programme to address in its first year? What kinds of knowledge do you hope to produce? What kinds of networks and connections do you hope to facilitate?

MB: My expectation is that I get the opportunity to talk to smart, interesting people and, so far, I haven’t been disappointed. On an academic level, the most important aspect seems to be the ability to study and shape the shift that’s happening now that we’re able to calculate the meaning of images and language – and to explore and map out the logic of that calculation, for which cultural institutions are an excellent context. But of course, we also have a series of official research questions, such as: what role do artists give AI (e.g. antagonist, collaborator, tool) and how do they position its influence? How are datasets and interfaces considered? How do the production teams for AI/ML projects differ from those of traditional digital media, and does this lead to a reconfigured relationship between artistic and technical roles? But also: what is the technical and aesthetic history that creative AI artworks draw upon, and how could we understand their specific aesthetic-technical manifestations from there?

EJ: An exhibition or a project launch isn’t always the appropriate platform to share technical details, but with this lab, we can excavate these projects and reflect on the infrastructure or ‘back-end’ together with those involved in creating it.

Our goal here isn’t novelty: so-called ‘creative AI’ has been a fast-developing field for more than ten years, and artists – including some of the artists we’ve been in conversation with at the Serpentine, like Lynn Hershman Leeson and Rebecca Allen – have been using AI as a tool and conceptual starting point for much longer than that. I can only emphasise the rich history on the subject and underscore our humble endeavour, which is very much about building and supporting a community network at the slim intersection of computer science, digital humanities/media studies, museum studies and art-making (in the broadest sense).

Our just-launched website will be an aggregator for current texts and research, as well as an AI/ML tools database. To launch it, we commissioned Luba Elliott, a curator and researcher specialising in artificial intelligence in the creative industries, to populate the database with an extensive list of current creative AI/ML tools. She’s been key to building and facilitating the community around this type of work through her well-known Creative AI London meetups and her newsletter, which she has been circulating since 2016. We will continue to commission contributors to the database to build on, complicate and diversify its contents.

MB: We’re also reaching beyond Europe and the UK. This year, together with Rhizome and NYU’s Digital Theory Lab, we received funding from the AHRC to put together live events and a series of webinars.
