Still from forthcoming ML/AI Interfaces Tutorial Series, 2020. Image courtesy of Trust, Berlin

Luba Elliott on the Emergence of the Creative AI Field

As the popularity of AI art has skyrocketed over the past couple of years, more and more artists from diverse backgrounds have engaged with AI tools

Luba Elliott

The Creative AI Lab launches officially today with its first resource, a growing database that aggregates tools and resources for artists, engineers, curators and researchers interested in incorporating machine learning (ML) and other forms of artificial intelligence (AI) into their practice.

Over the last year, principal investigators on the lab, Eva Jäger (Serpentine) and Mercedes Bunz (Department of Digital Humanities, King’s College London), have been working with Luba Elliott, a curator and researcher specialising in artificial intelligence in the creative industries, to populate the database with an extensive list of current creative AI/ML tools. Elliott has been key to building and facilitating a community around this type of work through her well-known Creative AI London meetups and her newsletter, which she has been circulating since 2016. In this interview, the three discuss the landscape of AI tools and how Elliott built a Creative AI community in London.

Eva Jäger: Your Creative AI meet-ups in London, which started in 2016, and the newsletter for the community that emerged from these in-person talks and discussions are known for creating a space to share updates on and discuss Creative AI across the art and creative industry sectors. Can you tell us a bit about their origins and how they grew into a community of creatives from various disciplines?

Luba Elliott: I was running hackathons and events to encourage innovation in the arts industry and then at some point I stumbled across DeepDream. The multicoloured psychedelic aesthetic sparked my interest in Creative AI and encouraged me to follow the activity in the space as it began to intensify in 2015 and 2016. As I enjoy organising events, I decided to set up the Creative AI meetup with the aim of presenting the latest AI advances and AI art projects to audiences of research scientists and creatives. This meetup was a starting point for a lot of my work in the Creative AI field, including the newsletter, NeurIPS Creativity Workshop and all the other exhibitions, festivals and panel discussions that I’ve curated over the years.

EJ: The meetup hasn’t been active lately. Do you plan to continue it?

LE: Having done around twenty-five editions, I’ve put it on pause, but it may return later this year and I’m working on making the archive available.

Mercedes Bunz: You’ve collaborated with us on populating the Creative AI Lab’s tools and resources database; can you give a brief introduction to the sixty-five tools that you’ve highlighted?

LE: I’ve collected a variety of tools for artists and creative practitioners who may be interested in incorporating machine learning as part of their practice. The tools cover a broad spectrum of possibilities presented by the current advances of machine learning, enabling users to generate images from their own data, create interactive artworks, write texts or recognise objects. While most of these tools require some coding skills, there are a few that don’t. Absolute beginners are encouraged to turn to RunwayML and the ml4a courses run by Gene Kogan. Both of these offer an excellent introduction to the field and can get you started quickly.

MB: Can you describe how you’ve distinguished between media?

LE: I’ve predominantly split the tools according to their focus medium, such as image, text, sound and dance. Then, there are a few recognition tools as well as those that work across categories. To explain everything in more detail, static image tools enable users to generate and manipulate static images, ranging from giving an image the psychedelic DeepDream aesthetic or another artistic style, to creating photorealistic images from segmentation maps and generating high-quality images based on a particular dataset. There are also tools that allow for manipulation and generation of moving images, drawings, text and audio. Several tools like Magenta and Jukebox are used for music generation, such as creating new melodies and changing musical styles. I also list a set of tools that generate handwriting and digits in a variety of styles, including options that continue your stroke-based handwriting input. Dance and movement tools can be used to estimate human poses in images and videos, generate new choreography sequences and transfer someone’s dance moves to another person. Finally, there’s a series of recognition tools that cover objects, faces, emotions and gestures.
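
To make the recognition category concrete, here is a minimal sketch of classifying the objects in an image with a pretrained ResNet-50 from the torchvision library; the image file name and the choice of model are illustrative assumptions rather than a reference to any specific tool in the database.

```python
# A minimal sketch of an object-recognition workflow: classify the contents of
# an image with a pretrained ImageNet model. "artwork.jpg" is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # downloads ImageNet weights
model.eval()

image = Image.open("artwork.jpg").convert("RGB")   # placeholder input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top5 = torch.topk(probs, k=5)

print(top5.indices.tolist(), top5.values.tolist())  # class indices and confidences
```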

MB: Can you talk about some of the key fields where Creative AI is seeing a lot of new developments recently – for instance, in Natural Language Processing (NLP) with the release of OpenAI’s new GPT-3 API, or the paper on Generative Pretraining from Pixels?

LE: The past few years have seen a lot of activity in the image space, and models such as StyleGAN2 now allow us to generate high-quality photorealistic images. OpenAI has been driving advances on the natural language processing side, with its GPT-2 and GPT-3 models generating increasingly coherent text. Many efforts are now concentrated on making tools more accessible to wider non-technical communities, such as RunwayML for creatives, and on developing models that require less computational power or can be applied to smaller datasets.
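
As a rough illustration of this kind of text generation, the sketch below uses the openly released GPT-2 model via the Hugging Face transformers library (GPT-3 itself is only accessible through OpenAI’s hosted API); the prompt is an arbitrary example.

```python
# A minimal sketch of text generation with GPT-2 via Hugging Face transformers.
# The prompt below is an arbitrary example, not drawn from any artwork.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The gallery opened its first exhibition of machine-made paintings",
    max_length=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```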

EJ: In your recent podcast you briefly outline the various pockets where creatives are operating with art and technology – often very siloed – each with their own interests, language and history, but still utilising some of the same tools. Can you describe some of the creative/artistic subcultures from which Creative AI work is emerging and identify some of the spaces where artists and industry align and diverge? For instance, you position artistic experimentation with new technology as a ‘stress test’ (Frieze, December 2019).

LE: The diversity of artistic practices within the creative AI field is something that I’ve always found fascinating. On the one hand, you’ve got research scientists and software engineers who develop lots of exciting algorithms and are responsible for aesthetic advances and new tools such as DeepDream and GANs. On the other hand, you have artists from the fine, media and contemporary art fields who are eager to incorporate emerging technologies into their practices for critique, experimentation or artistic effect. Here, there’s quite a wide variation in the artists’ ability to work with the technology: some artists are able to tweak the models and tools they find online, whereas others hire external help or explore notions of AI more generally. This results in some very different work. Then, there are computational creativity researchers investigating philosophical questions and advertising creatives using AI-generated images, poems and songs to gain attention. Finally, there’s also the business side – various start-ups are attempting to commercialise creative AI to generate background music for games or to create new, easy-to-use tools for designers. All of these communities have somewhat different definitions, goals and values as to what makes a creative AI work successful.

Both industry researchers and artists are aiming to push their fields forward, but they do so in different ways. From my experience, industry researchers focus more on beating benchmarks and developing novel aesthetics, while artists experiment with these tools, incorporating them into their creative process or critiquing our society and emphasising the tool’s limitations.

MB: In your work with NeurIPS you’ve curated the online presentation space www.aiartonline.com, which I’ve heard you position as a snapshot of AI-related work that’s been made by the technical community – i.e. artists are chosen for their technical rather than conceptual prowess. In general, you’ve been a great champion of highly technical artists, while art institutions have been dismissive in some cases. Can you describe some of what might be lost if we don’t look to the technique to judge artworks made in this way?

LE: Indeed, it was the AI research community that drew me into this space and I feel strongly that their work on developing new aesthetics should be valued. If we don’t pay attention to the technique, we simply run the risk of giving prominence to artists who’ve jumped onto the AI bandwagon for PR rather than to those who’ve frequently spent months and years either developing these systems or learning how to tweak and combine different models to achieve their desired effect. Of course, there are plenty of exciting works that deal with AI conceptually, but technical mastery is equally important. Therefore, I’d advise any curator working in this field to keep a close eye on the research of the technical community to gain an awareness of the state of the art and at the same time learn more about the body of work from established AI artists such as Mario Klingemann.

EJ: You’ve been reviewing the AI art submissions for this year’s Lumen Prize and submissions to aiartonline.com. Can you speak about the trends you see emerging from the ways in which artists are engaging with AI/ML as a tool or collaborator? Has anything surprised you in terms of the way these tools are being employed, created, manipulated, or how the tool is positioned in the practice?

LE: As the popularity of AI art has skyrocketed over the past couple of years, more and more artists from diverse backgrounds have engaged with AI tools in their work, bringing different perspectives and applications. Over the past year or so, I’ve seen more artists venture away from image and video to explore media such as sculpture, as you can see in Ben Snell’s Dio or Scott Eaton’s work. On the technical side, I’ve found that GANs and text-generation systems have been somewhat overused, so it’s always exciting to spot artists working with genetic algorithms, like Harm van den Dorpel, or with object recognition, like Tom White. The artwork that’s struck me the most is Shinseungback Kimyonghun’s Non-Facial Portraits, where the two artists invited portrait painters to create portraits of people in collaboration with a facial recognition system. The rule was that the portraits mustn’t be detected as a face by the facial recognition system. The finished works vary in style and I wouldn’t even immediately recognise faces in some of them myself. I wish more artists would draw on non-GAN techniques and continue their explorations in the fine arts.
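
Loosely inspired by the rule behind Non-Facial Portraits, the sketch below checks whether an off-the-shelf face detector – OpenCV’s Haar cascade – finds a face in a portrait image; it is an illustration only, not the detection system the artists actually used, and the file name is a placeholder.

```python
# A hedged sketch: test whether a standard face detector finds a face in a
# portrait image. This stands in for the kind of constraint described above,
# not for the artists' actual setup.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("portrait.jpg")                 # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    print("No face detected: the portrait would pass the rule.")
else:
    print(f"{len(faces)} face(s) detected.")
```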

