Game engines can be used to simulate reality, generate data and train AI models to reason about the world based on their experience within the simulation. But whose reality is this?
GOD MODE (ep.1) is an interactive audiovisual performance by artist duo dmstfctn exploring the use of simulation in artificial intelligence training. The performance is set within a real-time simulation of a supermarket, modelled on those used to train AI to navigate 3D environments and recognise items on shelves in cashier-less supermarkets such as Amazon Fresh.
GOD MODE (ep.1) explores the increasing use of game engine simulations of ‘real world’ scenarios to train computer vision systems. By inviting nuanced interaction from the audience, it aims to reveal the characteristics and limitations of this synthetic reality.
In GOD MODE (ep.1), an AI training in the simulation delivers a fictional monologue in which it becomes frustrated with the difficulty of its training, until it finds a bug it can exploit to complete its training and cheat its way out. The simulation is rendered in real time and navigated by the artists, who also perform facial motion capture and voice modulation of themselves to animate the AI on screen. The audience participates in the training by interacting with the simulation on their phones. A soundtrack performed by artist HERO IMAGE responds to the live, communally orchestrated simulation.
GOD MODE (ep.1) is the first episode in dmstfctn’s ongoing GOD MODE series and is presented in conjunction with the game GOD MODE: EPOCHS as part of a collaboration between Serpentine’s Creative AI Lab and Coventry University, supported by the Alan Turing Institute.
Important Information
The performance will take place in the Education Space, accessed via the East side of the Serpentine South.
You will need your mobile phone to be able to participate in this performance – please bring it with you and remember to give it an extra charge before you leave the house!
Please also be advised that this event will be photographed; registering for the event will prompt you to consent to being photographed throughout the event.
Please contact [email protected] for a free concession ticket.
The event is part of the Performing AI project, led by Kevin Walker at Coventry University and funded by The Alan Turing Institute. Details at performingai.net
dmstfctn is an artist duo based across London and Berlin, working with audiovisual performance, installation and film. Their current research focuses on the relationship between real-time simulation and computer vision. Since 2017, dmstfctn has performed audiovisual shows at Berghain, Onassis Stegi, Design Museum and HQI; has had films screened at Corsica Studios, Cafe Oto, Trust Berlin and Porto Design Biennale; and has had installations exhibited at HKW, Fotomuseum Winterthur, Het Nieuwe Instituut, LUMA Arles and Aksioma, among others. Their audiovisual work has been released by Mille Plateaux (2019) and Krisis Publishing (2021).
The Creative AI Lab is a collaboration between Serpentine’s R&D Platform and the Department of Digital Humanities, King’s College London, producing knowledge for cultural institutions, artists, engineers and researchers on how to engage with AI/ML as a medium.
Serpentine Arts Technologies is a team that collaborates with artists to generate new understandings and knowledge specific to working with the advanced technologies artists are interested in researching and interrogating. The team has two operational areas of focus: commissioning, and research and development. These manifest in artist-led projects and the R&D Platform.
The Alan Turing Institute is the national institute for data science and artificial intelligence, with headquarters at the British Library.