Collaborations with
the Artificial Self
Exhibition Information
Date: Jun 10 - 16, 2024
Artworks: 56 on view
Location: space25, Basel
As humans, we have always been deeply fascinated with ourselves – early prehistoric art features statues of human bodies in burial grounds and hand markings on cave walls. There has long been a desire to leave a trace or representation of ourselves through any available medium, including sculpture, engraving, and painting. It is therefore not surprising that one of the first applications of each new technology has been art – artificial intelligence (AI) included. Ever since John McCarthy coined the phrase ‘artificial intelligence’ in 1956, artworks exploring the artistic affordances of early computer-based systems have followed, grounding these new systems in human creativity and presenting a new frontier on which the human artist could realise themselves: the machine.
Looking at the only self-portrait by Harold Cohen’s groundbreaking system AARON, the first art-making machine, we see a man standing in front of an abstract artwork, the five-fingered hand in focus: even though the self-portrait was made by a rule-based AI system, a set of instructions running through the cold, hard electronics of the computer, it is the human hand and the bearded face that represent the artist. As we follow AI-based depictions of the human form over the last decade, we see imaginary faces crop up, sometimes with unusual forms and lopsided features, as in Mike Tyka’s Portraits of Imaginary People, at other times as hyper-realistic deepfakes of long-gone celebrities, as in Rachel Maclean’s DUCK – all made possible by the ever-advancing tsunami of technology.
The first wave of artistic practice in recent years began with Alex Mordvintsev’s development of DeepDream in 2015, an algorithm that amplifies the features a neural network detects in an image, producing the instantly recognisable aesthetic of swirls, pagodas and puppy-slugs that captured the public imagination with its unusual style and unexpected machine creativity. Then came the GAN. The generative adversarial network became the definitive tool of artistic creation with AI in the second half of the 2010s. Its structure pits two neural networks against each other – the generator, which creates images resembling a dataset, and the discriminator, which judges whether those images are real or fake – and it produced a steady stream of innovation over the years, beginning with models that output low-resolution abstractions and ending with photorealism. Artists flocked to the various open-source GAN models made available by the research community, assembling datasets from their own drawings, training their own models, learning techniques for curating the myriad possible outputs and shaping the public narrative of the technology.
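For readers curious about the generator–discriminator dynamic described above, the adversarial game can be sketched in a few lines of code. This is a deliberately minimal, illustrative toy – a one-dimensional "dataset" of numbers rather than images, and simple affine models in place of deep networks – not the architecture behind any of the works in the exhibition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": real samples drawn from a Gaussian centred at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: a single affine map from noise z to a sample x.
G_w, G_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression, outputting P(sample is real).
D_w, D_b = rng.normal(size=(1, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(z):
    return z @ G_w + G_b

def discriminate(x):
    return sigmoid(x @ D_w + D_b)

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(size=(n, 1))
    fake, real = generate(z), real_batch(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = discriminate(x)
        grad = p - label              # d(cross-entropy)/d(logit)
        D_w -= lr * (x.T @ grad) / n
        D_b -= lr * grad.mean(axis=0)

    # Generator step: push D(G(z)) toward 1, i.e. fool the discriminator.
    fake = generate(z)
    p = discriminate(fake)
    grad_x = (p - 1.0) @ D_w.T        # gradient through the discriminator
    G_w -= lr * (z.T @ grad_x) / n
    G_b -= lr * grad_x.mean(axis=0)

samples = generate(rng.normal(size=(1000, 1)))
print(f"mean of generated samples: {samples.mean():.2f}")  # drifts toward the real mean of 4
```

The same tug-of-war, scaled up to convolutional networks and image datasets, is what let GAN-era artists train models on their own drawings and photographs.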
These explorations of new territories are presented as animations, videos, and snapshots, with artists offering us front-row seats for the journey into the unknown. Gene Kogan’s A Book from the Sky, considered the first intentional artwork made with a GAN, draws inspiration from the eponymous Chinese artwork from the 1980s, which depicted a series of fictitious Chinese characters – in Kogan’s case, a test of the potential of AI to create new representations of meaning and new languages of communication. Jake Elwes’ Latent Space charts the image possibilities that lie within the latent space of a neural network as we travel through the spaces between recognisable points, the morphing colourful abstracts offering a taste of what is to come. Damien Henry takes us on a train journey in A Machine Learns a Landscape, in which the all-consuming void of the black square, itself perhaps an allusion to Malevich’s history-defining work that broke the ground between representational and abstract painting, gradually develops frame by frame into more recognisable train-window scenery.
Meanwhile, Helena Sarin’s Latentscapes – a term coined by the artist as a portmanteau of ‘latent space’ and ‘landscapes’ – are snapshots from a scenic walk through the latent space of an AI model. In an era where bigger is better, Sarin made a name for herself by honing the craft of training smaller models on her own datasets – here an obscure SNGAN model on her travel photography – to exercise greater creative control over the aesthetic of the output.
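The "walk through latent space" behind works like Elwes’ and Sarin’s amounts to picking two latent vectors and decoding the points along a path between them. A common technique – sketched below as an assumption, not a description of any particular artist’s code – is spherical interpolation, which keeps intermediate points in the high-density region of the Gaussian latent distribution better than a straight line does.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors at fraction t in [0, 1]."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))  # angle between them
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(1)
z_a, z_b = rng.normal(size=512), rng.normal(size=512)  # two "keyframe" latents
path = [slerp(z_a, z_b, t) for t in np.linspace(0, 1, 30)]
# Feeding each point on the path to a trained generator would yield one frame
# of a morphing animation; the 512-dimensional latent size is illustrative.
```

Curating which keyframes to walk between, and how slowly, is exactly the craft these artists describe.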
The pairing of AI technology with established image-making techniques and physical processes folds the affordances of these tools into broader artistic traditions. Sofia Crespo’s Temporally Uncaptured depicts the transitions in the life cycles of organisms based on their early historical depictions, as mediated through neural networks and hand-printed cyanotype techniques, whereas Robbie Barrat and Ronan Barrot consider the final stage of the human life cycle: the hollowed-out skull. In Infinite Skulls, we find traces of the correcting human hand at work on top of the printed GAN-generated skull, the hallmark of the collaboration between Barrat, known for his work with AI, and the painter Barrot, who over the years produced the dataset of skull paintings from the paint left on his palette. In addition to blending human skill with that of AI, the latest septyque images blend subject matter – the skulls and the landscapes – this time made using diffusion models, which work by degrading an image into noise and then learning to reverse that procedure, uncovering new images.
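The forward half of the diffusion procedure mentioned above – turning an image into noise – has a simple closed form, sketched here as an illustration. The schedule values, step count and 8×8 "image" are toy assumptions; a real model additionally trains a network to predict the added noise so that sampling can run the chain in reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear variance schedule over T steps (values are illustrative).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal retained at each step

def noise_image(x0, t):
    """Forward diffusion: jump straight to step t via the closed form
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.uniform(-1, 1, size=(8, 8))   # stand-in for a normalised image
x_mid = noise_image(x0, 300)           # partially noised: structure still visible
x_end = noise_image(x0, T - 1)         # nearly pure Gaussian noise
# Generation reverses this: starting from pure noise, a trained network
# removes a little predicted noise at each step until an image emerges.
```

By the final step almost none of the original signal remains, which is why sampling can begin from arbitrary noise and still arrive at a coherent image.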
Once deemed a discovery that was decades away, high-fidelity text-to-image generation took the world by storm in 2021, when the technology company OpenAI released DALL·E 1, a tool that enabled anyone to create realistic images from a text prompt and that found itself, in its testing phase, in the hands of Holly Herndon and Mat Dryhurst, known for their artist-centred explorations of technology. The series Infinite Images, the first and only artistic work made with DALL·E 1, creates never-ending images from a single prompt by extending the style and subject matter of an image through a series of vignettes that are then stitched together into a large-scale canvas. With their ability to render images to a stunningly precise level of detail for those able to master the skill of prompting, text-to-image models have attracted established artists like Laurie Simmons, whose In and Around the House II presents fresh renditions of her 1970s dolls in domestic scenes for a new era of digital perfection, and Niceaunties, whose infectiously entertaining depictions of glamorous and fun-loving Asian aunties break all stereotypes. The widespread popularity of DALL·E and Stable Diffusion turned the limelight onto the first text-to-image model, alignDRAW, developed in 2015 by the AI researcher Elman Mansimov, its small, lower-resolution images portraying the infancy of a technology that would rapidly mature and come to define the artistic practice of the early 2020s. The small squares with barely recognisable shapes are significant as a proof of concept for a new form of communication between machines and humans, one that involves our own language rather than code: finally the day has come when we can exchange ideas with machines as equals.
From the historical firsts that paved the way for a diversity of artistic expression to the employment of AI tools for the realisation of vivid imaginary narratives, the exhibition builds on the legacy of Harold Cohen’s AARON through the introduction of two other art-making machines: Mario Klingemann’s Memories of Passerby, a wooden cabinet in which an AI brain lies hidden, generating portrait after portrait anew, and Botto, a decentralised artist working with text-to-image models and guided by a community of thousands – the culmination of the latest innovations in the digital art space 30 years on. We can only dream of what technologies the next decades will bring, but one thing is certain: AI is here to stay.
- Luba Elliott
Artworks on view