Culture – Art made by AI is selling for thousands – is it any good?
Earlier this year, a cryptic press release landed in the inboxes of journalists. Black and white, stylised like the ‘game over’ screen from an arcade game, it intoned: “CREATIVITY IS NOT ONLY FOR HUMANS”. The makers were a French trio known as Obvious, and their claim was that their artificial intelligence (AI) had managed to create art. It was the first of a stream of publicity that heralded the auction of an uncanny portrait. Christie’s had been expecting less than $10,000 (£7,800). In the end, it fetched $430,000 (£335,000).
The portrait itself looks grainy and unfinished. Squint, and it could almost pass muster in London’s National Portrait Gallery. Eyes wider, it is vague and strange: a round white face emerging from a murky canvas, with three dark areas that suggest two eyes and a mouth. The ‘brushstrokes’ seem pixellated. In the bottom right corner, the signature is the algorithm. So, is this how a machine ‘sees’ us? Perhaps, stripped of the biases of human perception, this is what we look like.
The portrait was billed as the first piece of AI art to be sold at auction, catapulting Obvious into the media as standard bearers for a new kind of art. The marketing had tapped into anxiety about AI to whip up excitement, though they only had to make the gentlest suggestion for people to lose their minds. Panicked questions rang through the media. Was this art? Who is the artist and the owner here? Are machines now creative too?
All valid questions – but premature. The technology is nowhere near as advanced as Obvious implied, and the public is fundamentally confused about what AI is and what it is capable of. Obvious's marketing took lucrative advantage of that.
The intelligence of AI
AI art has been around for 50 years, but the Obvious portrait is part of a new wave. In the past, people using computers to generate art had to write code that specified the rules for the chosen aesthetics. By contrast, this new wave uses algorithms that can learn aesthetics by themselves. Then they can generate new images along those lines using, for example, a Generative Adversarial Network (Gan).
The AI did not produce the artwork alone, and it is not creative in any human sense
The signature in the bottom right of the Obvious portrait is the Gan algorithm. In essence, instead of having one network working alone, you pit two against each other. It mimics the interaction between a forger and an art detective. Both are trained on the same data set to learn its aesthetic; then one generates new images – trying to imitate what it has been shown – while the other judges whether they are real or fake. When the forger is found out, it adapts. And so it goes, until the detective can no longer tell what's genuine and what's bogus. The image sold at Christie's is one that got through.
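The forger-and-detective loop can be sketched in miniature. To be clear, this is a toy illustration, not a real Gan: actual Gans are neural networks trained by gradient descent on images, whereas here the "data set" is just numbers drawn around a hidden value, the detective's learnt "aesthetic" is a simple average, and the forger adapts by trial and error. All names and parameters are invented for the sketch.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "aesthetic" hidden in the training data


def real_sample():
    """Draw one 'genuine artwork' from the training distribution."""
    return random.gauss(REAL_MEAN, 1.0)


class Detective:
    """Judges how convincing a value looks, based on the real data it has seen."""

    def __init__(self):
        self.estimate = 0.0

    def train(self, reals):
        # Its whole 'aesthetic' is the average of the genuine samples
        self.estimate = sum(reals) / len(reals)

    def score(self, x):
        # Higher score = more convincing (closer to what real data looks like)
        return -abs(x - self.estimate)


class Forger:
    """Generates fakes, nudging its parameter toward whatever fools the detective."""

    def __init__(self):
        self.mean = 0.0

    def adapt(self, detective):
        # Try a small random change; keep it only if the detective likes it more
        candidate = self.mean + random.uniform(-0.5, 0.5)
        if detective.score(candidate) > detective.score(self.mean):
            self.mean = candidate


detective = Detective()
forger = Forger()
for _ in range(2000):
    detective.train([real_sample() for _ in range(32)])
    forger.adapt(detective)
```

After a few thousand rounds the forger's parameter sits near `REAL_MEAN`: it has absorbed the "aesthetic" of the data without ever seeing it directly, only the detective's verdicts – which is the dynamic, in caricature, that the Gan behind the Obvious portrait exploits at far greater scale.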
But it wasn’t the only image the AI produced. In fact, it is just one of almost infinitely many it could produce. It was the trio behind Obvious that chose this one because, for whatever reason, they deemed it apt. And they intervened at other steps in the process, too. They programmed the AI to start with, and then they chose the 15,000 portraits to train it on. Signing the painting with the Gan algorithm was a cunning bit of marketing – in no sense did the AI produce the painting on its own.
In fact, it isn’t even their AI. In the fallout of the Christie’s sale it emerged that the AI was actually the work of another artist, Robbie Barrat. He had programmed it, trained it on works from Wikiart and used it to generate very similar portraits, before he posted the code online with an open-source licence, so others could use it freely. So not only is the Obvious portrait not attributable to the AI – it’s not even really attributable to Obvious.
A machine is much easier to glitch, or bring off course, than a human brain – Mario Klingemann
Knowing this, the frenzy around the Christie’s sale deflates. The AI did not produce the artwork alone, and it is not creative in any human sense. It’s certainly not what’s called artificial general intelligence – the kind of machine we see in science-fiction films that is sentient, goal-driven and thinks for itself. But it is a tool that does interesting and unexpected things, and dozens of artists are using the same techniques as Obvious, but with imagination.
The art of AI
Artists using AI aren't worried about being replaced. They build these machines and work with them every day; they know how limited they are. What interests them is co-creation: the way AI lets them go beyond their native capacity. Mario Klingemann, one of the pioneers of using AI in art, sees it as a way to push the limits of human cognition.
“In the end, you are confined to what you have seen, heard or read, and it’s very hard to glitch that,” said Klingemann. “Some people take drugs to do that – to make even more absurd connections. But a machine enables you to forcefully provoke that. Because it’s much easier to glitch, or bring off course, than a human brain. In the process of doing that often some interesting things happen which are unexpected.”
Rather than simply copying code and hitting run, AI artists use the setup in their own ways. Klingemann chains generative models together, using the output of one to train another until the end images are a distant, warped refraction of the original input. Anna Ridler creates unique data sets to train her models, for example taking thousands of photos of tulips and training an AI to generate video of them blooming, controlled by fluctuations in the price of bitcoin. Sougwen Chung trains AI on her own drawings and has it transfer what it has learnt about her style to a robotic arm that works alongside her. The result is a kind of paintbrush duet, a spontaneous interplay between an artist and her machine counterpart.
When you read it, you are becoming the author, because there’s no human intent behind the words – Ross Goodwin
At a glance, the AI art community does feel dominated by visual artists, giving the impression that AI must be better at creating images than, say, text or sound. But the reality is that when AI tries to imitate what it’s trained on, it makes mistakes – and the visual arts are more tolerant of them. “The eyes are much more forgiving than the ear,” as Klingemann put it.
Still, there are artists exploring AI in text and sound, too. Ross Goodwin is one of them. He works at the intersection of text and computation, and his last project saw him hit the road in a black Cadillac wired up to a camera, a microphone and a computer spitting out what looked like an endless supermarket receipt. “The idea behind it was to write a novel with a car as a pen,” said Goodwin. The sights around them, the chatter in the car and the time and location fed into an AI that transformed them into prose. By changing the diet of poetry and literature that he trained the AI on, he controlled the voice. “When you read it, you are becoming the author, because there’s no human intent behind the words,” said Goodwin. “You get to project meaning onto them. The reader becomes the writer.”
That vacuum of intent gets to the heart of the conceptual shift with AI art. “It is a chance to reflect on what it means to be human and what it means to be intelligent in the first place,” said Kyle McDonald, an artist who uses AI in dance. “If we’re building these algorithms that imitate our own intelligence, we get a chance to figure out: what does it mean to be creative? Why is art good or bad, why do we relate to it? How important is authorship – if I hear a really good song, does it matter whether it was composed by an AI or by a human?”
Most artists scoff at the idea that AI is creative – but it depends on what you define as creative. These systems certainly create things, sometimes in new and effective ways, but they do so with no intent and with no sense of what's relevant. It's the human who interprets and sifts through their output. "The machine has no intent to create anything," said Klingemann. "You make a fire and it produces interesting shapes, but in the end the fire isn't creative – it's you hallucinating shapes and seeing patterns. [AI] is a glorified campfire."
Rather than asking whether a machine can be creative, perhaps we should ask: what would it take for us to believe in the creativity of a machine? Douglas Hofstadter, one of the grand old figures of the field, once wrote that “sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” The same could be said of creativity: the more the machines achieve, the higher the bar rises – and the more we understand human creativity. “In the end, competition always forces us to get better,” said Klingemann. “To see what makes us as humans still special.”