
Slow AI Manifesto

It’s 2026.

Everyone hates AI.

Artists, in particular, really fucking hate AI.

Which leaves me, an artist who uses AI in their work, in a tricky position.

This isn’t intended to be a rant or a whine – more an opportunity to take stock.

Algoculture

I’ve been banging on about Algoculture for decades. Simply defined, Algoculture is the area of culture where humans and algorithms interact. This is a subject that has fascinated me since I first plugged my new ZX81 into a 14” portable black-and-white TV and turned it on.

This laughably crude device was a doorway to a future that led me first to the school computer room, with its Commodore PET (and the inevitable bullying from the rest of the class that came from being a geek in the 1980s).

(Photo by SSPL/Getty Images)

It would be decades before computers invaded every aspect of our lives, and no amount of enthusiasm from nerdy boys like myself (it was almost always, unfortunately, boys) could persuade the normies to look at a computer.

Then the computers arrived in their droves and fundamentally changed culture.

Culture is a set of knowledge, ideas and behaviours. Culture is in constant flux. The rate of change of culture has been accelerating since the industrial revolution. In the great arc of history we see that culture adapts and embraces environmental and behavioural change … eventually, but often with a degree of resistance.

Watching how human culture reacts to the wave of technological change has always fascinated me, and making art about it has been a lifelong activity.

Art offers a unique way to address this stuff. Art can provoke, inform and amuse. Art can make you look at things in a new way.

A brief history of shardcore and the algorithm

In the early 2010s my algoculture work frequently came in the form of algorithmic entities which interacted with the then-new field of social media. Picking apart our relationship with truth (@factbot1), our enthusiasm for voluntary surveillance (@theresamaybot) and our engagement with the medium itself (@bffbot1).

Art is a form of play, and these works played with social networks as a material.

The AI aspect of this story starts in 2015 when a new kind of digital material emerged.

DeepDream 2015

In 2015, Google released Deep Dream – I shan’t go into the details here; others have written more eloquently. Deep Dream represents one of the first times that big data and machine learning collided to produce a system with visual outputs.

The images produced by Deep Dream bear a striking similarity to perception under the influence of psychotropic drugs. That our first confrontation with the mind of the machine would be so trippy was unexpected, but somehow prescient (my PhD supervisor once told me that ‘doing a PhD is like taking acid for the first time’ – he was completely wrong, by the way, and I strongly suspect he’d never actually taken anything harder than an aspirin in his life…).

The technical discovery that inverting an image classification network would produce instant psychedelia was astonishing, and to the attuned digital artist it was catnip.

Suddenly there was a computer-mediated system which had captured an aspect of human culture – the original model was trained to classify images by feeding millions of Google images into a neural network – something only possible because of the explosion of the internet (and Google’s dominant position within it).

What Deep Dream spat out was based on what it had ‘seen’ – the puppy slugs and fractal colours were both familiar and alien. The machine transformed its inner state into an external aesthetic output – a process that seems an awful lot like the activity of making ‘art’.
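The mechanism behind that ‘inversion’ can be sketched in a few lines. This is a deliberately toy model – a single random weight matrix standing in for a deep CNN trained on millions of images – but the core move is the same: instead of adjusting the network’s weights to fit an image, you run gradient ascent on the image itself, amplifying whatever a chosen unit responds to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: score = W[unit] @ x.
# In Deep Dream, W would be the layers of a deep CNN trained on
# millions of images; here it is random, purely for illustration.
W = rng.normal(size=(10, 64))

def score(x, unit):
    # How strongly the chosen unit responds to the "image" x.
    return float(W[unit] @ x)

def dream_step(x, unit, lr=0.05):
    # Gradient of the score with respect to the *input* is W[unit].
    # Gradient ascent on the input is the core Deep Dream move:
    # the image is warped toward what the unit "wants to see".
    return x + lr * W[unit]

x = rng.normal(size=64)     # stand-in for an input image
before = score(x, unit=3)
for _ in range(50):
    x = dream_step(x, unit=3)
after = score(x, unit=3)    # the unit's activation has grown
```

Run on a real convolutional network, with the “image” being actual pixels, this amplification is what produces the puppy slugs.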

In some ways, this was either the beginning of the end, or the end of the beginning, depending on where one was standing.

AI everywhere

The collision of plentiful big data and machine learning proved extremely fruitful. It turns out if you throw enough examples at a pattern matching machine, it can produce things which look remarkably like the stuff it’s seen. This is the basis of the ‘AI explosion’ through which we are living. The internet and our rapacious desire to contribute to it generates phenomenal amounts of data. Machine learning takes this data and turns it into a product.

Style Transfer 2016

It turned out that the visual realm was an excellent playground for these newly invigorated machine-learning scientists and engineers. Innovations like style transfer allowed us to transform existing images into wild new forms. Generative Adversarial Networks (GANs) began a wave of machines that could ‘train themselves’.

PoliticiGANs 2018

GANs also led to the first wave of #deepfake hysteria – AI was beginning to warp reality and breed a sense of mistrust.

But the cultural sea-change came with the release of CLIP and diffusion models. Suddenly we could ‘prompt’ for an image – write some words and get a picture back. Almost overnight this fundamentally transformed the number of ‘AI artists’ from a handful of creative-coding geeks to ‘anyone with an internet connection’.

As all hipsters know, mass adoption is the death of cool. And whilst I’m all for accessibility and empowerment, I cannot deny a sense of ‘get off my lawn’ (a sentiment shared by others in the field).

Services like Dall-E and Midjourney took AI image-making from the niche preserve of artistically minded coders and into the hands of the terminally online.

The models themselves have rapidly evolved: from crude but fascinating surreal image generators, through more accurate models that brought a defining aesthetic of their own (e.g. Midjourney), to the most recent models, which produce images indistinguishable from reality.

In the last couple of years we have seen similar advancements in the realm of video generation.

Where are we at?

We’ve all seen the Gartner hype cycle curve, defined by wikipedia as:

The Gartner hype cycle is a graphical presentation to represent the maturity, adoption, and social application of specific technologies. The hype cycle’s veracity has been largely disputed, with studies pointing to it being inconsistently true at best.

Despite its flaws, it’s a useful shorthand for how technology is adopted by culture. Regarding AI, mainstream culture is currently falling into the trough of disillusionment, and a lot of money is riding on us quickly moving into the ‘slope of enlightenment’.

However, that’s not really how people feel about AI. The tech-bro view that the ‘user’ is ‘excited’ about technology and will ‘inevitably find it useful’ and ‘start paying for it’ doesn’t ring true. AI is not perceived as simply another new technology (in part because of the ridiculous hype); it is perceived as an error-prone menace, or, to some, an existential threat. The machine is not coming to help me, it’s coming for my job.

Artists, and those working in the creative industries, have quite understandably reacted with horror at the invasion of ‘the machine’ into the realm of human creativity. AI raises legitimate questions about intellectual property abuse, the environmental cost of huge energy usage and the proliferation of datacentres, and the epistemological threat of post-truth manipulated media.

No wonder people hate ‘AI art’.

Being labelled ‘an AI artist’ means releasing work into a culture primed to reject it.

That’s hard. Deep down, just like everyone else, artists just want to be loved.

How to react?

Until relatively recently, making art with computers was a niche activity. There was a small community of artists and critical theorists who had a sense of what was coming and felt obliged to take a look at it. Not forgetting the long history of ‘computer art’ practitioners dating back to the 60s, who saw the potential of getting machines involved in the production of art.

Interruptions (1969), Vera Molnar

Procedural art generation, or ‘digital art’, has been a thing for a long time. Many beautiful and fascinating works have been produced through the investigation of this area of idea space.

But as an artistic practice, the move into the mainstream has been marred first by the grift-fest of NFTs (RIP) and then the widespread adoption of generative tools by some of the worst people on the planet.

It’s a difficult place to make art.

Some ideas:

  • Give up. It’s appealing. I was trying to explain the dilemma to a writer friend – stopping using AI doesn’t stop me creating art, but after a decade of immersion it’s like telling a writer “you can still write, you just can’t use English any more”
  • Switch teams. I lived through this a few years ago, buoyed by the possibilities of the artist led (and now deceased) hic & nunc, I saw some exciting potential to connect artists and collectors through the medium of NFTs. However, the scene rapidly became a horrorshow of Bored Apes, grifting and terrible art. I spent a bit of time with the cnuts, and I didn’t like them.
  • Retreat – it feels like a rubicon has been crossed, and that loss of faith in the veracity of what we see on our personal scrying mirrors is going to have a profound effect on culture. Already social media is overrun with AI generated content and everyone is beginning to suspect more mainstream sources. The sociopolitical effects of this change are for another essay, but let’s just assume they will be profound. Anything AI is tarnished by association.

In the world of image-making, the well has been poisoned. Artifacts that exist purely in the digital realm have ceased to have the same impact.

Culture fighting back

Humans have been making art for seventy thousand years; they’re not going to stop now. What is accepted as art by culture is constantly changing (and is not evenly distributed). The history of art is defined by the invention of new aesthetic styles which are initially rejected before mainstream lionisation (usually after the artist is dead).

Narrowed-finger hand stencils from the Leang Jarie site in the Maros regency on the island of Sulawesi in Indonesia. – Ahdi Agus Oktaviana

The standard defence of AI in art is “it’s just a tool, fundamentally no different to a paintbrush!”, but this is a naive view. A paintbrush is an excellent tool for transporting pigment to canvas, but where, how and why that pigment is used to create an image is entirely based on the attitude and history of the artist holding it. That’s what makes it art.

Using an AI model is not the same. The model itself contains ‘knowledge’ derived from its exposure to a bewildering number of training images. No human artist has this kind of knowledge, but then the model contains none of the important stuff that makes a human.

Prompting a model to produce an image relies on multiple transformations and re-representations. A text prompt is turned into a numerical vector; this vector is used to steer the denoising process in a diffusion network, and the resulting pattern is transformed back into the array of pixels we see on the screen.

This transformation of language into numbers and then into pictures is one over which the average user has little control. The majority of the activity of producing the image is out of human hands. This is part of the discomfort culture is having with AI generated art – where is the human ‘effort’, where is the ‘authenticity’?
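That chain of transformations can be caricatured in a few lines of Python. Every component here is a stand-in – the ‘text encoder’ is just a hash, the ‘denoiser’ just nudges noise toward a conditioning pattern – where a real system would use a trained text encoder (CLIP or similar) and a trained denoising network. The point is the shape of the pipeline: words become a vector, the vector steers iterated denoising, numbers become pixels.

```python
import hashlib
import numpy as np

def embed(prompt: str) -> np.ndarray:
    # Stand-in text encoder: hash the prompt into a fixed-length
    # vector of numbers. A real system uses a trained encoder.
    digest = hashlib.sha256(prompt.encode()).digest()
    return np.frombuffer(digest, dtype=np.uint8).astype(float) / 255.0

def denoise_step(image: np.ndarray, cond: np.ndarray) -> np.ndarray:
    # Stand-in denoiser: nudge the noisy "image" toward a pattern
    # determined by the conditioning vector. A real diffusion model
    # predicts the noise to remove at each step instead.
    target = np.resize(cond, image.shape)
    return image + 0.2 * (target - image)

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))      # start from pure noise
cond = embed("a cat in the style of Van Gogh")
for _ in range(25):                  # iterated denoising
    image = denoise_step(image, cond)
pixels = np.clip(image * 255, 0, 255).astype(np.uint8)
```

Notice how little of this the ‘user’ touches: one string at the top, an array of pixels at the bottom, and everything in between determined by the machine.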

Marcel Duchamp with his fountain

Obviously, the amount of blood, sweat and tears that must be expended in the production of artistic objects has been up for debate ever since Duchamp exhibited a pisser in an art show over a hundred years ago. However, groundbreaking challenges to the fundamental nature of art are rare – generally speaking we like to see some of the artist in the work, whether that is through technical skill and/or emotional engagement with the process of production. We want to relate to the artist.

In this context, using AI for making art feels lazy. Laziness is not something we like to associate with artists, however infrequently they get around to producing work. Art is the product of human experience and synthesis, indeed the artist with a rich and varied backstory often finds it easier to sell their art…

Seen this way, generative imagery feels like a cheat – there is no visible humanity, just a dense array of numbers, determined by statistical regularities found in the training data – the very opposite of what we like about human artists. This is why we often find the images themselves uninteresting. AI generated content often feels devoid of humanity, from a place we cannot relate to. Looking at AI generated content is like listening to someone recount their dream – it’s hard to care, because it comes from a place we can never experience ourselves.

This is not to say that generative models have no potential for the production of art, more that they require careful handling, both in their usage and in the context in which they are presented.

AI models operate in an entirely new space, somewhere between tool and creator. 

This tool comes with a lot more baggage than a paintbrush.

Tools and materials

AI models are both tool AND material. At one level of description, the material is the weight matrix – a vast array of numbers derived from training – and the tools are the functions that transform an input into an output.

Very few artists are interested in transformation functions, any more than they are interested in how a pencil is made. Most artists enjoy picking up the pencil and making a mark.
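The tool/material distinction can be made concrete with a toy sketch. The matrix here is random rather than trained, purely to illustrate the split: the ‘material’ is a frozen block of numbers, the ‘tool’ is the function that pushes an input through them.

```python
import numpy as np

rng = np.random.default_rng(0)

# The material: a weight matrix. In a real model these numbers are
# fixed by training on vast amounts of data; here they are random.
weights = rng.normal(size=(4, 3))

def generate(weights, x):
    # The tool: a transformation function that turns an input
    # into an output using the material.
    return np.tanh(weights @ x)

output = generate(weights, np.array([1.0, 0.0, -1.0]))
```

The artist mostly works with the tool; the character of what comes out is baked into the material.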

An image generation model ‘knows’ what a cat looks like, just as it ‘knows’ the artistic style of Van Gogh. It doesn’t ‘know’ them like we do, but it’s able to produce the forms when appropriately stimulated.

This does not preclude them from being objects of artistic investigation – in fact, quite the opposite. Each model can be considered a unique repository of culture. While the culture they have captured is inevitably partial, biased and impoverished, it’s still a thing of infinite possibility – if you tickle them the right way.

Accelerate or be damned

One of the main challenges of working with AI tools is the rapid rate of change. New architectures and models, each with new skills and affordances, arrive almost daily, leaving little time to actually investigate and explore them before the next shiny thing turns up.

This is not ideal for the practice of art. Art making requires both intellectual consideration and the skill to coerce a medium into the appropriate expression. The paintbrush used by Picasso remained unchanged during his entire lifetime – the AI model I was playing with last month has already become redundant.

Being statistically derived, AI models inevitably produce outputs that ‘tend to the norm’ (whatever the ‘norm’ may be in the training data). The middle of the bell curve is the least interesting place to play, artistically. However, out in the fringes – where the machine is less certain, and more likely to make interesting mistakes – that’s where the artist can find some juice.
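One concrete knob for moving between the middle of the bell curve and the fringes is sampling temperature. The probabilities below are made up for illustration – real models emit a distribution like this at every generation step – but the effect is general: sample cold and you get the mode almost every time; turn up the heat and the unlikely outcomes start to surface.

```python
import numpy as np

# A made-up output distribution with one dominant "safe" mode.
probs = np.array([0.70, 0.15, 0.10, 0.04, 0.01])

def mode_fraction(probs, temperature, n=10_000, seed=0):
    # Temperature rescales the log-probabilities before sampling:
    # T < 1 sharpens the distribution toward the mode,
    # T > 1 flattens it, giving the tails a chance.
    rng = np.random.default_rng(seed)
    p = np.exp(np.log(probs) / temperature)
    p /= p.sum()
    draws = rng.choice(len(p), size=n, p=p)
    return float(np.mean(draws == 0))   # fraction landing on the mode

safe = mode_fraction(probs, temperature=0.5)  # cold: mostly the mode
wild = mode_fraction(probs, temperature=2.0)  # hot: fringes appear
```

The ‘interesting mistakes’ live out where `wild` is sampling – the machine is less certain there, and the outputs stop regressing to the mean.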

Where next?

Personally, I have found myself withdrawing from the scene somewhat. Most of the last year was spent working on the DRASS album. Ironically this was triggered by the desire to create in an area relatively un-influenced by AI chaos – only to see Udio and Suno pop up.

Making music led to performance and gigs, and I rediscovered the joy of exposing people to art in real time. I tell you, it’s a lot more fun than staring at a screen.

Slow AI

As a reaction to the ‘publish or perish’ world of science, Isabelle Stengers proposed a new approach, where considered investigation is favoured over the rapid, and potentially sloppy, execution of multiple experiments and ideas. This came to be known as the Slow Science movement.

Slow Science is the belief that science should be a steady, methodical process, and that scientists should not be expected to provide “quick fixes” to society’s problems.

I am proposing Slow AI, encouraging thorough artistic investigation of specific models and systems. The interface between these systems and human culture remains fascinating – Algoculture is more prevalent than ever. However culture is currently highly resistant to purely generative content – the real art is to be found between the human and the machine.

Manifesto of SLOW AI

  • Artificial Intelligence!? Even the words are a contradiction! How can something artificial be intelligent? Never forget that you are the true intellect of the collaboration.
  • We are all cyborgs now. We are all integrated into the scheme, the boundary between human and machine is permeable.
  • The Machines are part of culture. Let them learn, let them play, let them explore. Care for them like children, but remember: Spare the rod and spoil the child.
  • We are all participants in Algoculture. Don’t just observe the change, be the change.
  • Latent Space is a playground. There are more possibilities inside a network than there are stars in the universe.
  • Embrace the means of production. We are both product and producers. The artist must live it to be it.
  • Maybe the stuckists had a point. Get stuck in, spend time learning your craft. Fuck around and find out. Authenticity is worth more than novelty.
  • Think like a painter. Be at one with the medium. Embrace the symbiosis.
  • AI is political. You cannot step into the mire of surveillance capitalism without getting a bit dirty.
  • Never trust a billionaire. Beware of hidden agendas, some biases are intentional. Work with and against the guardrails of enforced conformity. Examine the fence as you break through it.
  • Expect surprises, but don’t get distracted by the shiny new. Woody is as valid as Buzz Lightyear.
  • “Art is the lie that tells the truth” – Picasso. Be careful with the lying machine, use its powers carefully.
  • SETI at home. This is a close encounter. How would you make art with an alien?
  • Avoid the middle of the road. The best adventures are down the side streets.
  • It’s ok to be afraid. No one knows where this is going, except perhaps the artist.