Just over seven years ago, I created a generative work called ‘Machine Imagined Art’, based on the then recently released dataset from the Tate.
The system uses the ontology from the Tate data, and creates descriptions of non-existent artworks. Due to combinatorial explosion, there are 88 nonillion possible artworks hiding inside the system.
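To see how a modest ontology explodes into nonillions of combinations, here is a toy illustration. The category names and counts below are invented for the sketch, not the Tate ontology's actual ones; the point is only that picking one item from each independent category multiplies the counts.

```python
import math

# Hypothetical category sizes -- the real Tate ontology is far richer.
categories = {
    "medium": 50,
    "subject": 2000,
    "emotion": 100,
    "place": 1500,
    "style": 40,
}

# One choice per independent category: the totals multiply.
total = math.prod(categories.values())
print(f"{total:,} possible descriptions")  # 600,000,000,000 with these toy numbers
```

Even these five small toy categories yield six hundred billion descriptions; a richer ontology with more axes reaches nonillions quickly.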

It’s always been one of my favourite works, and a few years ago I turned it into a Twitter bot, which imagines a new artwork every six hours. At that rate, I calculated, the Universe will reach heat death before the bot has posted them all…
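The back-of-envelope arithmetic behind that claim (taking one nonillion as 10^30, and four posts a day) goes like this:

```python
ARTWORKS = 88 * 10**30   # 88 nonillion possible descriptions
POSTS_PER_DAY = 24 // 6  # one post every six hours

days = ARTWORKS / POSTS_PER_DAY
years = days / 365.25
print(f"{years:.2e} years to post them all")  # ~6e28 years
```

Roughly 6 × 10²⁸ years, which is many orders of magnitude beyond the lifetime of the stars.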
The artworks, of course, do not exist. Despite my exhortations for someone to try to make one, they exist only in the imagination of the person reading them.
Until now.
OpenAI recently previewed a new model, called DALL-E, trained on millions of image/description pairs – resulting in a network which can produce images from text descriptions. The results are fascinating; this is an excellent overview.

Whilst they did not release all the code, they did release CLIP, a component of the system which works as an ‘image critic’ – when proffered an image and a description, it gives a score for how well the image matches the description. Ryan Murdock took this critic and used it as a guiding signal for navigating BigGAN (a publicly released model trained on millions of images across 1,000 categories) and produced the BigSleep system.
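BigSleep itself backpropagates CLIP's score through BigGAN's latent space, but the core idea – use a critic's score to steer a generator towards a target – can be sketched with a gradient-free toy version. Everything below is invented for illustration: the "generator" just returns its latent vector, and the "critic" scores closeness to a target vector.

```python
import random

random.seed(0)

def toy_generator(latent):
    # Stand-in for BigGAN: maps a latent vector to an "image" (here, itself).
    return latent

def toy_critic(image, target):
    # Stand-in for CLIP: higher score = better match to the target.
    return -sum((a - b) ** 2 for a, b in zip(image, target))

def guided_search(target, dims=8, steps=2000, step_size=0.1):
    latent = [random.uniform(-1, 1) for _ in range(dims)]
    best = toy_critic(toy_generator(latent), target)
    for _ in range(steps):
        # Propose a small random perturbation of the latent vector...
        candidate = [x + random.gauss(0, step_size) for x in latent]
        score = toy_critic(toy_generator(candidate), target)
        # ...and keep it only if the critic scores it higher.
        if score > best:
            latent, best = candidate, score
    return latent, best

target = [0.5] * 8
latent, score = guided_search(target)
print(f"final critic score: {score:.4f}")  # climbs towards 0 as the match improves
```

The real system replaces the random perturbations with gradient ascent – CLIP is differentiable, so its score can be pushed back through BigGAN to nudge the latent vector directly – but the shape of the loop is the same.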
As a result, I can finally see the machine interpretation of my machine imagined art descriptions!
To seed the system, I used the final line of the description as a prompt.


Amazingly, this image is indeed skull-like, and has the water texture of a river, and what appears to be some German text emerging.


I love this one, it has a real Max Ernst vibe, and clearly contains a ‘woman of hope’.


I can see classical ruins, a sunset and what appears to be a shoe in the middle.


This is amazing, it has the ‘pearls of sensual pleasure’, a rainbow of colours, and a wheel-like object at its centre.


While the bow and arrow are merely hinted at, that looks like the form of a baby, and even some sort of attempt to create the text ‘Eros’ – I particularly like the floating ‘S’ on the right, which seems to be made of flowers.
What next?
This very much feels like a significant advance in #aiart / #mediasynthesis (or whatever it’s called this week). The ability to cross modalities between images and text opens up some fascinating possibilities.
Here I am able to complete the journey from machine-imagined art to machine-created art that I began seven years ago.
This is just the tip of the iceberg.