Jackanory ran on BBC television from 1965 until 1996, and was a cornerstone of childhood for anyone raised in the UK between those years. The format was simple: an actor would read a book in fifteen-minute episodes, broadcast daily over the course of a week. Occasionally, still illustrations would be displayed alongside the reading. The programme became hugely popular with children and parents alike, and attracted an enormous range of acting talent.

In its early days, the programme encapsulated the core Reithian remit to “inform, educate and entertain”, though over the years it gave way to changing attitudes and loosened its style and range of readers.

Algonory uses Jackanory as inspiration for a series of manufactured episodes from the past, written by the algorithms of 2019. The three episodes are placed in 1968, 1974 and 1985, and aim to capture the evolving style of the programme. In addition, I employed my talented daughters to illustrate two of the stories, creating images inspired by their own interpretations of the text.

Like some sort of hotly anticipated Netflix miniseries, I have decided to release all three episodes at once.

For Episode One, I chose to set the networks on the works of Enid Blyton – in many ways the archetypal English children’s author. Time has not been kind to her racist, sexist views but her unmistakable tone of upper-middle-class whimsy seemed perfect for the first episode.

Algonory (1968) – The Faraway Chair, by Algo Blyton.

Episode Two is set in 1974, and uses the works of Lewis Carroll. These stories, of course, already have a lysergic quality which suits the absurdist prose generated by the network. I have no idea why the frog and the mock turtle wander off into a philosophical discussion about the nature of the self, but it seems to work.

Algonory (1974) – Alice through the Scrying Glass, by Algo Carroll

Episode Three uses a range of Roald Dahl’s children’s books as sources, and features many familiar characters blended into a single story. I styled myself, unapologetically, as the greatest Jackanory reader of the 80s, Rik Mayall.

Algonory (1985) – Mr Fox’s Marvellous Medicine, by Algo Dahl

One of the joys of printing physical copies of my previous GPT-2 experiment, AlgoHiggs, has been the opportunity to hear passages read out loud.

Bringing the book into physical existence transforms the meaning of the text. The words are consumed and judged in an entirely different way than words-on-a-screen.

Inspired by this, I turned my networks towards children’s stories – the archetypal conduit between the written word and the human imagination. By partially retraining the GPT-2 network with specific authors, I was able to generate new works in their style, whilst still maintaining some of the underlying language and knowledge from its original model. This comes through in delightfully absurd, and sometimes profane, form. For example, this section from the Algo Dahl network:

“I don’t want to talk about it,” my father said, not even listening.
“Pheasants are more in my line than most children are.
Not only are they more in line than most,
but you and I are equally guilty of lewdness.”

He undressed and put on his trunks.

“I can’t go into details,” he said.

Imagine suffering amnesia and being nursed back to health in a sanatorium populated only by nurses reading Roald Dahl. But the amnesia is incomplete, and past memories occasionally bubble up into consciousness.

The resulting stories are nonsense, but they are dressed in the clothes of the original author. Reading them inevitably produces memories and feelings associated with the original texts.

Reading them aloud, with conviction, enhances the illusion.

Much of my work involves what I call ‘manufactured authenticity’ – shrouding generative systems in the trappings of normality, precisely to highlight their limitations.

When an AI fails, it exposes the holes in the magickal veil that the algorithm has drawn over the world. When we see the holes, we see the limits of the trick.

The Adversary

Privacy International asked me to design a Twitter bot based around their character ‘The Adversary’.

The Adversary represents the (in)visible forces chipping away at our privacy, harvesting and analysing data about us for their own (sometimes oblique) purposes. In this context, Facebook is as much of an adversary as GCHQ or the NSA. Governments and companies increasingly harvest huge quantities of personal data and enact algorithmic judgements based on it.

If you follow this bot, it will monitor your tweeting activity and produce custom video reports analysing your behaviour.

The bot analyses the frequency and timing of its followers’ tweets, as well as performing sentiment analysis on individual organic tweets (i.e. ignoring retweets), tracking and graphing the mood of the follower over the previous 7 days.
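The post doesn’t publish the bot’s code, but the pipeline described above (skip retweets, score sentiment, bucket by day and by hour) can be sketched roughly like this. The lexicon and weights here are invented placeholders for whatever sentiment method the real bot uses:

```python
from collections import Counter
from datetime import timedelta

# Illustrative only -- the real bot's sentiment scoring is unknown;
# these words and weights are invented for the sketch.
LEXICON = {"love": 1.0, "great": 0.8, "happy": 0.6,
           "hate": -1.0, "awful": -0.8, "sad": -0.6}

def sentiment(text):
    """Crude lexicon-average sentiment score for one tweet."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def analyse(tweets, now):
    """tweets: list of (datetime, text) pairs.
    Returns (daily mood over the previous 7 days, tweets-per-hour counts),
    considering only organic tweets -- retweets are skipped."""
    organic = [(ts, txt) for ts, txt in tweets
               if not txt.startswith("RT @")]
    by_day = {}
    for ts, txt in organic:
        if now - ts <= timedelta(days=7):
            by_day.setdefault(ts.date(), []).append(sentiment(txt))
    daily_mood = {day: sum(s) / len(s) for day, s in by_day.items()}
    tweets_per_hour = Counter(ts.hour for ts, _ in organic)
    return daily_mood, tweets_per_hour
```

Frequency, timing and mood then just need graphing over the follower’s week.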

Tweet sentiment

Visually, the bot gives a nod to one of the finest pieces of 1980s manufactured authenticity: Max Headroom, “the world’s first computer-generated TV host” – a simulacrum of a digital avatar from the pre-CGI days, in reality performed by Matt Frewer in a rubber mask.

The aim of the bot is to appeal to its followers’ vanity, creating ‘custom video content’ for each of them, whilst at the same time generating a sense of unease – if The Adversary knows so much about me, based purely on what I post in public, I wonder what else ‘the algorithms’ might know?

Time of tweeting

We find ourselves at a unique point in history – for the first time ever, we have platforms through which we can disseminate our ideas (both profound and inane) to a global audience, from the comfort of our phones, for free. However, the implicit Faustian pact we are making is far more sinister – by publishing our content to these networks, we are signing over far more about ourselves than we may realise.

Tweet frequency

Working with Privacy International on this project has been interesting, but to a degree frustrating. There are many other, more invasive, ways to analyse Twitter data, and many more unsettling ways to re-present social media behaviour – for example, making more explicit references to who else the follower interacts with (a technique used by @bffbot1 a few years ago). However, legitimate concerns about privacy from Privacy International prevented those functions from being included in the final build. Whilst I understand their need to be GDPR compliant and ‘practise what they preach’, it did curtail my natural instincts to create something more provocative from the underlying data.

Similarly, the tone of the bot is relatively benign, whereas my natural instinct would be to make it a bit more judgemental and critical. One of my favourite now-defunct Twitter accounts, Jack the Twitter, took freely given social data and used it to stalk his targets – it was not a bot but was manually controlled by Marcus John Henry Brown, who created a genuine sense of unease in its followers.

We tweet like only our friends are listening, but not all those who are listening are our friends.

The Labyrinth

Another work based on the 100,000 fibre tractography model of a human subject, from CUBRIC.

Here I have taken a small slice through the centre of the cortex and used it to generate a ‘tube map’ of connections.

The various lines are representative of sets of people, things and concepts close to my heart. In some ways it may be considered a ‘self portrait’.

You can explore in more detail, and buy a print, by clicking on the image below.


My friend John Higgs releases his new book today, The Future Starts Here. It’s an excellent read which frames the future in a positive way (rather than the dystopian hellscape projected by the majority of contemporary narratives).

It also has this description of me in it, which I find most pleasing.

Eric, who is probably better known by his online name Shardcore, is a very modern type of artist. One of his most useful attributes is that he is very easy to find in a crowd. He sculpts his dark hair so that it points straight up, giving him a hairstyle that is the direct midpoint between a Mohican and Tin Tin’s quiff. He also wears silver nail varnish and white pointed leather shoes. Such is the effort that he puts into his appearance that when he was recently forced to wear an eye patch for medical reasons, many people assumed that he was simply accessorising.

Sometimes, after I’ve been out socially with Eric, female friends ask me about him. ‘Who was that guy,’ they say, ‘you know, the handsome one?’ This, I don’t think, is the best description for him. Out of all my friends, I’ve always thought of him as the evil one. It’s not that he does evil things, I hasten to add. He just looks at things in an evil way. He doesn’t act on those evil thoughts. But he thinks them.

The first chapter of the book is based around my various attempts to replace John with code – to create a machine that writes like John, or at least tries to.

AlgoHiggs v1.0

I’ve tried several methods to extract the essence of John from his books. Markov chains produced the usual Markov stuff:

The status of the Panthers, or were so alarmed by the right rear wheel of the independently released Doctorin’ The TARDIS was released Drummond and Cauty, were required to supply more rice than existed in secrecy ever since, much to a number of florists very happy about it again as she gained the grail of illumination that inspired him.
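A word-level Markov chain of the kind that produces output like this takes only a few lines of code. A minimal sketch (illustrative, not the actual code used):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random starting prefix, picking a random
    recorded successor at each step."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(sorted(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

The model only ever knows the last couple of words, which is exactly why the output drifts mid-sentence the way the sample above does.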

Word-level Recurrent Neural Nets didn’t do much better:

It was not the sort of thing that deconstructionists were being congratulated in his own maelstrom to be to say that there was no bi-univocal correspondence between linear signifying links or emerging horns poking out of the end of the Second World War

and character-level RNNs collapsed into Chaucerian gibberish:

Classion turned in the carantial and himself hirder becoming Cill Pams, this not enjised and digmication. Daneh or Whenamer hard from a ned teptiwamenal effect to be too fiction, it members, but it vast cated as lough from him more me wrote on the will, then very, expectable.

As entertaining as these generative text models could (sometimes) be, they were falling far short of the kind of Higgsian prose I was hoping for.

A recurrent network, like all neural networks, begins as a tabula rasa onto which the weighted connections are sculpted over time. The network knows literally nothing about language when it begins; by ‘reading’ the text, one character at a time, over and over again, it learns to predict the next most plausible character in the string of letters. It has to learn the language from scratch. To gain the ability to generate new coherent sentences, the model must be exposed to millions of lines of text, millions of times over – a logistical and computational feat.
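The ‘predict the next most plausible character’ objective can be illustrated with a toy count-based model. A real RNN learns a far richer, generalising representation, but the target it is trained towards is the same:

```python
from collections import Counter, defaultdict

def train(text, context=3):
    """Count which character follows each `context`-character window.
    (A toy stand-in for the RNN's learned prediction, not an RNN.)"""
    counts = defaultdict(Counter)
    for i in range(len(text) - context):
        counts[text[i:i + context]][text[i + context]] += 1
    return counts

def predict_next(counts, prefix, context=3):
    """Return the most plausible next character, given the last few."""
    followers = counts.get(prefix[-context:])
    return followers.most_common(1)[0][0] if followers else None
```

The toy version fails on any window it has never seen; the RNN’s entire job is to generalise past that, which is what demands the millions of passes over the text.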

Creating an AlgoHiggs is a two-fold problem. First, the network needs to understand how English works; then it needs to learn how John writes.

AlgoHiggs v2.0

Earlier this year, OpenAI announced the GPT-2 model: a generative English-language model so good that releasing it would, they claimed, threaten the very fabric of society. A brilliant bit of PR.

Whilst they didn’t release the full world-destroying model, they were kind enough to release a smaller version via GitHub.

This model is quite unlike anything else I have investigated. It has been trained on around eight million documents scraped from the web (some 40GB of text), and in its standalone form is able to produce amazing ‘human readable’ text. Since the training set is so broad it is able to ape a wide range of writing styles, often with a pleasing lysergic edge. Aside from its ability to write coherent prose, there is clearly a form of semantic representation inside the model (as one would expect from such a broad dataset) which means it is way more than a grammatical party trick. This network knows things about the world, without ever being explicitly told them – semantics as an emergent property of the statistical structure of text – which in itself is philosophically fascinating.

By taking this pre-trained model and exposing it to John’s work, I retrained it until it captured not only the tone and cadence of John’s writing, but also the subject matter. The aim is not to create a network which can completely replicate John’s work, word for word, but rather to gently constrain the model into a particular semantic and stylistic space.

By using a pretrained model, the problem of ‘learning English from scratch’ is taken care of – all that’s required is a little encouragement into the John Higgs area of idea space.
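The post doesn’t say which toolchain handled the retraining. One plausible 2019-era route is the gpt-2-simple wrapper around the released 117M model – a sketch under that assumption, with a hypothetical corpus filename:

```shell
# Hypothetical fine-tuning run using the gpt-2-simple wrapper;
# 'higgs_corpus.txt' is a stand-in name for the training text.
pip install gpt-2-simple
python - <<'EOF'
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="117M")   # fetch the released small model
sess = gpt2.start_tf_sess()
# A modest number of steps nudges tone and subject matter
# without memorising the source text wholesale.
gpt2.finetune(sess, "higgs_corpus.txt", model_name="117M", steps=500)
print(gpt2.generate(sess, return_as_list=True)[0])
EOF
```

The step count is the dial: too few and the output stays generic web-English, too many and it starts parroting the corpus.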

One of the earliest tests produced this bizarre precis of The Breakfast Club, and I immediately knew I was on to something interesting.

The Breakfast Club is the story of five American high schoolers spending a Saturday in detention. Each of the five is a different stereotype – a brain, an athlete, a basket case, a princess, and a criminal, as the film describes them.

They attempt to enter a rave where they are pepper-sprayed, bound and doused with WD-40, and forced to dance for hours in front of giant Boing Boing elephants. When this is successfully certificated as a rave, they head to the back of the detention centre for an X-Factor performance.

Since I am referenced by name in the text of the book, the model also has a representation of me, so I asked it “Who is Eric Drass?” and it came back with this rather disturbing summation:

Eric, I think, is the dumbest person I have ever met.

What makes Eric a terrible person is his attempt to gain control of the situation. Eric, I think, is so intent on his own agenda that he never even attempts to improve upon what is happening around him.

He just wants to make a few extra compromises before committing suicide. It’s hard for me to believe that he ‘worked hard’ and that he has everything under control.
He works for an advertising company that has now either given up or helped bring about the end of the global ecological collapse.

His giant data hub is his way of making a few extra compromises before committing suicide.

Note to self: Never go looking for self-affirmation in the mind of a network.

AlgoHiggs works best when ‘seeded’ with a snippet of text and then left to write some more. So I took the starting sentence from each section of John’s new book and asked it to write the rest. In this way, AlgoHiggs was able to write its own version of the book.

Seed text

It is late autumn, but it is still warm enough to sit on Brighton’s pebbled beach wearing a T-shirt and watch the churning waves. I buy a pasty and walk down to the shore to eat it.

Generated text

It’s not a good end for a summer’s day, I think, but I keep drinking and talking to my dogs. They remind me of African panthers, which is ironic, because they are quite frightening.

Eric and I find a giant patch of sand, about the size of a pebble, on the shore. It is a dark greenish brown. The top is a patch of sand on top of a rock, which is an inch or two under my thumb. I start to feel an electrical current run through me. I start to wriggle my fingers and realise that there is water on the beach. The sound of waves is terrific. The buzz is terrific. I look up and immediately recognise residents of the area, who were just starting to drink the water.

The results were so delightful, we’ve printed them up into a proper book. As far as I know, this is the first complete printed book based on GPT-2, and I must admit, the text takes on a whole new flavour when you see it on the printed page.

Whilst not currently available in the shops, you can pick up a copy from John at any of his forthcoming public appearances.

Zenbullets helped create a physical embodiment of AlgoHiggs for John’s book launch, where it sat on the panel and took part in the Q&A.

video by @dansumption


My friend Matt Muir compiles a weekly newsletter of articles and links from across the internet (you really should subscribe). His prose style is distinct, and delightful, and the relatively short gobbets of text readily lend themselves to algo-manipulation.

Plus, he has what can only be described as a mild case of logorrhoea – meaning there is plenty of text to mine…

algocurios is powered by a modified version of the OpenAI GPT-2 language model. Basically, it’s a text-production system which has been trained by reading millions of articles online. In its standard form, it’s an amazing beast, capable of producing ‘plausible-sounding’ text in a myriad of forms.

If we take this existing model (which already has an excellent grasp of many forms of written English) and re-train it a little on Matt’s words, it readily produces text with the same kind of tone and vocabulary.

In the case of Matt and his webcurios, it got the gist very quickly and started spitting out new gobbets of text in the curios form almost immediately.

Given more training, the network would undoubtedly begin to memorise and repeat back sections of the source text. However, just teasing the network with a little new information results in a delightful blend of tone and subject matter.

The delight, for me, is the way it forces the reader to make unexpected metaphorical leaps in an attempt to extract meaning. Algorithmically generated text is often either semi-predictable or too nonsensical; algocurios hits the sweet spot.

Visit algocurios and explore for yourself.