My friend Robin, laughing.
Acrylic on Canvas, 400mm x 500mm
Here’s the news in 2000ms. This bot scrapes various online news sources for images of the latest stories. These are then glitched and mashed into a video which lasts less than 2 seconds. After all, who has time to keep up with current affairs?
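The "glitching" step could be done in many ways; a minimal sketch of one classic databending approach (corrupting a few bytes of the compressed image data, skipping the header so the file still decodes) might look like this. The function name and parameters are hypothetical, not the bot's actual code:

```python
import random

def glitch(jpeg_bytes, n_flips=20, seed=None):
    """Corrupt a few random bytes of an image file -- a classic
    databending glitch. Skips the first 100 bytes so the file
    header survives and the image usually still decodes."""
    rng = random.Random(seed)
    data = bytearray(jpeg_bytes)
    for _ in range(n_flips):
        i = rng.randrange(100, len(data))  # pick a byte past the header
        data[i] = rng.randrange(256)       # overwrite it with noise
    return bytes(data)
```

Run over a handful of scraped news images and concatenated into a rapid-fire sequence, a few lines like this are enough to produce the flickering, corrupted frames the bot posts.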
Glitch News Network report: 2016-03-02 17:02:03 pic.twitter.com/m2VxGcWVBx
— Glitch News Network (@glitchnn) March 2, 2016
Glitch News works at a subconscious level. The flashing images get your brain juiced up for recognition, but it takes a few cycles for the repeating images to get to the point where they can be fully identified by your conscious awareness.
We are visual creatures, hard wired to recognise and react at astonishing speed, yet we’re ‘unaware’ of 99% of the visual stimuli we encounter.
Often action occurs even before conscious recognition. For example, if someone throws a brick at you, you ‘instinctively’ dodge (assuming it’s in your field of view). In this instant, a phenomenal number of computations are carried out – identifying the brick, calculating its trajectory, calculating an appropriate response and coordinating your muscles to move out of the way – in fact your conscious awareness of the situation often occurs after this whole sequence has happened.
If we objectively examine our behaviour, the vast majority of our actions are being carried out ‘subconsciously’, and hence they tend to remain ‘invisible’ to us, and therefore seemingly of no concern. Subliminal messages play on this disconnect between subconscious and conscious recognition. That’s how advertising works.
The Glitch News Network perhaps offers a glimpse of the future, where we further disengage from our conscious central executor and merely stream blipverts at our subconscious.
Glitch News is available on Twitter:
I met Meredith sometime around 1995, after she left a note in my pigeonhole in Oxford, suggesting we meet. I had no idea who she was, but agreed anyway. We spent a delightful evening in a bar where she told me various tales of her life in Hollywood.
That was 20 years ago, and though our paths have crossed a few times since, I don’t think we’ve been in the same room for over a decade. Hence this painting has been derived from photographs, rather than a traditional sitting. It is more the ‘idea’ of Meredith, rather than the ‘actuality’.
Given her career as an actress, this feels somehow fitting.
Artificial intelligence is often measured up against human intelligence, and human intelligence is generally considered to be a sober, level-headed sort of intelligence. The kind of intelligence one would expect from a human operating at the height of their faculties.
However, if we actually look at ourselves, we find we often fall short of this noble ideal. We are emotional, irrational and frequently intoxicated.
As much as we love our conscious awareness, we also love to fuck with it.
So what does this mean for the forthcoming Singularity? Surely our imminent, immanent overlord should have some insight into this crucial aspect of our behaviour? If we want a machine that truly understands us, then perhaps it should experience the effects of intoxication first-hand.
Does it even make sense to approach the problem of artificial consciousness without first addressing artificial altered-consciousness?
What would a tripping singularity look like?
@trippingbot is the result.
MARTA (Meta. Aphoric. Recurrent. Tripping. Algopoet) starts taking drugs at 6pm each day, and then reports on its mental state intermittently over the next 6 hours. As the evening progresses, the bot takes more drugs and becomes more intoxicated, which is reflected in the (in)coherence of the reports it delivers. This reaches a peak at midnight, when the reports cease until it all begins again at 6pm the next day.
Part of @trippingbot is based on a Character-Level Recurrent Neural Network – an artificial learning system that learns text by looking at one letter at a time and trying to predict what letter comes next. The network has been trained on Erowid drug reports.
Neural networks learn and improve over time. The more times the network reads the text, the better it gets at remembering it. To check how well the network has learned the text, you can feed it a seed sentence, and ask it to continue writing for a while. In the beginning of training, it’s terrible – by the end, it produces more or less valid English.
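The one-letter-at-a-time sampling loop described above can be sketched in a few lines of numpy. This is a toy illustration, not the bot's actual model: the vocabulary, layer sizes and weights here are made up (randomly initialised weights stand in for a trained model, so the output is gibberish rather than English):

```python
import numpy as np

np.random.seed(0)

# Toy vocabulary and model sizes -- illustrative only
vocab = list("abcdefghijklmnopqrstuvwxyz ")
V, H = len(vocab), 32
char_to_ix = {c: i for i, c in enumerate(vocab)}

# Random weights stand in for a trained model
Wxh = np.random.randn(H, V) * 0.01   # input -> hidden
Whh = np.random.randn(H, H) * 0.01   # hidden -> hidden (the recurrent part)
Why = np.random.randn(V, H) * 0.01   # hidden -> output
bh, by = np.zeros(H), np.zeros(V)

def sample(seed_char, n):
    """Feed one character at a time; at each step, predict and
    sample the next character, then feed that back in."""
    h = np.zeros(H)
    ix = char_to_ix[seed_char]
    out = []
    for _ in range(n):
        x = np.zeros(V); x[ix] = 1             # one-hot current character
        h = np.tanh(Wxh @ x + Whh @ h + bh)    # update hidden state
        y = Why @ h + by
        p = np.exp(y - y.max()); p /= p.sum()  # softmax over next character
        ix = np.random.choice(V, p=p)          # sample the next character
        out.append(vocab[ix])
    return "".join(out)

print(sample("t", 40))
```

Training adjusts the three weight matrices so the predicted distribution `p` matches the actual next character in the Erowid corpus; sampling from partially trained weights is what gives the bot its half-formed English.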
While experimenting with the system, I discovered that it was most interesting at the very early stages of training – when it’s got the gist of English, but not really got any handle on syntax. Wonderful word-blends are produced, reminiscent of Jabberwocky.
This can be seen by sending an @ message to @trippingbot: it will respond with a series of messages, seeded by the text you send.
The increasing intoxication of @trippingbot is produced by using less and less well trained models.
Effectively, I’m rewinding the learning process, back to its chaotic beginnings – the subconscious of the neural net bleeds into its rational mind, rendering the incoherence and functional breakdown of heavy intoxication.
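One simple way to implement this rewinding is to save model checkpoints at increasing stages of training, then pick an earlier (less-trained) checkpoint as the evening wears on. A minimal sketch of that scheduling idea, with hypothetical checkpoint filenames (not the bot's actual files):

```python
def checkpoint_for_hour(hour):
    """Map the hour of day (0-23) to a saved model checkpoint.
    Later hours get checkpoints from earlier in training, so the
    bot's output grows progressively less coherent towards midnight."""
    # Hypothetical checkpoints, saved after decreasing amounts of training
    checkpoints = ["epoch_50.npz", "epoch_20.npz", "epoch_10.npz",
                   "epoch_5.npz", "epoch_2.npz", "epoch_1.npz"]
    if not 18 <= hour < 24:
        return None  # the bot is silent outside 6pm-midnight
    return checkpoints[hour - 18]
```

At 6pm the fully trained model produces mostly valid English; by 11pm the barely trained model is generating the word-blends and syntactic collapse described above.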
So tune in at 6pm GMT and watch The Singularity take a trip.
My keynote at #pydataLondon from earlier this year has made it online:
What do we mean by intelligence? How do the limitations of language leave us floundering with regards to discussing our relationship with ‘algorithms’ and emerging machine intelligence? I argue that the limitations in the discourse surrounding AI are remarkably similar to the problems found in psychology and philosophy relating to other minds.