Acrylic on canvas, 762mm × 1062mm

A portrait of a good friend I don’t see often enough.

This week the internet is awash with #deepdream images. I shan’t go into detail about how they are produced; other people have done a fine job of explaining the process.

The code, released by Google, offers a playground for investigating the inner workings of neural networks. This rather dry area of research has been made much more attractive by the ability to create psychedelic visual outputs. There are #puppyslugs everywhere.

In one of my past lives, I built neural network models as a means of modelling human language acquisition. Part of my research involved delving inside the networks and creating visualisations of what they were doing. However, this was 1994, and I certainly didn’t have access to the kind of computing power (or datasets) that Google has, so the models were much more simplistic. But it’s fascinating to see a niche area of academia come into its own in the age of big data.


#deepdream is appealing because it gives us access to machine pareidolia, an area of great artistic interest before Google got involved – see Henry Cooke’s experiments with faces-in-the-cloud and Matthew Plummer-Fernandez’s Novice Art Blogger.

I couldn’t resist having a play with it too. Here’s one of Alan Moore.


However, as always seems to be the way these days, the backlash has been quick and hard. What was wondrous and astounding has become boring and lazy within the space of a couple of weeks.

It’s a shame that the backlash comes so quickly; perhaps it’s the natural result of people excitedly showing off their experiments in real time to a fickle, attention-impaired audience.

I think part of the problem is the apparent lack of human involvement in the process – it has been likened to using a Photoshop filter, a one-click solution to all your trippy visual needs. So far, all of the examples have treated whole images, and the visual overload of so many disparate elements can be overwhelming.

Below are some of my own experiments, where I’ve attempted to limit the range of the effect, using it on only part of the image, and juxtaposing the effect against reality.
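One way to limit the effect to part of an image is to run the network over the whole frame as usual, then composite the dreamed output back onto the original through a mask. A minimal sketch of that compositing step, assuming NumPy arrays in the 0–1 range (the function name and the inversion stand-in for the deep-dream output are my own, not from Google’s code):

```python
import numpy as np

def blend_with_mask(original, dreamed, mask):
    """Composite the dreamed image over the original, limited to a masked region.

    original, dreamed: float arrays of shape (H, W, 3), values in [0, 1]
    mask: float array of shape (H, W); 1.0 where the effect applies, 0.0 elsewhere
    """
    alpha = mask[..., np.newaxis]              # broadcast the mask over colour channels
    return alpha * dreamed + (1.0 - alpha) * original

# Toy example: apply the "effect" (here just colour inversion, standing in
# for the deep-dream output) to the left half of a small grey image only.
original = np.full((4, 4, 3), 0.25)
dreamed = 1.0 - original
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                              # left half gets the effect
result = blend_with_mask(original, dreamed, mask)
```

A soft-edged mask (e.g. a Gaussian-blurred binary mask) gives a gentler transition between the dreamed region and reality than the hard edge used here.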







As a final experiment, I fed some of these images back into Wolfram Alpha’s image recognition system to see what the machine-mind thought they were:



