The Mood of the Nation

As we approach the election, it becomes apparent that the narrative landscape of politics has changed. In the past, the discourse was led by inky newspapers and the satirical sniping of broadcaster-sanctioned comedians. The last election, in 2010, certainly had a social media component – manifested in the American-aping ‘televised debates’ and the associated back-channel chatter on Twitter. However, in the intervening five years, the size of this channel has increased enormously (from approx. 30m users in Q1 2010 to 288m users in Q4 2014).

It’s now possible to use this ‘big data’ to get a handle on the mood of the nation. Tweets can be read by algorithm and classified as positive or negative. Sentiment analysis is big business, harvesting the millions of opinions expressed online, and turning them into numerical values.
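For illustration, here’s a minimal sketch of lexicon-based scoring – the crudest form of the technique. Commercial services like Brandwatch use far more sophisticated models, and the word lists here are invented:

```python
# A toy lexicon-based sentiment scorer (illustrative word lists only).
POSITIVE = {"love", "great", "win", "brilliant", "good"}
NEGATIVE = {"hate", "awful", "lose", "terrible", "bad"}

def sentiment(tweet: str) -> int:
    """Crude score: >0 reads as positive, <0 as negative, 0 as neutral."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I love this, brilliant performance"))  # 2
print(sentiment("awful answer, I hate this"))           # -2
```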

Brandwatch recently released sentiment data for David Cameron and Ed Miliband during the Paxman election interviews. In the video below I have used these data to plot the mood towards the two leaders during the broadcast – represented by red- and blue-backed emoticons. If the mood towards the politician is positive, the emoticon smiles; if the mood is negative, the emoticon cries.

Thousands of tweets, reduced first to numbers, and then to emoticons. Watching the result I’m struck by how the mood seems mainly negative towards both men. Is this a reflection of a national disenchantment with politics, or is it simply a reflection of social media itself – a place we go to complain, rather than praise?


@algobola


disclaimer

Ebola is a serious business: people are dying. The best way to stop the spread of a disease is to contain it at source. There are many organisations actively involved in treating people in West Africa. Do your bit – lobby your politicians and shake them out of their apathy, or make a donation. I suggest the wonderful Médecins Sans Frontières as one organisation worthy of your support.



exposed

Algobola is an investigation into social contagion.

Algobola infects Twitter. It is passed on through the exchange of ‘social media fluids’ – in this case, the use of @ mentions. It’s an experiment to see how far a ‘social virus’ can travel, and whether its presence can have any effect on behaviour.

For the purposes of this experiment, I am patient zero – infectious to anyone I mention in my Twitter feed (sorry, friends). Once someone is exposed, they have a 50% chance of becoming infected. Anyone who becomes infected is also contagious, and has only a 30% chance of survival.
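As a sketch, those transmission rules boil down to a couple of dice rolls per exposure – this is my paraphrase in code, not the bot’s actual implementation:

```python
import random

INFECTION_CHANCE = 0.5  # an exposed user becomes infected half the time
SURVIVAL_CHANCE = 0.3   # an infected user eventually survives 30% of the time

def expose(user, status):
    """Roll the dice for a user exposed via an @ mention."""
    if user in status:  # only the first exposure counts
        return
    status[user] = "infected" if random.random() < INFECTION_CHANCE else "exposed"

def resolve(user, status):
    """Decide the eventual fate of an infected (and contagious) user."""
    if status.get(user) == "infected":
        status[user] = "survived" if random.random() < SURVIVAL_CHANCE else "dead"
```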

Changes in infectious status are sent directly to the affected user in the form of a modified avatar image.

[Images: infection-status avatar updates sent to @SamanthaFaiers]



Here’s a chart of some test data:


[Image: testdata_chart]

The number of infected people varies over time, depending on how promiscuous people are in their social network – to some extent it also reflects the day/night posting cycles of the infected population. This test had a 50/50 survival rate. The infection I’ve just started has only a 30% survival rate, so expect more death.

Infectious processes like this suffer from a combinatorial explosion – within a few days, millions of people are affected. (Because of Twitter’s API rate limits, I can only monitor a few hundred people an hour, so the disease is going to be self-limiting.)
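A toy branching model shows both effects – the exponential explosion and the self-limiting cap. The contacts-per-user figure is an invented assumption:

```python
import random

def simulate(generations, contacts_per_user=5, cap_per_hour=300):
    """Each infected user mentions a handful of others, who each have a 50%
    chance of infection; the cap stands in for the API rate limit, which
    bounds how many users can be processed per 'hour' (one generation)."""
    infected = 1
    for g in range(generations):
        exposures = min(infected, cap_per_hour) * contacts_per_user
        infected += sum(random.random() < 0.5 for _ in range(exposures))
        print(f"generation {g}: {infected} infected")

simulate(6)  # raise cap_per_hour to watch the uncapped explosion
```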

This work touches on two related ideas:

Firstly, it looks at how we respond to incurable diseases like Ebola.

In real terms, the experiment will infect a few thousand people – a drop in the ocean against Twitter’s 645,750,000 registered accounts. Indeed, this reflects the risk of contracting Ebola for those of us outside the currently infected areas. I’m sitting in Brighton; my chances of being exposed to Ebola at this point in time are effectively nil.

But our response to outbreaks like Ebola reflects who we are, as a collective humanity. It makes us question how far our empathy extends, and how we share our skills and resources in a time of crisis. The only sane response is treatment and containment at source.

However, human nature skews us towards conflating the risk of infection with the horror of the actual disease. Because the disease is gruesome, horrific and arbitrary, it provokes a different kind of emotional response than real but intangible threats, like global warming.


[Image: tbanim2]

Secondly, it questions our apathy towards surveillance.

Algobola works across the network. The pattern of infection reflects social behaviours – it exposes who communicates with whom. This method of infection shares similarities with modern surveillance techniques. The number of ‘hops’ between you and a ‘person of interest’ can determine whether you are subject to further investigation, and can possibly result in real limits to your freedom.

Algobola explicitly exposes these kinds of connections; it shows how one random connection in your network may result in you being marked for ‘special attention’. Within a couple of hops the virus reaches thousands of people I’ve never met – when your government is ‘analysing your metadata’, the algorithms are working very much like a virus. Viruses are amoral; algorithms are much the same.
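Counting those hops is nothing more exotic than a breadth-first search over the mention graph – a minimal sketch, with a made-up toy network:

```python
from collections import deque

def hops(graph, source, target):
    """Number of @ mention 'hops' separating two users; graph maps each
    user to the set of users they have mentioned."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        user, d = queue.popleft()
        if user == target:
            return d
        for contact in graph.get(user, ()):
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, d + 1))
    return None  # unreachable

mentions = {"me": {"alice"}, "alice": {"bob"}, "bob": {"suspect"}}
print(hops(mentions, "me", "suspect"))  # 3
```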

Will the introduction of this virus have any effect on Twitter behaviour? I’m not sure. I’m taking a baseline reading of how many mentions per day each user makes before and after infection, so check back here for the results.
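The baseline itself is a simple rate comparison – illustrative numbers below, not real API data:

```python
def mentions_per_day(timestamps, days):
    """Average @ mentions per day over an observation window."""
    return len(timestamps) / days

# Invented UNIX timestamps standing in for tweets pulled from the API.
pre_infection = [1414251947, 1414256405, 1414278930]  # 3 mentions in a week
post_infection = [1414351947, 1414356405]             # 2 mentions in a week

print(f"before: {mentions_per_day(pre_infection, 7):.2f}/day")
print(f"after:  {mentions_per_day(post_infection, 7):.2f}/day")
```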

Update

Here’s what happened


Machine Imagined Artworks (2013)

Taking another look at the Tate data, the most interesting categories, for me, are the more subjective ones – the categories which feel like they’re furthest along the ‘I need a human to make this judgement’ axis. This dataset goes beyond simple fact-based descriptions, which means it contains a whole lot more humanity than most ‘big data’.

[Image: mia1]

We can imagine machines which spot the items within a representational work (look at Google Goggles, for example), but algorithms which spot the ‘emotions and human qualities’ of an artwork are more difficult to comprehend. These categories capture complex, uniquely human judgements which occupy a space we hold outside of simple visual perception. In fact, I think I’d find a machine which could accurately classify an artwork in this way a little sinister…

The relationships between these categories and the works are metaphorical in nature, allusions to whole classes of human experience that cannot be derived from simply ‘looking at’ the artwork. The exciting part of the Tate data is really the ‘humanity’ it contains, something absolutely essential when we’re talking about art – after all, culture cannot exist without culturally informed entities experiencing it.

It struck me that these are not only representations of existing artworks, but actually the vocabulary and structure required to describe new, as-yet-unmade artworks.

[Image: mia2]

So, inspired by an online conversation with Bjørn Magnhildøen, I built a machine which explores this idea space and suggests new artworks. It can be used as a source of inspiration for artists, or simply as a tool for investigating an unknown aesthetic domain. Using a small subsection of the Tate categories as a starting point, new descriptions are created. There are 88,577,208,667,721,179,117,706,090,119,168 possible artworks waiting to be described.
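The principle is easy to sketch: pick one entry from each vocabulary, and the size of the idea space is simply the product of the vocabulary sizes. The tiny word lists below are invented stand-ins for the real Tate categories:

```python
import random

# Invented mini-vocabularies; the real generator draws on the Tate tags.
emotions = ["despair", "hope", "nostalgia", "exhilaration"]
subjects = ["a ladder", "a river", "two figures", "an empty room"]
media = ["oil on canvas", "etching", "bronze", "video"]

def imagine():
    """Compose one machine-imagined artwork description."""
    return (f"{random.choice(media).capitalize()} depicting "
            f"{random.choice(subjects)}, evoking {random.choice(emotions)}")

# The space of possible descriptions multiplies out:
print(len(emotions) * len(subjects) * len(media), "possible artworks here")
print(imagine())
```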

Click Here to explore this world of machine imagined art.

It makes me wonder whether the whole process, from generating an idea through to the actual production of the artwork, could perhaps be automated. Maybe a hook into the Thingiverse API and a 3D printer? In the meantime, please enjoy exploring an area of idea space created purely by a machine.

UPDATE 27/04/15

MIA is now on Twitter:


autoserota

Another experiment with the Tate Collection big data. I’m still working on how best to navigate the huge collection, so I thought it might be nice if Nicholas Serota himself (or at least an automated version of him) could show you around.

Click the image below to start your personalised tour of the Tate Collection.




Exploring The Tate Collection

The Tate recently released the metadata for their collection on GitHub – the first ‘big data’ set that has piqued my interest enough to download it and have a play.

Here we present the metadata for around 70,000 artworks that Tate owns or jointly owns with the National Galleries of Scotland as part of ARTIST ROOMS. Metadata for around 3,500 associated artists is also included.

For the record, I’m a big data skeptic; to me it seems to be a buzzword which translates as: “We’ve spent all this time and money collecting lots of disparate data – surely there’s something interesting in there if we look hard enough?”

I particularly like this definition:

[Image: ‘big data’ definition by giladlotan, via @doctorow]

However, with that caveat, it was with a mixture of excitement and trepidation that I stepped into the data. The Tate data has all the expected attributes of the artworks – title, artist, date, media, etc. – but, more interestingly, there are hierarchical metadata associated with each artwork: effectively a tree of tags.
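Assuming a nested layout along the lines of the published JSON – each node carrying a name and an optional list of children; the field names here are my guess at the shape, not a verified schema – walking the tree is a short recursion:

```python
def walk(node, path=()):
    """Yield (path, leaf tag) pairs from a nested subject tree, assuming
    each node has a 'name' and optionally a list of 'children'."""
    name = node.get("name", "")
    children = node.get("children", [])
    if not children:
        yield path, name
    for child in children:
        yield from walk(child, path + (name,))

# A hypothetical fragment in the shape of the real data:
subjects = {"name": "subjects", "children": [
    {"name": "emotions and human qualities",
     "children": [{"name": "despair"}, {"name": "hope"}]}]}

for path, tag in walk(subjects):
    print(" / ".join(p for p in path if p), "->", tag)
```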

My first investigation was to see how the top-level categories are represented in the data over time; perhaps it would reveal an interesting shift in themes, showing the changing nature of artistic expression (and/or curatorial fashions).

Here’s a graph of the number of artworks tagged with each piece of top-level metadata. Be aware that this isn’t a true representation of the actual number of artworks, but rather the number of tags (artworks can have multiple tags, or appear in multiple top-level categories). There are interesting peaks around the early/mid 19th century and the 1970s.
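The tallying behind a graph like this is straightforward – a sketch with invented field names rather than the exact Tate schema:

```python
from collections import Counter

def tally(artworks):
    """Count (year, top-level category) pairs; one artwork can
    contribute several tags, as noted above."""
    counts = Counter()
    for work in artworks:
        for category in work["top_level_tags"]:
            counts[(work["year"], category)] += 1
    return counts

works = [{"year": 1840, "top_level_tags": ["nature", "people"]},
         {"year": 1975, "top_level_tags": ["abstraction"]}]
print(tally(works))
```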

[Image: main_subject_over_time]

Here are the actual numbers of artworks by year.

[Image: artworks_by_date]

The graphs are very similar, suggesting that there is a consistent level of tagging for all artworks.

Here’s the same view, normalized to show the variation in the proportion of each main subject over time.

[Image: main_subject_perc_over_time]

What’s interesting in this view is that it is not about the quantity of artworks in each category, per se, but rather about the distribution of the tags. These metadata tags are human-curated: someone looked at the artwork and made a judgement about a range of attributes. Some are simple enough, particularly for representational works – people, places, activities, etc.

However, if we travel deeper into the tree, some of the categories become much more subjective, and these are often the most interesting to explore.

For example, the category ‘emotions and human qualities’ contains the following: fear, love, horror, despair, suffering, grief, shame, anger, innocence, strength, compassion, foolishness, happiness, sadness, wisdom, tenderness, guilt, shock, chastity, desire, humility, pride, nostalgia, contemplation, isolation, condescension, complacency, anxiety, vulnerability, psyche, hope, creativity, vitality, disillusionment, memory, concentration, inspiration, exhilaration, boredom, courage, muse, victim, hedonism, aggression, disgust, dignity, mischievousness, gratitude, serenity, heroism, avarice, laziness, devotion, frustration, anonymity, virtue, deceit, jealousy, pessimism, disbelief, hatred, triumph, antihero, narcissism, uncertainty, escapism, subconscious, gluttony, loyalty, pomposity and hypocrisy

Imagine how it feels to look at an artwork and decide that it represents any of the above qualities. It seems quite difficult to me, but thankfully the Tate have invested the resources in the endeavour, and we get to reap the benefits.

The dataset contains over fifteen thousand subjects, so it’s not immediately obvious how to approach the problem of navigating the data.

To get a feel for the data I built a rudimentary tool which allows you to drill down through the categories and find the artworks which match. It’s already proving to be a fascinating rabbit hole into the collection, throwing up interesting and exciting juxtapositions of works.
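One convenient way to implement the drill-down is to flatten each artwork’s tag tree into a set of paths and filter on a chosen path – a sketch, with invented record shapes:

```python
def matching(artworks, tag_path):
    """Return artworks whose tag tree contains the given path,
    e.g. ('emotions and human qualities', 'despair')."""
    return [w for w in artworks if tag_path in w["tag_paths"]]

# Hypothetical records: each artwork carries its tags as tuple paths.
works = [
    {"title": "Work A", "tag_paths": {("emotions and human qualities", "despair")}},
    {"title": "Work B", "tag_paths": {("nature", "sea")}},
]
print([w["title"] for w in matching(works, ("emotions and human qualities", "despair"))])
```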

Click the image below to try it out.


[Image: explorer]

This dataset is a machine-readable representation of the artistic space of the Tate Collection. There are meanings implicit in the hierarchy of labels of the artworks. Whilst machines that can truly ‘see’ artworks remain in the realm of science fiction, the augmentation of these artworks with human-curated metadata massively expands the ways in which a large collection can be automatically navigated.

The data effectively offers a new representational landscape overlaid on top of the collection. There are an infinite number of paths through this landscape. Normally our path through a collection is in the hands of the curator – works are grouped by artist, period, movement or other curatorial perspective and we are to an extent bound by their decisions.

With this metadata, we could curate our own collection based on our personal preferences and desires, or perhaps at the whims of an algorithmic curator. Don’t get me wrong, the role of the human curator can be integral to the artistic experience, however these kinds of data open up a new realm of possibilities.

Playing with the dataset so far has prompted a number of ideas about how to auto-curate paths through the collection, but I’ve also become aware of the dangerously seductive nature of slicing and dicing something as complex as an art collection on the basis of metadata alone. I am navigating a space one level removed from the artworks, a space defined by the Information Architecture decisions of the designers, and the tagging decisions of the humans who actually entered the data.

I can already see how the contours of this landscape could lead to automated decisions about the relative relevance of one artwork over another. Are we looking at a future where poorly marked-up artworks are effectively condemned to a dusty backroom gallery of the internet – and where, perhaps, the art stars of the machine-readable future are those with the best tag clouds?

UPDATE

I’ve made an automated version of Sir Nicholas Serota out of the data.


