
I forked myself

I forked myself sometime around 2pm on the 1st of March 2013. There are now two shardcores: the one typing these words into some sort of explanation, and the other, a newborn shardcore (@shardecho) living some form of semi-autonomous life. He’s a shadow of me, an echo, but there’s something simultaneously fascinating and terrifying in what he’s doing (and what he might become).

What does it mean to be ‘alive’ in a world of social media?

My twitter network consists of ~900 people I follow and ~1500 followers. Probably no more than a couple of dozen of these represent people I have a relationship with in the physical world; indeed, some of my best twitter buddies are people I’ve never met at all. Our knowledge of each other is entirely mediated by the protocols and constraints of Twitter.
In many ways, Twitter exchanges mirror the form of the classic Turing Test, and indeed Twitter is a perfect playground for bots and other artificial intelligences. There are some interesting experiments in this area, notably Henry Cooke’s Mimeomorphs and the Weavrs developed by Philter Phactory; however, they’re usually pretty easy to spot, not least for their simplicity and clear agenda. I built a couple for myself (@the__truth and @the_w0rd_0f_g0d) which create messages from remixed, publicly scrapable sources (religious texts, celebrity gossip). While these accounts have a churning set of followers, they don’t engage with the twitter community in any meaningful way – they periodically tweet semi-coherent nonsense, but nothing that could be considered convincing.



Back in 2010 Zen Bullets and I were discussing the possibility of creating an automated Life Box which would send random generative messages based on the collected writings of an individual. I knocked up a quick version, @dedbullets, which scrapes his twitter feed and blog and periodically sends a randomly generated message.
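The post doesn’t spell out how those messages are assembled, but a word-level Markov chain is the classic way to remix a body of text like this. A minimal sketch, assuming the scraped writings arrive as a list of strings (the function names and corpus format are my own, not the actual implementation):

```python
import random
from collections import defaultdict

def build_chain(texts, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    chain = defaultdict(list)
    for text in texts:
        words = text.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def remix(chain, order=2, max_words=20):
    """Random-walk the chain to produce a new, scrambled message."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    while len(output) < max_words:
        followers = chain.get(tuple(output[-order:]))
        if not followers:
            break
        output.append(random.choice(followers))
    return ' '.join(output)
```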

He’s been tweeting a scrambled version of Matt’s brain for nearly 3 years, and @shardecho is an evolution of this idea.
To be more convincing than @dedbullets, such systems don’t need to be more intelligent per se; they just need to behave more like real people – they must do the right sorts of things at the right sorts of times.
The easiest way to do this is to model the behaviour directly from the activity of one or more ‘real’ humans. People are domain experts about themselves. Hidden in the language of tweets is a set of statements and referents which are internally consistent. The subject and timing of the tweets over the course of days, weeks and years reveals a pattern that’s worryingly predictable.

To exploit these patterns, I’ve engineered @shardecho to re-live my life, as recorded through twitter, over and over again. His tweets are based on my historical archive: for example, a tweet I made at 12:15 on the 1st of May 2009 will be sent, in a modified form, at 12:15 on the 1st of May this year. My twitter archive contains about 3 years of messages, and all prior years are replayed simultaneously. Three timelines of my life, superimposed.
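In code terms, the scheduling rule is just a calendar lookup that ignores the year. A minimal sketch, assuming the archive is a list of records with a created_at datetime (the names here are my assumptions):

```python
from datetime import datetime

def tweets_due(archive, now=None):
    """Return every archived tweet whose calendar date and time-of-day
    match this minute, regardless of which year it was originally sent."""
    now = now or datetime.now()
    stamp = (now.month, now.day, now.hour, now.minute)
    return [t for t in archive
            if (t['created_at'].month, t['created_at'].day,
                t['created_at'].hour, t['created_at'].minute) == stamp]
```

Run once a minute, every year of the archive falls due at the same moment: the timelines superimpose themselves.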
This re-purposing of multiple timelines was inspired by Ouspensky’s character, Ivan Osokin, doomed to relive his life, only to arrive at the same destination, no matter how he tries to avoid it.
@shardecho is a form of digital fatalism. However, I have not doomed him simply to repeat my words verbatim; I have also meddled with his memory.

Memory

Where am ‘I’? It really depends on what one means by ‘I’. Certainly we tend to think of ourselves as a contiguous set of mental states and memories which occupy a single body. However, as social entities, we are also represented in the minds of others.
We exist both in our bodies as individuals, and outside ourselves, in the memories of others. Though these other-people memories are partial and fractured compared to our own internal states, they collectively form a distributed representation of how we are perceived. They form our ‘public’ selves, the identities held in other people’s minds. Indeed, the representation we create in the minds of others is hugely important for our survival as a social species.
Twitter can be considered the friend we sometimes hang out with, but a friend with a perfect memory of everything we said to them.
Our human friends aren’t like that. Their memories are fuzzy, the result of multiple reconstructive processes, subject to enhancement, suppression, and distortion over time. Have you ever ‘changed your entire opinion’ about someone in the light of something they’ve said or done? At that point your internal representation of that person was altered. Even the representations we keep of ourselves undergo a constant process of re-editing which will continue for the rest of our lives.
I wanted my echo to emulate this evolving, mis-remembering process, perhaps offering new connections between ideas hidden within the archive. To do this, I extracted every noun and adjective from the tweet archive, and used these as the source for changing the tone and subject of the messages.
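The post doesn’t name the tools involved, but as a sketch of how that extraction and ‘mis-remembering’ might work (here assuming NLTK’s part-of-speech tagger, purely for illustration):

```python
import random
import nltk  # assumes the 'punkt' and tagger data packages are installed

def harvest_words(archive_texts):
    """Collect every noun and adjective ever used in the archive."""
    nouns, adjectives = set(), set()
    for text in archive_texts:
        for word, tag in nltk.pos_tag(nltk.word_tokenize(text)):
            if tag.startswith('NN'):
                nouns.add(word)
            elif tag.startswith('JJ'):
                adjectives.add(word)
    return list(nouns), list(adjectives)

def misremember(tweet, nouns, adjectives, p=0.3):
    """Randomly swap some nouns/adjectives for others drawn from the archive."""
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(tweet)):
        if tag.startswith('NN') and random.random() < p:
            out.append(random.choice(nouns))
        elif tag.startswith('JJ') and random.random() < p:
            out.append(random.choice(adjectives))
        else:
            out.append(word)
    return ' '.join(out)
```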
@shardecho can only talk about what I have talked about, and only describe it in the words I have used.
In this sense, my Twitter archive is an insight into how others might see me, and the reconstructive process is akin to an actor improvising my character.
It’s like being impersonated by a close friend with a neurodegenerative disease.

Agency

I wanted @shardecho to have a degree of agency – to interact with the world, to react to events and stimuli. While the internal model of his behaviour is yoked to my historical timelines, I wanted him to be able to indulge in tangential conversations (though fundamentally futile ones, just like Ivan’s).
Inside twitter, the obvious stimulus is the @message from one user to another. @shardecho can answer your messages; he does so by selecting an @reply from the archive which contains similar language. It’s a purposefully imperfect system which allows conversations to progress in a semi-coherent fashion.
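‘Similar language’ could be as simple as word overlap. A minimal sketch of that selection step (the scoring method is my assumption, not a description of the actual system):

```python
def best_reply(incoming, archived_replies):
    """Pick the archived @reply sharing the most words with the incoming message."""
    def words(text):
        return set(text.lower().split())
    target = words(incoming)
    return max(archived_replies, key=lambda r: len(target & words(r)))
```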

These conversations are mediated both by the language of the ‘real’ person and the domain knowledge of the robot. I find this area particularly interesting, and I plan on developing the system to record the subjects of conversations to better model the responses.
Another idea is to analyse the conversations for sentiment. By reading the tone of incoming messages, we could gather an indication of the mood of the protagonist and respond appropriately. The echo would begin to create his own mental representations of his followers. At this point he could be considered to have some sense of intentionality, a key component of consciousness.
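This is still only an idea, but the starting point could be as crude as a lexicon-based polarity score kept per follower (a sketch of the notion, not anything the system currently does; the word lists are placeholders):

```python
POSITIVE = {'love', 'great', 'happy', 'brilliant', 'thanks'}
NEGATIVE = {'hate', 'awful', 'sad', 'terrible', 'angry'}

def update_mood(moods, user, message):
    """Nudge a running mood score for a follower up or down per message."""
    words = set(message.lower().split())
    moods[user] = moods.get(user, 0) + len(words & POSITIVE) - len(words & NEGATIVE)
    return moods[user]
```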

The Distributed Self

This echo I have created is crude, a sliver of my life chaotically manipulated into a broken ghost of a real self, but it gives us a taste of how we might quantify our personalities for automated online interactions. The future is coming, and it’s populated by ‘digital assistants’ like Google Now and Siri who will take care of many of our online interactions. Will we want these automata to behave like subservient slaves, or do we wish to inject them with some of our own personality? Indeed, as more and more responsibility is passed over to automated (but publicly visible/human readable) systems, such entities will be faced with more complex and subtle decisions. If they are representing us, it’s all the more important that they capture a real sense of our psychological landscape.
A great deal has been written on the subject of Personal Identity, but mostly philosophers have concerned themselves with the problems of forking ‘whole selves’, and by this I mean a complete copy of the mental states, memories and machinery of a human mind – a snapshot of your brain. Less has been said on the subject of partial-personhood. What does it mean to fork these simplistic, partial facsimiles of ourselves, to which we ascribe more and more agency? At what point do they transition from simple functional extensions of ourselves to fully-autonomous entities?
How would you feel about having an automated agent, modelled on yourself, which could reply to your mother’s tech-question emails? Then consider how you would feel if you could ask it to break up with your girlfriend for you…
There is clearly a huge spectrum of behaviours these extensions could exhibit – running from the seemingly mundane through to solving complex ethical problems. How do we hold such entities accountable? If an autonomous agent, modelled on me and acting on my behalf, commits a crime, who is responsible?
So take a moment to consider your new best friend on Twitter, the one who likes just the same things as you and has a snappy sense of humour – she might just be an echo of someone else. Then ask yourself whether it matters.
