You may have noticed a strange, glowing red phone on one of the walls at The Edge. While it may look like the Moscow-Washington hotline, it is actually the result of Nathen Street’s residency project, Social Fragments. Social Fragments is an interactive installation that learns how to put together tweets from the words people use when having a conversation with it. We had a chat with Nathen (face to face, not phone to phone) to learn a little more about this project.
Where did you get the idea from?
Last year I was asked to consider ways people could announce to others their arrival in a space; in the same way you can announce your arrival by checking in on Facebook, but on a much smaller scale and in a more local way. I thought at the time social media would be a good way to do this. But I also realised not all people use Twitter, Facebook or Foursquare, so using these services alone would exclude people.
At the time I was also playing around with an idea I had proposed for Experimenta’s current exhibition “Speak to me”. Experimenta were looking for works that explored how we communicate with each other, so I proposed an interactive work that let people hear and respond to tweets using speech synthesis and recognition. My proposal would have filled a room, so it was a little ambitious. Although I thought it was a good start, the idea matured into its current, much smaller form.
What clever and crafty things did you have to do to put it all together?
The installation is part software, part physical object. The object part is a wall-mounted, pixelated circle, with strip LED lights illuminating through clear resin and a red retro telephone handset hung in the centre. Embedded inside is a Google Android Nexus S handset running custom software written in Java.
The software I wrote uses the Nuance Mobile Developer SDK, the same speech technology that powers Siri on the iPhone. With this I’m able to convert speech to text, so that when you talk to it I can get a transcript of what you say. Likewise I’m able to produce a script and questions that are converted to speech and spoken by a synthesised voice, with a peculiar take on an Australian accent.
I analyse the transcripts of answered questions using a Markov process, which allows me to guess potential word structures based on the way a person answers the question. For example, it collects information such as starting words, ending words, and which words come before and after other words. With this information I play a game of chance, essentially rolling for the next word until I reach 140 characters, then tweeting the result.
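The process described above can be sketched roughly as follows. This is a minimal illustration in Java (the installation’s stated language), not the actual installation code; the class and method names are hypothetical, and the real software presumably handles punctuation, multiple transcripts, and Twitter posting as well.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch of a first-order Markov-chain tweet builder:
// learn starting words and word-to-word transitions from transcripts,
// then "roll" for successive words until the 140-character limit.
class MarkovTweeter {
    private final Map<String, List<String>> followers = new HashMap<>();
    private final List<String> starters = new ArrayList<>();
    private final Random rng;

    MarkovTweeter(long seed) {
        this.rng = new Random(seed);
    }

    // Record the transcript's starting word and, for each word,
    // the words observed to follow it.
    void learn(String transcript) {
        String[] words = transcript.trim().split("\\s+");
        if (words.length == 0 || words[0].isEmpty()) return;
        starters.add(words[0]);
        for (int i = 0; i < words.length - 1; i++) {
            followers.computeIfAbsent(words[i], k -> new ArrayList<>())
                     .add(words[i + 1]);
        }
    }

    // Pick a random starting word, then chain random followers until
    // no follower exists or the next word would exceed 140 characters.
    String tweet() {
        if (starters.isEmpty()) return "";
        String current = pick(starters);
        StringBuilder sb = new StringBuilder(current);
        while (true) {
            List<String> next = followers.get(current);
            if (next == null || next.isEmpty()) break;
            String word = pick(next);
            if (sb.length() + 1 + word.length() > 140) break;
            sb.append(' ').append(word);
            current = word;
        }
        return sb.toString();
    }

    private String pick(List<String> options) {
        return options.get(rng.nextInt(options.size()));
    }
}
```

Because the next word is always drawn from words actually heard in conversation, the generated tweets stay recognisably made of the visitors’ own vocabulary while the game of chance produces new combinations.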
Is this a one-off or something that you are developing for ongoing use?
I would like to make a series of them, given the opportunity. The software supports multiple languages, multiple voices and accents, and different personalities.
More broadly I’m interested in the ways we interact with our built environment, how we engage with computers and how they engage with us. My ongoing experiments in this area may produce different objects over time; I hope this is the start of further investigation and exploration.
What’s the @address for punters to follow the tweets from Social Fragments?
You can follow the tweets at @SocialFragments.
Have a project idea that you would like support for? Apply to be our next Resident.