Erica Scourti: Slip Tongue

New work by artist Erica Scourti, supported through the Near Now Fellowship

Posted on 6th July 2018

Written by Erica Scourti

Drawing on her personal chat archive, Erica Scourti presents a responsive installation that invites the audience to encounter the anxiety and paranoia – but also the thrill and pleasure – of socialising and bonding online.

We offer ongoing support to Near Now Fellows for the continuation of ideas and projects that emerge beyond their Fellowship.

Below, Erica describes her project Slip Tongue, part of 'We are having a little flirt', a group exhibition offering perspectives on the uncertainty of attraction and desire, at Pump House Gallery, Battersea Park, London, from 25 April to 8 July 2018.

Slip Tongue

In Slip Tongue, my own digitally synthesised voice and those of a few of my friends converse about love, relationships and flirtation, quoting from chat and WhatsApp archives of messages between us. Drawing on years of personal exchange with my closest friends, confidantes and ex-lovers to form a collective text, all quoted with permission, the resulting audio piece expresses the fears, support and excitement of navigating friendship as well as romantic and professional relationships.

The audio plays back in a random order, which both reveals personal information and fragments syntax, and with it intelligibility. Playback is triggered by body-heat sensors that recognise the presence of an audience, reflecting my increasing interest in how the body is monitored, tracked and sensed by technical interfaces.
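As a rough sketch of this playback logic (illustrative only, not the installation's actual code), the loop below assumes a body-heat (PIR) motion sensor wired to a Raspberry Pi GPIO pin and a folder of pre-rendered voice clips; the pin number, folder name and libraries used (gpiozero, pygame) are all assumptions:

```python
# Hypothetical sketch: a body-heat (PIR) sensor triggers playback of a
# randomly chosen clip from the synthesised-voice archive.
# Assumes a Raspberry Pi with the gpiozero and pygame libraries installed;
# the pin number and folder path are placeholders, not from the actual work.
import random
from pathlib import Path

import pygame
from gpiozero import MotionSensor

CLIPS = list(Path("voice_clips").glob("*.wav"))  # pre-rendered voice fragments
sensor = MotionSensor(4)                         # PIR-style sensor on GPIO pin 4

pygame.mixer.init()

def play_random_clip():
    """Play one fragment at random, scattering the archive's syntax."""
    clip = random.choice(CLIPS)
    pygame.mixer.music.load(str(clip))
    pygame.mixer.music.play()

while True:
    sensor.wait_for_motion()      # a visitor's presence (body heat) is sensed
    play_random_clip()
    sensor.wait_for_no_motion()   # wait for the space to clear before re-arming
```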

My interest in working with Lyrebird, an audio mimicry technology powered by a voice imitation algorithm, stems from a similar interest in how biometric technologies are bringing ever-more granularity to these forms of digital tracking.

The voice is considered a unique biometric marker (as Lyrebird's ethics statement hints), and its collection has implications for identity theft and faking, as well as for the detection of specific individuals or demographics (e.g. those speaking with particular accents).

At the same time, my project is an exploration into getting others to 'donate' their voices – what does it mean to give away your voice, to allow it to speak another person's words, and how is the trust required between artist and friends navigated when using both sensitive information from personal conversations and their own unique simulated voice? I'm interested in how these technologies, especially ones that simulate something as personal, affective and potentially intimate as the human voice, tie into archiving and replication: software like this makes it possible to 'keep alive' a loved one, or at least to have them virtually present through their voice. I'm also intrigued by how this could be deployed in the context of grieving, overcoming trauma and other therapeutic uses, which reflects my own interest in making work as a form of self-therapy.

Simulated voices created for Slip Tongue, using Lyrebird:



Near Bliss Index

In the Near Bliss Index performance which accompanied Slip Tongue, I imitated the 'rhyming couplet' algorithmic code that the coder and I had prepared for the actual sound piece, selecting lines from the WhatsApp archive and rhyming them myself. I also made new digital voices by singing, instead of speaking, the lines required to train the algorithm. This created a collection of eerie, atonal but nevertheless melodic voices, which I used to make 'a discordant, durational pop song.'
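To give a sense of what such a 'rhyming couplet' selection step might look like (a purely illustrative sketch, not the code actually written for the piece), one could pair archive lines whose final words share an ending:

```python
# Illustrative only: a naive rhyming-couplet pass over a chat archive.
# "Rhyme" is approximated here by shared word endings; the actual code used
# for the sound piece is not documented in this post.
from collections import defaultdict

def rhyme_key(line: str, length: int = 2) -> str:
    """Crude rhyme key: the last few letters of the line's final word."""
    word = line.strip().split()[-1].lower().strip(".,!?")
    return word[-length:]

def make_couplets(lines: list[str]) -> list[tuple[str, str]]:
    """Group lines by rhyme key and pair them off into couplets."""
    groups = defaultdict(list)
    for line in lines:
        if line.strip():
            groups[rhyme_key(line)].append(line)
    couplets = []
    for group in groups.values():
        for a, b in zip(group[::2], group[1::2]):
            couplets.append((a, b))
    return couplets

# Placeholder archive lines, purely for demonstration.
archive = ["I miss you more today", "don't know what to say",
           "call me when you're free", "wish that you could see"]
for a, b in make_couplets(archive):
    print(a, "/", b)
```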

Working with a group of performers, I juxtaposed synthesised digital voices and live vocals in a polyphonic chorus that explored bonding, longing and desire through a hybrid text, written out of my WhatsApp chat archives and medical literature on the physiology of attachment.

Referencing both the BLISS index (a method for analysing clinical data on pain thresholds) and the 'relationship inventory' of self-help lore, the performance blended the immediacy of everyday, personal conversation with the technical language of symptomatology.

— Erica Scourti

Read more about Erica's work at ericascourti.com


Mocked by Machine Learning

Exploring the technology that inspired Slip Tongue.

Lyrebird, named after a bird famous for its ability to mimic sounds, is an audio mimicry technology created by a Montreal-based startup that allows users to fabricate speech. It uses a voice imitation algorithm to 'clone' a person's voice from a sample of their recorded speech, and to manipulate its emotion.
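Lyrebird's own API isn't documented here, but as an illustration of the same idea (cloning a voice from a short recorded sample and having it read new text), here is a minimal sketch using the open-source Coqui TTS library's XTTS model instead; the file names are placeholders:

```python
# Illustrative voice-cloning sketch using the open-source Coqui TTS library,
# not Lyrebird's service. Requires `pip install TTS`; file paths are placeholders.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesise new text in the voice captured from a short reference recording.
tts.tts_to_file(
    text="What does it mean to give away your voice?",
    speaker_wav="reference_sample.wav",   # a short clip of the donated voice
    language="en",
    file_path="cloned_output.wav",
)
```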

Lyrebird gives examples of potential uses of the technology, including chatbots and AI assistants, audiobooks, customer service hotlines, video games, advertising campaigns and text readers.

Examples of use cases at Lyrebird.ai

Lyrebird's founders have said that the technology "raises important societal issues".

"More proof, if proof were needed, of the value of critical and analytical thinking to intelligently navigate an ever-expanding digital realm that is intent on increasingly augmenting and shapeshifting reality."

— Natasha Lomas, "Lyrebird is a voice mimic for the fake news era", TechCrunch (25 April 2017)

In May 2018, Google introduced a new capability for its voice assistant: "accomplishing real-world tasks over the phone". Powered by a technology called Google Duplex, it reduces the uncanniness of talking to a 'bot' by mimicking natural speech patterns and behaviours.

At its unveiling event (see the video below), the technology was presented as if Duplex's success were measured by its ability to dupe its human conversational partner. The board chairman of Alphabet Inc. (Google's parent company) dubiously suggested that they had passed the Turing Test. This led to a wave of thinkpieces pondering ethical questions about how companies should explore the use of this technology, and its impact on the way we will continue to interact with artificial intelligence.

"If this advanced bot technology is used without transparency, we'll be the ones left asking questions every time we pick up the phone: "Hi... um, are you alive?"

— Bridget Carey, "Human or Bot? Google Duplex scares me", CNET (11 May 2018)

Advances in artificial intelligence and deep learning, applied via audio mimicry technology and paired with advancing video manipulation tools (as demonstrated in the video below of Face2Face, a real-time face capture and reenactment system), make it increasingly easy to create and manipulate convincing digital media. With the proliferation of 'fake news' adding to global political and socio-economic tensions, the way we receive, perceive and validate the media we are presented with needs closer scrutiny than ever.

With thanks to Pump House Gallery, Battersea Park, London.

Header image: Erica Scourti, Soft Touch, 2018.

Author

Erica Scourti