A VIRTUAL TALKING HEAD that goes by the name of Zoe is touted as the "next generation digital personal assistant" due to its ability to express a full range of realistic human emotions.
Shown off by researchers at the University of Cambridge, Zoe is a lifelike face intended to replace texting with "face messaging".
Zoe is the result of a collaboration between researchers at Toshiba's Cambridge Research Lab and the University of Cambridge Department of Engineering. According to its designers, Zoe is the "most expressive controllable avatar ever created", replicating human emotions with "unprecedented realism". It can display emotions such as happiness, anger and fear, and changes its voice to suit whatever feeling the user wants it to simulate.
Users can type in any message, specifying the requisite emotion as well, and the face recites the text.
The name isn't an acronym for a complex technical term, as you might expect. It simply comes from the actress on whom the face was modelled, Zoe Lister, who starred in the UK Channel 4 series Hollyoaks as Zoe Carpenter.
"To recreate her face and voice, researchers spent several days recording Zoe's speech and facial expressions," the University of Cambridge said. "The result is a system that is light enough to work in mobile technology, and could be used as a personal assistant in smartphones, or to 'face message' friends."
As well as being more expressive than any previous system, Zoe is also remarkably lightweight. The program used to run her is just tens of megabytes in size, which means it can be easily incorporated into even the smallest computing devices, including tablets and smartphones.
The team's main goal is to embed the virtual head in devices so that interacting with a computer becomes more like talking with another human being. To make the system as realistic as possible, the researchers collected a dataset of thousands of sentences, which they used to train the speech model with the help of Lister.
They also tracked Lister's face while she was speaking, using computer vision software. The recordings were then fed into voice- and face-modelling algorithms, which produced the voice and image data needed to recreate expressions on a digital face directly from text alone.
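In rough outline, the pipeline described above takes typed text plus a chosen emotion and drives a synthesised voice and face. The following is a hypothetical miniature sketch of that idea only: the function names, emotion list and parameter values are all invented for illustration and do not come from the Cambridge/Toshiba system.

```python
# Hypothetical sketch: map a typed message plus an emotion label to the kind
# of control parameters a voice/face synthesiser might consume. All names and
# numbers here are invented for illustration.

# Per-emotion adjustments relative to a neutral voice and face model.
EMOTION_PRESETS = {
    "neutral": {"pitch_shift": 0.0, "speech_rate": 1.0, "mouth_curve": 0.0},
    "happy":   {"pitch_shift": 2.0, "speech_rate": 1.1, "mouth_curve": 0.6},
    "angry":   {"pitch_shift": -1.0, "speech_rate": 1.2, "mouth_curve": -0.4},
    "afraid":  {"pitch_shift": 3.0, "speech_rate": 1.3, "mouth_curve": -0.2},
}

def render_message(text, emotion="neutral"):
    """Return the control parameters for one 'face message'.

    Unknown emotions fall back to the neutral preset.
    """
    preset = EMOTION_PRESETS.get(emotion, EMOTION_PRESETS["neutral"])
    return {"text": text, "emotion": emotion, **preset}

params = render_message("See you tonight!", "happy")
```

In a real system the presets would be replaced by a statistical model learned from the recorded speech and tracked facial expressions, but the interface — text in, emotion label in, synthesis parameters out — is the same shape.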
There's no mention by the University of Cambridge as to when we might expect to see Zoe implemented in real-life devices, but we'd suspect it'll be a good while yet, as the prototype is in the very early stages.
One thing Zoe's researchers did say, however, is that they want the software to let users customise and personalise their own emotionally realistic digital assistants sometime in the near future. µ