A SINISTER STARTUP claims its software can imitate a person's voice from just one minute of training audio, threatening online security systems that rely on voice authentication and raising fears that it could be used to frame people for crimes they didn't commit.
Speech synthesis company Lyrebird has already released convincing samples of former US president Barack Obama, current US president Donald Trump and would-be US president Hillary Clinton, and is currently working on a developer API that would enable the technology to be embedded in any application from games to, potentially, malware suites.
The developments come a year after global bank HSBC started using voice recognition in combination with a spoken password as a form of authentication in 2016, with Barclays reportedly planning to follow suit.
Just months prior to the launch of HSBC's voice banking system, the University of Alabama at Birmingham in the US warned that relying solely on voice for authentication or automation might leave systems vulnerable to voice impersonation attacks.
"Advances in technology, specifically those that automate speech synthesis such as voice morphing, allow an attacker to build a very close model of a victim's voice from a limited number of samples. Voice morphing can be used to transform the attacker's voice to speak any arbitrary message in the victim's voice," they warned.
A study by the University using the technology of the time found that automated voice verification algorithms "were largely ineffective to the attacks developed by the research team", with forged voices rejected only 10 to 20 per cent of the time in most cases, meaning the attacks succeeded 80 to 90 per cent of the time.
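The weakness being exploited is easy to see in outline: automated speaker verification typically compares a caller's acoustic features against an enrolled voiceprint and grants access if the similarity clears a tuned threshold, so a morphed voice that lands close enough to the genuine one sails through. The sketch below is purely illustrative and not the study's actual system: the feature vectors, the 0.85 threshold and the `verify_speaker` helper are all invented for demonstration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled_print, sample_print, threshold=0.85):
    """Accept the sample if it is similar enough to the enrolled voiceprint.

    A voice-morphing attack succeeds whenever the synthesised voice's
    features land above the same threshold as the genuine speaker's.
    """
    return cosine_similarity(enrolled_print, sample_print) >= threshold

# Invented toy vectors standing in for real acoustic features (e.g. MFCCs):
enrolled = [0.9, 0.1, 0.4, 0.7]
genuine  = [0.88, 0.12, 0.41, 0.69]   # the real speaker, slightly varied
morphed  = [0.86, 0.15, 0.38, 0.72]   # an attacker's converted voice

print(verify_speaker(enrolled, genuine))   # True: genuine speaker accepted
print(verify_speaker(enrolled, morphed))   # True: morphed voice also accepted
```

The point of the toy example is that the verifier has no notion of "real" versus "synthetic"; anything acoustically close enough passes, which is exactly why improving synthesis quality erodes this class of defence.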
"Our research showed that voice conversion poses a serious threat, and our attacks can be successful for a majority of cases," said Dr Nitesh Saxena, the director of the Security and Privacy In Emerging computing and networking Systems (SPIES) lab, and associate professor of computer and information sciences at the University.
"Worryingly, the attacks against human-based speaker verification may become more effective in the future because voice conversion/synthesis quality will continue to improve, while it can be safely said that human ability will likely not."
Lyrebird, in a short ethics statement, warns against relying heavily on voice recordings for evidence in future.
"Voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries. Our technology questions the validity of such evidence as it allows to easily manipulate audio recordings.
"This could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.
"By releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks. We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible.
"More generally, we want to raise attention about the lack of evidence that audio recordings may represent in the near future," it warns.
But Lyrebird is not the only company whose work effectively undermines voice authentication systems, and a growing number of universities are also conducting research into voice synthesis. µ