GOOGLE HAS DESIGNED a series of algorithms that allow smartphone users to interpret sign language using their phone camera.
Unusually for Google, it has opted not to create an app this time but has released the code as open source, leaving developers to build their own.
This approach will be most recognisable to Londoners: Transport for London offers no official app, but hundreds of third-party apps are built on its comprehensive open data.
The sign language recognition system uses a combination of a palm detector model, known as BlazePalm; a hand landmark model, which works a bit like a fortune teller, only not much; and a gesture recogniser which can tell exactly what the hand is doing.
It works by mapping the hand onto 21 keypoints, which give it just enough data to tell a twist from a bend from a shrug.
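To give a flavour of how 21 keypoints can drive gesture recognition, here is a minimal, purely illustrative sketch. The landmark indices follow the published 21-point hand layout (0 is the wrist, 4/8/12/16/20 are the fingertips), but the classifier itself is a toy rule we've made up for illustration, not Google's actual gesture recogniser:

```python
# Toy gesture classifier over 21 hand keypoints.
# Indices follow the published 21-point layout (assumption):
# 0 = wrist; tips at 4, 8, 12, 16, 20; the joint below each tip
# at 3, 6, 10, 14, 18. The rules below are illustrative only.

FINGER_TIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_PIPS = {"thumb": 3, "index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) tuples in image coordinates,
    with y increasing downwards. A finger counts as extended when
    its tip sits above (smaller y than) its middle joint."""
    out = []
    for name, tip_idx in FINGER_TIPS.items():
        tip = landmarks[tip_idx]
        pip = landmarks[FINGER_PIPS[name]]
        if tip[1] < pip[1]:  # tip is higher on screen than the joint
            out.append(name)
    return out

def classify(landmarks):
    """Map the set of extended fingers onto a handful of toy gestures."""
    up = set(extended_fingers(landmarks))
    if up == {"index", "middle"}:
        return "peace"
    if not up:
        return "fist"
    return "unknown"
```

A real recogniser would also use joint angles and the depth of each keypoint, but the principle is the same: a small set of well-placed points carries enough geometry to separate one hand shape from another.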
The move has been cautiously welcomed by the deaf community, but some are questioning whether machine learning can really pick up on the nuances of sign language during complex conversations.
It may seem very literal to you and me, but sign language has as much subtlety as spoken language, and if there's one thing we know about artificial intelligence, it's that it's totally rubbish at context, which could mean it gets the whole sentence about-face. And let's not get started on idioms for now, shall we?
Additionally, there are local variations of sign language and distinct systems such as BSL and Makaton - a whole raft of non-literal aspects which will all need to be incorporated and perfected before this becomes a viable commercial app.
Google is not alone. Microsoft is already working on something similar for its translation service, whilst other private companies continue to experiment with bridging the gap between verbal and non-verbal communication.
Rumours that the technology was originally built to help politicians in the UK tell their a*se from their elbow are as yet unconfirmed. µ