FOR ALL THE HYPE, we know from experience that artificial intelligence (AI) has a long way to go before it's consistently as reliable as a semi-smart human. Yes, it can play a good game of poker, but it can still be pretty easily bamboozled by simple questions.
A new image set from Dan Hendrycks, a PhD student at the University of California, Berkeley, shows just how superficial some machine learning can be. His collection of 7,500 undoctored images fools AI 98 per cent of the time, as squirrels get mistaken for sea lions and dragonflies are confidently identified as manhole covers.
The collection, dubbed ImageNet-A, is built around the same categories as ImageNet, a database of over 14 million hand-labelled images designed for training AI. If you want your AI to recognise cats on sight, for example, you just point it at the cat category and leave it to it.
ImageNet-A is made up of natural, unmodified images that consistently fool AIs. And while it may seem quite petty to deliberately trip up AI, remember this stuff can have serious real-world consequences. It's one thing for a camera AI to mistake a cat for a dog, but quite another for a self-driving car to mistake a pedestrian for a traffic light, say.
You could force the AIs to learn from ImageNet-A, of course, but that's a sticking plaster over the problem: it addresses those specific images rather than the underlying issue of decoding context. In other words, even the dimmest human knows manhole covers don't sit on leaves, so how can we make AI get its figurative head round the problem?
That's up to researchers, but as The Next Web notes, a possible fix could involve AI becoming "more accurate by being less certain" - reporting how unsure it is about a given object, rather than making a binary "is a cat"/"isn't a cat" call. But that could mean reworking how we do black-box image recognition altogether. µ
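To make the "less certain" idea concrete, here's a minimal sketch (the function names and the 0.8 threshold are illustrative, not from any system cited above) of a classifier that abstains when its softmax confidence falls below a threshold, instead of forcing a binary call:

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exps / exps.sum()

def classify_with_abstention(logits, labels, threshold=0.8):
    """Return a label only when the model is confident enough.

    Instead of a hard yes/no answer, the model reports 'unsure'
    whenever its top probability falls below `threshold`.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "unsure", float(probs[top])
    return labels[top], float(probs[top])

labels = ["cat", "dog", "manhole cover"]

# One score dominates, so the model commits to an answer.
print(classify_with_abstention([8.0, 1.0, 0.5], labels))

# Scores are close, so the model abstains rather than guess.
print(classify_with_abstention([2.0, 1.9, 1.8], labels))
```

A real fix would go further, but even this simple abstention rule lets a downstream system (a self-driving car, say) fall back to safe behaviour when the classifier can't honestly tell a pedestrian from a traffic light.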