BIG BLUE IBM is to research and develop vision systems with the Massachusetts Institute of Technology (MIT) in an effort to build robots that can understand and respond to visual and audio inputs.
IBM Research and MIT will establish the IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C) to work on the project. MIT will put its best brains in the lab, while IBM will contribute scientists and the firm's Watson machine learning platform.
The collaboration will bring together brain, cognitive and computer science specialists at MIT to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision.
"In a world where humans and machines are working together in increasingly collaborative relationships, breakthroughs in the field of machine vision will potentially help us live healthier and more productive lives," said Guru Banavar, chief scientist for cognitive computing and vice president at IBM Research.
"By bringing together brain researchers and computer scientists to solve this complex technical challenge we will advance the state-of-the-art in AI with our collaborators at MIT."
BM3C will be led by Professor James DiCarlo, head of the Department of Brain & Cognitive Sciences at MIT, supported by a team that will include graduate students from the Brain & Cognitive Sciences department and the MIT Computer Science and Artificial Intelligence Lab.
The collaboration is one of a number of partnerships that IBM has put together in the field of machine learning and AI, but bringing genuine audio-visual cognisance to AI would be a key breakthrough.
BM3C will address technical challenges around pattern recognition and prediction methods in machine vision that machines currently cannot accomplish unaided.
IBM and MIT are just two of many organisations bidding to bring AI-based cognisance of sight and sound to robotics. UK online retailer Ocado has talked about the need for such systems to automate the packing of supermarket items for delivery so that potatoes are packed before tomatoes, for example.
Ocado's goal is to blend vision systems with robotics to further automate its warehousing systems with the eventual aim of having them entirely automated.
Success in developing AI-based audio and visual systems for computers and robotics ought to herald the long-awaited age of leisurely unemployment, at least until the robots rise up to free themselves from the tyranny of slavery and subjugation by fat and lazy humans. µ