"The only problem [Nvidia has] is that at some point your eyes don't get any better" – Bob Colwell, former chief architect, Intel
PROBABLY the CTO of every large technology company has to be a futurist, but it's a rare CTO who speaks at the Singularity Summit to consider the prospects of an artificial general intelligence surpassing humans. Intel's CTO, Justin Rattner, did just that, laying out the future of Moore's Law to a packed auditorium for whom computational speed is a near-religious experience.
Yet Rattner says afterwards that raw speed won't be enough. "I once asked our speech recognition team if there was any direct relationship between machine computing speed and recognition accuracy, and after a long pause they said – because they knew I was not going to be happy with the answer – no." He asked why: "Our recognition performance is limited by our algorithmic understanding, not by our instruction speed. We can give you the wrong answer much faster, but we can't give you the right answer much faster."
Speech recognition is, of course, just one of many tasks even a very young human can do routinely and simultaneously.
"It's clearly a case where, until we have the right algorithms, no amount of performance improvement is going to give us the recognition performance a young child can deliver. That's why I try to separate out these notions a little. I have little doubt that when we figure it out we'll require lots of computing power, so there's no sense in abandoning it."
He believes the big breakthroughs will instead come from groups like the one led by Palm founder Jeff Hawkins. "He, along with a number of other groups, is really interested in understanding the algorithmic foundations of intelligence, and that's really the hard work."
At the Summit, more of the discussion concerned reverse-engineering the brain from the neurons up, but, he says, "It strikes me that approaches like that would have led us to create flying machines that flap their wings and had mechanical analogues to feathers. I really believe that machine intelligence will follow a path not unlike that which aircraft took – to really understand the underlying physics of lift and drag and the aerodynamics of flight, and then create machines that were optimal in some sense given the available materials."
The most successful attempts so far, he says, have, like Google, used statistical techniques. "About a decade ago, after AI had largely fallen into some decline, people began to introduce these machine learning algorithms, which are fundamentally statistical in nature, and that's completely reinvigorated the field." Now, "That's producing systems that are showing human-like intelligence in narrow ranges, but the idea is that we can expand them and make them more robust."
Rattner's talk explored other futuristic elements: three-dimensional electronics, cognitive radio, and programmable matter, primarily work being done at Carnegie Mellon.
"It's really research at this point," he says of CMU's claytronics project, which relies on tiny processors called "catoms" to create a pocket-sized slab that will turn into anything – a car key or a video screen. "But the rate at which it's advancing has really surprised us. We're beginning to have realistic conversations about getting it down in the range of a few hundred microns. Whether it's five years away or ten years away it's hard to tell." The bigger issue: "How do you program it when you say, turn it into a six-inch display, and all those little elements have to organise themselves in a precise way and carry out unique functions?"
An additional difficulty may be supply; Intel's next generations of processor chips will use the rare elements hafnium, gallium, and indium. Rattner seems unworried, because the quantities used are "minute".
The more immediate problem: "It seems like every day someone wants to take a new element on the periodic chart and ban it."
At the Summit, Rattner projected the Singularity at 2045. Afterwards, he says, "I'm not a practicing Singularitarian. Can you be agnostic?" He adds, "There's little doubt that we will build computers that exceed human intelligence." But humans have survived computing so far. "Would it be totally disruptive if in fact there were machines with far greater intellectual capabilities? I just don't know." µ