Yet crowdsourcing can be extremely effective, as MIT's Riley Crane showed in answering DARPA's challenge to find 10 weather balloons moored around the US. The MIT team used social networks and a pyramid of financial incentives to recruit volunteers, their friends, and friends of those friends to report sightings - and won by finding all 10 within nine hours.
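As reported at the time, the team's "pyramid" was a recursive incentive scheme: each balloon carried a fixed bounty, half of which went to the finder, with each level up the recruitment chain receiving half as much again. A minimal sketch (the $4,000-per-balloon figure matches contemporary coverage; the function name is mine):

```python
def payouts(bounty=4000.0, chain_length=4):
    """Return the reward at each level of a recruitment chain:
    the finder gets half the bounty, the finder's recruiter a
    quarter, and so on, halving at every level."""
    rewards = []
    share = bounty / 2          # finder's share
    for _ in range(chain_length):
        rewards.append(share)
        share /= 2              # each recruiter up the chain gets half as much
    return rewards

print(payouts())  # [2000.0, 1000.0, 500.0, 250.0]
```

The geometric halving means the total paid out never exceeds the bounty no matter how long the chain grows, which is what makes the scheme safe to advertise in advance.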
"Not all hard problems can be solved by aggregation," he said. Unlike movie recommendations or Google Instant, problems like the balloon challenge require "coordination or collaboration". The reports of sightings weren't enough by themselves. The team had to eliminate false reports by comparing reported locations with IP addresses.
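The article doesn't say how the cross-check worked, but one plausible reading is a simple sanity filter: geolocate the sender's IP address and discard reports claiming a sighting implausibly far from where the reporter actually was. A hypothetical sketch (function names, coordinates and the 500 km threshold are all illustrative assumptions):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def plausible(reported_latlon, ip_latlon, max_km=500.0):
    """Hypothetical filter: accept a sighting only if the reporter's
    IP geolocates within max_km of the balloon location they claim."""
    return haversine_km(*reported_latlon, *ip_latlon) <= max_km

# A Boston sighting reported from a Boston-area IP passes;
# the same claim from a San Francisco IP is flagged as suspect.
print(plausible((42.36, -71.06), (42.40, -71.10)))   # True
print(plausible((42.36, -71.06), (37.77, -122.42)))  # False
```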
"This is a toy problem," he said, "but it's still starting to show some of the possibilities of what we're going to be able to do in future."
Other interesting approaches included that of the MIT Media Lab's Alexander Wissner-Gross, who argues that if a planet-scale superhuman intelligence emerges, it will most likely come from either the quantitative finance or advertising industries: both offer strong incentives and large rewards for improvement. The proposal seems dubious on one count: given recent market history, it's arguable that a quantitative-finance-derived AI would crash every six months.
Exactly how much AI should resemble humans is a long-running debate. We are limited by our biology: our knowledge, as Ken Jennings commented, is unevenly distributed among categories, and our response times vary.
The British mathematician Stephen Wolfram, whose Wolfram Alpha web site delivers expert knowledge on an expanding range of topics, represents a different approach. "When I was younger I thought some core breakthrough would suddenly give us AI," he said. Instead, the work he profiled in A New Kind of Science led him to conclude that all you needed was computation. "I'm still often surprised that [Wolfram Alpha] is possible as a practical matter at this point in history." Wolfram Alpha does not try to copy the human brain: it simply provides a structure on which human intelligence can build.
A likely key either way is the rule stated by the 18th-century British mathematician the Reverend Thomas Bayes, who would surely be astounded if he could wake up for a day and see the impact of his ideas. Among those impacts: in 2000, Bayesian methods helped turn Autonomy CEO Mike Lynch into Britain's first software billionaire (he has just sold his company to HP for £8bn).
Yet for centuries, explained Sharon Bertsch McGrayne, author of The Theory That Wouldn't Die, mentioning Bayes was career suicide. The reason: Bayes' willingness to begin his search for the cause of an effect with a guess, a strategy long despised as subjective. But his methodology, in which your answer changes as you acquire more data, is the way all human learning works.
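That "answer that changes as you acquire more data" is just Bayes' rule applied repeatedly: the posterior belief after one observation becomes the prior for the next. A minimal sketch for a yes/no hypothesis (the 0.5 starting guess and 90%-accurate test are illustrative numbers, not from the article):

```python
def bayes_update(prior, likelihood_true, likelihood_false):
    """One application of Bayes' rule:
    posterior = P(data|H) * P(H) / P(data)."""
    numerator = likelihood_true * prior
    evidence = numerator + likelihood_false * (1 - prior)
    return numerator / evidence

# Start from a guess - the step Bayes' critics despised as subjective -
# then let each new observation revise it. Here a test that is right
# 90% of the time comes back positive three times in a row:
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.9, 0.1)
print(round(belief, 3))  # → 0.999
```

The subjective starting point washes out as evidence accumulates, which is why the same machinery underlies everything from spam filters to the search for lost submarines.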
"Bayes is the foundation of the Singularity and AI, and if AI passes the human brain," she said, "Bayes will be there, too."