
CERN: The gulf between machine learning and AI
Or why machine intelligence won't lead to Armageddon

LAST WEEK The INQUIRER travelled to CERN in Switzerland to see the Large Hadron Collider and learn more about the IT that makes particle physics tick.
In the first of our reports from the legendary laboratory we talk to Professor Michael Feindt, former CERN researcher and founder of Blue Yonder, the predictive analytics firm that spun out of CERN and whose software still powers some of its work.
Predictive analytics is a form of machine learning. Professor Feindt's work uses neural networks, a term that conjures up images of the type of domesticated synthetic intelligences that have been such a hit for Channel 4 with its sci-fi drama Humans.
However, although machine learning is advancing, it's a long way from the artificial intelligence (AI) we imagine.
Feindt explained: "We are very far away from a machine that is 'intelligent' in the very wide sense we normally attribute to it. Even the best algorithms do not 'think', 'feel' or 'live', they have no 'self-consciousness' and no 'free will'. This is, and will stay, pure science fiction for a while.
"But data-driven machine learning algorithms can optimise and automate a large part (99 percent but not 100 percent) of the daily work of even the most intelligent people.
"White-collar work will be automated. But what to do, predict and decide, and defining the decision criteria - what is good and what is bad - will still be completely human territory for a long time."
Blue Yonder's technology is used at CERN to predict the results of collisions before they happen, so that the data can be pre-sifted to ignore, for example, the vast majority of particles that miss their target entirely.
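By way of illustration only - the snippet below is our own hypothetical sketch, not CERN's or Blue Yonder's actual pipeline - pre-sifting of this kind amounts to a classifier scoring each event and discarding anything unlikely to be interesting before it reaches expensive downstream analysis. The features, model and threshold here are invented for the example.

# Hypothetical sketch of probability-based event pre-sifting.
# Features, model and threshold are illustrative, not CERN's real pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy training data: each row is an event (e.g. energies, hit counts),
# each label says whether the event turned out to be "interesting".
X_train = rng.normal(size=(5000, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

# New events arriving from the detector.
X_new = rng.normal(size=(1000, 4))

# Keep only events whose predicted probability of being interesting
# clears a threshold; everything else is dropped before full analysis.
p_interesting = model.predict_proba(X_new)[:, 1]
kept = X_new[p_interesting > 0.9]
print(f"kept {len(kept)} of {len(X_new)} events for full reconstruction")

In practice the classifier, the features and the cut-off would be tuned to the physics in question; the point is simply that the machine scores and filters, while physicists decide what counts as interesting.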
The confusion between learning and intelligence comes from the idea that, because a machine gets better at doing its job, it is also aware. This simply isn't the case.
"What we at Blue Yonder concentrate on right now is just the opposite: to have the fastest progress there where computers are traditionally better than humans," said Feindt.
"That is in taking account of a huge amount of complicated and correlated data, number crunching, hard and objective computations, decisions based on probability-based predictions, pre-defined cost functions and mathematical optimisation processes. In this area humans are much worse.
"Especially the large number of 'cognitive biases' that humans have, and that played their important role in evolution but hinder progress in the knowledge society, lead to many sub-optimal decisions."
The beauty of this technology is that it can be applied to just about anything that would benefit from prediction.
Earlier in the year we talked about how it was being used to ensure that the right sandwiches in branches of EAT were on the shelves at the right moment. It's just one example of how IT from CERN is touching our everyday lives.
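To make the idea of probability-based predictions and pre-defined cost functions a little more concrete, here is a minimal, hypothetical sketch in Python - not Blue Yonder's actual method, and with made-up numbers - of how a predicted demand distribution and a human-chosen cost function become an automated stocking decision.

# Hypothetical sketch: turning a probabilistic prediction into a decision
# by minimising a pre-defined cost function. All numbers are illustrative.
import numpy as np

# Predicted probability distribution over tomorrow's demand (0..10 units).
demand = np.arange(11)
prob = np.array([0.01, 0.03, 0.07, 0.12, 0.17, 0.20,
                 0.17, 0.12, 0.07, 0.03, 0.01])

COST_OVERSTOCK = 1.0   # cost per unit left unsold (waste)
COST_UNDERSTOCK = 3.0  # cost per unit of unmet demand (lost sale)

def expected_cost(order_qty: int) -> float:
    """Expected cost of stocking order_qty units under the predicted demand."""
    leftover = np.maximum(order_qty - demand, 0)
    shortfall = np.maximum(demand - order_qty, 0)
    return float(np.sum(prob * (COST_OVERSTOCK * leftover +
                                COST_UNDERSTOCK * shortfall)))

# Humans set the strategy (the cost function); the repeated operative
# decision - how much to stock - is optimised automatically.
best_qty = min(demand, key=expected_cost)
print(best_qty, expected_cost(best_qty))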
But the broader question remains as to whether a machine that learns can become 'intelligent' as we know the term. Can a computer become aware? Is life an algorithm or something that requires more than simply throwing more zeros and ones at it at the speed of light?
Feindt believes that, basically, yes, it is. "My belief is that consciousness and intelligence in the broader sense come automatically once a unit has enough computing, memory and I/O devices, and clever, self-organising, learning and self-learning software," he said.
"I think that the clever combination used by nature (slow learning from one generation to the next by genetics, fast learning by organised teaching by parents in school and university, and spontaneous learning using media and interactions with other individuals) will more or less be copied to achieve AI."
But he warned that progress is a long way off. "A part of the scientific community tries to work in the direction of AI, but I am sure that we are still very far away from that," he said.
Feindt pointed out that humans will always have a vital role in decision making. "Still the human alone decides what the strategy is, what goals should be reached," he said.
"But our algorithms break down this strategic goal to an optimisation of the often very large number of (repeating) operative decisions. And this makes a lot of difference, not in the far future but today."
So at least for the moment, we can worry a little less about Elon Musk's ideas of robots that think we're imperfections, and autonomous weapons that will blow us all to kingdom come.
In part two, we'll look at why a site that writes about enterprise IT is interested in a particle physics laboratory, looking at the things that the IT department at CERN has given the world. µ