THE LONDON Infosecurity conference doesn't have the big-name speakers and research news of a Black Hat or RSA Conference. It's more of a big marketplace where security vendors hawk their wares to people frightened by the latest cybersecurity scare stories.
But this year was different, thanks to the addition of a university research and development area where delegates got a chance to see and talk about something genuinely new, rather than the products companies tend to push that are virtually the same as what they sold last year, but with 2010 slapped onto the names.
So The INQUIRER sat down with Andrew Simpson, director of studies for the software engineering programme at the Oxford University Computing Laboratory. Not being attached to a vendor, he gave his own opinions about what he was seeing at the security conference.
"It's a bit scary. It feels as though a lot of people at this show set up a stall to hook in a CIO who doesn't really know about today's technology, and sell them something," he said.
"They'll go back to their technical people and say I bought this, and they'll go what, that's a waste of money. Ninety per cent of the stuff here, nobody needs."
Not the words that the computer security industry would use, obviously. But Simpson's words carry weight since he's part of the Oxford University computer science department, which is looking at ways to improve IT security before the problems start, rather than as a quick fix.
"The department's been around a long time, but usually had a reputation for doing theoretical research," he said. "The founder was keen that theory should be linked with practice, and sometimes we lost sight of that."
"So there's lots of work being done on programming, for example. Things like designing new programming languages so it is harder to make mistakes - automatic checking that can tell you when you've made the types of problems that might lead to SQL injection, buffer overflow, or whatever the latest source of bugs is."
"If you can design those things outside of the programming language in the first place, then you win. We've got a long standing interest in that sort of thing."
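The class of mistake that this sort of automatic checking targets is easy to illustrate with SQL injection. The sketch below is a generic Python illustration, not an example from the Oxford work: building a query by string concatenation lets attacker input rewrite the query's structure, while a parameterised query keeps the same input as inert data.

```python
import sqlite3

# A throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Unsafe: the input is spliced into the query text, so the
# OR '1'='1' clause becomes part of the SQL and matches every row.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: the ? placeholder treats the input purely as data, so the
# literal string "nobody' OR '1'='1" matches no user name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # leaks the secret row
print(safe)    # empty list
```

A checker built into the language or toolchain can reject the first pattern outright, which is the "win" Simpson describes: the bug becomes impossible to write rather than something to patch later.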
But Simpson said that there was a limit to how far they could go with that, because when you have two devices communicating on different sides of a network you can't tell whether the other is doing the right or wrong thing.
"Another big area that the lab has been interested in is analysing security protocols," he said. "The way two devices talk to each other, what assumptions they are making about what is at the other end of the wire, and whether those assumptions can be broken by someone in the middle, and so on."
Simpson said that the security industry might fund a little research, such as a doctoral student, rather than a big strategic partnership.
This is the case with some recent research led by Ilir Gashi, research fellow at the Centre for Software Reliability at City University, who presented Symantec-funded research showing how different anti-virus engines could be used in combination to improve detection capability under real-world conditions.
Anti-virus software normally works by detecting signatures, so are people leaving themselves at the mercy of a single anti-virus company's ability to detect signatures rather than drawing on many different ones?
The idea of combining multiple detectors is not new: a cloud-AV architecture that uses many anti-virus products to improve detection rates has already been examined, and the University of Michigan is developing a system that forwards traffic from a host to anti-virus engines running in the cloud.
But the City University research used real-world data and found that none of the anti-virus engines examined - mostly free ones - achieved a 100 per cent detection rate against the malware tested, yet combining just two free products could yield perfect detection.
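The arithmetic behind that result is simple: if a sample counts as detected when either engine flags it, a miss requires both engines to fail on the same sample. The sketch below uses invented verdicts purely for illustration, not data from Gashi's study.

```python
# Hypothetical per-sample verdicts for two engines over six malware
# samples (True = detected). Invented data, not from the study.
engine_a = [True, True, False, True, True, False]
engine_b = [False, True, True, True, False, True]

def rate(verdicts):
    """Fraction of samples detected."""
    return sum(verdicts) / len(verdicts)

# "1-out-of-2" combination: count a detection if either engine fires.
combined = [a or b for a, b in zip(engine_a, engine_b)]

print(rate(engine_a))   # each engine alone misses some samples
print(rate(engine_b))
print(rate(combined))   # here each engine's misses are covered by the other
```

The combined rate can only reach 100 per cent when the engines' misses don't overlap, which is why the diversity of the engines, not just their individual quality, matters.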
Makes sense, doesn't it? Could we see a cloud-AV architecture in the future?
Gashi said, "In terms of installing more than one anti-virus engine for home use, there are technical and practical difficulties. Sometimes one anti-virus will try and detect another anti-virus as the virus, and try to uninstall it. Of course there are performance implications in trying to run more than one anti-virus on one host machine."
"But there are architectural solutions being developed commercially by a company called GFI which uses more than one anti-virus engine for detection of malware in emails. But they don't run the anti-virus engines on the same host."
But he said that this type of system might make more sense in the corporate world, provided companies could get server space to deploy the anti-virus engines on virtualised machines.