ANY DISCUSSION about how to make security usable sounds a lot like the discussions of the 1980s about airplane safety.
At that time, the big issue was cockpit design: how could an increasingly complex set of instruments be organised to minimise pilot error? Those discussions translated into the software usability movement of the early 1990s, spearheaded by Donald Norman's 1988 classic book, The Design of Everyday Things. Every consumer and business software company has a usability group whose members were influenced by Norman; but many security people have never heard his name.
"The general issue we have is basically a failure to design security in," says a lecturer in computer science at University College London who specialises, unusually, in security and human factors. "There's a rush for functionality, and we're failing to do risk analysis and consider that, and then once it's done go around the block again with design."
Security, like usability, works best if it's designed in from the beginning. But, she says, as a recent article in The Economist points out, "unless the functionality is in place and proven to be worth something to somebody, thinking about security doesn't happen – and then they wait until problems occur and the security measures are stuck on afterwards, and it always makes for a system that's less efficient and harder to use."
But the deeper problem is that most people – including security people – are goal-oriented.
"Security people perceive security as the primary goal," she says. "And of course it's not. In usability terms we distinguish between the primary and secondary goal – the production task and the enabling task. Security is enabling the long-term survival of the system, but people look at their primary task, their main job, and don't see the benefit of security. When that's the case, and they perceive it as an obstacle in the way of the goal, then of course they try to get around it."
So, the two problems with security become: "First, obstacles are made too big because of bad design, and second, they don't perceive or understand the risks." Plus, humans are generally poor at risk assessment anyway.
As we move increasingly into a situation where everything is computerised, these issues are going to loom larger and larger. "Almost everybody who uses a computer has to use security every day," she says, "and there's no concern about usability. Is the idea that it should be hard because it's important?"