People under the age of 25 are too young to be able to afford cynicism - Diogenes the Pseudo Pesky Cynic
THE LATE Steve Jobs announced the Apple Macintosh computer 30 years ago this week, and with it he introduced the graphical user interface (GUI) to mainstream computer users for the first time, paving the way for the user interfaces we see on just about everything today.
Apple didn't invent the GUI, of course, but with the launch of the Macintosh it began what became the hallmark of the company - spotting an emerging technology with the potential to make computers more capable and accessible, and developing it into a line of carefully designed, compelling products.
In fact, the Macintosh wasn't even the first Apple system to use a GUI – that honour fell to the innovative but costly Lisa that the firm launched in 1983, a year before the Macintosh. However, the Mac introduced the formula of an on-screen desktop with applications running in separate windows and driven by a mouse to a much wider audience by bringing the cost down to something many people could afford.
It's difficult today to appreciate what a huge impact the GUI had on the computer industry. Users of other computers at the time had to perform every action - from starting an application to copying files - by typing arcane sequences of commands at a blinking video terminal prompt.
Apple repeated its magic touch in the last decade by developing the iPhone and then the iPad. Touchscreens already existed, but with the emergence of new capacitive touch technology, Apple saw the opportunity to develop an intuitive user interface that enabled gestures such as tapping and swiping with your fingers.
The key point is that these user interfaces are suited for different use cases. The mouse and keyboard are well adapted for desktop use, whereas the touchscreen is a much more sensible and natural way to operate a compact, handheld mobile device.
However, it seems that innovation in user interfaces has stagnated lately. The GUI is still the most common way that people interact with computers, and most attempts to improve on the GUI have turned out to be largely cosmetic or even counterproductive, actually making the system less intuitive to use. The tiled Metro user interface seen in Windows 8 is a good example.
Even Apple has run into criticism, with the changes to its user interface in iOS 7 meeting with a storm of negative comments from users, while the elegant simplicity of the OS X GUI has been somewhat tarnished by the constant addition of new features and gimmicks over the years.
What will be the next great user interface breakthrough? Speech recognition still languishes in the doldrums, Apple's Siri notwithstanding, because despite decades of research and development it remains hopelessly unreliable.
Meanwhile, motion-sensing interfaces like Microsoft's Kinect seem to have few practical applications outside the world of gaming. Likewise, the flurry of attention paid to 3D user interfaces a few years ago seems to have abated without producing any notable end products.
Perhaps some user interface innovation can be expected from developers working with Google Glass, which seems to be begging for an effective and unobtrusive way for the user to control the wearable miniature computer.
However, with device formats proliferating and the lines between devices such as tablets and smartphones blurring, it would be nice to see the development of a user interface that works equally well across all of them. Otherwise, we might have to wait quite a while, until fluent speech recognition finally becomes reliable and is hooked up to semantically proficient artificial intelligence. µ