THE RECENT spate of cloud service outages has highlighted the need for open clouds, and shown that scale alone does not guarantee resilience.
Microsoft's Azure cloud service suffered a three-hour outage earlier this week, the G-Cloud, the UK government's stuttering cloud initiative, had its own hiccup, and micro-blogging web site Twitter also went dark. Yet cloud providers still promote their services as a reliable and cost-effective way to outsource infrastructure, when in truth migrating services to the cloud requires a complete redesign of a firm's architecture if it is to be reliable.
Firms looking to move to the cloud are usually bombarded with buzzwords like elastic on-demand capacity and economies of scale, terms that make pinstriped decision makers see pound signs instead of warning signs. Relying on a single cloud service provider is a fool's paradise, and porting to the cloud an existing infrastructure built on the assumption that servers are highly available and well provisioned is simply foolish.
The fact is that Amazon, Google, Microsoft, Rackspace and just about every other cloud service provider out there are trying to maximise the use of their resources. This simple business practice should ring alarm bells, and users should not treat cloud instances as like-for-like equivalents to physical server deployments.
So the simple answer would be to have a backup strategy, multiple deployments and partitioning of services. The theory is of course absolutely right, but there are two significant problems - ensuring portability and managing seamless failover.
The majority of big cloud providers use proprietary APIs for service access and service discovery - Openstack, though it is getting a lot of attention, still has some maturing to do.
It is ludicrous that any company would tie itself to a cloud provider that requires its developers to use a set of APIs that allows for no portability. Rackspace has been aggressively pushing Openstack, and clouds based on open standards will make it considerably easier to create 'cloud-ready' systems that can be deployed on multiple cloud providers.
Portability is only part of the solution. Firms will also need to work on elegant switching between providers should service be disrupted. Given the importance of this function, firms should consider maintaining this capability in-house rather than offloading it to save a few pounds.
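The switching described above boils down to health-checking an ordered list of providers and routing to the first one that responds. A minimal sketch of that logic is below; the provider names and health URLs are purely illustrative assumptions, and a real deployment would probe each provider's own status API rather than these made-up endpoints.

```python
import urllib.request

# Hypothetical providers in priority order; names and URLs are
# illustrative only, not real endpoints.
PROVIDERS = [
    {"name": "primary-cloud", "health_url": "https://primary.example.com/health"},
    {"name": "backup-cloud", "health_url": "https://backup.example.com/health"},
]

def is_healthy(provider, check=None):
    """Return True if the provider looks up.

    A `check` callable can be injected for testing; by default we
    probe the provider's health URL over HTTP with a short timeout.
    """
    if check is not None:
        return check(provider)
    try:
        with urllib.request.urlopen(provider["health_url"], timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_provider(providers, check=None):
    """Walk the priority-ordered list and return the first healthy provider."""
    for provider in providers:
        if is_healthy(provider, check):
            return provider
    raise RuntimeError("no healthy provider available")

# Simulate the primary going dark, as in this week's outages:
# traffic should fail over to the backup.
status = {"primary-cloud": False, "backup-cloud": True}
chosen = pick_provider(PROVIDERS, check=lambda p: status[p["name"]])
print(chosen["name"])
```

In practice this decision would sit behind DNS or a load balancer rather than application code, but the principle is the same: the failover path must be owned and tested by the firm itself, not assumed to exist.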
What repeated outages at Amazon, Microsoft and Twitter have highlighted is that the cloud isn't a magic bullet for companies wanting to jump on the latest fashionable technology trend. Cloud service providers might present the cloud as a shiny new technology but the truth is, it is a software layer that sits on top of the same sort of hardware that has been powering services for decades.
Until firms realise that they need to treat the cloud as a service that can go down - because ultimately instances still run on single machines, just as they did with traditional client-server set-ups - there will be many more outages that knock internet services offline. µ