ONLINE RETAILER AND WEB HOST Amazon has offered a long-winded explanation and apology for its downed cloud datacentres.
In a 5,765-word statement obviously written by a systems administrator unused to the soundbite brevity of PR, Amazon offered a War and Peace-length explanation and apology for its Amazon Web Services (AWS) and Elastic Compute Cloud (EC2) datacentre outage.
"The issues affecting EC2 customers last week primarily involved a subset of the Amazon Elastic Block Store ("EBS") volumes in a single Availability Zone within the US East Region that became unable to service read and write operations," wrote a spokesperson for the AWS team.
These so-called "stuck" volumes caused read and write problems, so the AWS operators had to disable all control APIs for the degraded EBS cluster. The problem spread across the entire US East Region, where AWS customers including Reddit, Quora and Foursquare were taken offline.
The datacentres were experiencing high error rates and latencies. AWS said it has been trying to restore and stabilise services.
Last week, most of Amazon's US datacentres resumed normal operation but two were still hiccoughing in Northern Virginia. AWS said there were many contributory factors involved, hence the delay and the lengthy post. But it's not a good advertisement for cloud computing, or at least for Amazon's handling of the downed datacentres. Amazon was even criticised by cloud management firm RightScale for its inability to communicate effectively about the outage.
"Last, but certainly not least, we want to apologise," the AWS team eventually said, if you scroll all the way to the bottom... some 5,591 words in. µ