Amazon and Greenpeace: Dark Clouds on the Horizon?
05/03/11 5:00 AM PT
The buzz around cloud computing has been so steady for so long that industry observers should be forgiven if they were lulled to sleep. But events of the past couple of weeks served as a cold-water wake-up call, casting a shadow over the cloud's supposedly bright future.
The first was an unplanned outage at an Amazon Web Services (AWS) datacenter in Northern Virginia that lasted for days and affected numerous websites including Foursquare, Hootsuite, Quora and Reddit. The company's Elastic Block Store (EBS) service appears to have contributed significantly to the outage. By the beginning of last week, Amazon said the problem had been largely corrected.
The second was the publication of a new Greenpeace report, "How Dirty is Your Data?" which calculates the "energy ethics" of companies such as Apple, Amazon, Facebook, Google, HP, IBM, Microsoft, Twitter and Yahoo. Based on publicly available information, the report aims to "grade" companies by analyzing their energy transparency and mitigation efforts, as well as where their existing and upcoming cloud datacenters are sited. Yahoo received the highest overall grade, while Apple was cited as the least energy ethical.
Both events stirred commentaries and analyses, particularly the Amazon outage (not surprising, given the length of time involved), including speculation about their overall effects on the progress of cloud computing. But will either event measurably slow cloud adoption or raise significant concerns about its viability? From where I stand, the answers are yes and no, though for significantly different reasons.
Amazon's case was seemingly more complex, as it incorporated the actual outage of the AWS datacenter, its rolling effects on various websites and services, and the quality of the company's communications efforts around the event. Almost anyone familiar with datacenter operations could and should sympathize with Amazon. Outages, even severe outages, are a fact of IT life. As a result, companies are usually judged according to how quickly they respond to a problem and whether they are able to prevent similar events from recurring.
Amazon's AWS is generally well regarded, but as the days stretched on, patience among the affected sites and their clients wore thin. That was hardly surprising, but it created the opportunity to consider just how and how well organizations were leveraging the AWS cloud. Some high-profile clients, including Netflix and SmugMug, were hardly touched at all, largely because they had designed their environments for high availability (in line with Amazon's guidelines), using AWS as merely one of several IT resources.
On the other hand, sites that depended largely or entirely on Amazon for their online presence were badly burned (including some which really should have known better). Are there lessons in all this? Absolutely -- but they're ones that sensible datacenter owners and IT managers have known for years: 1) Systems that rely on single points of failure will fail at some point; and 2) Companies whose services largely or entirely depend on third parties can do little but fret, complain, apologize, pray and twiddle their thumbs when things go south.
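The single-point-of-failure lesson above can be sketched in a few lines of Python. The function and the simulated failure below are hypothetical stand-ins, not a description of what Netflix, SmugMug or any AWS customer actually ran: the idea is simply that a service tries independent resources in order and only fails when every one of them does.

```python
def fetch_with_failover(sources):
    """Try each data source in order; return the first result that succeeds.

    `sources` is a list of zero-argument callables, each standing in for a
    request to one provider or region (illustrative only).
    """
    errors = []
    for source in sources:
        try:
            return source()
        except Exception as exc:  # that source is down; move to the next one
            errors.append(exc)
    # Every resource failed -- the single-point-of-failure scenario.
    raise RuntimeError(f"all {len(errors)} sources failed")

def flaky_primary():
    # Simulates the primary region during an outage.
    raise ConnectionError("storage volumes unavailable")

def healthy_fallback():
    # Simulates an unaffected secondary resource.
    return "served from fallback"
```

A caller with only `flaky_primary` in its list gets the `RuntimeError`; one that also lists `healthy_fallback` keeps serving. That, in miniature, is the difference between the sites that rode out the outage and the ones that went dark.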
Looking at the bigger picture, the fact that disaster is inevitable is why good communications skills are so crucial for any organization to develop, and why Amazon's flabby, dissembling public response made a bad situation far worse than it really needed to be. While the company has been among the industry's most vocal cloud services cheerleaders, it seemed essentially tone deaf to the damage its inaction was doing to public perception of not just itself but of cloud computing in general.
It's interesting to note that Amazon's is simply the latest in a series of public relations fumbles that have done or threaten to do significant damage to their perpetrators, including the exposure of tracking data collection on iPhones and Android handsets, the theft of user data on Sony's PlayStation Network and Epsilon's exposure of credit card holders' data.
At the end of the day, we expect Amazon will use the lessons it learned from the AWS outage to improve its cloud services. But if the company doesn't also closely evaluate the communications efforts around the event, Amazon's and its customers' suffering will be wasted.
Greenpeace's Energy Ethics
Greenpeace's conclusions are interesting, but I have some serious reservations about the methodology of the "How Dirty is Your Data?" report. The group's reliance on publicly available information is the most obvious weakness, particularly as it uses that data to judge companies that, for competitive reasons, are about as likely to discuss what's going on behind the walls of their datacenters as Hollywood stars are to detail their latest botox treatments and surgical procedures.
That takes a significant bite out of Greenpeace's claims of accuracy, though it adds some credence to the group's belief that companies as energy-reliant as cloud vendors/promoters (the report estimates that datacenters consume 1.5 percent to 2 percent of all global electricity, an amount that is growing at 12 percent per year) should be more transparent about the way they use or misuse energy resources. Though it may be natural for Greenpeace to skewer companies that site datacenters near inexpensive coal-generated electrical grids, doing so makes the group appear woefully naive about the basic "don't spend more than you have to" rules of corporate governance.
Those concerns aside, I believe "How Dirty is Your Data?" garnered considerably less attention and more criticism than it deserved as it arrived at a critical juncture for both cloud and energy constituencies. After years of promotion and chatter, cloud computing appears to be developing momentum, especially as a foundation for burgeoning mobile services and devices.
At the same time, the energy grids supporting those datacenters are becoming increasingly fragile. The Fukushima Daiichi disaster has derailed or entirely eliminated plans for new nuclear power plants. Global warming trends are projected to put further pressure on renewable energy resources in some regions (particularly hydroelectric power in the U.S. West and Southwest).
In other words, at the same time technologies like cloud computing are finally coming into their own, the options for generating the power they require are dwindling. While the development of alternative technologies such as solar and wind power is admirable, they are neither mature nor cost-effective enough to take up the slack any time soon.
Though this may seem to be a singularly grim viewpoint, it is anything but. Challenging times are nothing new, and history is rife with examples of people rising to the occasion. Smart, dedicated men and women are doing good work, and new, innovative technologies continue to be developed.
Even the IT companies that are utilizing the coal-generated energy that Greenpeace frowns upon are making the most of it. Today's cloud datacenters maximize compute performance and energy efficiency to a degree that IT has seldom, if ever, seen.
If there is a common lesson in the Amazon outage and the Greenpeace report, it is this: that all actions have consequences -- planned and unplanned, expected and unintended. That is certainly the case for companies that offer services dependent on an infrastructure which, even in the best scenarios, is imperfect.
It is also the case for industries embracing a technology whose success depends on an energy infrastructure facing certain, rapid, potentially radical change. The journey that led us to this place required certain kinds and qualities of talent. The road ahead is likely to demand new, substantially different skills.