
EXPERT ADVICE

Choosing Your Path to Data Center Efficiency

Companies have long focused on projects that save costs, both operational and capital, across a wide range of IT functions. The one area where they have generally shied away from drastic change is data center buildout and operations, which in most cases has been handled in a traditional manner: treated as a sunk cost necessary to keep things running.

One could not have blamed them either. Traditionally, IT hardware has not been space efficient — nor were there tools available that allowed companies to measure and maximize the utilization of their hardware. Therefore, it made sense to set it and forget it, as long as it was working the way it was supposed to.

In the last several years, however, initiatives aimed at saving energy costs, reducing carbon footprints and greening the data center have brought this facet to the forefront for many companies. Organizations have begun, with good reason, to think about how any IT initiative affects their data center footprint. As a result, reducing that footprint has become a critical success factor for projects in this area.

Following are some of the areas companies are beginning to focus on to reduce data center costs and optimize efficiency.

Buy vs. Build

Historically, companies have chosen to build their own data centers. In the nineties, this model was briefly abandoned in favor of a hosted model — that is, servers and other hardware assets were hosted at a third-party data center. That model became unpopular in previous recessionary periods but is making a comeback.

Companies trying to consolidate existing data center space should seriously examine the colocation facilities offered by various vendors, which may make the task of consolidation faster, easier and more cost-effective.

Floor Space Consolidation

It used to be that companies gauged the soundness of their IT strategy by the number of data centers they maintained. That practice has slowly fallen out of favor, replaced by a desire to run as few data centers as possible. This is especially the case after a merger or acquisition, when the combined company may have to deal with a great deal of legacy floor space.

In my experience, some companies have been very aggressive in consolidating legacy data centers into a smaller combined set. Most others, however, take a long time to address the issue, burning precious dollars in the process. Companies that find themselves in this situation should take charge and make consolidation their primary focus.

As a result of the current economic situation, there is a glut of built-out data center space and it behooves companies to investigate their options.

Asset Consolidation

In the past, consolidating assets was a challenge, whether because of their physical form factor and size or because of their power and cooling requirements. Vendors, however, have since addressed these issues in several different ways.

One option is to examine bigger, denser racks. These special-purpose racks are designed to house legacy hardware while accommodating standard rack-mount servers as well.

At the same time, since they are slightly taller and wider, they can be packed quite densely, making much more efficient use of the space they offer. If your assets are spread across multiple facilities, a staggered buildout combined with floor space consolidation can yield significant cost savings.
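
To make the potential savings concrete, here is a rough back-of-the-envelope sketch. The server count, rack capacities and floor-space figure are illustrative assumptions, not vendor specifications:

```python
# Rough estimate of floor-space savings from moving to denser racks.
# All figures are illustrative assumptions, not vendor specifications.

SERVERS = 960          # total 1U rack-mount servers to house
STANDARD_RACK_U = 42   # usable rack units in a standard rack
DENSE_RACK_U = 52      # usable rack units in a taller, denser rack
SQFT_PER_RACK = 10     # floor space per rack, including clearance

def racks_needed(servers: int, units_per_rack: int) -> int:
    """Whole racks needed for the given number of 1U servers."""
    return -(-servers // units_per_rack)  # ceiling division

standard = racks_needed(SERVERS, STANDARD_RACK_U)
dense = racks_needed(SERVERS, DENSE_RACK_U)

print(f"Standard racks: {standard} ({standard * SQFT_PER_RACK} sq ft)")
print(f"Dense racks:    {dense} ({dense * SQFT_PER_RACK} sq ft)")
print(f"Floor space saved: {(standard - dense) * SQFT_PER_RACK} sq ft")
```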

Service-Based Models

With cloud computing and storage becoming very popular, and with a range of software-, application- and infrastructure-as-a-service offerings now available, companies no longer have to invest in expensive hardware to run their applications, saving data center floor space in the process.

Vendors like Salesforce.com and others have demonstrated that companies of all sizes can effectively and efficiently make use of a service-based model for a large chunk of their applications. Fears about security have also been laid to rest as these vendors have invested in ensuring that their applications are secure and the customer’s data is safe.

A service-based model is slowly becoming mainstream, even for email and database systems, which are traditionally kept within company boundaries. For customers trying to find ways to save on data center costs, an “as a service” model is the biggest boon the Internet can offer.

Technology Investment

Ultimately, if a company decides against a service-based model in favor of owning its own data centers, there are plenty of technologies out there that will allow it to make the most out of its investment. Here are a few:

  • Ultra-dense servers: Whether they are blade servers or tightly packed rack-mount servers, most new data-center-grade server hardware these days is built to make the most of the rack that houses it. With efficient cooling, a rack can safely be filled to its maximum allowable capacity, and with efficient management of cable, power and cooling, all of the server hardware can be packed into as few racks as possible.
  • Ultra-dense storage systems: Whether the hardware is modular or monolithic, vendors have started designing new-generation arrays that pack more terabytes per square foot than their predecessors. These arrays offer state-of-the-art management software to ensure that packing storage so densely does not compromise performance or functionality.
  • Intelligent storage technology: Thin provisioning and online deduplication allow physical storage to be oversubscribed, presenting servers with more capacity than is physically allocated and storing duplicate data only once, so that wastage is minimized. These technologies should be deployed with care, but when deployed correctly, they bring about a new way to provision and manage storage. (A minimal sketch of the deduplication idea appears after this list.)
  • Server virtualization: These days, no data center strategy can survive without the “V” word. What companies often don't realize, however, is that virtualization is not just VMware. VMware is certainly the most popular and widely deployed virtualization solution out there, but today almost every major vendor offers an effective and robust virtualization solution for its operating environment. Even if your systems strategy leans heavily on Unix, virtualization is still an option; in fact, you may be surprised at how easily you can virtualize your environment.
  • Data protection: Companies are rethinking how their data should be protected. The traditional approach of making copy after copy of the same data on disk and tape is giving way to intelligent data protection technologies. With data deduplication and virtual tape libraries, large tape silos can be replaced by smaller ones backed by dense but inexpensive disk, with only the necessary amount of data written to tape.
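
As noted above, here is a minimal sketch of the idea behind deduplication: identical chunks of data are stored once and referenced by a content hash. This is a toy illustration; real arrays use variable-size chunking, collision handling and persistent indexes.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size in bytes

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept once."""

    def __init__(self):
        self.chunks = {}  # content hash -> chunk bytes

    def write(self, data: bytes) -> list:
        """Split data into chunks, store unique ones, return hash refs."""
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only if new
            refs.append(digest)
        return refs

    def read(self, refs: list) -> bytes:
        """Reassemble the original data from its chunk references."""
        return b"".join(self.chunks[d] for d in refs)

store = DedupStore()
backup1 = store.write(b"A" * 8192 + b"B" * 4096)  # first full backup
backup2 = store.write(b"A" * 8192 + b"C" * 4096)  # mostly unchanged data
print(f"Logical chunks written: {len(backup1) + len(backup2)}")  # 6
print(f"Physical chunks stored: {len(store.chunks)}")            # 3
```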

Measuring Data Center Utilization and Efficiency

Tools are now available to measure how efficiently a data center consumes resources such as power and cooling, letting companies gauge how efficient their data centers are overall.

Regardless of how densely populated the racks are, these tools can verify that rack layout and power and cooling efficiency are optimal. At the same time, they can ensure that the share of power consumed by usable assets (servers, storage and so on), rather than by overhead, is maximized.
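
A widely used metric for this is power usage effectiveness (PUE): total facility power divided by the power delivered to IT equipment, where a value close to 1.0 means nearly every watt reaches the IT gear. The sketch below uses made-up readings:

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to servers, storage and network gear.
# The readings below are made-up, illustrative numbers.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

total_kw = 1200.0  # utility feed: IT load plus cooling, UPS losses, lighting
it_kw = 750.0      # measured at the PDUs feeding the racks

print(f"PUE: {pue(total_kw, it_kw):.2f}")                            # 1.60
print(f"Overhead share: {1 - it_kw / total_kw:.0%} of total power")  # 38%
```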

Rethinking Systems Management

Whether assessing their storage, databases, systems or applications management, companies need to ensure that they can get their proverbial arms around measuring how those assets contribute to the efficiency, or inefficiency, of the data center resources being utilized.

With tools to analyze and model ways of improving this utilization, companies can reduce risk by taking corrective action wherever possible.
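
As a hypothetical illustration of what such analysis tooling might do, the sketch below flags underutilized assets against a target threshold. The asset names, figures and the 40 percent threshold are invented for the example:

```python
# Hypothetical sketch: flag assets whose utilization falls below a target,
# so that consolidation candidates can be identified.
# Asset names, figures and the 40% threshold are invented for illustration.

TARGET_UTILIZATION = 0.40

assets = [
    {"name": "db-server-01",  "type": "server",  "utilization": 0.72},
    {"name": "app-server-07", "type": "server",  "utilization": 0.18},
    {"name": "array-03",      "type": "storage", "utilization": 0.35},
]

candidates = [a for a in assets if a["utilization"] < TARGET_UTILIZATION]

for asset in candidates:
    print(f"{asset['name']} ({asset['type']}): "
          f"{asset['utilization']:.0%} utilized -- consolidation candidate")
```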


Ashish Nadkarni is a principal consultant at GlassHouse Technologies.

