EXPERT ADVICE

Succeeding With Desktop Virtualization

Several years ago analysts began trumpeting the finding that operational costs had surpassed capital costs as the dominant source of expense in IT infrastructure. Despite evidence supporting this claim, many IT managers for a time continued to focus primarily on capex. However, over the past few years, server virtualization and other data center consolidation efforts have squeezed costs from the capex side of the equation, resulting in a broader awareness of operational costs along with a greater impetus to address the problem.

The desktop is an environment where an even stronger case can be made for operational costs trumping capital costs. Over the lifetime of a PC, the cost of managing, troubleshooting, patching and operating it is estimated to be as much as three to five times the cost of the device. That estimate doesn’t begin to take into account the risks associated with the often deficient data management and security practices that accompany the sprawl of desktop and mobile devices.

It’s not uncommon to find situations where important data is stored on PC hard drives that are not backed up or otherwise protected. In many cases, there is little or no control over what applications or other data sources may find their way onto a given system. PCs also spend a lot of time sitting idle and consuming power, making them inefficient in spite of power management profile options.

The rationale of underutilized capacity that fueled the drive to server virtualization is even more applicable to the desktop. PCs with multi-core CPUs, high performance graphics, and large hard disks, while seemingly cheap on an individual unit basis, represent a poorly utilized asset and an attractive target for efficiency improvement when viewed as an aggregate of compute capability. As a result, it’s no surprise that desktop virtualization has attracted a high level of interest in organizations striving to rein in IT costs — as well as those seeking to improve data management.

However, some early experiences with virtual desktop infrastructure (VDI) did not produce the desired savings and ran into serious challenges with user satisfaction. First-generation VDI typically required a full virtual machine image for each user, consuming large quantities of storage. It’s hard to build a business case for replacing 20 GB of (relatively) cheap SATA storage inside a PC with an equal amount of very expensive SAN storage in an enterprise array. Likewise, IO bottlenecks seriously impacted the user experience, causing dissatisfaction among target users.

The question then becomes this: Is it possible to realize the cost and efficiency benefits of VDI while still providing a quality user experience? The answer, of course, is yes. However, doing so requires an understanding of the underlying considerations for efficiency and performance, and knowledge of the various options and approaches available to best address the needs of a particular environment.

Capacity and IO

While operational costs offer the lion’s share of cost-saving opportunities, a desktop virtualization project is unlikely to show a positive ROI if it increases capital costs, and enterprise storage can be one of the key contributors to diminished ROI. Displacing local PC disks with enterprise storage carries an inherent cost disadvantage, but it also presents an opportunity to aggregate common and redundant data.

Given the commonality of OS and application files, it is entirely possible to dramatically reduce the quantity of storage required for a virtualized desktop environment and, at the very least, neutralize this aspect of the ROI. For many, this is a critical factor in getting a VDI project off the ground.

Likewise, the user experience can make or break a VDI initiative, and a key element of the user experience is responsiveness, which depends on IO performance. At first glance, the IO requirements for many PC users appear to be quite low when viewed as an average. However, designing a solution based on average IO requirements can be a mistake, because there are times when IO can jump an order of magnitude or more above the average.

The aggregate effect of users logging in within the same time period can generate enough activity to exceed the IO capability of a SAN storage array. These periods of burst IO, often related to booting and log-in activity, can have an enormous impact on the user experience and overall acceptance of VDI.
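
To illustrate why sizing for the average falls short, consider a rough back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not measurements from any particular environment:

    # Back-of-the-envelope VDI IO sizing sketch (all figures are illustrative
    # assumptions, not measurements from any particular environment).
    users = 1000                 # virtual desktops hosted on shared storage
    steady_state_iops = 10       # assumed average IOPS per desktop during the workday
    boot_storm_iops = 150        # assumed per-desktop IOPS while booting and logging in
    boot_fraction = 0.30         # assume 30% of users log in within the same window

    average_load = users * steady_state_iops
    burst_load = (users * boot_fraction * boot_storm_iops
                  + users * (1 - boot_fraction) * steady_state_iops)

    print(f"Steady-state aggregate load: {average_load:,.0f} IOPS")
    print(f"Log-in burst aggregate load: {burst_load:,.0f} IOPS")
    print(f"Burst-to-average ratio:      {burst_load / average_load:.1f}x")

Even with modest assumptions, the morning log-in burst dwarfs the steady-state load, which is why burst IO, not the daily average, should drive the storage design.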

Vendor Options

It’s interesting to note that these two issues of capacity and IO are actually related. In both cases, the techniques associated with reducing storage consumption can contribute to improving IO. There are several approaches to addressing these issues.

VDI software vendors are understandably focused on solving them, but solutions are also available from traditional storage vendors and from a newer breed of companies focused specifically on the virtualization space.

At the VDI software level, current-generation products from companies like VMware and Citrix offer the ability for virtual clients to share common images, thereby reducing the number of unique copies required. VMware’s linked clones allow a gold master image to be shared by personalized clones, and Citrix offers the capability to separate OS, application and user profile elements. The net effect of both approaches is to eliminate multiple copies of the common data and store only the differences.
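
A rough sketch of the capacity math shows why this matters. The image and delta sizes below are hypothetical and are not drawn from either vendor’s documentation:

    # Hypothetical comparison of full-clone vs. shared-image storage consumption.
    # All figures are illustrative assumptions.
    users = 500
    full_image_gb = 20           # assumed size of a complete desktop image
    delta_per_user_gb = 2        # assumed per-user writable delta (apps, profile, swap)

    full_clones_gb = users * full_image_gb
    shared_image_gb = full_image_gb + users * delta_per_user_gb

    print(f"Full clones:          {full_clones_gb:,} GB")
    print(f"Gold master + deltas: {shared_image_gb:,} GB")
    print(f"Capacity reduction:   {1 - shared_image_gb / full_clones_gb:.0%}")

Under these assumptions, the shared-image approach consumes roughly a tenth of the capacity of full clones, which can be the difference between a viable and a non-viable storage budget for the project.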

Redundancy and poor utilization are global issues that storage vendors have been striving to tackle, so it should come as no surprise that they too offer solutions targeted at desktop virtualization. These usually involve the use of storage-based snapshots, possibly combined with thin provisioning and even data deduplication.

The latter technology, although most closely associated with backup, is being leveraged for primary storage in virtualization applications by some vendors, notably NetApp. In addition, the virtualization tsunami is likely to encourage others to introduce primary storage deduplication offerings.

The problem of IO is complicated by the fact that traditional enterprise-class storage systems are not generally optimized for the usage characteristics of a VDI environment. Storage caches are typically configured for a high read:write ratio and tend to expect some locality of data references. In a VDI profile, the read:write balance can be equal or even reversed, and, depending on the structure of the virtual guest images, IO may be highly randomized, resulting in little or no locality of reference for the cache.
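
The practical impact of poor cache locality can be approximated with a simple effective-latency model. The latency figures and hit rates below are assumptions chosen for illustration only:

    # Simple effective-latency model showing why low cache locality hurts
    # responsiveness. Latencies and hit rates are illustrative assumptions.
    def effective_latency_ms(hit_rate, cache_ms=0.5, disk_ms=8.0):
        """Blend cache and back-end disk latency by the cache hit rate."""
        return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

    for hit_rate in (0.90, 0.50, 0.10):
        print(f"hit rate {hit_rate:.0%}: ~{effective_latency_ms(hit_rate):.1f} ms per IO")

As the hit rate falls, average IO latency converges on raw disk latency, which is exactly what users perceive as a sluggish desktop.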

VDI software vendors have worked to improve workload efficiency, and the previously mentioned use of shared common images can improve cache locality, but there is still a challenge associated with storage IO that can only really be addressed by intelligent architecture and configuration of the storage environment. From a technology perspective, the advent of flash memory devices could be a boon to VDI environments. While flash has been expensive, it is becoming more affordable, and there is evidence that, if intelligently applied, it can serve as a significant enhancement.

In addition, a new class of vendors is emerging with products targeted at the specific needs of desktop virtualization. For example, Atlantis Computing’s ILIO (In-Line Image Optimization) is a virtual appliance that manages the separation of OS, application and user profile data for storage efficiency. It also intelligently caches common components in memory to provide very high burst IO capability even with very low-cost storage. Another company, Unidesk, recently emerged from stealth mode and is offering technology focused on efficient storage capacity reduction, persistent user personalization and ease of management of virtual images.

The combination of maturing products and the availability of a variety of creative solutions to address requirements will certainly help to advance adoption of desktop virtualization. However, the biggest factor propelling consideration of this technology may be Microsoft Windows 7.

Many organizations bypassed Vista and are still running XP, an operating system that is nearly nine years old. They are looking at significant effort and cost to upgrade a multitude of systems. Desktop virtualization represents a means to a less painful and costly upgrade, and provides a way forward for regaining some control at the edges of the infrastructure.


James Damoulakis is CTO at GlassHouse Technologies.
