Cache as Cache Can: New SSD Options Boost Performance

Last week saw a pair of announcements from major vendors that reflect the degree to which solid-state disk (SSD) and flash caching technologies are moving into the mainstream:

  • IBM announced several key enhancements to its latest XIV Gen3 solutions, including a solid-state drive caching option that can increase system performance by up to three times. The new option provides up to 6 TB of fast-read SSD for the XIV Storage System Gen3 that IBM introduced in July 2011. Caching frequently used data on SSDs positioned between system memory and hard drives allows customers to store and retrieve that data with virtually no latency, speeding access by orders of magnitude, according to IBM. Plus, the SSD cache requires no separate tier to manage.
  • EMC announced VFCache, a new server flash caching solution that increases throughput by 3X while reducing latency by 60 percent, enhancing the performance of read-intensive workloads with cacheable working sets, including databases (CRM, ERP), OLTP, email, Web and reporting. Going forward, EMC plans to add features to VFCache including deduplication, additional capacities and form factors, deeper integration with EMC storage solutions and management technologies, and further integration with its FAST architecture. EMC also announced plans for a Q2 2012 early customer access program for “Project Thunder,” which delivers the benefits of VFCache in a server-networked appliance. EMC’s VFCache has been precertified for numerous x86-based servers from vendors including Cisco, Dell, HP and IBM.

Down With Data Bottlenecks

As these and many other announcements suggest, interest in SSD- or flash-based caching is growing significantly. But why? For two reasons: 1) Although SSDs still sell at 20X to 50X premiums over traditional hard disk drives, prices have fallen enough to make them increasingly compelling to growing numbers of businesses looking to maximize application/workload performance, shifting the strategic imperative from $ per GB to $ per I/O; and 2) as organizations continue to amass and leverage ever-larger volumes of information, they increase the likelihood of I/O “bottlenecks” between computing and storage systems that measurably degrade the performance of key business applications and workloads.
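The shift from $ per GB to $ per I/O can be made concrete with a back-of-the-envelope comparison. The prices and IOPS figures below are illustrative assumptions for the sake of the arithmetic, not vendor numbers:

```python
# Illustrative cost comparison: dollars per GB vs. dollars per I/O per second.
# All figures are hypothetical assumptions, not actual product pricing.

hdd = {"price_usd": 100, "capacity_gb": 1000, "iops": 150}    # assumed 15K-class HDD
ssd = {"price_usd": 2000, "capacity_gb": 400, "iops": 50000}  # assumed PCIe flash card

def cost_per_gb(drive):
    """Traditional capacity metric: price divided by usable gigabytes."""
    return drive["price_usd"] / drive["capacity_gb"]

def cost_per_iops(drive):
    """Performance metric: price divided by I/O operations per second."""
    return drive["price_usd"] / drive["iops"]

# By $/GB, the HDD wins by a 50X margin ($0.10 vs. $5.00 per GB)...
print(cost_per_gb(hdd), cost_per_gb(ssd))

# ...but by $/IOPS, the SSD is far cheaper per unit of performance
# (about $0.67 vs. $0.04 per I/O per second).
print(cost_per_iops(hdd), cost_per_iops(ssd))
```

Under these assumed numbers, the same flash card that looks 50 times more expensive per gigabyte looks roughly 17 times cheaper per I/O, which is precisely why performance-bound buyers change their metric.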

This last point is particularly important, since it qualifies as the IT equivalent of attempting to pour a gallon of milk through a soda straw, a problem for which PCIe-based SSD cache offers a relatively easy, simple and affordable fix (“relatively” depending on the size of the SSD solution and IT budget involved). In essence, data critical to a given application (say, a customer or product database required by an OLTP application) is moved to the SSD cache, either manually or automatically according to stated policies. The application can then access that data without being punished by the inherent mechanical limitations of HDDs or the throughput limits of conventional systems and networks.
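The mechanics described above, with hot data promoted into fast media, reads served from cache when possible, and misses falling back to the slower backing store, can be sketched as a toy read-through cache. The class and method names here are invented for illustration and do not correspond to IBM's or EMC's actual implementations:

```python
from collections import OrderedDict

class ReadCache:
    """Toy read-through cache: hot blocks live in fast (SSD-like) storage;
    misses fall back to the slow (HDD-like) backing store and are promoted."""

    def __init__(self, backing_store, capacity):
        self.backing = backing_store          # dict-like slow store
        self.capacity = capacity              # max number of cached blocks
        self.cache = OrderedDict()            # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # slow path: go to "disk"
        self.cache[block_id] = data           # promote into the cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# A frequently read block is served from cache after the first access.
store = {n: f"block-{n}" for n in range(100)}
cache = ReadCache(store, capacity=4)
for _ in range(5):
    cache.read(7)
print(cache.hits, cache.misses)  # → 4 1
```

The point of the sketch is the policy, not the code: once the working set fits in the cache, repeat reads never touch the slow store, which is where the 3X-plus throughput claims come from.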

In the case of IBM’s XIV Gen3, the 6-TB SSD cache connects to a given system via a robust PCIe bus, supporting enhanced transaction speeds and performance. In addition, integration with the larger XIV Gen3 storage system can dramatically reduce the latency of common functions like data backup and recovery. Sounds pretty good, but is it? Overall, yes. XIV has been a particular bright spot in IBM’s storage business: more than 5,200 XIV units have shipped since IBM acquired the company in January 2008. Moreover, some 1,300 of IBM’s XIV customers are entirely new to the company.

Uptake of the latest Gen3 solutions (introduced in July 2011) has also been very robust, constituting 80 percent of the XIV capacity sold in Q4 2011. Given that XIV especially appeals to IBM’s bread-and-butter enterprise customer base, an optional SSD cache that can improve OLTP IOPS by 3X and random-workload IOPS by 6X, and that delivers a near-3X latency reduction in data-intensive workloads like medical records applications, seems like a no-brainer for large organizations.

A Lot to Like

With VFCache, EMC is leveraging partner Micron’s SSD technologies in a simple but potentially radical way. In one sense, VFCache isn’t anything new: PCIe-based SSD caching solutions are available from specialty players such as Fusion-io, as well as from a number of system/server vendors, and all make roughly the same pitch about system performance and application benefits as EMC does. So what makes VFCache different or unusual? First is the fact that it’s a heterogeneous server solution coming from a preeminent storage vendor, but that’s more than a bit of a bromide. EMC’s 2003 acquisition of VMware should have settled any doubts about the company’s ambitious plans to cultivate new fields, particularly those associated with x86 server architectures.

Second, while VFCache is certainly aimed at the server market, its support of the company’s VMAX, VMAXe, VNX and VNXe systems, and planned future integration with EMC’s FAST (Fully Automated Storage Tiering) architecture, along with features such as deduplication, should make it particularly attractive to EMC storage customers of every size. Finally, since VFCache can be used in most contemporary x86-based servers, it can also support the storage solutions attached to those heterogeneous systems, meaning that its throughput and latency benefits can be enjoyed by virtually any organization. In essence, VFCache appears to significantly extend the numerous wagers EMC has made over the past decade on playing in promising new x86-related markets.

All these points aside, there’s no certainty that either IBM’s or EMC’s approach to SSD cache will be a sure winner. The fact is that as SSD prices come down, the technology will provide an increasingly viable option across numerous IT strategies, workloads and applications. In addition, there is continuing debate about the relative benefits of various SSD cache technologies, which configuration is best for which use case, and whether businesses are better served by lower-cost general purpose offerings or more-expensive but more highly optimized solutions.

Bottom line: There’s a lot to like about both IBM’s XIV SSD caching option and EMC’s VFCache. IBM’s solution should appeal to many of the company’s existing enterprise customers, particularly those who have purchased new XIV Gen3 systems, and could also help pique the interest of organizations considering or sitting on the fence about investing in these products. EMC’s VFCache is likely to find willing customers among many of the company’s existing clients, especially those making sizable investments in or transitions toward highly virtualized x86-based systems. But VFCache could also offer EMC and its channel partners an entry point to multiple new commercial opportunities and markets.

Overall, both IBM’s and EMC’s approaches to SSD caching reflect the efforts of vendors that thoroughly understand current IT needs and the implications of evolving business computing requirements. As such, both should find places in numerous business data centers.

Charles King

E-Commerce Times columnist Charles King is principal analyst for Pund-IT, an IT industry consultancy that emphasizes understanding technology and product evolution, and interpreting the effects these changes will have on business customers and the greater IT marketplace. Though Pund-IT provides consulting and other services to technology vendors, the opinions expressed in this commentary are King's alone.
