Virtualization Lets Little Bank Play on Big-Bank Turf
With virtualization, "you can move to a different data center, if the need happens to be there," said Sunny Nair, vice president of IT and systems operations at BancVue, "and just run your server farm like a power utility would run their power station, building out the computing resources necessary for a user or a customer, and then shutting that off when it's no longer necessary."
Oct 22, 2012 5:00 AM PT
Server virtualization can set the stage for private-cloud benefits for any company that wants to prevent hardware issues from interfering with operations. Combining several virtual machines onto one piece of hardware can maximize CPU usage and efficiency, especially in an environment that uses different operating systems under one umbrella.
For companies that provide services to community banks, that cloud enablement provides business agility to their customers, enabling them to better compete against mega banks on such critical areas as customer service and end-user portals.
Listen to this week's podcast featuring Sunny Nair, vice president of IT and systems operations at banking services provider BancVue. He describes how BancVue creates the services that empower its customers to beat the giants in their field by better leveraging agile IT. The discussion is moderated by Dana Gardner.
Download the podcast (18:18 minutes).
Here are some excerpts:
Dana Gardner: Many companies these days need to tackle the dual task of cutting costs, while also increasing agility and providing better services and response times to their constituents. How did you accomplish both?
Sunny Nair: The first thing we wanted to do was to abstract the applications and the operating system from the hardware so that a hardware failure wouldn't bring down our systems. For that, of course, we went to virtualization. We experimented with various virtualization products. Out of those trials, vSphere was the best software for a heterogeneous environment like ours, where we had Windows and different flavors of Linux.
So we stuck with VMware, and that helped us abstract the hardware layer from our software layer, so we could move our operating systems and our virtual servers to different pieces of hardware when there was a hardware issue on one server, enabling us to be more agile.
Instead of running just one server on one piece of hardware, we were able to run anywhere between 12 and 20 different servers. Not all servers were utilized at 100 percent all the time, so we were able to leverage the CPU to its full capacity and run many more servers. So we had, at a minimum, a 12x increase in our server capacity on each piece of hardware. That definitely did help our costs.
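The consolidation math Nair describes can be sketched in a few lines. The figures below reuse the interview's numbers (275 VMs, a conservative 12 VMs per host); the function name is illustrative, not part of any VMware tooling.

```python
# Illustrative consolidation arithmetic using the interview's own numbers:
# one workload per physical box before virtualization, 12-20 VMs per host after.

def hosts_needed(num_servers: int, servers_per_host: int) -> int:
    """Physical hosts required to run num_servers at a given VM density."""
    return -(-num_servers // servers_per_host)  # ceiling division

physical_before = hosts_needed(275, 1)   # one server per piece of hardware
physical_after = hosts_needed(275, 12)   # conservative 12x consolidation

print(physical_before, physical_after)   # 275 hosts vs. 23 hosts
```

At the low end of the stated range, the same 275 virtual servers fit on 23 physical hosts instead of 275, which is where the hardware cost savings come from.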
Gardner: Tell us a little bit about BancVue.
Nair: BancVue is a financial services software and marketing company. We help community financial institutions compete with mega banks by providing them marketing expertise, software expertise, and data consultation expertise, and all those things require technology and software.
For many of our partners we provide the website that many people land on when they search for the institution on the Internet. And we also provide the gateway to their online banking. So it's extremely important for the website to stay up and online.
In addition to that, we also provide rewards-checking calculations, interest-rate calculations, determinations of which customers qualify for certain products, and so on. We are definitely a part of the ecosystem for the financial institution.
Gardner: Once you settled on your strategy for virtualizing your workloads and supporting your heterogeneity issues, how did that unfold?
Nair: It was a step-by-step approach of wading deeper into the virtualization world. Our first step was just getting that abstraction layer that I was talking about by virtualizing our servers. Then, we looked at it and we said, "Well, from vSphere we can use vMotion and move our virtual servers around. And we can consolidate our storage on a storage area network (SAN)." That helped us disengage further from each piece of hardware.
Then we looked at vCenter Operations Manager to predict when a server is going to run out of capacity. That was one of the areas where we started experimenting, and it proved very fruitful. That experiment was just earlier this year.
Once we did that, we downloaded some trial software with the help of VMware, which is one of the benefits that we found. We didn't have to pay up immediately. We could see if it suited our needs first.
We used vCloud Director as a trial, and vShield and vCenter Orchestrator together. Once we put all those pieces together, we were able to get the true benefit of virtualization, which is being in a cloud where not only are you abstracted out, but you can also predict when your hardware is going to run out of capacity.
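The capacity-prediction idea can be sketched simply: fit a trend line through recent utilization samples and extrapolate to the day the host hits its ceiling. Products such as vCenter Operations Manager do this far more thoroughly; the sample data, function name, and threshold below are made up for illustration.

```python
# Minimal capacity-forecast sketch: least-squares trend over daily CPU
# utilization samples, extrapolated to a 100 percent threshold.

def days_until_full(samples, threshold=100.0):
    """Days past the last sample until the fitted trend reaches `threshold`.

    Returns None if utilization is flat or falling (no exhaustion predicted).
    """
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope - (n - 1)

usage = [52.0, 54.5, 56.0, 59.5, 61.0]  # hypothetical daily CPU % on one host
print(round(days_until_full(usage), 1))  # ~16.9 days of headroom left
```

A forecast like this is what lets an operations team add hardware to the grid before a host saturates, rather than after users feel it.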
You can move to a different data center, if the need happens to be there, and just run your server farm like a power utility would run their power station, building out the computing resources necessary for a user or a customer, and then shutting that off when it's no longer necessary, all within the same hardware grid.
Fit for Purpose
Gardner: I suppose it also gets to that point of cutting your total costs, when you can manage that as a fit-for-purpose exercise. It's the Goldilocks approach -- not too much, not too little. That's especially important, when you have an ecosystem play, where you can't always predict what your customers are going to be doing or demanding.
Nair: One admin can do the work of at least three admins, once we've fully implemented the cloud, because the buildup and takedown are some of the most expensive portions of creating a server. You can automate that fully and not have to worry about the takedown, because you can say, "Three days from now please remove the server from the grid." Then, the admin can go do some other tasks.
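The "remove it three days from now" pattern is essentially a lease on a server. Real lifecycle automation would go through vCenter Orchestrator or vCloud Director APIs; the toy scheduler below stands in for those, so all names (`ServerLease`, `decommission_due`, the server names) are hypothetical.

```python
# Toy sketch of lease-based server takedown: each server carries an expiry
# timestamp, and a periodic job decommissions whatever has lapsed.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ServerLease:
    name: str
    expires: datetime

def decommission_due(leases, now):
    """Return the names of servers whose lease has expired as of `now`."""
    return [lease.name for lease in leases if lease.expires <= now]

now = datetime(2012, 10, 22)
leases = [
    ServerLease("demo-web-01", now + timedelta(days=3)),     # remove in 3 days
    ServerLease("batch-calc-02", now - timedelta(hours=1)),  # already expired
]
print(decommission_due(leases, now))  # ['batch-calc-02']
```

The point of the pattern is that the admin records the takedown intent once, at provisioning time, and the grid reclaims the resources without anyone coming back to it.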
We run Dell hardware, Dell servers, and Dell blades, and that's where we run production. In development, we also use Dell hardware, where we just use the R610s, 710s, and 810s, basically small machines, but with a fairly good amount of power. We can load up to 20 servers on a host in development, and as many as 12 in production. We run about 275 VMs today.