ECommerceTimes.com

Security Showdown: Cloud Computing vs. On-Premise IT

Either in real terms or perceived terms, security is one of the biggest hang-ups people have, and it's a wide-open question. When we talk about the cloud and the enterprise, are we talking about something that is fundamentally different in terms of securing it, versus what people are accustomed to doing across their networks?

Much of the cloud security debate revolves around perceptions. It's about seeing the glass as half-full. Perhaps it's only a matter of proper practices and means to overcome fear, caution and reluctance to embrace successful cloud computing.

Or is the glass half empty -- will the potentially onerous and perilous security pitfalls of ramping up cloud computing practices prove too difficult to overcome? Is it only a matter of time before a few high-profile breaches nip the cloud security wannabes in the bud?

For sure, security in general takes on a different emphasis, as services are mixed and matched from a variety of internal and external sources.

So will applying conventional security approaches and best practices be enough for low-risk, high-reward cloud computing adoption? Is the cost and productivity benefit of cloud computing so compelling that late adopters will find themselves at a disadvantage vis-à-vis their competitors, or facing higher costs?

Most importantly, how do companies know when they are prepared to begin adopting cloud practices without undue risks?

Here to help us better understand the perils and promises of adopting cloud approaches securely, we welcome our panel. With us we have Glenn Brunette, distinguished engineer and chief security architect at Sun Microsystems. He is also a founding member of the Cloud Security Alliance (CSA). We're also joined by Doug Howard, chief strategy officer of Perimeter eSecurity, and president of USA.NET; Chris Hoff, a technical adviser at the Cloud Security Alliance, and also director of cloud and virtualization solutions at Cisco Systems; Dr. Richard Reiner, CEO of Enomaly; and lastly, we welcome Tim Grance, program manager for cyber and network security at the National Institute of Standards and Technology (NIST).


Listen to the podcast (55:50 minutes).

As I mentioned, the biggest hang-up people have, either in real terms or perceived terms, is security, and it's a wide-open question, because we could be talking about infrastructure, platform as a service (PaaS), data, or simply applications. All across the board, people are applying the word "cloud." But for the purposes of our discussion, we want to look at what enterprises are going to be doing. We have a crowd of architects with us.

Let me take my first question to you, Chris Hoff. When we talk about cloud and enterprise, are we talking about something that is fundamentally different in terms of securing it, versus what people are accustomed to doing across their networks?

Chris Hoff: That's a great question, actually. Again, it depends upon what you mean, and, unfortunately, we are going to probably say this a thousand times.

Dana Gardner: Let's get the taxonomy over with.

Hoff: Yeah, what is cloud? Depending upon the application, you will have a set of practices that look almost identical to what you would use in non-cloud environments. In fact, with the CSA, the 15 domains of focus are really best practices around what you should be doing to secure your assets in your business, no matter where you happen to be doing your computing.

That being said, there are certainly nuances and permutations -- things we do or don't do today -- in moving your information and applications to the cloud that, in some cases, are operational, in some cases behavioral, and, in some cases, technical.

You can dive in and slice and dice up and down the stack, but it's fair to say that, in many cases, what cloud has done and what virtualization has done to the enterprise is to act as a fantastic forcing function that's allowed us to put feedback pressure on the system to say, "Look, depending on what we are doing internally in our organizations, and the care and feeding of our infrastructure applications and information, now that I am being asked to move my content applications information outside my normal comfort zone of the firewall and my policies and my ability to implement what I normally do, I really need to get a better handle on things."

This is where we're starting to see people spin up things they weren't doing before or weren't doing with as much diligence before, and operationally changing the way they behave and how they assess and classify what they do and why.

Gardner: Richard Reiner, tell me a little bit about what the pitfalls are. What makes this a little different in terms of the risks?

Richard Reiner: It's an entirely different set of questions when you are talking about software as a service (SaaS) versus platform versus infrastructure. So, let me just answer for the infrastructure-as-a-service (IaaS) part of the story, which is where we play. We have a platform that does that.

Fundamentally, when you look at infrastructure-on-demand services, they are delivered by means of virtualization and, for most enterprises, probably a very large majority of enterprises, it's the first time that they have even considered, much less actually deployed, infrastructure of a nature that is simultaneously shared and virtual.

Shared means something hostile could be there alongside your workload as the customer, and virtual means that fundamentally it's a software-induced illusion. If something hostile in there can subvert one of the software layers, take control of it, or make it behave differently than what is expected, the customer's workload could find itself executing on a virtual server, running code on a virtual processor that is nothing short of hostile to it.

A virtual processor could be programmed, for example, to wait until secrets are decrypted from disk and then make off with the plain text. That's a fundamental new risk and it's going to require new controls.

Gardner: Glenn Brunette, perhaps another way of posing this question is not whether the cloud is secure or not, but whether client-server architectures are secure or not? And, is the risk with cloud less than the risk with client-server? Is that fair?

Glenn Brunette: That's an interesting way to put it, for sure. To echo my fellow panelist's previous statements, a lot of it depends on how you look at cloud and what your definition is, whether you're dealing in a SaaS model, where you have a very specific well-defined interaction method, versus something, maybe IaaS, where you have a lot more freedom, and with it a lot more risk.

Is it more or less secure than client-server? I don't think it is either more or less secure. Ultimately, it comes down to the applications you want to run and the severity or criticality of those applications, and whether you want to expose them in a shared virtualized infrastructure.

With respect to how these applications are managed, a lot of the traditional client-server applications tended to be siloed, and those siloed applications had problems with scalability and availability, which posed problems for providing continuity of service. So, I don't think they are necessarily better or worse than one another. Their issues are just a little bit different.

Gardner: Doug Howard, maybe this is back to the future. There was a time when those things were centralized and they only went out through the interface to a green terminal. That had some advantages. Are we looking at similar advantages now with cloud computing, where you can control a single code base or you can manage only the amount of information you want to go across the wire, without risk of data being left on clients and all that difficulty of managing different application variations and platforms at the edge?

Doug Howard: Clearly, if you look at where client-server was many years ago, as compared to where it is today, it's significantly different. The networks are different, the infrastructure is different, and the technology is different. So, the success rate of where we are today, compared to where we were 10 and 15 years ago trying the same exact thing, is going to be different.

At the end of the day, it's really about the client experience and, as you guys sitting in the audience are probably thinking right now, everything that we talk about starts with, "Well, it depends," and various other variations on that. From your perspective, the first thing that you need to know is, "Am I going to be able to deliver a service at least the same way I deliver it today? Is the user experience going to be, at minimum, the same that I am delivering today?"

Because if I can't deliver, and it's a degradation from my starting point, then that will be a negative experience for the customers. Then the next questions are, obviously: Is it secure? What about business continuity? Are all those things, and where the actual application resides, completely transparent to the end user?

I'll give you a key example. One of the service suites that we offer is messaging. It's amazing how many times you walk into a large enterprise client, and they go, "Well, I'd like to see a demo of what the user experience of getting messaging services from a hosted or from a shared infrastructure is, compared to what it would look like in-house."

Well, open your Outlook client, because if the experience is different in-house versus out of house, we're starting at the wrong point. We shouldn't be having this conversation.

The starting point you really need to think about, as you go through this, is not whether it looks like it did 10 or 15 years ago. That doesn't really matter. The client experience today is going to be significantly different from what we tried 10 or 15 years ago.

Gardner: Tim Grance, it sounds like we have a balancing act, risks and rewards, penalty, security. It's not going to be all on one side, but you want to make the right choice and you want to get the rewards of the economic benefits, the control, the centralization, and, of course, you don't want to have to deal with a major security blow-up that gets a lot of bad publicity. How are you approaching this from that risk-rewards equation?

Tim Grance: Any time you do things at scale, it's like standards. If you do it really well, it's great, because you have a systemic answer. If you don't, you get ugly really fast. God and the devil both dwell in the details, depending on how well you do these things. But it's hard elevating it as just another cold-hearted business decision you have to make.

If you aggregate enough demand in your enterprise or across your area of work, and you can yield enough dollars to put up for someone to bid on, people will address a lot of these security concerns -- "I don't have a transparent security model," "I don't know exactly how you are protecting my data," "I don't know where you are putting my data."

If you give them a big enough target, you aggregate enough demand to make it attractive. You can drive the answers to all of these questions, but you do have to ask for the full set of business use cases to be addressed.


Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: The Open Group sponsored this podcast.

