ANALYSIS

‘Invisible’ Technologies: What You Can’t See Can Hurt You

There are times when it seems like technology can work almost too well. Now, if working too well sounds to you like an impossibility — along the lines of being too rich or too good-looking — consider that there's more to a technology than end-user experience.

In addition to the experience of using the technology, there are other considerations that play a role: things like maintenance, operations and ongoing support. While these considerations are less directly visible to the enterprise end user, they are nevertheless important. When a technology is ubiquitous, its operation is transparent, and the experience (to the end user, at least) is close to frictionless, awareness that the technology even exists can fade into the background.

Consider the plumbing in your home. Unless something major is wrong, chances are good that you don’t give much serious thought to the specific mechanics of how your plumbing works. When there’s an issue, you care very deeply — especially when there’s water dripping down the walls. However, unless something calls your attention to it, the plumbing is a given — and a black box.

This same phenomenon can occur with certain technologies used in business environments. Although they are of paramount importance to keeping the organization running smoothly, some technologies aren’t directly “visible” from a business point of view. They tend to operate below the radar, which too often means they’re not being systematically examined from a risk standpoint or vetted from an operational standpoint.

Information security is one area where this can become an issue. A few examples of “invisible” technologies (by no means an exhaustive list): TLS, the backbone of secure information exchange for many applications; SSH, often used as a default mechanism for systems administration; SAML, used to exchange identity information between systems; and Kerberos, used as the default authentication method for many operating system platforms.

Some Risks Invisible Technologies Pose

These “invisible” technologies represent a potential risk area for organizations. First, they often don’t get enough scrutiny. While we might thoroughly vet, analyze, assess and model a totally new technology or application coming into the organization, it might not occur to us to spend the same time systematically analyzing technologies that already are in active use under the radar.

Second, we may not be as alert to situations that impact the operational security of those technologies, such as potential vulnerabilities, new attack paths, and changes to safe configuration or operating parameters. Again, this isn’t because those things are not important — it is a function of resource bandwidth and perceived need.

Consider the security technologies TLS and SSH — they are both in near-daily use in most organizations, but may not undergo the same level of scrutiny as more directly business-visible technologies.

How well do you understand TLS usage in your environment? Are you familiar with exactly how and where it’s used? Have you reviewed specific configuration settings, like which ciphersuites are allowed?
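One quick way to start answering those questions is to look at what a given endpoint actually negotiates. The short Python sketch below is only a starting point: it connects to a hypothetical internal host (substitute one of your own services) and reports the protocol version and ciphersuite the server selects, using nothing beyond the standard library's ssl module.

import socket
import ssl

# Hypothetical host and port; substitute a service from your own environment.
HOST, PORT = "internal-app.example.com", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # Report what the server actually negotiated for this connection.
        print("Negotiated protocol:", tls.version())   # e.g., 'TLSv1.2'
        print("Negotiated cipher:", tls.cipher())       # (name, protocol, secret bits)

Running a check like this against a handful of endpoints is often enough to reveal whether the answers you would give to the questions above match what is actually deployed.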

With TLS, there are several significant issues that might not be front of mind. Legacy protocol versions (SSLv2, SSL 3.0, and TLS versions below 1.2) are known to be susceptible to attack (e.g., POODLE against SSL 3.0, DROWN against servers that still support SSLv2). There are also usage-related issues — for example, HTTPS Interception, the subject of US-CERT's recent TA17-075A advisory.
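If you want to know whether an endpoint still accepts those legacy versions, a handshake probe is one rough way to check. The sketch below, again against a hypothetical host and using only Python's standard ssl module, pins the handshake to a single protocol version and reports whether the server completes it. (A caveat: newer OpenSSL builds may refuse TLS 1.0 and 1.1 on the client side, so a failed probe is not always conclusive.)

import socket
import ssl

HOST, PORT = "internal-app.example.com", 443   # hypothetical host; substitute your own

def accepts(version):
    """Return True if the server completes a handshake at exactly this TLS version."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # probing protocol support, not server identity
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                return True
    except (ssl.SSLError, OSError):
        return False

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1):
    print(version.name, "accepted" if accepts(version) else "rejected")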

The same is true of SSH. ISACA and SSH Communications Security recently issued joint guidance that outlines several areas of potential concern in SSH usage, such as configuration-related issues, key management, and other areas that might be off an organization’s radar but are critical to ensuring that its technology is secured.
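Key management is a good illustration of how easily SSH usage slips under the radar. The sketch below simply inventories which public keys currently grant login on a single host; it assumes the conventional /home/<user>/.ssh/authorized_keys layout (adjust if sshd_config points AuthorizedKeysFile elsewhere), typically needs elevated privileges to read other users' files, and skips option-prefixed entries for brevity.

from pathlib import Path

# Assumes the conventional /home/<user>/.ssh/authorized_keys layout.
for keyfile in Path("/home").glob("*/.ssh/authorized_keys"):
    user = keyfile.parent.parent.name
    for line in keyfile.read_text().splitlines():
        fields = line.strip().split()
        # Skip blank lines, comments, and option-prefixed entries for simplicity.
        if len(fields) < 2 or not fields[0].startswith(("ssh-", "ecdsa-")):
            continue
        key_type = fields[0]
        comment = fields[2] if len(fields) > 2 else "(no comment)"
        print(f"{user}: {key_type} {comment}")

Even a crude inventory like this tends to surface keys whose owners have long since changed roles or left the organization.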

Making the Invisible Visible

A useful exercise for organizations is to train themselves to be alert for potential blind spots and to put active measures in place to help find and address them. There are a few valuable strategies that can assist this effort.

First, establish mechanisms that can help you identify where potential blind spots are, such as application threat modeling. Part of the process of threat modeling involves creating a data flow diagram, or DFD — that is, a systematic and comprehensive map of information exchange pathways throughout an application over its various components and systems. Analyzing data access in a systematic way forces you to question how tasks are accomplished — potentially cluing you in to overlooked areas as a result.
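The idea can be illustrated with even a very rough, hand-built data flow model. In the Python sketch below, the components, trust zones and flows are all hypothetical; the point is that once flows are written down, it becomes mechanical to flag the ones that cross a trust boundary without an encrypted transport.

# Components are mapped to trust zones, and each flow names its transport.
TRUST_ZONES = {
    "browser": "external",
    "web-frontend": "dmz",
    "app-server": "internal",
    "database": "internal",
}

FLOWS = [
    ("browser", "web-frontend", "HTTPS"),
    ("web-frontend", "app-server", "HTTP"),
    ("app-server", "database", "TLS"),
]

ENCRYPTED_TRANSPORTS = {"HTTPS", "TLS", "SSH"}

for src, dst, transport in FLOWS:
    # Flag any flow that crosses a trust boundary without an encrypted transport.
    if TRUST_ZONES[src] != TRUST_ZONES[dst] and transport not in ENCRYPTED_TRANSPORTS:
        print(f"Review: {src} -> {dst} crosses a trust boundary over {transport}")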

Very few organizations will have the time or resources to threat model their entire ecosystem. Assuming you do not have that luxury, you still can realize quite a bit of value just by adopting the mindset of looking for blind spots and questioning assumptions. As you come across sources of data in the course of doing your job, take the opportunity to question your own understanding of how entities interact.

In fact, this process can be helped by anything that provides information about how systems or applications are used: business impact assessments, interaction diagrams, network topology diagrams. Even output from configuration management or vulnerability assessment tools potentially can provide clues and help you identify areas that could use further scrutiny.
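As a simple illustration, the sketch below assumes a CSV export from a scanning or configuration management tool with host, port and service columns (actual file names and column names vary by product) and surfaces the endpoints running SSH or TLS-protected services so they can be queued for closer review.

import csv

REVIEW_SERVICES = {"ssh", "https", "ldaps", "smtps", "imaps"}

# 'scan_export.csv' and its column names are assumptions; map them to whatever
# your scanner or configuration management tool actually produces.
with open("scan_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["service"].lower() in REVIEW_SERVICES:
            print(f"{row['host']}:{row['port']} runs {row['service']} - flag for review")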

Once you have identified an area where you know (or suspect) that something is running in an under-the-radar way, a useful step is to define who in the organization is assigned accountability for keeping the usage secured and maintained appropriately.

The most important element is to ensure that it's someone's job to keep specific technology elements secured and maintained. It may already be the case that someone is monitoring the technology, and you just need to confirm it.

Other times, nobody will have explicit accountability for a particular element, and keeping track of it will need to be assigned. Either way, it’s not reasonable to assume that the security team can do it all singlehandedly. Instead, ensure that accountability is assigned in a practical way, and that there exists some feedback mechanism to ensure that appropriate actions are taken when necessary.
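Even a very lightweight register can support that feedback loop. The entries in the sketch below are purely illustrative; the mechanism simply flags technologies that have no accountable owner, or whose periodic review is overdue.

from datetime import date

REVIEW_INTERVAL_DAYS = 180   # illustrative cadence; pick one that fits your organization

# Entirely hypothetical entries; the point is that each technology has a named owner.
register = [
    {"technology": "TLS (external web)", "owner": "web-ops", "last_review": date(2017, 1, 15)},
    {"technology": "SSH (server admin)", "owner": "infrastructure", "last_review": date(2016, 6, 1)},
    {"technology": "Kerberos (Active Directory)", "owner": None, "last_review": None},
]

for entry in register:
    if entry["owner"] is None:
        print(f"{entry['technology']}: no accountable owner assigned")
    elif (date.today() - entry["last_review"]).days > REVIEW_INTERVAL_DAYS:
        print(f"{entry['technology']}: review overdue (owner: {entry['owner']})")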

Ed Moyle

Ed Moyle is Director of Thought Leadership and Research for ISACA. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.
