
Full Disclosure Applies to Internal Security Too

By Ed Moyle
Dec 21, 2017 5:00 AM PT

If you've been keeping up with the news, you've probably noticed a few recent reports about companies that may have been a little less than candid about security issues. For example, we recently learned that Uber experienced a breach in 2016. As we've also learned from subsequent press reports, the company may have paid the attacker to remain silent about that breach instead of acknowledging it publicly and openly.

Along similar lines, we all probably remember -- and are still dealing with the consequences of -- the Equifax breach, which exposed the credit histories of millions of individuals. The Equifax board of directors recently commissioned a report to investigate stock sales key executives made shortly after the breach. The findings indicated that the executives who sold shares at that time were not in the loop regarding the breach and did not know that it had occurred until well after the fact.

So, while some personnel at Equifax knew about the exposure, the report shows that key executives were not informed about it until weeks later.

The point is, there's a lesson for security practitioners in events like these, even beyond the obvious -- that is, the impacts that a well-publicized security breach can have. Specifically, there's a lesson about how and when we communicate within our own organizations -- to internal stakeholders, decision makers, senior management, and the board -- about security-relevant events and occurrences.

Despite pressures that we might feel to downplay or keep challenging facts under wraps, very seldom is that the best course of action in the long term. Obviously, communications about these topics should be tempered with appropriate discretion in light of circumstances.

For example, you probably don't want to use a bullhorn to shout about a breach before you've done all the due diligence and confirmed that one actually occurred. However, effective and open communication is a key component to any successful response strategy.

This communication won't happen on its own, though. There is groundwork that needs to be done. Forethought is necessary if we are going to communicate optimally -- both during a breach and as a normal course of routine prevention and ongoing security "hygiene."

That sometimes may require us to convey messages that are hard for others in the organization to hear. It likely won't make us popular; but being popular isn't our job -- securing the organization is.

Inner Workings

In the case of vulnerabilities, traditional wisdom tells us that disclosing information is beneficial -- it helps the community at large. By alerting the community to the presence of a vulnerability, you empower people to act -- for example, by installing a patch, hardening configuration against the issue, deploying compensating controls, or even (if the situation is extreme enough) discontinuing use of the vulnerable technology to await a solution.

Knowing the facts gives those potentially impacted by the issue options that would be unavailable to them if the information were kept secret.

The same thing is true of information surrounding security events inside an organization. Making internal teams aware of a security event can enable them to assist and add value. For example, an accounting team might be able to assist with investigative efforts by helping to separate legitimate from suspicious transactions. A business team might suggest remediation strategies based on knowledge of business processes or supporting systems.

By keeping internal teams apprised and looping them in, you enable their help. Since you don't know who, specifically, might be able to provide that assistance, casting a wide net for support often can be advantageous.

This principle also can be applicable in situations other than breaches. In cryptography circles, Kerckhoffs' principle states that a cryptosystem should remain secure against attack even if everything about its operation, except the key, is public.

That openness and transparency adds value to the system. Why? Because it allows a broad range of people, with different interests and points of view, to examine the system, analyze it collectively, and contribute improvements.
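To make the principle concrete, here is a minimal sketch using Python's standard library `hmac` module. The algorithm (HMAC-SHA256) is completely public, yet the system's security rests entirely on the secrecy of the key; the key and message below are placeholders for illustration.

```python
import hmac
import hashlib

# Kerckhoffs' principle in practice: HMAC-SHA256 is a fully public,
# heavily scrutinized algorithm. Only the key must stay secret.
key = b"secret-key-known-only-to-us"   # hypothetical key, for illustration
message = b"quarterly security report"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Anyone can inspect the algorithm; without the key, forging a valid
# tag for a new message remains computationally infeasible.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification uses a constant-time comparison to avoid timing leaks.
print(hmac.compare_digest(tag, expected))  # prints True for an untampered message
```

Because the design hides nothing but the key, it can be reviewed, tested and improved by anyone -- exactly the kind of broad scrutiny the principle is meant to invite.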

This same principle can be harnessed for internal communications about security goals as well. Openness sometimes can allow improvements, analysis, or contributions to a better overall posture to come from outside the security team.

Provided other stakeholders are informed about both goals and strategies to meet those goals, a security team open to that input can translate those suggestions directly to improvements and better outcomes.

How? When? To Whom?

The point is, there is potential value to be realized from active and open communication with internal stakeholders about security topics. This is true both for communications upward (i.e., communications with senior management and the board) and, in some cases, for communications with peer teams.

That said, it is important to recognize that clear, open and productive communication takes work. We need to plan and prepare for that communication to occur; we need to make sure we're tempering openness with discretion, and we also need to make sure that we're actively working to mitigate forces that might detract from our ability to communicate effectively.

First and foremost, it is important that communication pathways are defined explicitly and accepted (and acceptable) organizationally. This means having the conversation about internal as well as external notification channels, both for breaches and for ongoing information about security posture.

One effective strategy to do this is a tabletop planning exercise to walk through the communication scenarios that you might expect to arise. A breach situation would be a good starting point, but there are others as well -- for example, a ransomware scenario, an ethical or whistle-blower scenario, etc.

Conducting specific and targeted exercises like these allows you to discuss ahead of time what communication is appropriate and ensure that channels are documented. It likewise ensures that the persons on the other end know who you are and why you're contacting them, and that they should expect periodic communications from you.
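One lightweight way to capture the output of such an exercise is to record the agreed-upon channels in a machine-readable form the team can review and test. The sketch below assumes hypothetical scenario names and stakeholder roles; the structure is illustrative, not a prescribed standard.

```python
# Hypothetical notification matrix produced by a tabletop exercise.
# Scenario names and roles are placeholders -- adapt to your organization.
NOTIFICATION_CHANNELS = {
    "breach": ["ciso", "legal", "communications", "board_liaison"],
    "ransomware": ["ciso", "it_operations", "legal"],
    "whistleblower": ["ethics_office", "legal"],
}

def stakeholders_for(scenario: str) -> list:
    """Return the documented internal contacts for a given scenario.

    Unrecognized scenarios fall back to a default escalation contact,
    so no event ever goes unrouted.
    """
    return NOTIFICATION_CHANNELS.get(scenario, ["ciso"])

print(stakeholders_for("ransomware"))  # ['ciso', 'it_operations', 'legal']
```

Keeping the matrix in version control alongside the incident-response plan makes it easy to confirm, during each exercise, that the people on the other end of every channel are still the right ones.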

Second, it's important to work actively to understand and mitigate issues that might discourage open communication. For example, there often can be situations in which human nature or other factors discourage an open approach.

Consider, for example, that very often the team members responsible for fixing an issue are the same folks who might be the first to notice unauthorized activity. Make it clear to them that objectively communicating according to defined pathways won't lead to unfortunate consequences (e.g., being blamed for the issue existing in the first place).

Likewise, banal as it seems, sometimes scheduling can be an issue (i.e., can you reliably get face time with senior managers when you have an urgent message to communicate?). In that case, having a "break glass" procedure to allow immediate access to them in an emergency is worth discussing ahead of time.

At the end of the day, good communication is essential, but it won't just happen -- it takes work to put it in place, planning to ensure it's effective, and discipline and vigilance to keep it going.

Ed Moyle is Director of Thought Leadership and Research for ISACA. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.
