Our vulnerable attack surface is expanding, and so are the adversaries targeting our systems and networks – in both quantity and sophistication. Our ability to safeguard ourselves is fundamentally incapable of scaling in parallel with the threats confronting us. The inadequacy of old-school perimeter-based network defense was proven long ago. Even the very notion of “the network” we seek to defend has been undermined by the business-driven rush to embrace the innumerable PaaS, SaaS, and IaaS offerings that have emerged over the past decade. As critical business processes have shifted to the cloud, the network perimeter has largely disappeared.
Information-security professionals have accepted the sad reality that networks will eventually be compromised; accordingly, an organization’s ability to rapidly detect and respond to breaches has increasingly overshadowed any notion of preventing intrusions.
An effective detect-and-respond capability rests on two fundamentals:
- continuous aggregation of network and endpoint telemetry sufficient to provide near-real-time evidence of attackers’ behavior on your network, and
- enough highly trained security analysts to parse the attackers’ signal out of the telemetry noise, and enough incident responders to hunt down the attackers and kick them out of your network.
Highly skilled information-security professionals are a scarce resource. While there are a variety of entry points into the profession of network defense, they all share one commonality: time. There are problems which can be addressed by throwing money at them. Regrettably, the limited size of the information-security talent pool is not one of them. In the short term, organizations actively poach highly qualified staff from one another, but this has the long-term effect of inflating information-security salaries while failing to address the root problem. (High salaries do serve as an incentive drawing newcomers into the field, but there is no shortcut around the learning curve.)
Meanwhile, attackers are waltzing around our defenses and enjoying a heyday at our expense.
We’ve talked about the people problem, so what about the telemetry problem? At first glance the situation looks better: this time you really can throw money at the problem and buy more black boxes. Unfortunately, achieving effective monitoring coverage means deploying solutions from a wide variety of vendors, and as a general rule, information-security tooling (be it hardware or software) doesn’t seamlessly interoperate with other vendors’ products. So you’re stuck with three options:
- accept the fact that your security point-solutions are just point-solutions and deal with it,
- identify a single vendor providing a reasonable subset of security tools that can interoperate via the vendor’s proprietary control plane and content yourself within that walled garden, or
- task your network defenders with writing scripts to cobble together an acceptable degree of interoperability between your various defenses by leveraging their respective proprietary APIs.
Option three is the best option presently available. Unfortunately, it is also incredibly expensive, both in the many person-days your staff will need to build and maintain your bespoke vendor integrations and in the opportunity cost of a network-security team whose attention is divided between actively defending your network and tool-smithing. And so we find ourselves again confronting the people problem.
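To make the cost of option three concrete, here is a minimal sketch of the kind of glue code it demands. Both vendor payload shapes below are hypothetical, invented purely for illustration – real proprietary APIs each have their own schema, authentication, and quirks, which is exactly why this approach is so expensive to build and maintain.

```python
def normalize_vendor_a(alert: dict) -> dict:
    """Map hypothetical Vendor A's alert format to a common internal shape."""
    return {
        "source": "vendor_a",
        "severity": alert["sev"],                # assumed numeric 1-10 scale
        "indicator": alert["dst_ip"],
        "observed_at": alert["ts"],
    }

def normalize_vendor_b(alert: dict) -> dict:
    """Map hypothetical Vendor B's alert format to the same shape."""
    sev_map = {"low": 3, "medium": 6, "high": 9}  # text -> numeric severity
    return {
        "source": "vendor_b",
        "severity": sev_map[alert["priority"]],
        "indicator": alert["remote_address"],
        "observed_at": alert["event_time"],
    }

# Two tools report the same suspicious IP in two incompatible formats:
a = normalize_vendor_a({"sev": 8, "dst_ip": "198.51.100.7",
                        "ts": "2020-01-01T00:00:00Z"})
b = normalize_vendor_b({"priority": "high", "remote_address": "198.51.100.7",
                        "event_time": "2020-01-01T00:05:00Z"})
print(a["indicator"] == b["indicator"])  # only correlatable after normalization
```

Every new vendor means another normalizer, and every vendor API change means another round of maintenance – work that falls on the defenders themselves.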
What do open standards have to do with any of this?
Imagine a world in which information-security vendors’ products all spoke a common language out of the box. Picture a world in which your security analysts and incident-responders didn’t have to address emerging threats isolated within the silo of your organization, but could leverage the collective intelligence of other security professionals confronting the same emerging threats in other organizations via information-sharing trust groups.
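Open standards give that common language concrete form. As a sketch using nothing beyond the Python standard library, a minimal STIX 2.1 Indicator object – the kind of artifact a trust group could share and any conforming tool could ingest – looks like this (the IP address in the pattern is an RFC 5737 documentation address, used purely for illustration):

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

# A minimal STIX 2.1 Indicator: type, spec_version, id, created, modified,
# pattern, pattern_type, and valid_from are required properties.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected C2 server",
    "pattern": "[ipv4-addr:value = '203.0.113.42']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Because the object is plain JSON with a published schema, it can move between organizations over TAXII and between products from different vendors without any bespoke translation layer.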
This is the world we are building, and it all starts with open standards. Rome wasn’t built in a day, and neither was the Internet. It’s going to take a considerable amount of time and effort to get there. Nevertheless, New Context (along with other firms of similar long-term vision) is committed to building this new world.
We can turn the tide. We can get to a place where our networks aren’t open playgrounds for attackers. Effective security automation, enabled by interoperable tools supporting open standards, is absolutely necessary to free our limited pool of human talent to do what humans do best.
What can you do to help? As your organization issues RFPs, ask vendors whether they support STIX/TAXII and OpenC2. If they don’t, find out where support sits on their roadmap. If a vendor tells you they have no interest in supporting open standards, that’s a clear signal their business model depends on keeping you inside their walled garden, regardless of the impact on your organization’s security posture – take your money elsewhere.
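On the OpenC2 side, the payoff of vendor support is that a response action can be expressed once and understood by any conforming actuator. A minimal sketch of a single OpenC2 “deny” command – again with an RFC 5737 documentation network standing in for a real indicator – might look like:

```python
import json

# A minimal OpenC2 command body: block traffic to a network range.
# "action", "target", and "args" are core elements of the OpenC2
# Language Specification; any conforming firewall actuator should
# understand this same message, regardless of vendor.
command = {
    "action": "deny",
    "target": {
        "ipv4_net": "203.0.113.0/24"
    },
    "args": {
        "response_requested": "ack"
    },
}

print(json.dumps(command))
```

The same command could be dispatched to every capable enforcement point on the network – no per-vendor glue scripts required.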