It’s an established problem that many organisations have too many security tools deployed. Each one requires expertise, integration points and patches, and in many cases can even extend your attack surface. I recently read this article by Rick Howard from Palo Alto Networks* and it calls attention to the fact that platforms which don’t integrate [easily] are likely to fail the test of enterprise deployments and will not be extensible in the long term, minimising the value they bring to the business.
[…] most network defenders. For their entire careers, they have been trained that vendor-in-depth and best-in-breed are golden principles in cybersecurity. When all else fails, follow the golden principles. […]
Ironically, these same network defenders have missed the point advocated by Geer’s monopoly paper. In it, the authors advocate several actions designed to limit the attack surface of the Microsoft operating system platform:
Publish interface specifications to major functional components of its code, both Windows and Office.
Foster development of alternative sources of functionality through an approach comparable to the highly successful “plug and play” technology for hardware components.
Work with consortia of hardware and software vendors to define specifications and interfaces for future developments in a way similar to the Internet Society’s RFC process to define new protocols for the internet.
[…] you will find that adopting a security platform that integrates with other vendors is exactly the same solution.
“News outlets and blogs will frequently compare DDoS attacks by the volume of traffic that a victim receives. Surely this makes some sense, right? The greater the volume of traffic a victim receives, the harder to mitigate an attack – right?
At least, this is how things used to work. An attacker would gain capacity and then use that capacity to launch an attack. With enough capacity, an attack would overwhelm the victim’s network hardware with junk traffic such that they can no longer serve legitimate requests. If your web traffic is served by a server with a 100 Gbps port and someone sends you 200 Gbps, your network will be saturated and the website will be unavailable.
Recently, this dynamic has shifted as attackers have gotten far more sophisticated. The practical realities of the modern Internet have increased the amount of effort required to clog up the network capacity of a DDoS victim – attackers have noticed this and are now choosing to perform attacks higher up the network stack.”
“Attackers can order their Botnets to perform attacks against websites using “Headless Browsers” which have no user interface. Such Headless Browsers work exactly like normal browsers, except that they are controlled programmatically instead of being controlled via a window on a user’s screen.
Botnets can use Headless Browsers to effectively make HTTP requests that load and behave just like ordinary web requests. As this can be done programmatically, they can order bots to repeat these HTTP requests rapidly – effectively taking up the entire capacity of a website, taking it offline for ordinary visitors.”
“For applications to be resilient to DDoS attacks, it is no longer enough to use a large network. A large network must be complemented with tooling that is able to filter malicious Application Layer attack traffic, even when attackers are able to make such attacks look near-legitimate.”