ML accelerates the cyber arms race — we need real security more than ever


Machine learning is en vogue, being applied to many classes of problems. One of them is cybersecurity, where ML is used to find vulnerabilities in code, simulate attacks, and detect when an intruder has breached a system’s defenses. Setting aside that intrusion detection is an admission of defeat (it only comes into play once your system is already compromised!), this sounds like a good development: it helps defenders find weaknesses faster, hopefully before the attacker does.

This rather optimistic view of the role of ML in cybersecurity ignores the fact that the attacker will use the same techniques to find weaknesses faster. Furthermore, let’s assume, optimistically, that ML speeds up the defense (proactively detecting weaknesses, detecting intrusions) as much as the attack (detecting and exploiting weaknesses). This is a pretty big assumption, as the attacker can choose where to attack, while the defender must defend everywhere. But even if this assumption holds, detecting vulnerabilities is only part of the defense: the defender must also remove those vulnerabilities, and that part is not accelerated by ML, as it still requires humans to analyze, modify, test, and deploy programs.

In other words, the “patch” part of the traditional, reactive patch-and-pray cycle of software debugging isn’t accelerated, only the “pray” part. So, rather than strengthening the defender, the net effect of ML is increasing the attacker’s advantage in the cybersecurity arms race.

This is not an argument for stopping the defensive use of ML in cybersecurity. It is an argument that ML is not the technology to win the cybersecurity war — it will, at best, delay the inevitable defeat. That’s still better than doing nothing, but it’s a fatal mistake to think it’s all you need to do.

In this respect, it is bewildering to see the widespread ML-mania everywhere in cybersecurity; for example, the Australian Government’s National Security Challenges for the National Intelligence and Security research grants program mentions ML a lot, but is surprisingly quiet about anything that will prevent attacks in the first place. Other countries don’t seem much better.

ML hasn’t changed the fact that systems will be compromised unless they are secure by design and their critical components operate to specification. Cybersecurity work needs to focus on these fundamental approaches. Anything else buys us at best some breathing space, and is at worst a distraction that creates a fatal illusion of security.

We need real cybersecurity more than ever, especially since the advances in ML shift the battlefield further in favour of the attacker. We need to re-focus on the fundamentals: security-oriented design that enables proof of security enforcement, and implementations that can be proved to match the design. The seL4 microkernel and the work built on it show that this is possible, but as a community we need to continue working on scaling these guarantees up to full, real-world systems. ML won’t do it for us.

Author: Gernot Heiser is the Scientia Professor and John Lions Chair of Operating Systems at the School of Computer Science and Engineering of UNSW Sydney, Australia. His primary occupation is leading research in, and evangelising for, the Trustworthy Systems (TS) Group, which aims at making software systems truly trustworthy, i.e., secure, safe, and dependable. Prime application areas are safety- and security-critical cyber-physical systems such as aircraft, cars, medical devices, critical infrastructure, and national security.

Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors, and do not represent those of ACM SIGBED or its parent organization, ACM.