In the military world, asymmetric warfare pits a large military force against far smaller and irregular opposition, such as guerrillas or other insurgents. Instead of facing off against a clearly visible enemy military unit, you could be surrounded by any number of smaller threats that remain hidden until an unexpected and often unconventional attack comes.
Most crime-fighting forces also operate under asymmetric conditions, where a finite number of police and similar units face any number of criminal threats—with the additional handicap that criminals don’t have to obey laws, rules, and regulations.
In both cases, the resemblance to cybersecurity is striking. Organizations worldwide are also locked in an asymmetric struggle where the attackers could be anywhere, strike anytime, and wreak costly havoc with disproportionately smaller resources. But compared to physical security, the asymmetry is even greater, and current advances in AI are likely to give the attackers even more firepower.
The modern-day defender’s dilemmas
We’ve written about the defender’s dilemma before—the idea that an attacker only has to succeed once while the defender has to succeed every time. This holds especially true for defending against data breaches, where one point of entry might be all it takes to gain a foothold and steal sensitive information. With the overall attack surface of a modern organization potentially spanning thousands of components spread across multiple logical and physical layers, finding one gap is much easier than tightly locking down many sprawling information systems.
Catch me if you can
Compared to the physical world, a small action can have disproportionately large effects in cybersecurity. While cybercriminals often operate in organized groups, even a single person can cause extensive disruption and damage to entire organizations, especially when attacks are performed and amplified via automated botnets.
Adding to the force asymmetry is the relative impunity of attackers. The vast majority of cyberattacks don’t require physical access and are performed remotely, with the attacker operating from another region or even another country. Sure, you can often track down the connection and retrace an attacker’s steps, but cases where an individual is linked to a specific attack, located, arrested, and convicted are vanishingly rare in proportion to the global volume of attacks.
Tracking down the perpetrators becomes even harder when you factor in geopolitics. It’s common for attackers to operate from or via countries that give them free rein to hack organizations and states considered hostile for political reasons. Going back to the military analogy, somebody is taking potshots at you, and there’s nothing you can do to stop them.
Shower you with noise
The other big asymmetry is that defenders have to be ready all the time while also being constrained in their actions. For example, if your application is being pounded by invalid requests that you suspect are probes or attack attempts, you have to be careful and selective with filtering and blocking because you might affect legitimate traffic and hurt the business. Attackers face no such constraints: apart from manual operations that require stealth, they don’t have to worry about inaccuracies, invalid requests, or breaking things, especially when running botnets that deliberately spray randomized traffic to see what sticks.
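To make this constraint concrete, here’s a minimal sketch of selective filtering logic in Python. The `Request` record, counters, and thresholds are all made up for illustration; real WAFs and rate limiters work on far richer signals:

```python
# A minimal sketch of the defender's constraint: block a source only when the
# evidence is overwhelming, and fall back to challenges or plain logging when
# it isn't, so legitimate users aren't caught in the crossfire.
# All names and thresholds here are hypothetical, not a real WAF ruleset.
from dataclasses import dataclass

@dataclass
class Request:
    source_ip: str
    invalid_count: int   # malformed/invalid requests seen from this source
    total_count: int     # total requests seen from this source

def decide(req: Request) -> str:
    """Return 'block', 'challenge', or 'allow' for a suspicious source."""
    error_rate = req.invalid_count / max(req.total_count, 1)
    # Block only when almost all traffic from the source is invalid: for a
    # defender, cutting off one legitimate client can cost more than letting
    # a few probes through.
    if req.invalid_count >= 100 and error_rate > 0.95:
        return "block"
    # Ambiguous sources get a challenge (e.g. a CAPTCHA) instead of a ban.
    if error_rate > 0.5:
        return "challenge"
    # Everything else is allowed but logged for later review.
    return "allow"

print(decide(Request("203.0.113.7", invalid_count=480, total_count=500)))  # block
print(decide(Request("198.51.100.2", invalid_count=3, total_count=200)))   # allow
```

Attackers running a botnet need no such decision tree; to them, every response, valid or not, is useful reconnaissance.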
Cloudflare’s State of Application Security report for 2023 showed that “HTTP anomalies” make up 30% of all HTTP traffic blocked or otherwise mitigated by its WAFs. The sheer volume shows that these are not malformed requests caused by occasional glitches but deliberate attempts to flood servers with invalid traffic. And that’s data from just one provider, covering only the requests that were successfully caught. This is the level of noise that defenders have to contend with around the clock while attackers pick their time and place to strike.
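To give a rough idea of what counts as an “HTTP anomaly,” here’s a simplified Python illustration of protocol-level sanity checks. These generic rules are my own examples, not Cloudflare’s actual detection logic:

```python
# Simplified checks for protocol-level anomalies in HTTP/1.x requests:
# traffic that violates basic spec expectations rather than carrying any
# specific attack payload. Illustrative only.
VALID_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "PATCH"}

def is_http_anomaly(method: str, version: str, headers: dict[str, str]) -> bool:
    if method not in VALID_METHODS:
        return True                        # unknown or garbage method
    if version not in ("HTTP/1.0", "HTTP/1.1"):
        return True                        # malformed or unexpected version
    if version == "HTTP/1.1" and "host" not in {k.lower() for k in headers}:
        return True                        # Host header is mandatory in HTTP/1.1
    if any(len(v) > 8192 for v in headers.values()):
        return True                        # oversized header value, a common probe
    return False

print(is_http_anomaly("GET", "HTTP/1.1", {"Host": "example.com"}))   # False
print(is_http_anomaly("HACK", "HTTP/1.1", {"Host": "example.com"}))  # True
```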
The AI amplifier
Advances in AI technology in the past few years have given powerful new tools to everyone, but I’d argue that so far in cybersecurity, the new AI superpowers have benefited attackers far more than defenders. Again, this is because attackers don’t have to worry about inaccuracies or occasional errors, so researching, preparing, and executing attacks at scale becomes far easier. If you’re asking an LLM for ten possible attack payloads and intend to use them maliciously, you probably won’t mind if only one of them actually works, and you won’t care if another one breaks something or causes data loss.
AI-assisted development is another area where inaccuracies matter far less to attackers than to teams building production applications. LLM-based code assistants further lower the barrier to entry by making it far easier and quicker to develop malware and payloads that might not be perfect but work just well enough for a single attack. Because LLMs deal with natural language, they’ve also been put to use for social engineering, greatly improving the quality and plausibility of phishing and other malicious messages. Once again, even a result that doesn’t make perfect sense can be good enough for one attack.
Apart from text-based tools, cybercriminals have also turned to AI-generated audio and video to amplify their scamming abilities. In the last few years, there have been multiple reports of scams that use AI voice imitation to aid social engineering attacks. Recently, this approach was taken to the next level when voice imitation was combined with deepfake video to spoof an entire video call with a CFO and other company staff, convincing the victim to transfer a large sum of money to the attackers. There are also stories of AI image generation being used to successfully fake IDs in identity verification processes, opening up a whole new avenue for scams in both the digital and physical realms.
For all the hype, and the genuine innovation, it’s best to see AI as a massive amplifier of existing capabilities. In cybersecurity, where asymmetry is baked in, that means AI amplifies the asymmetry itself.
Catching up with the bad guys
The painful reality is that existing LLM-based AI solutions are extremely useful to attackers yet all but useless to defenders, especially when you need to respond in real time. Security teams are being overwhelmed by noise, and AI helps attackers crank up the volume even further. But all is not doom and gloom: for now, AI mostly gives attackers a quantitative rather than a qualitative edge, so working smart and relentlessly cutting down on the noise is the way to keep up.
The key is to truly follow well-defined security best practices and find ways to make them a reality instead of an aspirational goal that can never be attained. Automation is crucial for making this happen, but only where you’re not automating unnecessary steps or acting on uncertain data. While AI can be a great help here, be wary of directly acting on data from LLM-based products, as this always carries some degree of uncertainty and, therefore, noise. For tasks like prioritization, machine learning (ML) approaches can be far more reliable, allowing humans to focus on tasks that make the biggest difference.
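As a toy example of the difference, here’s a sketch of ML-based alert prioritization using scikit-learn. The features, training data, and model choice are all made up for illustration; a real pipeline would train on your own alert history:

```python
# A minimal sketch of ML-assisted triage: train a classifier on historical
# alert outcomes, then rank incoming alerts by predicted probability of being
# a real incident so humans see the likeliest true positives first.
# All data below is fabricated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per alert:
# [severity 0-10, asset criticality 0-10, historical false-positive rate 0-1]
X_history = np.array([
    [9, 8, 0.05],
    [3, 2, 0.90],
    [7, 9, 0.10],
    [2, 1, 0.95],
])
y_history = np.array([1, 0, 1, 0])  # 1 = confirmed incident, 0 = false positive

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

new_alerts = np.array([
    [8, 9, 0.08],  # severe alert on a critical asset from a reliable rule
    [4, 3, 0.80],  # noisy rule firing on a low-value asset
])
scores = model.predict_proba(new_alerts)[:, 1]  # probability of a real incident
for alert, score in sorted(zip(new_alerts.tolist(), scores), key=lambda p: -p[1]):
    print(f"priority {score:.2f}: {alert}")
```

The point is not the specific model but the property: a score derived deterministically from your own history is far less noisy to act on than free-form LLM output.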
The asymmetry in cybersecurity is real, but if we can stop AI from making so much noise, it may help us redress the balance.