The Future of Prompt Injections

Some days, it feels like every application and system out there is getting new functionality based on large language models (LLMs). As chatbots and other AI assistants get more and more access to data and software, it’s vital to understand the security risks involved—and prompt injections are considered the number one LLM threat.

In his ebook Prompt Injection Attacks on Applications That Use LLMs, Invicti’s Principal Security Researcher, Bogdan Calin, presents an overview of known prompt injection types. He also looks at possible future developments and potential mitigations. Before you dive into the ebook with its many practical examples, here are a few key points highlighting why prompt injections are such a big deal.

Magic words that can hack your apps

Prompt injections are fundamentally different from typical computer security exploits. Before the LLM explosion, application attacks were typically aimed at getting the application to execute malicious code supplied by the attacker. Hacking an app required the right code and a way to slip it through. With LLMs and generative AI in general, you’re communicating with the machine not using precise computer instructions but through natural language. And almost like a magic spell, merely using the right combination of words can have dramatic effects.

Far from being the self-aware thinking machines that some chatbot interactions may suggest, LLMs are merely very sophisticated word generators. They process instructions in a natural language and perform calculations across complex internal neural networks to build up a stream of words that, hopefully, makes sense as a response. They don’t understand words but rather respond to a sequence of words with another sequence of words, leaving the field wide open to “magic” phrases that cause the model to generate an unexpected result. These are prompt injections—and because they’re not well-defined computer code, you can’t hope to find them all.

Understand the risks before letting an LLM near your systems

Unless you’ve been living under a rock, you have most likely read many stories about how AI will revolutionize everything, from programming to creative work to the very fabric of society. Some go so far as to compare it to the Industrial Revolution as an incoming jolt for modern civilization. On the other end of the spectrum are all the voices that AI is getting too powerful, and unless we limit and regulate its growth and capabilities, bad things will happen soon. Slightly lost in the hype and the usual good vs. evil debates is the basic fact that generative AI is non-deterministic, throwing a wrench into everything we know about software testing and security.

For anyone involved in building, running, or securing software, the key thing is to understand both the potential and the risks of LLM-backed applications, especially as new capabilities are added. Before you integrate an LLM into your system or add an LLM interface to your application, weigh the pros of new capabilities against the cons of increasing your attack surface. And again, because you’re dealing with natural language inputs, you need to somehow look out for those magic words—whether directly delivered as text or hidden in an image, video, or voice message.

Keep calm and read the ebook

We know how to detect code-based attacks and deal with code vulnerabilities. If you have an SQL injection vulnerability that allows attackers to slip database commands into your app, you rewrite your code to use parameterized queries, and you’re usually good. We also do software testing to make sure the app always behaves in the same way given specified inputs and conditions. But as soon as your application starts using an LLM, all bets are off for predictability and security.

For better or worse, the rush to build AI into absolutely everything shows no signs of slowing down and will affect everyone in the tech industry and beyond. The pressure to use AI to increase efficiency in organizations is real, making it that much more important to understand the risk that prompt injections already pose—and the far greater risks they could pose in the future.


Read the ebook: Prompt Injection Attacks on Applications That Use LLMs

We provide outside-the-box Solutions

ERP, CRM, ON-PREMISE SOFTWARE, VOIP, and more...

Please fill in your details and we will get back to you ASAP.