Jump to navigation Skip to main content Jump to footer
Prof. Dr. Konrad Rieck | Professor of Computer Science at TU Berlin and BIFOLD © Kevin Fuchs FOTOGRAFIE

Agentic AI: Between FOMO and security risks

In psychology, we talk about FOMO, the fear of missing out on something important. With agentic AI, we are on the cusp of the next leap in intelligent technology, one that must not be missed. By equipping large language models with their own tools and allowing them to navigate our digital world independently, we hope to unlock new realms of productivity. In these realms, not only will emails be answered and appointments coordinated entirely automatically, but AI agents will also look after customers, carry out research and develop new products. The very idea of handing these tasks over to an AI is tempting and fires the imagination.

It remains to be seen where and how these expectations will be met. One thing is already clear, however: fundamental security issues remain unresolved when it comes to agent-based AI. For decades, the principle in computer science has been to separate data and instructions in order to minimise the attack surface. AI agents simply throw this principle out the window. They never know for certain whether they are currently reading data or receiving new instructions. A ‘Wait, stop, I’m speaking now!’ is both a quote and a potential instruction. So far, we have not found any reliable mechanisms to prevent this. Thus, a manipulated email, a tampered document or a seemingly harmless website can throw AI agents off balance and trigger harmful actions.

Anyone who lets an AI agent loose on their emails, for example, is like a homeowner who hands a key to a stranger and hopes that the stranger will only water the flowers. The agent has access to sensitive data, can communicate on our behalf and make decisions. We have no guarantee that it will stick to its tasks. Prompt injection – the targeted injection of instructions via manipulated content – can turn a helpful assistant into an accomplice in the blink of an eye, causing damage as an attacker’s extended arm.

These security issues cannot be ignored. Instead of FOMO, we should be guided by conscious deliberation. Which use cases justify a security risk? Where can agents be deployed in such a way that errors remain correctable? What damage are we prepared to accept? None of these questions is easy to answer, and our hopes can quickly be dashed. It will take considerable effort in both research and practice to reconcile the potential and security of agent-based AI. Yet responsible progress also means utilising technology when we have mastered it — and not when we fear missing out on it.

March 2026