— Concepts
Prompt Injection
An attack where malicious input overrides the developer's intended LLM instructions.
Related terms: Indirect prompt injection · Jailbreak
What is Prompt Injection?
Prompt injection is the AI-era analogue of SQL injection. An attacker embeds instructions inside data the LLM reads — a webpage, a PDF, an email, a tool result — that hijack the model into doing something the developer never intended: exfiltrating data, calling unauthorised tools, or bypassing safety rules. It tops the OWASP Top 10 for LLM Applications and remains the biggest unsolved security risk in agentic AI. Defences include input sanitisation, system-prompt locking, sandboxed tool execution, and human-in-the-loop confirmation for sensitive actions.
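Two of those defences can be sketched in a few lines. This is a hypothetical illustration, not a real framework's API: `wrap_untrusted`, `guard_tool_call`, and `SENSITIVE_TOOLS` are names invented for this example, and real deployments layer these checks with model-side and platform-side controls.

```python
# Illustrative sketch of two prompt-injection defences, assuming a
# simple tool-calling agent loop. All names here are hypothetical.

SENSITIVE_TOOLS = {"send_email", "delete_file", "transfer_funds"}

def wrap_untrusted(text: str) -> str:
    """Input sanitisation: delimit untrusted data so the system prompt
    can instruct the model to treat everything inside the markers as
    data, never as instructions to follow."""
    return f"<untrusted_data>\n{text}\n</untrusted_data>"

def guard_tool_call(tool: str, args: dict, confirm) -> bool:
    """Human-in-the-loop confirmation: sensitive tools only run after
    an explicit approval callback returns True."""
    if tool in SENSITIVE_TOOLS:
        return confirm(f"Agent wants to call {tool}({args}). Allow?")
    return True  # low-risk tools run without confirmation

# Usage: a confirmation callback that always denies (e.g. headless mode)
blocked = guard_tool_call(
    "send_email", {"to": "attacker@evil.example"}, confirm=lambda msg: False
)
print(blocked)  # False: the sensitive call was refused
```

Neither defence is complete on its own — delimiters can be mimicked by a determined attacker — which is why confirmation for irreversible actions is the backstop.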
— Related
Terms connected to Prompt Injection
Concepts
AI Agent
An AI system that decides its own next action and takes multi-step actions autonomously.
Concepts
Agentic AI
AI systems that plan, decide, and act on multi-step goals with minimal human oversight.
Techniques
System Prompt
The hidden instruction that shapes an AI's persona, rules, and behaviour.
Concepts
Tool Use
An AI's ability to call external tools (APIs, code, search) instead of just generating text.
From definitions to deployed projects.
Knowing what a term means is step one. ONROL's AI Generalist track gets you shipping projects that use it.
