ONROL — AI Execution School


    Prompt Injection

    An attack where malicious input overrides the developer's intended LLM instructions.

    Related terms: Indirect prompt injection · Jailbreaking (a related but distinct attack that targets the model's safety training via the user's own prompt)

    What is Prompt Injection?

    Prompt injection is the AI-era equivalent of SQL injection. An attacker embeds instructions inside data the LLM reads — a webpage, a PDF, an email, a tool result — that hijack the model into doing something the developer never intended (exfiltrating data, calling unauthorised tools, bypassing safety rules). It is widely regarded as the top unsolved security risk in agentic AI. Defences include input sanitisation, system-prompt locking, sandboxed tool execution, and human-in-the-loop confirmation for sensitive actions.
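    One of the defences above, human-in-the-loop confirmation, can be sketched in a few lines. The names here (SENSITIVE_TOOLS, dispatch_tool) are illustrative, not a real agent-framework API: the idea is simply that any tool call the model requests must pass through a gate that asks a human before sensitive actions run.

    ```python
    # Minimal sketch of a human-in-the-loop gate for sensitive tool calls.
    # SENSITIVE_TOOLS and dispatch_tool are hypothetical names for illustration.

    SENSITIVE_TOOLS = {"send_email", "delete_file", "execute_shell"}

    def dispatch_tool(name, args, confirm=input):
        """Run a model-requested tool call, requiring explicit human
        approval for tools that can cause harm if hijacked."""
        if name in SENSITIVE_TOOLS:
            answer = confirm(f"Model wants to call {name}({args!r}). Allow? [y/N] ")
            if answer.strip().lower() != "y":
                # Injected instructions cannot force the action through
                return {"status": "blocked", "tool": name}
        # Actual tool execution would go here; the sketch just echoes the call
        return {"status": "executed", "tool": name, "args": args}
    ```

    Even if injected text convinces the model to request `send_email`, the call is held until a human approves it, which is why confirmation gates are recommended for irreversible or data-exfiltrating actions.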

    From definitions to deployed projects.

    Knowing what a term means is step one. ONROL's AI Generalist track gets you shipping projects that use it.
