— Concepts

    Hallucination

    When an AI confidently generates information that isn't true.

    What is Hallucination?

    Hallucination occurs when an LLM produces output that sounds confident but is not factually accurate: invented citations, wrong numbers, fabricated quotes. Common mitigations: ground responses in retrieved documents (RAG), require citations, lower the sampling temperature, and instruct the model to express uncertainty when it is unsure. It remains a top open problem in 2026.
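
    To make the grounding-plus-citation pattern concrete, here is a minimal sketch in Python using the OpenAI SDK. The model name, the retrieve() helper, and the exact prompt wording are illustrative assumptions, not a prescribed recipe; swap in your own retrieval backend and model.

    # Minimal sketch: ground answers in retrieved documents (RAG),
    # require citations, lower temperature, and allow "I don't know".
    # Assumes the OpenAI Python SDK; retrieve() is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def retrieve(query: str) -> list[str]:
        """Placeholder: replace with your vector store or search index."""
        return ["Doc 1: ...", "Doc 2: ..."]

    def grounded_answer(question: str) -> str:
        context = "\n\n".join(retrieve(question))
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative; any chat model works
            temperature=0.2,       # lower temperature: fewer creative leaps
            messages=[
                {"role": "system", "content": (
                    "Answer ONLY from the provided context. "
                    "Cite the document you used. If the context does not "
                    "contain the answer, say 'I don't know.'"
                )},
                {"role": "user", "content": (
                    f"Context:\n{context}\n\nQuestion: {question}"
                )},
            ],
        )
        return response.choices[0].message.content

    print(grounded_answer("What did the Q3 report say about revenue?"))

    None of these steps eliminates hallucination outright; grounding constrains what the model can claim, citations make claims checkable, and the "I don't know" instruction gives the model a sanctioned alternative to guessing.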

    — Apply this

    From definitions to deployed projects.

    Knowing what a term means is step one. ONROL's AI Generalist track gets you shipping projects that use it.
