— Concepts

    Inference

    The process of running a trained AI model to get a prediction or output.

    What is Inference?

    Inference is what happens when you call an LLM: the model runs on your input to produce an output. It is distinct from training, the phase in which the model's parameters are learned. 'Inference cost' refers to the per-call cost of running the model, and companies like Groq specialise in fast inference.
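
    The training/inference split above can be sketched in a few lines. This is a purely illustrative toy (a one-parameter linear model, not an LLM): `train` pays a one-off cost to learn parameters, and `infer` is the cheap per-call step you pay for each time.

    ```python
    # Toy sketch: "training" learns parameters once; "inference" reuses
    # them on every call. The tiny linear model here is illustrative only.

    def train(examples):
        # Fit y = w * x by averaging observed ratios (toy "training").
        w = sum(y / x for x, y in examples) / len(examples)
        return {"w": w}  # the trained model: just its learned parameters

    def infer(model, x):
        # Inference: run the already-trained model on a new input.
        return model["w"] * x

    model = train([(1, 2), (2, 4), (3, 6)])  # one-off training cost
    print(infer(model, 10))                  # per-call inference cost -> 20.0
    ```

    A real LLM works the same way at this level: training produces the weights once, and every API call you make afterwards is an inference pass over those fixed weights.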

    — Apply this

    From definitions to deployed projects.

    Knowing what a term means is step one. ONROL's AI Generalist track gets you shipping projects that put it to use.


    Reserve Free Masterclass