— Concepts
Context Window
The maximum text an LLM can read in one prompt.
What is a Context Window?
The context window is the maximum number of tokens an LLM can process at once, covering both the input prompt and the generated response. Modern frontier models offer context windows from 200k to 2M tokens (Claude 3.7: 200k; Gemini 1.5 Pro: 2M). A larger window lets the model read more of your documents in a single prompt, often without needing RAG.
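Since the window covers input and output together, it helps to sanity-check prompt size before sending a request. The sketch below is illustrative only: it uses the rough "1 token ≈ 0.75 words" heuristic from the Tokens entry rather than a real tokenizer, and the function names and the 200k default are assumptions for the example.

```python
# Rough token budgeting using the ~0.75 words-per-token heuristic.
# Real tokenizers give exact counts; this is only an estimate.

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~1 token per 0.75 words."""
    words = len(text.split())
    return round(words / 0.75)

def fits_context(prompt: str, expected_output_tokens: int,
                 window: int = 200_000) -> bool:
    """Check that prompt plus expected output fit the window.

    The context window is shared by input AND output tokens,
    so both must be counted against the same budget.
    """
    return estimate_tokens(prompt) + expected_output_tokens <= window

prompt = "Summarize the attached report in three bullet points."
print(estimate_tokens(prompt))                            # ~11 tokens
print(fits_context(prompt, expected_output_tokens=500))   # True
```

In practice you would swap `estimate_tokens` for the model provider's tokenizer, but the budgeting logic (input tokens + output tokens ≤ window) stays the same.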
— Related
Terms connected to Context Window
Concepts
Tokens: the chunks of text LLMs process, roughly 0.75 words each.

Models
LLM (Large Language Model): an AI model trained on huge amounts of text that can understand and generate human language.

Techniques
RAG (Retrieval-Augmented Generation): giving an AI access to your private documents so it can answer questions about them.