— Concepts
MLOps
DevOps for machine learning — versioning, deploying, and monitoring AI models in production.
Also known as: LLMOps · AIOps · ML Engineering
What is MLOps?
MLOps (Machine Learning Operations) is the practice of running ML and AI systems in production reliably. It covers data pipelines, model versioning, experiment tracking, deployment, monitoring, and retraining. Tools: MLflow, Weights & Biases, BentoML, Kubeflow, NVIDIA AI Enterprise. In 2026, MLOps has bifurcated: classical-ML MLOps (still about training pipelines) and LLMOps (prompt management, eval pipelines, RAG observability). ONROL covers the LLMOps slice that matters for shipping AI products without an ML PhD.
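Experiment tracking is one of the core MLOps workflows the tools above automate. As a minimal sketch of the idea (pure stdlib Python with a hypothetical file layout; real tools like MLflow or Weights & Biases add UIs, artifact stores, and model registries on top), a tracker just records each run's parameters and metrics so results stay comparable and reproducible:

```python
import json
import time
import uuid
from pathlib import Path

class RunTracker:
    """Minimal experiment tracker: writes one JSON record per run.
    Illustrative only -- not the API of any real tracking library."""

    def __init__(self, experiment: str, root: str = "runs"):
        self.run_id = uuid.uuid4().hex[:8]          # unique ID per run
        self.dir = Path(root) / experiment / self.run_id
        self.dir.mkdir(parents=True, exist_ok=True)
        self.record = {
            "run_id": self.run_id,
            "start": time.time(),
            "params": {},    # hyperparameters, fixed per run
            "metrics": [],   # time-series of measured values
        }

    def log_param(self, key, value):
        self.record["params"][key] = value

    def log_metric(self, key, value, step=0):
        self.record["metrics"].append({"key": key, "value": value, "step": step})

    def finish(self):
        out = self.dir / "run.json"
        out.write_text(json.dumps(self.record, indent=2))
        return out

# Usage: track one (fake) training run
tracker = RunTracker("sentiment-model")
tracker.log_param("learning_rate", 3e-4)
tracker.log_metric("val_accuracy", 0.91, step=1)
path = tracker.finish()
```

The same record-per-run pattern underlies model versioning and retraining decisions: when production metrics drift, you can trace any deployed model back to the exact parameters and data that produced it.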
— Related
Terms connected to MLOps
Concepts
AI Evaluation (Eval)
The discipline of measuring whether an AI system is actually doing the job correctly.
Techniques
Fine-Tuning
Adjusting a pre-trained AI model on your specific data to change its behaviour.
Concepts
Applied AI
The practical use of AI tools to ship products and outcomes.
Concepts
AI Agent
An AI system that decides its own next steps and acts autonomously across multi-step tasks.
From definitions to deployed projects.
Knowing what a term means is step one. ONROL's AI Generalist track gets you shipping projects that use it.
