LoRA (Low-Rank Adaptation)
An efficient way to fine-tune large models by training only a small set of new parameters.
What is LoRA (Low-Rank Adaptation)?
LoRA (Low-Rank Adaptation) is a fine-tuning technique that freezes a base model's weights and injects small trainable low-rank matrices into selected layers, typically the attention projections. Because only those matrices are trained, often well under 1% of the model's parameters, fine-tuning takes far less GPU memory and compute than full fine-tuning, with comparable quality on most tasks. It is the standard technique for fine-tuning open-weight models like Llama. ONROL covers LoRA in the Orchestrator track.
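The core idea can be sketched in a few lines. This is an illustrative minimal implementation, not the source's code: a linear layer whose frozen weight `W` is adapted by a trainable low-rank product `A @ B`, scaled by `alpha / r` (names follow common LoRA conventions; the class and dimensions are assumptions for the example).

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: y = x @ (W + (alpha/r) * A @ B).
    W is frozen; only the small factors A and B would be trained."""

    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_in, d_out))     # frozen base weight
        self.A = rng.normal(size=(d_in, r)) * 0.01  # trainable down-projection
        self.B = np.zeros((r, d_out))               # zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus the low-rank update path.
        return x @ self.W + self.scale * (x @ self.A @ self.B)

layer = LoRALinear(d_in=1024, d_out=1024, r=4)
# With B initialized to zero, the adapted layer starts out
# identical to the frozen base model.
x = np.ones((1, 1024))
assert np.allclose(layer.forward(x), x @ layer.W)

# Trainable parameters: r*(d_in + d_out) vs d_in*d_out for the full weight.
trainable = layer.A.size + layer.B.size   # 8,192
frozen = layer.W.size                     # 1,048,576
print(f"trainable fraction: {trainable / frozen:.4%}")
```

Zero-initializing `B` is the standard trick: training begins from the unmodified base model and the low-rank update grows from there. Here roughly 0.8% of the layer's parameters are trainable; at larger model sizes the fraction shrinks further.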