Fine-tuning
LLM & Language Models
The process of further training a pre-trained AI model on specific data to customize it for a particular task, domain, or style.
Fine-tuning takes a general-purpose AI model and specializes it. Think of it like this: a medical school graduate (pre-trained model) knows general medicine, but residency training (fine-tuning) makes them a cardiologist. The base knowledge remains, but the model becomes much better at a specific thing.
There are several levels of fine-tuning. Full fine-tuning updates all model parameters (expensive, and it requires lots of data). LoRA and QLoRA freeze the base model and train only small low-rank adapter matrices alongside it (much cheaper, and workable with less data). Prompt tuning goes further still, learning soft prefix tokens for the input while leaving the model weights untouched.
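The LoRA idea above can be sketched in a few lines. This is a minimal illustration, not any real model: the matrix sizes, rank, and initialization are made up for demonstration, and only the adapter factors A and B would be trained.

```python
import numpy as np

# Minimal sketch of LoRA: instead of updating the full weight matrix W
# (d_out x d_in), learn a low-rank update B @ A and add it on.
# Dimensions and rank here are illustrative, not from any real model.
rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

W = rng.normal(size=(d_out, d_in))        # frozen pre-trained weights
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))               # zero-initialized, so the
                                          # adapter starts as a no-op

def forward(x):
    # Effective weights are W + B @ A; only A and B get gradient updates.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what LoRA trains instead
print(f"full fine-tune params: {full_params}, LoRA params: {lora_params}")
```

With these toy dimensions, LoRA trains 512 parameters versus 4,096 for full fine-tuning, and the ratio gets far more dramatic at real model scale, which is why it works with modest data and hardware.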
Companies use fine-tuning to create AI that speaks their brand voice, understands their domain terminology, or follows specific workflows. OpenAI, Anthropic, and others offer fine-tuning APIs, and open-source models like Llama can be fine-tuned locally.
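For the hosted fine-tuning APIs mentioned above, training data is commonly supplied as chat-formatted JSONL, one example per line. The sketch below shows that shape; the brand-voice content is invented for illustration, and exact field requirements vary by provider.

```python
import json

# Illustrative chat-formatted JSONL training data of the kind many
# fine-tuning APIs accept. The example content here is made up.
examples = [
    {"messages": [
        {"role": "system", "content": "You write in Acme Corp's friendly brand voice."},
        {"role": "user", "content": "Draft a one-line product teaser."},
        {"role": "assistant", "content": "Meet the gadget that makes Mondays feel like Fridays."},
    ]},
]

# Serialize: each line of the upload file is one standalone JSON object.
jsonl = "\n".join(json.dumps(e) for e in examples)

# Sanity-check the round trip before uploading anywhere.
records = [json.loads(line) for line in jsonl.splitlines()]
print(f"{len(records)} training example(s) ready")
```

Each line pairs a prompt with the response style you want the model to learn; in practice you would write hundreds or thousands of such examples before submitting a job.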
Real-World Example
Many AI tools on Coda One were built by fine-tuning base models — Jasper fine-tunes for marketing copy while Freed AI fine-tunes for medical documentation.
FAQ
What is Fine-tuning?
The process of further training a pre-trained AI model on specific data to customize it for a particular task, domain, or style.
How is Fine-tuning used in practice?
Many AI tools on Coda One were built by fine-tuning base models — Jasper fine-tunes for marketing copy while Freed AI fine-tunes for medical documentation.
What concepts are related to Fine-tuning?
Key related concepts include LoRA (Low-Rank Adaptation), Pre-training, Transfer Learning, Training Data. Understanding these together gives a more complete picture of how Fine-tuning fits into the AI landscape.