Fine-Tuning
Definition
The process of taking a pre-trained model and further training it on a smaller, task-specific dataset to adapt it for a particular use case.
Fine-tuning is a key transfer-learning technique that lets organizations customize foundation models for specific needs without training from scratch. The process typically involves training on hundreds to millions of task-specific examples, updating either all model weights (full fine-tuning) or only a small subset (parameter-efficient fine-tuning methods such as LoRA and QLoRA). Instruction fine-tuning teaches models to follow user commands, while domain fine-tuning adapts models to specialized fields such as medicine or law. Fine-tuning can dramatically improve performance on specific tasks while requiring only a fraction of the compute used for pre-training. It has become a major service offering from AI companies, with platforms from OpenAI and Anthropic, as well as open-source tools, making it increasingly accessible.
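The contrast between full and parameter-efficient fine-tuning can be illustrated with a minimal NumPy sketch of the LoRA idea: the pre-trained weight matrix is frozen, and only a low-rank update is trained. The dimensions, rank, and scaling factor below are illustrative assumptions, not values from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8           # rank r is much smaller than the layer size
alpha = 16.0                           # LoRA scaling hyperparameter (assumed value)

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank down-projection
B = np.zeros((d_out, r))               # trainable up-projection, initialized to zero

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but W itself is never updated.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

full_params = W.size                   # parameters updated by full fine-tuning
lora_params = A.size + B.size          # parameters updated by LoRA
print(f"LoRA trains {lora_params} params vs {full_params} for full fine-tuning")

# Because B starts at zero, the adapted layer initially matches the
# pre-trained layer exactly; training then moves only A and B.
assert np.allclose(forward(x), W @ x)
```

Here LoRA trains roughly 3% of the layer's parameters, which is why adapters can be fine-tuned on modest hardware while the base model stays fixed.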
Related Terms
Foundation Model
A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks...
Pre-Training
The initial phase of training a foundation model on a large, general-purpose dataset before it is fine-...
RLHF (Reinforcement Learning from Human Feedback)
A training technique where a language model is aligned with human preferences by training a reward model...
Transfer Learning
A technique where a model trained on one task is reused or adapted for a different but related task,...