Fine-tuning
Fine-tuning is the process of further training an existing AI model on specific data to improve performance for a narrow use case.
Also known as: model fine-tuning, model tuning
Fine-tuning means continuing the training of an already-trained AI model on domain-specific data, typically several thousand examples from your own business, so that the model improves at the specific use case you need. The result is a model that understands industry terminology, follows your business's writing style, or classifies documents according to your own categories. Fine-tuning is powerful but also expensive and time-consuming. For most businesses, RAG or good prompt engineering is sufficient; fine-tuning becomes relevant when those approaches do not deliver the precision you need, or when the task demands strict style consistency.
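Much of the practical work in fine-tuning is preparing the training examples. As a minimal sketch, the snippet below converts a few labeled business documents into JSONL records in the chat-style format that several fine-tuning services accept; the documents, categories, and system instruction are hypothetical placeholders, not data from a real workflow.

```python
import json

# Hypothetical (document text, category) pairs standing in for the several
# thousand business examples a real fine-tuning dataset would contain.
examples = [
    ("Invoice for Q3 consulting services, due in 30 days.", "invoice"),
    ("Please find attached the signed NDA for our partnership.", "contract"),
    ("Your order #1042 has shipped and will arrive Friday.", "shipping"),
]

def to_training_record(text, label):
    """Wrap one (document, category) pair as a chat-style training example."""
    return {
        "messages": [
            {"role": "system", "content": "Classify the document into a category."},
            {"role": "user", "content": text},
            {"role": "assistant", "content": label},
        ]
    }

records = [to_training_record(text, label) for text, label in examples]

# Fine-tuning services typically expect one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(record) for record in records)
print(jsonl.splitlines()[0])
```

The resulting file is what you would upload to start a fine-tuning job; the training itself then runs on the provider's or your own infrastructure.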
Related terms
- LLM (Large Language Model) — An LLM is a language model trained on enormous volumes of text that can generate, summarise, and analyse text in a human-like way.
- Prompt engineering — Prompt engineering is the craft of formulating instructions to a language model so it returns consistent, precise, and useful answers.
- RAG (Retrieval-Augmented Generation) — RAG is a technique where a language model answers based on the business's own documents — instead of only its general training.