
Foundation Model

Definition

A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks through fine-tuning or prompting.

The term "foundation model" was coined by Stanford researchers in 2021, in the report "On the Opportunities and Risks of Foundation Models," to describe the emerging paradigm of large-scale pre-trained models. Unlike traditional models built for a single task, foundation models such as GPT-4, Claude, Gemini, and Llama serve as a base that can be adapted to countless applications through fine-tuning or prompting. They are trained on diverse, internet-scale datasets and develop broad capabilities, including language understanding, reasoning, and world knowledge. Foundation models also exhibit emergent abilities, such as in-context learning, that appear only at sufficient scale. The foundation model paradigm has centralized AI development around a handful of large models, raising questions about concentration of power and the homogenization of AI systems.
