Diffusion Model

Definition

A generative model that learns to create data by gradually denoising a random noise signal, reversing a process that progressively adds noise to training data.

Diffusion models work by defining a forward process that gradually corrupts data with Gaussian noise over many steps, and then training a neural network to reverse this process step by step. During generation, the model starts with pure random noise and iteratively denoises it into a coherent output. This approach produces remarkably high-quality images with better diversity and training stability compared to GANs. Diffusion models power leading image generators like DALL-E 3, Midjourney, and Stable Diffusion. They have been extended to video generation (Sora), audio synthesis, and 3D object creation. Techniques like classifier-free guidance and latent diffusion have made them both more controllable and more computationally efficient.
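The forward and reverse processes described above can be sketched in a few lines. This is a toy illustration, not a full implementation: it assumes the standard DDPM closed-form noising formula, a linear beta schedule with illustrative values, and it takes the true noise as a stand-in for a trained network's noise prediction.

```python
import numpy as np

# Linear noise schedule (a common choice; the endpoint values are illustrative)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product: how much signal survives to step t

def q_sample(x0, t, eps):
    """Forward process: noise a clean sample x0 to timestep t in closed form."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def p_sample_step(x_t, t, predicted_eps, rng):
    """One reverse (denoising) step, given a noise prediction for timestep t."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * predicted_eps) / np.sqrt(alphas[t])
    if t > 0:
        # All but the final step add a small amount of fresh noise
        mean += np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)          # stand-in for a data sample (e.g. image pixels)
eps = rng.standard_normal(8)
x_noisy = q_sample(x0, T - 1, eps)   # at the last step this is almost pure noise
# Using the true eps here mimics a perfect noise predictor for one reverse step:
x_prev = p_sample_step(x_noisy, T - 1, eps, rng)
```

During training, the network sees `q_sample(x0, t, eps)` for random `t` and is optimized to predict `eps`; during generation, `p_sample_step` is applied repeatedly from `t = T - 1` down to `t = 0`, starting from pure Gaussian noise.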
