Diffusion Model
Also known as: Diffusion, Denoising Diffusion
A generative AI architecture that creates images by learning to reverse a gradual noising process, powering systems like Stable Diffusion and DALL-E.
Diffusion models generate images by learning to reverse a process that gradually adds noise to images—starting from pure noise and refining toward a coherent image.
How It Works
Training:
- Take real images
- Gradually add noise until only pure static remains
- Train the model to predict and remove the noise at each step (a minimal training step is sketched below)
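A minimal sketch of that training step in PyTorch, assuming a noise-prediction network `eps_model(x_t, t)` and the linear beta schedule from the DDPM paper; all names here are illustrative, not from any particular codebase:

```python
import torch

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (DDPM)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)   # cumulative signal fraction ᾱ_t

def training_step(eps_model, x0):
    """One DDPM training step: corrupt clean images x0, predict the added noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))                        # random timestep per image
    eps = torch.randn_like(x0)                           # Gaussian noise to add
    ab = alpha_bar[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps       # closed-form forward process
    eps_pred = eps_model(x_t, t)                         # model predicts the noise
    return torch.nn.functional.mse_loss(eps_pred, eps)   # "simple" loss from Ho et al.
```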
Generation:
- Start with random noise
- Iteratively denoise, guided by the text prompt
- End with a coherent image matching the description (see the sampling sketch below)
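A sketch of the corresponding DDPM sampling loop, reusing the hypothetical `eps_model` and schedule tensors from the training sketch above; text conditioning is folded into the model call:

```python
@torch.no_grad()
def sample(eps_model, shape):
    """DDPM ancestral sampling: start from pure noise, denoise step by step."""
    x = torch.randn(shape)                        # pure Gaussian noise
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t)
        eps_pred = eps_model(x, t_batch)          # predicted noise at this step
        # Posterior mean: strip the predicted noise, then rescale
        x = (x - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps_pred) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)  # re-inject scheduled noise
    return x
```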
Why It Works
By learning the reverse of destruction, the model learns the structure of images: what makes an image look like a “cat” or a “sunset.”
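In DDPM notation this has a precise form: the noised image at step t is available in closed form, and training reduces to predicting the noise ε:

```latex
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\,I\big),
\qquad \bar\alpha_t = \prod_{s=1}^{t}(1-\beta_s)

L_\text{simple} = \mathbb{E}_{x_0,\,\epsilon,\,t}\Big[\big\|\epsilon - \epsilon_\theta\big(\sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\ t\big)\big\|^2\Big]
```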
Variants
- DDPM: the original denoising diffusion probabilistic model (Ho et al., 2020)
- Latent Diffusion: runs diffusion in a compressed latent space learned by an autoencoder, far cheaper than pixel-space diffusion
- Stable Diffusion: open-source latent diffusion for text-to-image generation (usage sketch below)
- SDXL: a larger Stable Diffusion variant producing higher-resolution images
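As a usage example, a text-to-image call with the Hugging Face diffusers library; the model id and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained latent diffusion pipeline (weights download on first use)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The pipeline encodes the prompt, denoises in latent space, then decodes to pixels
image = pipe("a cat watching a sunset", num_inference_steps=30).images[0]
image.save("cat_sunset.png")
```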
Advantages
- High-quality, diverse outputs
- Fine control through guidance (see the guidance sketch after this list)
- Can be conditioned on various inputs
- More stable training than GANs
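“Fine control through guidance” usually means classifier-free guidance: the model is evaluated with and without the prompt, and the difference between the two predictions is amplified. A sketch, assuming a variant of the earlier hypothetical `eps_model` that accepts an optional conditioning argument:

```python
def guided_noise(eps_model, x_t, t, cond, guidance_scale=7.5):
    """Classifier-free guidance: amplify the direction the prompt pulls in."""
    eps_uncond = eps_model(x_t, t, cond=None)  # prediction without the prompt
    eps_cond = eps_model(x_t, t, cond=cond)    # prediction with the prompt
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

Higher guidance scales follow the prompt more literally at the cost of diversity; lower scales give looser, more varied outputs.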