GANs (Generative Adversarial Networks)
An AI architecture where two neural networks compete — one generates content, one judges it — pushing each other to improve. Largely superseded by diffusion models for image generation.
GANs consist of two neural networks: a generator that creates fake data (images, audio, text) and a discriminator that tries to distinguish fake from real. They train simultaneously — the generator gets better at fooling the discriminator, and the discriminator gets better at detecting fakes.
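The adversarial loop described above can be sketched in a toy setting. The following is a minimal, hypothetical illustration (not a production GAN): the "generator" is a one-parameter-pair linear model that maps noise to samples, the "discriminator" is a logistic classifier, and both are updated by hand-derived gradients on the standard adversarial objectives. Real GANs use deep networks and an optimizer, but the alternating-update structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data the generator must learn to imitate: samples from N(3, 0.5).
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator: x = w_g * z + b_g, with noise z ~ N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator: p(real) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.0, 0.0

lr, n = 0.01, 64
for step in range(2000):
    # --- Discriminator step: ascend E[log D(real)] + E[log(1 - D(fake))] ---
    z = rng.normal(size=n)
    fake = w_g * z + b_g
    real = real_batch(n)
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b_d += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- Generator step: ascend E[log D(fake)] (non-saturating loss),
    # i.e. get better at fooling the current discriminator. ---
    z = rng.normal(size=n)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    g = (1 - d_fake) * w_d          # d log D(fake) / d fake, by the chain rule
    w_g += lr * np.mean(g * z)
    b_g += lr * np.mean(g)

# After training, generated samples should cluster near the real mean of 3.
print(w_g, b_g)
```

Note the ordering: each iteration first improves the discriminator against the current generator, then improves the generator against the updated discriminator — the "simultaneous" training the text describes is in practice this alternation.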
GANs were the dominant image generation technology before diffusion models. StyleGAN produced strikingly realistic human faces, and GANs powered early deepfake technology. However, GANs are notoriously difficult to train (mode collapse, training instability) and less controllable than diffusion models.
While diffusion models have largely replaced GANs for text-to-image generation, GANs are still used for specific applications like image super-resolution, video frame interpolation, and real-time style transfer.
Real-World Example
The 'This Person Does Not Exist' website that generates realistic human faces uses a GAN — though today most AI image generators have switched to diffusion models.
FAQ
What concepts are related to GANs (Generative Adversarial Networks)?
Key related concepts include Diffusion Model, Deep Learning, Neural Network, and Deepfake. Understanding these together gives a more complete picture of how GANs fit into the AI landscape.