Deepfake

Safety & Ethics

AI-generated content that convincingly replaces a person's likeness or voice in media — raising serious concerns about misinformation and fraud.

Deepfakes use AI to create realistic fake media: swapping faces in videos, cloning voices for fake phone calls, or generating entirely synthetic people. The technology has become increasingly accessible and realistic, creating significant challenges for trust and verification.

The term originally referred to face-swapping in video, but now covers all synthetic media designed to impersonate real people: voice deepfakes (AI-generated phone calls), image deepfakes (face swaps, synthetic photos), and full video deepfakes (fake speeches, fake celebrity endorsements).

Detection tools exist (Microsoft Video Authenticator, Intel FakeCatcher, various academic tools) but struggle to keep pace with generation capabilities. The most reliable defense is still media provenance — knowing where content came from and whether it's been verified.
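The simplest form of provenance checking is a cryptographic fingerprint: a publisher posts a hash of the original file, and anyone can verify a copy against it. A minimal sketch in Python, using only the standard library (the function names and sample bytes are invented for illustration; real provenance systems such as C2PA embed signed metadata rather than bare hashes):

```python
import hashlib

def sha256_fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published_hash(data: bytes, published_hash: str) -> bool:
    """Check a local copy against the fingerprint the publisher released.
    Any edit, re-encode, or face swap changes the digest entirely."""
    return sha256_fingerprint(data) == published_hash

# Hypothetical example: the publisher releases the hash alongside the video.
original = b"example video bytes"
published = sha256_fingerprint(original)
tampered = b"example video bytes, face swapped"

print(matches_published_hash(original, published))  # True
print(matches_published_hash(tampered, published))  # False
```

Note the limitation: this only proves a file is byte-identical to what was published, so legitimate re-encodes also fail the check. That is why standards-based provenance (signed, embedded metadata) is the more practical direction.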

Real-World Example

When you see a fake video of a celebrity endorsing a product or hear a cloned voice in a scam call — those are deepfakes. AI generation is advancing faster than detection.

FAQ

What is Deepfake?

A deepfake is AI-generated content that convincingly replaces a person's likeness or voice in media. Because the results can be indistinguishable from real footage or audio, deepfakes raise serious concerns about misinformation and fraud.

How is Deepfake used in practice?

Common real-world examples include fake videos of celebrities endorsing products and cloned voices used in scam phone calls. Because generation is advancing faster than detection, verifying where media came from matters more than trying to spot visual artifacts.

What concepts are related to Deepfake?

Key related concepts include Voice Cloning and GANs (Generative Adversarial Networks). Understanding these together gives a more complete picture of how deepfakes fit into the AI landscape.