Deepfakes are deceptive videos or images created using AI. They can look so convincingly real that they trick viewers into believing falsified events. Their use has grown significantly with advances in AI, raising a range of ethical and social concerns.
Imagine an artist who’s very good at face painting. The artist can paint your face so well that even your best friend might think it’s you. Deepfakes are like that, only instead of an artist, a computer uses AI to create or change videos and photos.
Deepfakes are a product of advanced generative neural networks (AI algorithms), particularly Generative Adversarial Networks (GANs). The term “deepfake” combines “deep learning” and “fake”, reflecting the use of deep learning algorithms to generate fake content.
GANs consist of two neural networks: a Generator network and a Discriminator network. The Generator learns to produce realistic fake data (such as an image or video), while the Discriminator learns to differentiate between real and fake data. During training, these two networks essentially ‘duel’ each other. As the Generator improves at creating realistic forgeries, the Discriminator improves at catching them, which in turn pushes the Generator to get even better.
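The ‘duel’ described above can be sketched in a few lines. This is a minimal toy illustration, not how real deepfake systems are built: the "Generator" here is just a linear map `g(z) = a*z + b` on 1-D noise, the "Discriminator" is a logistic classifier `D(x) = sigmoid(w*x + c)`, and the gradients are written out by hand. The target distribution, network shapes, learning rate, and step counts are all illustrative choices, but the alternating update pattern is the same one real GANs use.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The "real" data the Generator must learn to imitate: samples near 4.0.
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

# Toy Generator g(z) = a*z + b, starting out producing samples near 0.
a, b = 1.0, 0.0
# Toy Discriminator D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, steps, batch = 0.05, 3000, 64
for _ in range(steps):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    xr = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * (np.mean((dr - 1.0) * xr) + np.mean(df * xf))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    dxf = (df - 1.0) * w          # gradient of -log D(x) w.r.t. the fake sample
    a -= lr * np.mean(dxf * z)    # chain rule through g(z) = a*z + b
    b -= lr * np.mean(dxf)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generator output mean is now ~{fake_mean:.2f} (real mean is 4.0)")
```

Even in this tiny setting the dynamic is visible: the Discriminator's feedback pulls the Generator's output distribution toward the real one, without the Generator ever seeing real data directly.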
The result of this process is deepfakes: videos, images, or audio that look and sound authentic. By training a GAN on a large dataset of images of a particular person, AI can generate a video of that person saying or doing something they never did.
While the technical prowess behind deepfakes is impressive, it also poses significant ethical and societal concerns. The potential to create convincing false media content opens doors for misuse, including disinformation campaigns, fraudulent impersonation, and other harmful activities.
To counter this problem, AI technologies such as deep learning are also being used to develop methods for detecting deepfakes. For example, Convolutional Neural Networks (CNNs), which excel at analyzing visual imagery, can be trained to identify inconsistencies in deepfake videos that human eyes might miss.
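To give a feel for why convolution helps here, the sketch below applies a single hand-picked high-pass (Laplacian) filter to an image patch. This is an assumption-laden simplification: real deepfake detectors learn many filters across many layers from labeled data, whereas this uses one fixed kernel and synthetic 8×8 patches. Still, it shows the core idea: a convolutional filter responds strongly to abrupt local pixel changes, the kind of low-level residue that splicing or blending can leave behind.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation: the core operation inside a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Laplacian high-pass kernel: zero response on smooth regions,
# strong response at sharp local discontinuities.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

smooth = np.full((8, 8), 0.5)     # a uniform, "natural-looking" patch
patched = smooth.copy()
patched[3:5, 3:5] = 0.9           # a crudely spliced-in region

score_smooth = np.abs(conv2d(smooth, laplacian)).mean()
score_patched = np.abs(conv2d(patched, laplacian)).mean()
print(score_smooth, score_patched)  # the tampered patch scores higher
```

A trained CNN does essentially this at scale: it learns thousands of such filters, stacked in layers, so that subtle statistical fingerprints of generated content can be separated from those of genuine footage.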