As deepfakes become an increasingly serious problem, scientists have come up with a countermeasure. Researchers have shared details of how frequency analysis of images can detect deepfakes.
Frequency Analysis To Detect Deepfakes
Reportedly, researchers from Ruhr-University Bochum have devised a tool that can detect deepfake images. The tool leverages frequency analysis to distinguish deepfake images from genuine photographs.
In brief, they tackled the problem of identifying computer-generated fake images. Explaining how such images are produced, the researchers noted that computer models called Generative Adversarial Networks (GANs) play the key role in generating these almost-real images.
The technique pits two algorithms against each other. The first creates a fake image, while the second analyzes it and tries to flag it as fake. If it succeeds, its feedback prompts the first algorithm to revise the image. This loop continues until the second algorithm no longer detects the image as fake.
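That feedback loop can be sketched with two hypothetical stand-in functions (no real neural networks involved; the names, threshold, and "realism" score below are all made up for illustration):

```python
# Toy sketch of the adversarial loop: both functions are hypothetical
# stand-ins for the two competing algorithms, not real neural networks.

def generator(step):
    # Stand-in generator: the output's "realism" improves with each revision.
    return step / 10

def discriminator(realism):
    # Stand-in detector: flags anything below a realism threshold as fake.
    return realism < 0.9  # True means "detected as fake"

step = 0
while discriminator(generator(step)):
    step += 1  # feedback from the detector: revise the image and try again

# The loop ends once the discriminator no longer detects the image as fake.
print(step)
```

In a real GAN both sides are neural networks trained jointly, so each revision also makes the detector harder to fool, driving the generator toward ever more realistic output.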
With such precision and perfection, these deepfakes create more trouble than one can imagine. They are especially dangerous given the fake-news dilemma of today's world.
Hence, the researchers have proposed using frequency analysis to detect deepfake images. The technique involves converting the image to the frequency domain via the discrete cosine transform (DCT), which they describe as follows:
The DCT expresses… a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. The DCT is commonly used in image processing due to its excellent energy compaction properties and its separability, which allows for efficient implementations.
In their study, they analyzed real images from the Flickr-Faces-HQ (FFHQ) data set and fake images generated with StyleGAN. Comparing the DCT spectra of the two revealed clear differences: the computer-generated images exhibit characteristic artifacts in their spectra, which helps identify them.
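A toy version of that comparison can be sketched as follows. The "fake" here is a smooth gradient with a faint checkerboard added as a stand-in for GAN upsampling artifacts, and the threshold is invented; the actual study compared FFHQ photographs against StyleGAN images with a trained classifier:

```python
import numpy as np

def dct2(img):
    """Unnormalized 2D DCT-II via matrix products (square images)."""
    N = img.shape[0]
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.cos(np.pi / N * (n + 0.5) * k)
    return C @ img @ C.T

def looks_generated(img, threshold=1e-4):
    """Flag images whose DCT spectrum holds unusually much energy in
    the high-frequency (bottom-right) quadrant. Hypothetical rule."""
    energy = dct2(img) ** 2
    h = img.shape[0] // 2
    return energy[h:, h:].sum() / energy.sum() > threshold

# "Real": a smooth gradient.  "Fake": the same plus a faint
# checkerboard, mimicking a grid-like upsampling artifact.
real = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
checker = 0.05 * (-1.0) ** np.add.outer(np.arange(32), np.arange(32))
fake = real + checker

print(looks_generated(real), looks_generated(fake))  # False True
```

Even though the checkerboard is nearly invisible in pixel space (amplitude 0.05), it stands out sharply in the high-frequency corner of the spectrum.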
The researchers also elaborate on the impact of upsampling techniques on the DCT spectrum. Since the artifacts that GAN-generated images exhibit in the frequency spectrum stem from upsampling, they further trained their models with different upsampling techniques (nearest neighbor, bilinear, and binomial).
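The link between upsampling and spectral artifacts can be illustrated in one dimension, with linear interpolation standing in for bilinear (a simplified, assumed setup; the study worked with trained GANs, not this toy signal):

```python
import numpy as np

# Upsample a smooth signal two ways and compare high-frequency energy.
t = np.arange(64)
x = np.sin(2 * np.pi * 3 * t / 64)            # smooth low-frequency signal

nearest = np.repeat(x, 2)                      # nearest-neighbour upsampling
linear = np.interp(np.arange(128) / 2, t, x)   # linear interpolation

def high_freq_energy(sig):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    return spec[len(spec) // 2:].sum()         # upper half of the spectrum

hf_nearest = high_freq_energy(nearest)
hf_linear = high_freq_energy(linear)
print(hf_nearest > hf_linear)
```

Sample replication leaves a strong spectral image of the original signal near the top of the band, while linear interpolation suppresses it much more, which is why the choice of upsampling method changes the artifact pattern a detector sees.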
Furthermore, they tested whether the technique still works on images that undergo perturbations, for instance when uploaded to social media. They established that their classifier withstood most perturbations (blurring, cropping, compression), but not added noise.
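Why noise is the hard case can be sketched on a synthetic signal (an assumed stand-in, not the paper's classifier or data): blurring removes high-frequency content, whereas additive noise floods the high frequencies that the spectral features rely on.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(256)
# Smooth content plus a small high-frequency component.
signal = (np.sin(2 * np.pi * 5 * t / 256)
          + 0.1 * np.sin(2 * np.pi * 100 * t / 256))

def high_freq_energy(sig):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    return spec[len(spec) // 2:].sum()   # upper half of the spectrum

blurred = np.convolve(signal, np.ones(5) / 5, mode="same")  # box blur
noisy = signal + rng.normal(0, 0.5, size=signal.size)       # additive noise

print(high_freq_energy(blurred) < high_freq_energy(signal))  # blur removes HF
print(high_freq_energy(noisy) > high_freq_energy(signal))    # noise adds HF
```

A blurred image keeps (a weakened version of) its spectral fingerprint, but broadband noise buries that fingerprint, so a noise perturbation can mask the GAN artifacts the classifier looks for.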
This demonstrates the effectiveness of their technique at detecting fake images.
That said, it remains possible for an adversary to evade the classifier via specially crafted image perturbations. Nonetheless, since the model exploits a weakness present in almost all GAN architectures, it remains viable for the foreseeable future.
Let us know your thoughts in the comments.
By Abeerah Hashim