Recent advances in artificial intelligence (A.I.) and deep learning have opened up an impressive range of technological possibilities. In visual media alone, deep learning systems can now replace recorded faces with digitally created replicas of other people, and the results are remarkably difficult to distinguish from genuine footage. These tools let people do everything from swapping actors’ faces in David Letterman interviews to merging their own faces with celebrity photos through commercial apps.
But, as with most paradigm-changing innovations, this has also brought a new wave of problems to worry about. After all, when recordings can be altered at will, the reliability of incriminating video evidence becomes an open question.
In an effort to stay ahead of these issues, Facebook, Microsoft, and several U.S. universities are teaming up to develop stronger detection techniques—and they’re challenging everyone to help.
Realizing the Imminent Dangers of Deepfakes
Since modern A.I. can now seamlessly manipulate existing images and videos, convincing deepfakes of all kinds are sprouting up.
Of course, to nobody’s surprise, most applications of this technology seem to revolve around nonconsensual pornography. However, it has other sinister uses as well—for instance, creating manipulated footage of politicians doing and saying unflattering things in order to damage their public image.
In fact, given this blatant threat to politics, deepfakes have become a growing part of the conversation surrounding the 2020 elections. Many already believe that they pose an active threat to U.S. democracy and will sway voter choices.
While deepfakes have been recognized as a growing hazard, not much has been done to combat them—until now.
The Deepfake Detection Challenge
According to Facebook, the goal of this challenge is to eventually produce technology that everyone can use to detect A.I. tampering.
The social media company hopes to accomplish this by providing a dataset of real and tampered videos to any interested organizations for analysis. Additionally, to avoid privacy violations (seemingly a first for the company) and to avoid relying on its own user data, the videos all feature paid actors.
With this dataset of videos in hand, challenge participants can then build and test their deepfake detection solutions. The exact details and parameters of the contest have yet to be released, but hopeful participants can look forward to both public leaderboards and cash prizes. Facebook also announced that it will take part in the challenge itself, though it understandably won’t be eligible for prizes.
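The submission format hasn’t been announced, but detection systems of this kind typically score each sampled frame of a video and then aggregate those per-frame probabilities into a single video-level prediction. Here is a minimal, illustrative Python sketch of that aggregation step—`video_fake_probability` and its inputs are hypothetical stand-ins for the output of a real frame-level classifier, which is not shown:

```python
def video_fake_probability(frame_scores, eps=1e-6):
    """Aggregate per-frame 'fake' probabilities into one video-level score.

    frame_scores: floats in [0, 1], one per sampled frame, as produced by
    some frame-level classifier (hypothetical here). A simple mean is a
    common baseline; clipping away exact 0s and 1s keeps the score safe
    to use with log-loss-style evaluation metrics.
    """
    if not frame_scores:
        raise ValueError("no frame scores provided")
    clipped = [min(max(s, eps), 1 - eps) for s in frame_scores]
    return sum(clipped) / len(clipped)

# Hypothetical output of a frame classifier on a 5-frame sample:
score = video_fake_probability([0.9, 0.8, 0.95, 0.85, 0.9])
print(f"video-level fake probability: {score:.2f}")
```

A plain mean is only a starting point; real entries often weight frames by face-detection confidence or take a high percentile so that a few heavily manipulated frames aren’t averaged away.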
However, before the challenge goes live, it’s first being tested in October through a targeted working session at the 2019 International Conference on Computer Vision (ICCV). Would-be participants should keep a close eye on Facebook’s DeepFake Detection Challenge page for updates.