Deepfakes, videos in which machine learning is used to digitally alter the image and/or sound, are proliferating rapidly. While some are using the tech for benign ends, like placing Steve Buscemi’s face on Jennifer Lawrence’s body (a cursed image that can’t be unseen), there’s growing concern that the technology could be used for nefarious political purposes. That might sound alarmist, but considering that some social media users have already been fooled by deepfake videos, it’s a legitimate concern.
Policymakers are beginning to take the threat seriously. In addition to legislation at both the state and national level, there’s growing interest in tools that can identify and flag digital manipulations.
Attack the Blockchain
Earlier this month, Wired reported on Amber Authenticate, a new tool that cryptographically authenticates video footage using a blockchain. According to Wired, users program the tool to generate a “hash” at regular intervals: a cryptographic fingerprint of the data that is stored in perpetuity on a public blockchain. When the footage is later run back through the algorithm, the hashes will differ if the original video or audio data has been altered in any way.
The implications are widespread. As computers grow in power, digital data becomes easier to manipulate, meaning you no longer need to be a programming wizard to alter footage. With body cameras becoming standard across many police precincts, what’s to stop the police from changing a potentially incriminating video, or a seedy business owner from altering security footage to pin a burglary on an innocent suspect?
Civil Rights of Way
These concerns are of paramount importance, according to Amber CEO Shamir Allibhai. “There’s a systemic risk with police body cameras across many manufacturers and models,” Allibhai told Wired. “What we’re worried about is that, when you couple that with deepfakes, you can not only add or delete evidence but what happens when you can manipulate it? Once it’s entered into evidence it’s really hard to say what’s a fake. Detection is always one step behind. With this approach it’s binary: Either the hash matches or it doesn’t, and it’s all publicly verifiable.”
Utilizing the popular open-source blockchain platform Ethereum, Amber Authenticate also includes a web platform that uses a simple visual system to identify what part of the footage, if any, has been manipulated. Additionally, the platform includes a detailed “audit trail” that records when the footage was created, uploaded, hashed, and submitted to the blockchain. Allibhai recently presented his tool to representatives from the Departments of Defense and Homeland Security.
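The “audit trail” described above, a record of when footage was created, uploaded, hashed, and submitted, amounts to an append-only log of timestamped events. Here is a hypothetical sketch; the field names are illustrative and are not Amber’s schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    action: str      # e.g. "created", "uploaded", "hashed", "submitted"
    timestamp: str   # UTC time the event was recorded

@dataclass
class AuditTrail:
    events: list = field(default_factory=list)

    def record(self, action):
        """Append an event with the current UTC timestamp."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.events.append(AuditEvent(action, stamp))
```

Because events are only ever appended in order, the trail preserves the full chronology of what happened to a piece of footage, which is the property that makes it useful as evidence.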
According to Jay Stanley, a senior policy analyst at the American Civil Liberties Union, the authentication of the data could prove vitally important in sensitive cases like police shootings. “Like body cameras themselves, video authentication can help create community confidence in evidence about what’s taken place, and can give everybody confidence that things are on the up and up in what can be very harrowing and difficult incidents,” Stanley told Wired.
Deepfakes or Free Speech?
Recently, lawmakers have begun to propose legislative solutions to deepfakes. According to Axios, Sen. Ben Sasse (R-Neb.) introduced legislation in December that would have targeted both individuals creating deepfake videos with the intent “to do something illegal,” as well as distributors like Facebook, but only if they knowingly distributed the content. Clearly, there’s a lot of wiggle room in this proposal, which was scuttled in the government shutdown (Sasse intends to reintroduce it, according to Axios).
Local legislatures are also taking up the issue. Last May, the New York State Assembly introduced a bill aimed at protecting individuals from having their likeness used not only in deepfake pornography but also in films, commercials, and musicals. “An individual’s persona is the personal property of the individual and is freely transferable and descendible,” the bill read, according to The Register.
Somewhat surprisingly, Hollywood fought back against the bill. The Motion Picture Association of America argued that the policy was too restrictive, leaving filmmakers unable to make biopics about actors, musicians, and athletes. Furthermore, Disney argued that the legislation would infringe on the First Amendment rights of storytellers, even while acknowledging that deepfakes are a problem.
The bill ultimately died in committee, but the issue will undoubtedly come up again. After all, deepfakes aren’t going anywhere.