Cracking Down on Deepfakes
March 24, 2021
Deepfakes, synthetic media that use AI to create seemingly real footage of people through digital video and photo manipulation, have practical uses in art, social media, movies, and other forms of entertainment. You may have seen deepfake technology in a Disney movie, where it lets directors bring dead actors back to life on screen, or in the well-known internet meme “Baka Mitai,” which refers to AI-generated videos of people singing the chorus of a song from the video game series Yakuza.
But at the same time, deepfakes pose a serious threat to the integrity and authenticity of photos and videos when put to more nefarious purposes like fraud, blackmail, or the creation of non-consensual intimate imagery. Deepfake videos have their uses and shouldn’t be banned, but the danger they pose means stricter regulation is necessary to curb abuse and fight disinformation.
Machine learning methods have been used to create fake videos of politicians, misrepresenting their actions and views. Deepfake photos have been used to create sock puppets, online personas of people who don’t exist but appear active in the media, which in turn let their operators evade consequences for what the persona writes or posts. Deepfake photos have also been used as propaganda.
“As the dark and deeply disturbing events of Jan. 6 show, disinformation can have very real consequences,” said Matthew Ferraro, a former CIA officer and current member of the Council on Foreign Relations. “It can poison minds and lead to delusions and violence.”
In addition to major political risks, deepfakes also draw concern for the danger they pose to women. According to research company Sensity AI, which has tracked online deepfake videos since December 2018, between 90% and 95% of deepfakes are non-consensual pornography, and about 90% of that targets women.
“It really makes you feel powerless, like you’re being put in your place,” said Helen Mort, a British poet and broadcaster. “Punished for being a woman with a public voice of any kind. That’s the best way I can describe it. It’s saying, ‘Look: we can always do this to you.'”
Mort fell victim to a deepfake pornography campaign that used old non-intimate photos taken from her private social media accounts, from pregnancy photos to images of her as a teen. The perpetrator posted fake nude photos of her to a porn site and even encouraged others to edit her into violent pornographic images. Many of them were frighteningly realistic.
Movements have begun advocating for a ban on non-consensual deepfake porn, and legislation that regulates deepfakes in general has been introduced, from the 2018 Malicious Deep Fake Prohibition Act to the 2019 DEEP FAKES Accountability Act. The Defense Advanced Research Projects Agency, part of the U.S. Department of Defense, has funded research competitions to develop tools for detecting deepfakes. The first federal legislation on deepfakes was signed into law by former President Trump in December 2019 as part of the National Defense Authorization Act for Fiscal Year 2020; it requires the government to alert Congress of deepfake disinformation activities targeting U.S. elections. And this year, the National Defense Authorization Act for Fiscal Year 2021 directs the Department of Homeland Security to produce assessments of deepfake production and its dangers.
These are all welcome developments. Regulation must continue to be enacted and updated as deepfake technology evolves, for the safety of all and the preservation of truth.
Photo courtesy of TECHNOLOGYREVIEW.COM