U.S. Sen. Marco Rubio, R-Fla., is worried about deepfakes, a troubling internet trend in which fabricated videos are created by superimposing or replacing video or audio in an original recording, altering what actually occurred to make viewers believe something else took place.
This week, Rubio and U.S. Sen. Mark Warner, D-Va., who both sit on the U.S. Senate Intelligence Committee, sent a letter to 11 social media companies, urging them to act immediately against deepfakes.
“As concerning as deepfakes and other multimedia manipulation techniques are for the subjects whose actions are falsely portrayed, deepfakes pose an especially grave threat to the public’s trust in the information it consumes; particularly images, and video and audio recordings posted online,” the senators wrote. “If the public can no longer trust recorded events or images, it will have a corrosive impact on our democracy.”
In the letter sent to social media companies, Rubio and Warner scolded the tech companies for not doing enough to combat these videos.
“Despite numerous conversations, meetings, and public testimony acknowledging your responsibilities to the public, there has been limited progress in creating industry-wide standards on the pressing issue of deepfakes and synthetic media,” the senators wrote. “Having a clear strategy and policy in place for authenticating media, and slowing the pace at which disinformation spreads, can help blunt some of these risks. Similarly, establishing clear policies for the labeling and archiving of synthetic media can aid digital media literacy efforts and assist researchers in tracking disinformation campaigns, particularly from foreign entities and governments seeking to undermine our democracy.”
The two senators posed several questions in the letter sent to leaders of Facebook, Twitter, YouTube, Reddit, LinkedIn, Tumblr, Snapchat, Imgur, TikTok, Pinterest and Twitch, including:
- What is your company’s current policy regarding whether users can post intentionally misleading, synthetic or fabricated media?
- Does your company currently have the technical ability to detect intentionally misleading or fabricated media, such as deepfakes? If so, how do you archive this problematic content for better re-identification in the future?
- Will your company make available archived fabricated media to qualified outside researchers working to develop new methods of tracking and identifying such content? If so, what partnerships does your company currently have in place? Will your company maintain a separate, publicly accessible archive for this content?
- If the victim of a possible deepfake informs you that a recording is intentionally misleading or fabricated, how will your company adjudicate those claims or notify other potential victims?
- If your company determines that a media file hosted by your company is intentionally misleading or fabricated, how will you make clear to users that you have either removed or replaced that problematic content?
- Given that deepfakes may attract views that could drive algorithmic promotion, how will your company and its algorithms respond to, and downplay, deepfakes posted on your platform?
- What is your company’s policy for dealing with the posting and promotion of media content that is wholly fabricated, such as untrue articles posing as real news, in an effort to mislead the public?
Expect some, if not all, of these companies to be called back to Capitol Hill for future testimony as Rubio and Warner continue their efforts.
Reach Mike Synan at firstname.lastname@example.org.