Biden's AI order is the first step in a very long project.

Nov 7
The Biden administration's AI executive order, the most significant attempt yet by the United States to tackle the problems posed by this emerging technology, addresses several AI-related issues, including deepfakes.

Joe Biden has instructed the Commerce Department to research methods for recognizing and labeling "synthetic content," as well as for confirming a piece of content's legitimacy and tracing its true source.

Experts agree that today's systems for identifying AI-generated content are not fit for purpose. Soheil Feizi, an assistant professor of computer science at the University of Maryland, calls it a "theoretical impossibility" that future systems will be error-proof, because the underlying AI models will always improve faster than the tools built to detect them. Feizi's research has shown how both text and image detection systems can be defeated. "Even in the future, there won't be a dependable approach to solving these issues," he said.

There is less consensus on what to do with these flawed tools. Feizi contends that even an imperfect watermarking system can still catch or deter less sophisticated attacks.

The counterargument is that poor AI detection erodes trust a little further every time it misses a deepfake or flags a real image as phony, so it is better not to pretend the tools work. Others, however, have likened AI detection to locking your door: an intruder could still force open a window or take a crowbar to the hinges, but something is better than nothing.
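To see the failure modes concretely, here is a deliberately toy sketch in Python. Everything in it is invented for illustration (the feature, the logistic score, the 0.5 threshold, the sample values); the point is only that any detector that reduces content to a score and a cutoff can be nudged across the line by a small perturbation, and will sometimes misfire on genuine material.

```python
import math

# Toy "AI detector": scores content on a single invented synthetic-ness
# feature and flags anything above a fixed threshold. Real detectors are
# learned classifiers, but they share this score-and-cutoff structure.
THRESHOLD = 0.5

def detector_score(feature: float) -> float:
    """Map a feature value to a probability via a logistic curve."""
    return 1 / (1 + math.exp(-feature))

def is_flagged_as_ai(feature: float) -> bool:
    return detector_score(feature) >= THRESHOLD

# A borderline AI-generated sample sits just above the threshold...
ai_sample = 0.1
print(is_flagged_as_ai(ai_sample))    # True: caught

# ...but a small adversarial perturbation pushes it just below the cutoff,
# and the detector now calls it authentic (a false negative).
perturbed = ai_sample - 0.2
print(is_flagged_as_ai(perturbed))    # False: evaded

# The same cutoff also misfires the other way: a genuine photo whose
# feature value happens to be high is flagged as fake (a false positive).
real_sample = 0.3
print(is_flagged_as_ai(real_sample))  # True: real image marked phony
```

Real detectors are far more sophisticated, but the evasion research described above exploits essentially this weakness.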

More promising technology aims to verify the source of a piece of content. In the simplest terms, it works by having the content's creator "sign" it cryptographically.

Web browsers, social networks, and other software can then verify the signature automatically and tell users whether it is legitimate. Similar hard-to-forge techniques are already used to verify the source of some software programs.
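To make the signing step concrete, here is a minimal Python sketch using the open-source cryptography package. The keypair, content, and messages are invented for illustration; real content-authentication standards layer certificates, metadata, and trust chains on top of this basic primitive.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a keypair; the public key is distributed so
# browsers and platforms can check signatures.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

content = b"photo bytes or article text"
signature = publisher_key.sign(content)  # attached alongside the content

# A browser or platform verifies the signature before telling the user
# the content really came from the holder of the signing key.
try:
    public_key.verify(signature, content)
    print("signature valid: content is unaltered and from the key holder")
except InvalidSignature:
    print("signature check failed")

# Any tampering, even a single byte, breaks verification.
try:
    public_key.verify(signature, content + b"!")
except InvalidSignature:
    print("tampered content detected")
```

The design point is that verification needs only the publisher's public key, so browsers and platforms can check signatures automatically without holding any secrets.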

"There is some logic in getting the government involved, but companies are eager to develop content authentication standards," says Hani Farid, a professor at the University of California, Berkeley, who specializes in deception, human perception, and digital forensics.

According to him, a system like this could be used alongside watermarking technology to significantly improve the health of the internet. He predicted that the most skilled and determined actors would still find ways past any AI-detection system. "But I'll declare victory if we can solve at least 75% of the problem."

But technology is only one part of the problem. People have spent years learning to distrust what they see online, particularly when it conflicts with their existing beliefs. Even when justified, worries about deepfakes could exacerbate the damage already done by years of discussion of low-tech misinformation campaigns on social media.

Experts call this the "liar's dividend": a climate of confusion and skepticism benefits those who spread falsehoods, because even genuine evidence can be waved away as fake.

Reversing the liar's dividend will take more than strong technology and backing from large corporations and governments. "None of this works when you don't trust institutions," Farid remarked. "It goes beyond simply believing something to be untrue. You think that businesses, the media, and the government are keeping it from you."

Rebuilding that trust is what it will take, in the long run, to secure a healthy internet in the era of artificial intelligence, and it will require much more than a presidential directive.
