Explicit deepfakes are now a federal crime. Enforcing that may be a major problem.
On May 19, President Donald Trump and First Lady Melania Trump beamed at press and allies as they signed the administration's first major piece of tech regulation, the bipartisan Take It Down Act.
It was seen as a win for those who have long called for the criminalization of NDII, or the nonconsensual distribution of intimate images, and for a federal pathway of redress for victims. Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, said it may be a needed kick in the pants for a lethargic legislative arena.
"I think it's good that they're going to force social media companies to have a process in place to remove content that people ask to be removed," he said. "This is kind of a start; to build the infrastructure to be able to respond to this type of request, and it's a really thin slice of what the issues with AI are going to be."
But other digital rights groups say the legislation may stir false hope for swift legal resolutions among victims, with unclear vetting procedures and an overly broad list of applicable content. The law's implementation is just as murky.
"The Take It Down Act’s removal provision has been presented as a virtual guarantee to victims that nonconsensual intimate visual depictions of them will be removed from websites and online services within 48 hours," said the Cyber Civil Rights Initiative (CCRI) in a statement. "But given the lack of any safeguards against false reports, the arbitrarily selective definition of covered platforms, and the broad enforcement discretion given to the FTC with no avenue for individual redress and vindication, this is an unrealistic promise."
These same digital rights activists, who issued warnings throughout the bill's congressional journey, will also be keeping a close eye on how the act may affect constitutionally protected speech, fearing that publishers may remove legal speech to preempt criminal repercussions (or suppress free speech outright, such as consensual LGBTQ pornography). Some worry that the bill's takedown system, modeled after the Digital Millennium Copyright Act (DMCA), may over-inflate the power of the Federal Trade Commission, which now has broad, largely unchecked authority to hold online content publishers accountable under the law.
"Now that the Take It Down Act has passed, imperfect as it is, the Federal Trade Commission and platforms need to both meet the bill’s best intentions for victims while also respecting the privacy and free expression rights of all users," said Becca Branum, deputy director of the Center for Democracy & Technology (CDT)'s Free Expression Project. "The constitutional flaws in the Take It Down Act do not alleviate the FTC's obligations under the First Amendment."
Organizations like the CCRI and the CDT spent months lobbying legislators to adjust the act's enforcement provisions. The CCRI, which penned the bill framework that Take It Down is based on, has taken issue with the legislation's exceptions for images posted by someone who appears in them, for example. It also fears the removal process may be ripe for abuse, including false reports made by disgruntled individuals or politically motivated groups under an overly broad scope for takedowns.
The CDT, conversely, finds the law's AI-specific provisions too narrow. "Take It Down’s criminal prohibition and the takedown system focus only on AI generated images that would cause a 'reasonable person [to] believe the individual is actually depicted in the intimate visual depiction.' In doing so, the Take It Down Act is unduly narrow, missing several instances where perpetrators could harm victims," the organization argues. For example, a perpetrator could plausibly get around the law by publishing synthetic likenesses placed in implausible or fantastical environments.
Just as confusing: while the FTC's takedown authority over applicable publishers is vast, other sites are exempt from its oversight, such as those that host their own curated content rather than user-generated synthetic content. Instead of being forced to take down media under the 48-hour stipulation, these sites can only be pursued through criminal prosecution. "Law enforcement, however, has historically neglected crimes disproportionately perpetrated against women and may not have the capacity to prosecute all such operators," the CDT warns.
Steinhauer theorizes that the bill may face a general infrastructure problem in its early enforcement. For example, publishers may find it difficult to corroborate within the 48-hour window that the individuals filing claims are actually depicted in the NDII, unless they beef up their own moderation investments; most social media platforms have scaled back their moderation processes in recent years. Automated moderation tools could help, but they come with their own set of issues.
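To make that compliance challenge concrete, here is a minimal sketch in Python of what a deadline-driven removal queue might look like. Only the 48-hour window comes from the statute; the data structure, verification states, and triage logic are illustrative assumptions, not any platform's actual design.

```python
# Hypothetical sketch of a Take It Down-style removal queue. The 48-hour
# window comes from the statute; everything else (names, states, triage
# logic) is an illustrative assumption, not any platform's design.
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum


class Status(Enum):
    RECEIVED = "received"
    VERIFYING = "verifying"   # is the requester actually depicted?
    REMOVED = "removed"
    REJECTED = "rejected"     # e.g., a false or abusive report


@dataclass
class TakedownRequest:
    content_url: str
    requester_id: str
    filed_at: datetime
    status: Status = Status.RECEIVED

    def deadline(self) -> datetime:
        # Removal is required within 48 hours of a valid request.
        return self.filed_at + timedelta(hours=48)


def triage(queue: list[TakedownRequest]) -> list[TakedownRequest]:
    """Return open requests, most urgent first, so limited moderation
    capacity goes to the claims closest to blowing the deadline."""
    open_requests = [r for r in queue
                     if r.status in (Status.RECEIVED, Status.VERIFYING)]
    return sorted(open_requests, key=lambda r: r.deadline())
```

The hard part, as Steinhauer notes, isn't the queue itself but the `VERIFYING` step: confirming, at scale and on a clock, that the requester is the person depicted.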
There's also the question of how publishers will spot, and prove, that images and videos are synthetically generated, a problem that has plagued the industry as generative AI has grown. "The Take It Down Act effectively increases the liability for content publishers, and now the onus is on them to be able to prove that the content they’re publishing is not a deepfake," said Manny Ahmed, founder and CEO of content provenance company OpenOrigins. "One of the issues with synthetic media and having provable deniability is that detection doesn’t work anymore. Running a deepfake detector post hoc doesn’t give you a lot of confidence because these detectors can be faked or fooled pretty easily and existing media pipelines don't have any audit trail functionality built into them.”
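Ahmed's distinction between post-hoc detection and built-in audit trails can be illustrated with a toy example. The sketch below is hypothetical and is not OpenOrigins' pipeline: it signs a hash of media at capture time so its origin can be verified later, rather than trying to infer authenticity from the pixels after the fact. Key handling is deliberately oversimplified.

```python
# Toy provenance-by-audit-trail illustration (not OpenOrigins' design):
# sign a hash of the media when it is created, verify it later. A post-hoc
# detector guesses authenticity from content; a signed record attests to
# where the bytes came from.
import hashlib
import hmac

CAPTURE_KEY = b"device-secret"  # in practice, a per-device key in secure hardware


def sign_at_capture(media_bytes: bytes) -> str:
    """Record a keyed hash at the moment the media is created."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(CAPTURE_KEY, digest, hashlib.sha256).hexdigest()


def verify_provenance(media_bytes: bytes, recorded_signature: str) -> bool:
    """True only if the bytes match what the trusted device originally signed."""
    return hmac.compare_digest(sign_at_capture(media_bytes), recorded_signature)


original = b"...camera sensor output..."
tag = sign_at_capture(original)
assert verify_provenance(original, tag)             # untouched media checks out
assert not verify_provenance(original + b"x", tag)  # any alteration breaks the trail
```

The design point mirrors Ahmed's argument: a detector can be fooled by better fakes, but an audit trail fails closed, since any media without a valid signed record simply cannot prove its origin.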
It's easy to imagine such a strong takedown tool being wielded as a weapon of censorship and surveillance, especially under an administration that is already doing plenty to sow distrust among its citizens and wage war on ideological grounds.
Steinhauer still urges an open mind. "This is going to open a door to those other conversations and hopefully reasonable regulation that is a compromise for everyone," he said. "There's no world we should live in where somebody can fake a sexual video of someone and not be held accountable. We have to find a balance between protecting people, and protecting people's rights."
The future of broader AI regulation remains in question, however. Though Trump championed and signed the Take It Down Act, he and congressional Republicans also pushed to include a 10-year ban on state- and local-level AI regulation in their touted One Big Beautiful Bill.
And even with the president's signature, the future of the law is uncertain, with rights organizations predicting that the legislation may be contested in court on free speech grounds. "There's plenty of non pornographic or sexual material that could be created with your likeness, and right now there's no law against it," added Steinhauer. Regardless of whether Take It Down remains or gets the boot, the issue of AI regulation is far from settled.