Core considerations from digital human rights experts show the gaps between proposals to rein in Non-Consensual Intimate Content and the interests of marginalized communities, youth, and everyday people.

By Lia Holland

Lia Holland (they/she) is Campaigns and Communications Director at national digital human rights organization Fight for the Future and an abuse survivor.

Dealing with abuse online is gonna be fraught. The creation of Non-Consensual Intimate Content (NCIC), already a huge and under-addressed problem, is being accelerated by AI. There’s no universal policy solution to NCIC that will make everyone happy. But we’ve got to grapple with it: not just for celebrities, but for everyday people and especially for traditionally marginalized communities including youth, artists, sex workers, abuse survivors, and activists. And to do it correctly, we need a human rights framework at the center of our efforts.

Unfortunately, Congress has been failing the people who experience online abuse for years, relying on tech lobbyists, self-interested entertainment tycoons, and organizations from NCOSE to the RIAA, all of which lack a fundamental analysis of how the Internet actually works or how changes to it impact human rights. None of these groups focus on leveraging the considerable knowledge within the racial and gender justice movements to make all our digital lives safer and more rights-affirming.

We do. 

The pressing novelty of explicit AI deepfakes and of grifters using AI to profit from people’s likenesses without consent, along with the public outcry both have provoked, could fuel the momentum to make real change for the good of us all. We’ve been having a lot of conversations about this at Fight for the Future, and although the path is thorny, the mood is hopeful. Congress still has a chance to listen to NGOs specializing in digital human rights and to the communities most harmed by online abuse. A chance to get regulation of NCIC right and affirm the dignity of abuse survivors everywhere.

Current proposals center rights that attorneys will have to continually sue over, or that will end up quashing online creativity and critique. To encourage legislators to reimagine those proposals, we want to share some of the core considerations surfacing among our colleagues, along with a read on some of the bills out there today.

If you read far enough, you’ll find a moment where we might actually praise(?!) a dear Congressional nemesis who, according to the Internet, may or may not be the Zodiac Killer.

Core Considerations

  1. Legislation that centers rich people and celebrities with armies of attorneys will always fail everyday people. Current proposals, like the NO FAKES and NO AI FRAUD Acts, focus on a fraction of a percent of the population: celebrities. By prioritizing new rights that attorneys for the wealthy and famous can enforce through automated takedown systems and lawsuits, Congress is ignoring the vast majority of those who will be harmed, including youth, abuse survivors, political candidates, and organizers.

    Most people who will be impacted by non-consensual, abusive online content don’t have the resources to hire an expensive attorney. They can’t wait out months or years of court battles while being victimized by NCIC; they simply want a way to get such content taken down, fast, through a method that doesn’t ruin their ability to connect, organize, access resources, create, work, and learn online. Unfortunately, the takedown system recently added to NO FAKES would ruin exactly those abilities.
  2. Non-consensual intimate content isn’t new: it’s the consent that matters, not whether AI was used. Whatever Congress ultimately does about AI should address all NCIC, whether it’s made with AI or not. The issue at hand is consent, and consent should be prioritized. Just as a teenager should be able to swiftly remove an AI image of their face on a sex worker’s body, so should that sex worker be able to take down images of themselves used without their consent, and so should a divorcee be able to remove intimate images posted by a vengeful ex.

    By focusing on mechanisms that empower consent instead of on a particular technology, Congress will also discourage abusive use cases for future technologies beyond AI, while encouraging healthier and more respectful business and social norms online.
  3. By failing to pass any meaningful data privacy protections, Congress has allowed tech companies to profit from endangering abuse survivors for over 25 years. This must end. Amy Boyer was murdered in 1999 by her stalker, who used information he obtained from a data broker, a practice still common among stalkers today. For over a quarter century, Congress and federal agencies have fundamentally failed to protect our privacy rights in the digital age, allowing stalkers, scammers, and impersonators to thrive on a steady diet of our intimate personal data.

    A meaningful privacy bill would minimize data collection and permanently put control of everyone’s intimate personal data where it belongs: in the hands of each individual instead of tech companies. It would also leave the door open for even stronger federal and state protections against a future full of novel privacy threats. Such a strong data privacy bill would go a long way toward defending abuse survivors against those who would harass, intimidate, impersonate, or murder them.
  4. We need to focus on the abusive business models at play, not roll back the rights and freedoms of individuals. It’s inappropriate to treat everyone on the Internet as a criminal when the problems stem from the abusive business models of tech companies. There have been some proposals to create a carveout in Section 230 of the Communications Decency Act in order to combat NCIC, but that won’t work. It is simply untrue that further rollbacks of the Internet’s First Amendment will suddenly cause bad actors or sleazy tech grifters to cease to exist. What such rollbacks would do is censor important speech and resources, including for abuse survivors.

    Sex workers are still living the consequences of Section 230 carve-outs in the deaths, violence, and loss of safety resources caused by SESTA/FOSTA. Their perspective on digital marginalization, surveillance, and non-consensual imagery is among the most valuable available on our planet, and accordingly their needs and concerns should be centered in this conversation. Efforts to modify Section 230 would also meet strong, organized resistance from a large swath of civil society. We’d much rather work together toward a good solution than fight against a false one.

    Further, criminalizing young and/or uneducated individuals drawn in by the Internet’s lack of consent culture and its normalization of abusive business models is a false answer. Instead, we need to invest in robust education on consent, especially for youth, to counter a generation of digital business interests that have abandoned consent as a concept and set a terrible standard for both corporate and individual behavior.

Hairy Solutions

The NO FAKES Act & Notice-and-Takedown

With these important considerations in mind, one interesting solution for NCIC that Congress is already fumbling is a notice-and-takedown system. Such a system could quickly benefit pop stars and high schoolers alike, but the form proposed in the NO FAKES Act gets it very wrong.

Notice-and-takedown has earned the ire of human rights experts due to the malicious and negligent activity rampant under the Digital Millennium Copyright Act (DMCA)’s notice-and-takedown regime. Unfortunately, that regime is the model for the system in NO FAKES, which may in fact end up being even more harmful. Poorly done, notice-and-takedown systems imperil fundamental First Amendment protections like fair use and quash creativity. But if Congress decides to learn from past mistakes and listen to those most impacted, NO FAKES or a similar bill could create a system that is rights-affirming.

When we discussed notice-and-takedown for NCIC with colleagues at sex worker advocacy organizations, they expressed both interest and concern. Under the DMCA today, legitimate music and criticism are being wiped from the Internet by bad actors weaponizing frivolous or negligently automated takedowns. Sex worker advocates are particularly concerned that legitimate, consensual content could easily be taken down if there isn’t a good system in place to challenge bad-faith claims, to swiftly restore legitimate content, and to impose strong civil penalties for bad-faith takedown notices. NO FAKES unfortunately lacks all of these protections, but it wouldn’t be hard to add them.

The concerns that sex worker advocates raise reach far beyond their own communities. Consider politicians decrying images they don’t like as AI-generated, previewing a future no one wants.

The NO FAKES Act, the NO AI FRAUD Act, & New IP Rights

Yet in order to make any notice-and-takedown system possible without the non-starter of carving out Section 230, many are pushing an expansion of federal intellectual property rights to cover likeness and voice. Chief among the advocates are rich rightsholders like celebrities and entertainment tycoons, who seem more interested in creating new transferable IP rights to exploit than in finding the best solution for NCIC. They’re using the real harm and suffering of NCIC victims to make a money and power grab, convincing politicians to ignore the fact that the vast majority of people don’t make money off their own likeness and only want a way to stop abuse.

But we should all be concerned: creating transferable intellectual property rights in a person’s likeness or voice without aggressive and clear reversion and termination provisions (which the NO FAKES and NO AI FRAUD Acts lack) may have wide-ranging and unintended consequences. A third party owning an artist’s voice or likeness may prevent that artist from making the art they want to make, or from lending their voice to social causes. Further, it may allow rights purchasers to deepfake people without their consent, in a perverse reversal of what these laws aimed to guard against in the first place. Low-income people may be particularly vulnerable to such harms. New IP rights are the wrong tool for most of these harms because IP rights exist to encourage the spread of, and the profit from, new inventions and works of art. We need the opposite for NCIC.

Before creating any such rights, which could be irrevocably given away for life plus seventy years or sold on the open market, Congress should begin by studying the effects of current IP practices on artists and creators today. And any system that is ultimately created should include a rigorous, continuing analysis of the regime’s impact.

The DEFIANCE Act

We’re happy to join organizations like the Center for Democracy and Technology in endorsing the DEFIANCE Act, which creates a civil cause of action for victims to sue abusers who share NCIC.

This is a step in the right direction: it treats all NCIC the same, whether it is made with AI or not. And it does not create yet another path to prison for more people, a move that Fight for the Future and many of our allies vehemently oppose. The bill is carefully targeted and drafted so that it will easily survive First Amendment challenges. That’s important, because we need these essential protections to stand in the face of lawsuits from Big Tech or other bad actors.

However, we don’t believe that this act is enough on its own. The bill will create a floor of protections while the thornier issues are worked out by advocates, activists, and experts. We can’t rely solely on lawsuits to solve the problem: they can take years, they can be prohibitively expensive, and they don’t provide an immediate way to get images taken down before they disrupt the lives of NCIC victims.

The TAKE IT DOWN Act

Alright. Here it is, proof that enough monkeys with enough typewriters can occasionally get something almost right. Ted Cruz’s TAKE IT DOWN Act has many good parts, including a mandate that any NCIC be taken down within 48 hours of a valid notice from a victim, and careful moves to avoid trampling the First Amendment.

But we don’t believe that criminalization is a valid answer to this issue, one on which many people are simply uneducated. We should invest in education and in building better social mores around consent, not in throwing ever more people, especially youth, into prison.

If our favorite big-time Congressional Twitter beef opponent Ted Cruz really wanted to stick it to us, he’d shift the criminal liability to civil liability and we *might* just have no choice but to endorse TAKE IT DOWN. The gauntlet has been thrown.

To wrap all this up, we want to recognize this moment. 

There is a unique combination of will and opportunity right now that could lead to a better Internet for everyone. Congress should place the concerns of their favorite pop stars on a level with those of everyday people, avoid imprisoning people who really may not know better, and focus on real-world solutions guided by those who would be most harmed.