With the rapid advancement of artificial intelligence—and a general lack of regulation—concerns about its potential repercussions are growing. One question sits at the center of the debate: whose safety is most at risk? When it comes to gender, recent reports by the United Nations suggest that AI poses a disproportionately greater threat to women than to men worldwide. In November 2025, both the United Nations Office at Geneva and UN Women reported that digital abuse is a rapidly growing tool of violence against women, and that artificial intelligence both amplifies existing methods of abuse and creates new ones.
According to recent research, 90–95% of all online deepfakes are non-consensual pornographic images—and of those generated by AI, approximately 90% depict women. In addition, many deepfake tools are designed and trained in ways that intentionally exclude depictions of men, which further facilitates digital harassment and abuse directed at women in particular.
Excerpts from interviews provide further insight into the effects of digital abuse and why it overwhelmingly targets women. UN Women Executive Director Sima Bahous emphasized that digital abuse rarely stays online; it often spills into real life, where it can escalate to physical violence and femicide. Likewise, feminist activist and author Laura Bates weighed in on the real-life effects of digital abuse: “When a domestic abuser uses online tools to track or stalk a victim, when abusive pornographic deepfakes cause a victim to lose her job or access to her children, when online abuse of a young woman results in offline slut-shaming and she drops out of school – these are just some examples that show how easily and dangerously digital abuse spills into real life.”
Bates also addressed why AI deepfakes target women in such overwhelming numbers. She explains, “In part, this is about the root problem of misogyny – this is an overwhelmingly gendered issue, and what we’re seeing is a digital manifestation of a larger offline truth: men target women for gendered violence and abuse… But it’s also about how the tools facilitate that abuse.”
Compounding the issue is a global lack of legal protection. As of this year, 1.8 billion women and girls have no legal protection against digital abuse, and laws addressing digital harassment exist in only 40% of countries worldwide. Although laws protecting victims continue to emerge, the U.S. has yet to enact comprehensive federal legislation that specifically protects women from digital abuse. Congress passed the Take It Down Act with bipartisan support in May 2025, which gives those who post non-consensual pornography, including deepfakes, 48 hours to take down the image upon a victim’s request or face prosecution.
However, the act has received pushback from critics who fear it could lead to wrongful censorship and threaten free speech, and who warn that it contains potential loopholes allowing perpetrators to post non-consensual imagery and escape accountability. RAINN, the largest anti-sexual violence organization in the U.S., also argues that the Take It Down Act does not provide enough protection for victims and calls on the technology industry to make the changes necessary to protect them.
RAINN President Scott Berkowitz has urged major technology platforms to implement their own protections against tech-enabled abuse, arguing that they have the power to do so immediately. “This isn’t about building something new; it’s about ending something harmful. We’re calling on platforms to show survivors, and all internet users, that their freedom from abuse matters.”
While the Take It Down Act is certainly a milestone in deterring abusers from posting non-consensual images online, advocates say it is only the beginning. Without stronger federal protections and more proactive intervention from technology companies, the tools that enable AI-generated abuse will continue to outpace the systems designed to stop them.
