A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated explicit images of the teen and her female classmates were circulated at a New Jersey high school.
Meanwhile, on the other side of the country, officials are investigating an incident involving a teenage boy who allegedly used artificial intelligence to create and distribute similar images of other students, also teenage girls, at a high school in suburban Seattle, Washington.
These disturbing cases have once again drawn attention to explicit AI-generated material, which overwhelmingly harms women and children and is surging online at an unprecedented rate. According to an analysis by independent researcher Genevieve Oh, more than 143,000 new deepfake videos were posted online this year, surpassing the total for all previous years combined.
Families of victims are urging lawmakers to enact robust safeguards for people whose images are manipulated using new AI models or the many apps and websites that openly advertise such services. Advocates and legal experts are also calling for federal regulation that would provide uniform protections nationwide and send a clear message to would-be perpetrators.
Dorota Mani, whose daughter was a victim in Westfield, a New Jersey suburb, emphasized, “We’re fighting for our children. They are not Republicans, and they are not Democrats. They just want to be loved, and they want to be safe.”
While deepfakes are not new, experts say the problem is worsening as the technology needed to create them becomes more available and easier to use. Researchers have sounded the alarm this year on the surge of AI-generated child sexual abuse material depicting real victims or virtual characters. In June, the FBI warned of a growing number of reports from victims, both minors and adults, whose photos or videos were manipulated into explicit content that was shared online.
Several states have enacted their own laws over the years to combat the problem, but the scope varies. Texas, Minnesota, and New York recently passed legislation criminalizing nonconsensual deepfake porn, joining Virginia, Georgia, and Hawaii. Some states, such as California and Illinois, only empower victims to sue perpetrators for damages in civil court.
Several other states are contemplating their own legislation, including New Jersey, where a bill is in progress to ban deepfake porn and impose penalties on those who disseminate it.
State Sen. Kristin Corrado, a Republican, expressed optimism about the bill’s passage, citing the heightened attention due to the Westfield incident.
The Westfield incident occurred over the summer and was reported to the high school on Oct. 20. The AI-generated images were allegedly circulated among a group of friends on Snapchat. The school has not disclosed any disciplinary actions, and authorities have declined to comment.
Few details have been released about the incident in Washington state, which has been under investigation since October. Authorities have obtained search warrants, and further information may emerge as the probe continues.
If the New Jersey incident leads to prosecution, current state laws against the sexual exploitation of minors may apply. However, Mary Anne Franks, a law professor at George Washington University, advocates for federal legislation providing consistent nationwide protections and penalties for organizations profiting from products and apps facilitating deepfake creation.
President Joe Biden’s October executive order calls for barring the use of generative AI to produce child sexual abuse material or non-consensual intimate imagery. It also directs the federal government to issue guidance for labeling and watermarking AI-generated content.
U.S. Rep. Tom Kean, Jr., introduced a bill requiring disclosures on AI-generated content, while U.S. Rep. Joe Morelle’s bill would make it illegal to share deepfake porn images online.
Some groups, including the ACLU, the Electronic Frontier Foundation, and The Media Coalition, caution against overbroad proposals that could infringe on First Amendment rights. They urge lawmakers to gather stakeholder input and craft legislation that addresses the problem without sweeping too expansively.
Mani and her daughter have taken proactive steps, creating a website and a charity to aid AI victims. They are engaging with state lawmakers and planning a trip to Washington to advocate for stronger protections. Mani stresses the need for support systems for affected children, saying, “Not every child, boy or girl, will have the support system to deal with this issue, and they might not see the light at the end of the tunnel.”