The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.
In a written report, the U.K.-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.
“We’re not talking about the harm it might do,” said Dan Sexton, the watchdog group’s chief technology officer. “This is happening right now and it needs to be addressed right now.”
In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.
In some cases, kids are using these tools on each other. At a school in southwestern Spain, police have been investigating teens’ alleged use of a phone app to make their fully dressed schoolmates appear nude in photos.
The report exposes a dark side of the race to build generative AI systems that enable users to describe in words what they want to produce — from emails to novel artwork or videos — and have the system spit it out.
If it isn’t stopped, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters. Perpetrators could also use the images to groom and coerce new victims.
Sexton said IWF analysts discovered faces of famous children online as well as a “massive demand for the creation of more images of children who’ve already been abused, possibly years ago.”
“They’re taking existing real content and using that to create new content of these victims,” he said. “That is just incredibly shocking.”
Sexton said his charity organization, which is focused on combating online child sexual abuse, first began fielding reports about abusive AI-generated imagery earlier this year. That led to an investigation into forums on the so-called dark web, a part of the internet hosted within an encrypted network and accessible only through tools that provide anonymity.
What IWF analysts found were abusers sharing tips and marveling at how easy it was to turn their home computers into factories for generating sexually explicit images of children of all ages. Some are also trading such increasingly lifelike images and attempting to profit from them.
“What we’re starting to see is this explosion of content,” Sexton said.
While the IWF’s report is meant to flag a growing problem more than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse. It particularly targets the European Union, where there’s a debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse even if the images are not previously known to law enforcement.