In recent months, child safety experts and researchers have warned of the risk that generative AI could exacerbate online abuse.
Meta CEO Mark Zuckerberg speaks at a US Senate Judiciary Committee hearing on January 31, 2024, before an audience of attendees holding AI-generated images. Photo: Reuters
NCMEC has not yet released the total number of reports of child abuse material it received from all sources in 2023. In 2022, however, it received reports concerning about 88.3 million such files.
“We are getting reports from innovative AI companies themselves, (online) platforms, and members of the public,” said John Shehan, vice president of NCMEC.
The CEOs of Meta, X, TikTok, Snap and Discord testified at a US Senate hearing on child safety on online platforms on Wednesday (January 31), where lawmakers questioned the companies about their efforts to protect children from "online predators".
Generative AI could be used by bad actors to repeatedly harm real-life children by creating fake images of them, researchers at the Stanford Internet Observatory said in a report last June.
Fallon McNulty, director of NCMEC’s CyberTipline, which takes reports of online child exploitation, said AI-generated content is becoming “more and more photorealistic,” making it difficult to determine whether victims are real people.
OpenAI, the company that created ChatGPT, has set up a process to submit reports to NCMEC, and the organization is in talks with other AI companies, McNulty said.
Hoang Hai (according to Reuters, FT)