AI videos of child sexual abuse surged in 2025 as Grok faces investigation

This photograph, taken on Jan. 13, 2025, in Toulouse, shows screens displaying the logo of Grok, a generative artificial intelligence chatbot developed by xAI, the American artificial intelligence company founded by South African-born businessman Elon Musk. | Lionel Bonaventure/AFP via Getty Images

Artificial intelligence-generated videos depicting child sexual abuse surged by more than 26,000% in 2025, amid growing concern over Grok, an AI chatbot now under investigation for producing similar content. California Attorney General Rob Bonta has announced an official inquiry into Grok and its developer, xAI.

The U.K.-based Internet Watch Foundation, or IWF, reported identifying 3,440 such AI-generated videos in 2025, up from just 13 the previous year, with more than half of the videos falling under “category A,” the classification for the most severe forms of abuse, CBS News reported.

The chatbot Grok, developed by Elon Musk’s xAI and integrated with the X platform, became a focal point of controversy in December after Copyleaks, a content detection firm, found it was generating one nonconsensual sexualized image per minute, according to its estimates.

In response, xAI stated it had implemented changes to block users from generating images of people in minimal clothing through Grok, following scrutiny from regulators in the European Union and consumer protection agencies in the United States.

"We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules," a statement from the company reads. "We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary."

"We have implemented technological measures to prevent the @Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers."

The National Center on Sexual Exploitation sent a statement to The Christian Post saying Grok had created tens of thousands of images that sexualized women and children without their consent. The group called on Congress to act swiftly to regulate AI-powered tools.

“Sexually-explicit deepfake images are used to harass, de-humanize, and silence people, particularly women,” said Dani Pinter, chief legal officer at NCOSE. “If AI technology is permitted to advance without accountability, it will not be a tool of innovation, but rather a cheap weapon for harassment and humiliation.”

The group said Grok is the latest in a line of examples showing how tech companies profit from the tools that enable mass production of exploitative imagery, and that existing laws have failed to protect American users.

The organization voiced support for a series of federal bills it says are essential to reining in abuses made possible by AI, including the DEFIANCE Act, which passed the Senate last week and would allow survivors of deepfake sexual imagery to take civil action against perpetrators.

The group is also pressing for the passage of the Kids Online Safety Act and the Sunset Section 230 Act, which would impose legal duties on platforms to prevent and mitigate harm to minors and reduce the immunity enjoyed by tech companies over user-generated content.

Other bills cited by the group include the STOP CSAM Act, A.I. LEAD Act, GUARD Act, NO FAKES Act, and the Frederick Douglass Trafficking Victims Prevention and Protection Reauthorization Act of 2025.

The IWF said that the AI-generated child sexual abuse material it recorded last year was part of a broader spike in harmful content, with over 300,000 CSAM reports processed by the watchdog group.

Federal law in the United States classifies the production or distribution of child sexual abuse material as a criminal offense under statutes that prohibit child pornography.

The IWF said newer AI tools are especially concerning because they allow the generation of realistic videos at scale, using little technical skill, and often by combining real images of children scraped from the internet.

The group’s analysts said they believe the rise in incidents reflects the growing accessibility of AI tools for malicious purposes.
