Artificial intelligence opening door for predators to sexually abuse children, experts warn

Getty Images/Oscar Wong

Artificial intelligence has made it easier for predators to sexually abuse children and harder for law enforcement agencies to hold these bad actors accountable, according to a recent report.

Britain’s top law enforcement agency, the National Crime Agency, warned that between 680,000 and 830,000 adults in the United Kingdom pose a sexual risk to children.

NCA Director General Graeme Biggar said in a recent speech that the figure is “roughly 10 times the prison population” and noted that it illustrates the dangers of the internet.

The internet has made it easy for sexual predators to access and share videos and images of child sexual abuse, and with the advent of AI, the NCA has begun seeing hyper-realistic AI-generated images and videos of such material.

“The use of AI for child sexual abuse will make it harder for us to identify real children who need protecting and further normalize abuse,” Biggar said. “And that matters because we assess that the viewing of these images — whether real or AI-generated — materially increases the risk of offenders moving on to sexually abusing children themselves.” 

“There is also no doubt that our work is being made harder as major technology companies develop their services, including rolling out end-to-end encryption, in a way that they know will make it more difficult for law enforcement to detect and investigate crime and protect children.”

The Internet Watch Foundation, a charity that works to remove child sexual abuse imagery from the internet, called for the implementation of stronger online safeguards in response to the NCA’s report.

“Increasing online activity by offenders, described as the new ‘front line’ by the NCA, means we cannot expect children to safely navigate the risks online on their own,” IWF CEO Susie Hargreaves OBE said in a statement last month. “We need to ensure technological safeguards are in place to protect children.” 

Hargreaves emphasized that it is “vital” for companies that use or plan to use end-to-end encryption to also introduce safeguards to prevent online abuse of children.

“Some online platforms continue to provide predators with a way to initiate contact with children, enabling opportunities for grooming, online child sexual abuse and physical abuse,” she continued.

“There is a danger that if technology companies introduce end-to-end encryption on their platforms, it will make it even harder to identify and stop online predators.”

As the IWF explains on its website, end-to-end encryption is a form of online privacy that goes further than standard encryption: on social media, “messages which are end-to-end encrypted can only ever be seen by the sender and receiver.”

“Theoretically, that sounds fine,” IWF stated. “But without additional safeguards, there is no opportunity, even for the company providing the messaging platform, to spot and prevent criminal content such as child sexual abuse imagery from being shared.” 
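
For readers who want to see the property the IWF is describing in concrete terms, below is a minimal sketch of end-to-end encryption in Python using the open-source PyNaCl library. The parties, keys and message are hypothetical illustrations of the general technique, not the code of any real messaging platform.

```python
# A minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# All names are illustrative; no real platform's implementation is shown.
from nacl.public import PrivateKey, Box

# Each party generates a keypair and shares only the public half.
sender_private = PrivateKey.generate()
receiver_private = PrivateKey.generate()

# The sender encrypts with their private key and the receiver's public key.
sender_box = Box(sender_private, receiver_private.public_key)
ciphertext = sender_box.encrypt(b"A message only the recipient can read")

# A server relaying `ciphertext` sees only random-looking bytes; without a
# private key, it cannot decrypt or scan the content for abusive material.

# Only the receiver, who holds the matching private key, can decrypt.
receiver_box = Box(receiver_private, sender_private.public_key)
plaintext = receiver_box.decrypt(ciphertext)
assert plaintext == b"A message only the recipient can read"
```

Because only the two endpoints hold the private keys, any platform in the middle handles nothing but ciphertext, which is the gap the IWF says must be closed with additional safeguards before end-to-end encryption is rolled out.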

In June, the FBI issued a warning that malicious actors may use AI to alter innocuous images or videos of people into sexually explicit content, and some of the victims in these sextortion cases have been minors. Sextortion is a crime in which a perpetrator coerces or blackmails a victim using sexually explicit images or videos.

“Malicious actors use content manipulation technologies and services to exploit photos and videos — typically captured from an individual’s social media account, open internet or requested from the victim — into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums or pornographic websites,” the FBI warned, as reported by Fox News.

According to the FBI, victims are often unaware that someone has manipulated their likeness and circulated it around the internet. The agency also reported a 463% increase in sextortion reports from 2021 to 2022 and said AI has made it easier for predators to exploit victims.

However, Yaron Litwin, an executive at the digital safety company Canopy, has reportedly developed AI software that blocks these kinds of manipulated images and sends out alerts.

“Our technology is an AI platform that has been developed over 14 years … and can recognize images and video in split seconds,” Litwin told Fox News Digital.

“The platform itself will also filter out in real time any explicit image as you’re browsing through a website or an app and really prevent a lot of pornography from appearing online.”

Samantha Kamman is a reporter for The Christian Post. She can be reached at: samantha.kamman@christianpost.com. Follow her on Twitter: @Samantha_Kamman
