Every day thousands of children are being sexually abused. You can stop the abuse of at least one child by simply praying. You can possibly stop the abuse of thousands of children by forwarding the link in First Time Visitor? by email, Twitter or Facebook to every Christian you know. Save a child or lots of children!!!! Do Something, please!

3:15 PM prayer in brief:
Pray for God to stop 1 child from being molested today.
Pray for God to stop 1 child molestation happening now.
Pray for God to rescue 1 child from sexual slavery.
Pray for God to save 1 girl from genital circumcision.
Pray for God to stop 1 girl from becoming a child-bride.
If you have the faith, pray for 100 children rather than one.
Give Thanks. There is more to this prayer here

Please note: All my writings and comments appear in bold italics in this colour

Monday 10 July 2023

The Latest Wave in Online Child Sexual Abuse - AI: The Potential for Harming Kids is Frightening; It's Everywhere, Even New Zealand


Paedophile perverts are among the very first to master new platforms and new apps in their relentless pursuit of children (read innocence) to destroy. AI is the latest technology to make a serious impact on behalf of perverts everywhere. The full extent of the evil that AI is capable of enabling in the soul-killing abuse of children has yet to be revealed. Here is just the beginning:



The fight against Britain’s paedophile gangs – and why it’s getting harder


Wave of generative AI is making the Internet Watch Foundation’s job harder than ever


By Matthew Field, The Telegraph
10 July 2023 • 10:00am

In a village on the leafy outskirts of Cambridge, an innocuous office block is on the front line of Britain’s battle to stop the spread of illegal child sexual abuse videos across the web.

Inside, a team of 30 or so analysts from the non-profit Internet Watch Foundation (IWF) filter through reams of potentially illegal material, either reported to its hotline by victims or hunted down proactively. 

The foundation then alerts internet providers who take down illegal websites, which are often run by criminal paedophile gangs for profit. It also tags images with a special computer hash that means they can be instantly blocked if someone tries to upload them on the web again.

Dealing with such horror on a daily basis is a task few would willingly take on. However, for Dan Sexton, the group’s chief technology officer, it is a moral imperative.

“Everyone is here because they want to protect children,” he says. “They want to make the internet a safer place. That drives people and pulls people together.”

Every 2 minutes they find a new image


Last year, the IWF assessed 375,230 reports of child abuse content – a 4pc increase on the previous year – and found 255,588 of them contained illegal imagery, or were advertising it. It also identified over 1.6 million unique images of child abuse. Its team typically find a child abuse image every two minutes. 

But its job is getting harder, Sexton says. The surging popularity of private messaging apps that rely on end-to-end encryption – scrambling their contents so they cannot be read by anyone other than the sender and receiver – has made it harder to put a stop to the spread of material. 

And an even more recent threat identified by the IWF is the emergence of a new wave of artificial intelligence – so-called generative AI.

New AI tools can invent artificial yet extremely lifelike images based on text prompts. While some of these AI engines are controlled by tech companies, others are entirely “open source”, meaning anyone can access their code.

It has not taken long for dark web criminals to spot an opening. So-called “deep fake” pornography – artificial images and videos of celebrities – started proliferating some time ago. Now, paedophiles are using AI tools to generate sickening synthetic sexual abuse images of children. Such content is illegal in the UK. 

Campaigners fear a trickle of these digitally produced images could become a flood. AI makes it trivially easy to produce such imagery on a massive scale.

“Child sexual abuse offenders adopt all technologies and some believe the future of child sexual abuse materials lies in AI-generated content,” GCHQ, the government’s spy agency, told the BBC last month.



AI - Worst-case scenario


David Thiel, of the Stanford Internet Observatory, another non-profit that fights the spread of child abuse, told The New York Times in June that AI-generated child abuse material would be “absolutely the worst case scenario for machine learning that I can think of”.

“AI-generated imagery is a concern of ours,” Sexton says. “It was something that was hypothetical a little while ago. Now it is definitely happening.”

“It has the potential to get much worse very quickly. It is something we need to be ready for. We only have a certain amount of resources.”

The IWF, which raised around £5.5m from donations and industry in 2022, is the only non-law enforcement group in Britain allowed to take proactive action to hunt down illegal child abuse videos. 

Accessing or seeking out child abuse material is against the law, but the IWF, launched in 1996 and funded by the tech industry and private donors, operates under a special agreement with the police and the Crown Prosecution Service to take down illegal websites.

At the centre of its headquarters is its hotline room, manned by a clutch of analysts and responders who take tip-offs from the public or victims of abuse about videos spreading online.

When The Telegraph visits this inner sanctum, the team turn all their monitors off, as nobody without authorisation is allowed – or for that matter would wish – to see the websites they are inspecting. 

“Our hotline are experts in finding content, looking for exploitive websites, whether that is open web or dark web,” Sexton says. 

When the foundation finds illegal websites, its staff alert internet service providers and social media companies, which then block the addresses.

It also uses technology to stop the spread of illegal images. Known child abuse images are tagged – or hashed – with a kind of digital watermark. This coded image is then stored in a database. If the same image is shared or uploaded in future, tech companies can easily block it by spotting this code and the police can be alerted to its existence.
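For readers curious how this hash matching works in principle, here is a minimal sketch in Python. It is illustrative only: the blocklist entry and function names below are hypothetical, and real systems such as Microsoft's PhotoDNA use perceptual hashes that still match resized or re-encoded copies of an image, rather than the exact-match cryptographic hash used here.

import hashlib

# Hypothetical blocklist standing in for a database of hashes of
# known illegal images, such as the one the IWF maintains.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(data: bytes) -> str:
    # A cryptographic hash only matches byte-identical files; production
    # systems use perceptual hashes that also match resized or
    # re-compressed copies of the same image.
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    # Block an upload whose hash is already in the blocklist.
    return image_hash(upload) in KNOWN_HASHES

The point of the design is that once an image has been hashed into the database, every platform sharing that database can block re-uploads automatically, without any human having to view the material again.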

The upcoming Online Safety Bill will place added demands on tech companies to block illegal content – such as by using techniques like hashing images – and threaten them with huge fines if they fail to stop child abuse spreading. 

But the wording of the bill has led to a bitter row between tech companies such as Meta-owned WhatsApp and the government, child safety campaigners and the police. Tech companies argue the government could open the door to state snooping by weakening encryption. Child safety advocates, on the other hand, argue they are being hampered by the popularity of end-to-end encrypted apps.

The bill, which is inching its way through Parliament, empowers Ofcom to compel companies to use their “best endeavours” to develop technology that can identify child abuse content.

That would be nice, for a change.

Private messaging services WhatsApp, which is used by 2bn people worldwide, and Signal have warned this threatens their future in the UK. They say the law could force them to indiscriminately scan images on phones – in technical terms “client side scanning” – as they are uploaded. Meta, which owns WhatsApp, has said it plans to add further encryption to its other messaging services, such as Messenger. 

Michelle Donelan has described the Online Safety Bill as a 'win' for parents. Credit: John Lawrence

Last week, Apple echoed these concerns, arguing the bill posed a serious threat to the privacy provided by encryption and calling for an amendment to the law.

Sexton says the debate has grown “very unhelpful and very frustrating”.

“Our worry is at some point these children might just say, ‘Well there’s no point reporting it’,” he says.

The IWF engineer argues that it is within the gift of tech companies to stop abuse spreading, claiming they already run some checks on encrypted messaging apps for viruses and scam links.

WhatsApp insists these checks are very different from tools that scan images and videos, and no data is sent to third party servers or shared with law enforcement.

“The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals,” a spokesman for Meta says.

“We don’t think people want us reading their private messages so have developed safety measures that prevent, detect and allow us to take action against this heinous abuse, while maintaining online privacy and security.”

Apple and Meta are both members of the IWF and each has donated to the group, as have Amazon, Google and TikTok. Sexton says the foundation aims to “support all these companies to find a solution”.

Still, the IWF and many of its funders appear increasingly at odds over encryption.

“It’s letting children down,” Sexton says.

“They’re directly asking those platforms: ‘Please stop this image being distributed.’ And they can do it.

“It’s just a false choice to say someone else’s privacy is more important than those children’s dignity and their privacy.”

It is almost always children that suffer from the greed, the evil, and the madness of men.





AI-created child sex abuse imagery seized in NZ


Chief customs officer at the Child Exploitation Operations team, Simon Peterson (right) pictured with Customs operations manager Stephen Waugh. Photo: David White/Stuff.


Artificial Intelligence-created child abuse imagery has been seized in New Zealand, including a game depicting child sexual abuse.

Stuff contacted New Zealand Customs, police and the Department of Internal Affairs to ascertain whether any AI-created objectionable imagery had been discovered in New Zealand, following reports of instances overseas. All three agencies confirmed they were aware of such material.

“Customs has seen an increase in digitally-generated child sexual exploitation material,” says Simon Peterson, chief customs officer at the Child Exploitation Operations team.

“Recently, Customs seized a game that was created depicting child sexual abuse. The concept is not new, but the power of AI is unfortunately making these images appear more realistic.”

Detective Inspector Stuart Mills, manager of police intercept/technology operations, says police are “aware artificial intelligence is being misused to create images depicting the sexual abuse of children”.

“This is an issue confronting law enforcement internationally.”

A DIA spokesperson says it has seized “significant quantities” of child sexual exploitation material, “including computer-generated imagery”.

“DIA is aware of online forums dedicated to the discussion of computer-generated child exploitation material, including AI content.”

All three agencies, and a number of AI experts Stuff spoke to, are clear that, despite being purely AI-generated, the abuse material is covered by existing laws and is illegal.

Somewhere there has to be a human who tells the AI what to do - it can't be 'purely AI-generated'.

“Any publication that promotes or supports the exploitation of children for sexual purposes, whether digitally generated or not, is deemed an objectionable publication,” says Peterson.

He says Customs has seized digitally-created abuse material since at least the early 2000s, typically made using technology like Photoshop, but is now seizing material that is “purely AI”.

He says around a quarter of the material now seized by the three agencies is digitally created.

“Some people’s collections will be mainly digital stuff.

“The risk with the AI platform is any idiot can use it. I’d like to say we’re pretty good at picking the fakes but AI can be pretty realistic.

“Someone can make something with AI and we couldn’t tell the difference.”

He says AI is simply the latest tech space being exploited by paedophiles, following on from the internet, then social media.

“It’s made child abuse more available. A scary prospect.”

Mills, from police, is clear the AI material creates real-world harm too.

“Outside of these images being shared, sold or traded, AI-generated imagery is likely being used by offenders to manipulate and coerce young victims online in instances of offending like sextortion.”

According to Associate Professor Colin Gavaghan, University of Otago chair in emerging technologies, the use of AI for these purposes is not “remotely surprising”.

“Worries about ‘deep fakes’ have been around for years now,” he says.

“Digitally-rendered images depicting real people in sexual situations have been used quite frequently in attempts to humiliate usually female politicians and public figures.

“Pseudo images of child abuse are nothing new either – there have been convictions for those before now. The only thing that’s changing is that they’re getting more realistic and harder to distinguish from real images.”

Peterson says that when it comes to the tech companies themselves, some flag abuse material to the authorities, but “some are better than others”.

Remarkably, according to University of Waikato Artificial Intelligence Institute director Albert Bifet, the tech companies may be unable to detect whether their technology is being abused to create this material.

Asked if they could detect the creation of abuse material on their platforms, he says “unfortunately this cannot be done currently”.

“However, the EU and UK are considering a requirement for labelling pictures and videos generated by AI. Additionally, they may request that AI models disclose the data used in their creation.”

-Benn Bathgate/Stuff.


