Every day thousands of children are being sexually abused. You can stop the abuse of at least one child simply by praying. You can possibly stop the abuse of thousands of children by forwarding the link in First Time Visitor? by email, Twitter or Facebook to every Christian you know. Save a child or lots of children!!!! Do Something, please!

3:15 PM prayer in brief:
Pray for God to stop 1 child from being molested today.
Pray for God to stop 1 child molestation happening now.
Pray for God to rescue 1 child from sexual slavery.
Pray for God to save 1 girl from genital circumcision.
Pray for God to stop 1 girl from becoming a child-bride.
If you have the faith, pray for 100 children rather than one.
Give Thanks. There is more to this prayer here

Please note: All my writings and comments appear in bold italics in this colour

Thursday, 19 August 2021

Extraordinary Wave of Concern Over Everyone's Rights Except Children's - What Else is New?


Apple’s new ‘child-safety’ features face fresh challenge over censorship & privacy from over 90 rights groups

19 Aug, 2021 16:47

[Image credit: © Unsplash / Bagus Hernawan]


Apple’s controversial plan to scan user photos and conversations for child sexual abuse material (CSAM) faces renewed criticism after rights groups warned it would “censor protected speech”, threaten privacy and endanger children.

In a letter published on the Center for Democracy and Technology website, a coalition of more than 90 groups from around the world urged Apple CEO Tim Cook to drop plans to introduce the surveillance feature – known as CSAM hash-matching – designed to detect child sexual abuse imagery stored in iCloud.

While their concerns are certainly valid, their lack of concern over the children being sexually assaulted to produce the CSAM is far more disturbing. Their priorities seem very selfish, showing no concern for children being horribly abused.

The letter, published on Thursday, points to the use of “notoriously unreliable” machine learning algorithms to scan for sexually explicit images in the ‘Messages’ service on iOS devices. It notes that this could result in alerts that “threaten the safety and well-being” of young people with abusive parents.

“iMessages will no longer provide confidentiality and privacy to those users through an end-to-end encrypted messaging system in which only the sender and intended recipients have access to the information sent,” the groups warned.
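
The letter and the article describe the Messages feature only in prose, but the flow they object to – an on-device classifier scoring images on a child’s account and alerting a parent – can be sketched abstractly. The Python below is a purely hypothetical illustration: the function names, the threshold and the return values are all assumptions, not Apple’s actual implementation.

    # Hypothetical sketch of the on-device Messages scanning flow described
    # above: a local ML classifier scores an incoming image on a child's
    # account, and a high score triggers a blur plus a parental alert.
    # All names and values here are illustrative assumptions, not Apple's API.

    ALERT_THRESHOLD = 0.9  # assumed value; the real threshold is not public

    def explicit_score(image_bytes: bytes) -> float:
        """Stand-in for an on-device ML model (0.0 = benign, 1.0 = explicit)."""
        return 0.0  # placeholder so the sketch runs; a real model goes here

    def handle_incoming_image(image_bytes: bytes, is_child_account: bool) -> dict:
        """Decide whether to blur the image and notify a parent."""
        if not is_child_account:
            return {"blur": False, "notify_parent": False}
        flagged = explicit_score(image_bytes) >= ALERT_THRESHOLD
        # The signatories' worry: a misclassification here can surface a
        # false alert that endangers a young person with abusive parents.
        return {"blur": flagged, "notify_parent": flagged}

Note that the decision rests entirely on the classifier’s score, which is why the letter calls such algorithms “notoriously unreliable” grounds for alerting a parent.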

They added that the technology could also open the door to “enormous pressure”, and even legal compulsion, from various governments to scan for images deemed “objectionable”, such as those depicting protests or human rights violations, and even “unflattering images” of politicians.

Signatories to the letter include the American Civil Liberties Union, Electronic Frontier Foundation, Access Now, Privacy International, and the Tor Project. A number of overseas groups have also added their concerns about the policy’s impact on countries with different legal systems.

An Apple spokesman told Reuters the company had already addressed privacy and security concerns. Last week, it released a document explaining how the scanning software’s architecture is designed to resist attempts to abuse it.

Earlier this month, a separate letter posted on GitHub and signed by privacy and security experts, including former NSA whistleblower Edward Snowden, condemned the “privacy-invasive content scanning technology”. It also warned that the policy “threatens to undermine fundamental privacy protections” for users, under the guise of child protection.

Other concerns have been raised about the possibility of “false positives” in the hash-scanning feature, which computes an image’s ‘hash’ – a string of letters and numbers intended to be unique to that image – and matches it against databases provided by child protection agencies such as the National Center for Missing and Exploited Children (NCMEC).
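
As a rough sketch of that matching step, the Python below uses SHA-256 as a stand-in hash. That is a deliberate simplification: Apple’s system uses a perceptual hash (“NeuralHash”) designed to survive resizing and re-encoding, which a cryptographic digest does not, and the database here is an empty placeholder.

    import hashlib

    # SHA-256 stands in for Apple's perceptual "NeuralHash"; unlike a
    # perceptual hash, a cryptographic digest changes completely when an
    # image is resized or re-encoded, so this is a simplification.

    def image_hash(image_bytes: bytes) -> str:
        """Return the 'string of letters and numbers' the article describes."""
        return hashlib.sha256(image_bytes).hexdigest()

    # Hypothetical stand-in for an agency-provided database of known hashes.
    known_csam_hashes: set = set()

    def matches_known_database(image_bytes: bytes) -> bool:
        """Flag an upload when its hash appears in the provided database."""
        return image_hash(image_bytes) in known_csam_hashes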

Although a recent Apple FAQ claimed the likelihood of a false positive was “less than one in one trillion [incorrectly flagged accounts] per year”, researchers this week reported the first case of “hash collision” – where two completely different images produce the same hash.
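
To make the collision concept concrete, the toy below deliberately truncates a hash to 16 bits so a collision can be found in milliseconds. Real systems use much longer hashes precisely to make this infeasible, which is why researchers producing a genuine NeuralHash collision was newsworthy.

    import hashlib
    from itertools import count

    def weak_hash(data: bytes) -> str:
        # Truncated to 4 hex chars (16 bits) so collisions are easy to find;
        # this is a toy, not how any real system hashes images.
        return hashlib.sha256(data).hexdigest()[:4]

    def find_collision():
        """Brute-force two different inputs that share the same weak hash."""
        seen = {}
        for i in count():
            data = f"image-{i}".encode()
            h = weak_hash(data)
            if h in seen:
                return seen[h], data
            seen[h] = data

    a, b = find_collision()
    assert a != b and weak_hash(a) == weak_hash(b)
    print("collision:", a, "and", b, "share hash", weak_hash(a))
    # A matcher keyed only on the hash would treat these two distinct inputs
    # as the same image - exactly the false-positive risk described above.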

According to TechCrunch, “hash collisions” are a “death knell” for systems that rely on cryptographic hashing.

However, the tech news outlet said Apple downplayed the concerns in a press call, arguing it had safeguards in place – including human moderators who review flagged incidents before anything is reported to law enforcement – to guard against false positives.
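
Apple’s stated safeguard can likewise be sketched. The gating logic below is an assumption-laden illustration: the names are invented, and the threshold reflects the figure of roughly 30 matches that Apple reportedly mentioned, not a confirmed constant.

    from typing import Callable, List

    # Sketch of the safeguard described above: automated matches accumulate
    # per account, and only past a threshold are they shown to a human
    # moderator, whose confirmation gates any report to law enforcement.
    # The threshold and every name here are assumptions for illustration.

    MATCH_THRESHOLD = 30  # Apple reportedly mentioned a figure of about 30

    def should_escalate(match_count: int) -> bool:
        """Below the threshold, matches are never surfaced for review."""
        return match_count >= MATCH_THRESHOLD

    def report_account(flagged: List[bytes],
                       human_confirms: Callable[[bytes], bool]) -> bool:
        """Report only if a human reviewer confirms the automated matches."""
        if not should_escalate(len(flagged)):
            return False
        # Human review is the backstop against hash-collision false positives.
        return all(human_confirms(img) for img in flagged)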

