
©2025 Poal.co


On Wednesday, May 11, 2022, the EU Commission is expected to publish draft legislation on so-called chat control. The plan is to have an AI-based check of all message content and images directly on our devices. The so-called client-side scanning would be an attack on any confidential communication.

The draft plans to examine all communication content directly on our devices and to forward it to the authorities in case of suspicion. This client-side scanning would not be the first excessive and misguided surveillance method justified by the fight against child abuse.

There is no doubt that victims of child abuse need better help, but chat control is an excessive approach, easy to circumvent, and it starts in entirely the wrong place. An unprecedented monitoring tool is to be introduced without any prospect of success toward the actual goal.

Completely missed the mark

The proposed bill would require every device to scan every message for images of child abuse and for contact between criminals and children. If such content is detected in a message, it would be forwarded directly to a supervisory authority or the police.
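The flow described above can be sketched roughly as follows. The draft specifies no API, so every name here (classify, REPORTS, the toy keyword check and toy "encryption") is a hypothetical stand-in for illustration only:

```python
# Minimal sketch of the client-side scanning flow the draft describes.
# All names and the keyword-based "classifier" are illustrative assumptions.

REPORTS = []  # stand-in for the channel to a supervisory authority

def classify(plaintext: str) -> bool:
    """Placeholder for the proposed on-device AI check (toy keyword match)."""
    return "suspicious" in plaintext

def encrypt(plaintext: str) -> str:
    return plaintext[::-1]  # toy stand-in for real end-to-end encryption

def send_message(plaintext: str, recipient: str) -> str:
    # The check runs on the *plaintext*, on the device, before encryption --
    # which is why end-to-end encryption offers no protection against it.
    if classify(plaintext):
        REPORTS.append((recipient, plaintext))
    return encrypt(plaintext)
```

The key structural point is the ordering: because the check precedes encryption, the confidentiality that end-to-end encryption normally guarantees is bypassed entirely on the sender's own device.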

Mass scanning not only attacks confidential communications at their very foundations, but would be ineffective to boot: criminals already use distribution channels that would not be affected by these scans and will easily escape scans in the future:

The perpetrators use public hosters instead of the messengers targeted by the Commission - not least because messengers are completely unsuitable for exchanging large collections of files. They also encrypt the data before exchanging it.
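The circumvention point above can be illustrated with a toy hash-matching check, the simplest form such scanning can take. The "database", the sample file, and the XOR "encryption" are all assumptions made up for this sketch:

```python
# Why pre-encrypted files defeat content scanning: a hash- or pattern-based
# check only works on recognizable plaintext. Everything here is a toy model.
import hashlib

KNOWN_HASHES = {hashlib.sha256(b"known-abusive-file").hexdigest()}  # toy database

def scan(blob: bytes) -> bool:
    """Flag a blob if its hash appears in the (toy) database."""
    return hashlib.sha256(blob).hexdigest() in KNOWN_HASHES

original = b"known-abusive-file"
encrypted = bytes(b ^ 0x5A for b in original)   # toy XOR "encryption" (assumption)

print(scan(original))    # True  -- plaintext matches the database
print(scan(encrypted))   # False -- same file, pre-encrypted, passes undetected
```

Any encryption applied before upload changes every byte of the file, so the hash no longer matches; more sophisticated perceptual or AI classifiers fail on ciphertext for the same reason.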

For this reason alone, the planned surveillance will not prevent the further dissemination of abusive images.

No trustworthy communication without trustworthy devices

Journalists and whistleblowers are not the only ones who depend on trustworthy communication - it is a fundamental right and an important cornerstone of IT security for us all. For communication to be truly trustworthy, two conditions must be met:

The device must have integrity and must not leak content to third parties.

Encryption must be secure, so that we do not have to trust the network.

Chat control suspends two fundamental rights: the secrecy of telecommunications, and the guarantee of the confidentiality and integrity of information technology systems. Users lose control over what data they share and with whom. They lose basic trust in their own devices.

continued in comments.



[–] 2 pts

amazing how news like this gets so little traction

[–] 0 pt

not, really, it's just jews controlling the media

[–] 0 pt

so, it is jew even here ?

how many of the users are bot ?

So far, it is not clear who is supposed to define and control the recognition algorithms and databases. Such an opaque system can and will easily be expanded after its introduction. It is already foreseeable that the copyright industry will be just as interested in the system as anti-democratic governments. The naivety with which it is now to be introduced is all the more frightening.

Error rates lead to a flood of images at control points

An "artificial intelligence" that checks for abusive content will also falsely mark legal content as illegal. Even the smallest error rates would lead to massive amounts of falsely "detected" and forwarded messages: in Germany alone, well over half a billion messages are sent per day. Even an extremely "good" recognition rate would still lead to several thousand messages being forwarded per day.
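The base-rate problem described above can be made concrete with a quick back-of-the-envelope calculation. The message volume comes from the text; the false-positive rate is an assumed, optimistic figure, not a number from the draft:

```python
# Back-of-the-envelope sketch of the false-positive flood.
messages_per_day = 500_000_000       # "well over half a billion" per day in Germany
false_positive_rate = 0.00001        # 0.001% -- assumed, optimistic figure

false_flags = messages_per_day * false_positive_rate
print(f"{false_flags:,.0f} legal messages falsely forwarded per day")
# → 5,000
```

Even at an error rate of one in a hundred thousand, thousands of entirely legal messages would land at control points every single day, matching the "several thousand" figure in the text.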

Naturally, the likelihood of such false reporting rises for private, entirely legal and consensual image exchange among adults and young people. Young adults can already look forward to having their age estimated by monitoring agencies. And the nagging worry about whether our messages are being leaked, who is viewing them, and how safe they are there from abuse will affect us all.

At the same time, checkpoints will accumulate mountains of irrelevant material, preventing officers from doing important investigative work. Investigating authorities are already overburdened with the data they are currently receiving. Investigative successes fail to materialize, and materials found are not even deleted. Effectively eliminating these deficits would be the most important goal in the fight against child abuse. Instead, the Commission wants to rely on mass surveillance and the promise of salvation of "artificial intelligence."

Chat control must be rejected outright as a fundamentally misguided technology.

https://www.ccc.de/en/updates/2022/eu-kommission-will-alle-chatnachrichten-durchleuchten

Doesn't Apple already have something like this on its new iPhones?

[–] 0 pt

Same with COVID, within a month their latest update had everything needed for full-blown surveillance. Like it was planned or something ....

[–] 1 pt

You have a license for that conversation?

EU plans to publish 'chat control' draft law that would enforce AI checks of all message content, images.

https://www.rebelnews.com/eu_plans_to_publish_chat_control_draft_law_that_would_enforce_ai_checks_of_all_message_content_images