It's Coming! - Leaked Plan for EU Internet Censorship Legislation

This is crazy...but not unexpected. If it is coming to Europe, it is coming here, folks. They have been building a matrix for us to live in for some time, and much of the technology is already in place. The question is: who decides who is a terrorist, and what constitutes "hate speech"?
- W.E.


Europol, bots, end-user filters, creepy NGOs, Virtual Police Officers and abuse officials will soon be trawling our Internet and social networks for "extremist" content

Excerpts below; the full PDF report is embedded at the bottom of the post.

But first, Clean IT's embarrassing excuse after EDRI leaked the document


28-8-2012, version 0.63



This document is not for publication. The recipient may share this document only with others within their organization on a "need-to-know" basis.

Legal framework


a. All States committing to the Clean IT document must implement the EU FD 2002 and EU FD 2008;

b. All States must implement the EU Data Directive;

2. Governments must have clear policies on intelligence gathering and when to take action against terrorist or radicalizing content;

3. Governments must have specialized police officer(s) 'patrol' on social media;

7. Governments must disseminate lists of domain names that are not allowed to be registered, to prevent terrorist propaganda;

8. Governments must subsidize competent NGOs that substantially contribute to reducing terrorist use of the Internet and radicalizing content on the internet;

11. Governments should include Internet companies' track record on reducing terrorist use of the Internet as a criterion in purchasing policies and public relations policies;

12. Governments could have programs to educate web moderators;

13. Governments could implement counter narrative policies and projects.


e. How flagging/reporting works must be explained by service providers to their users.

f. The anonymity of the reporter must be preserved. Reporter details must never be shown to content owners;

g. Internet companies must be sufficiently (quantity and quality) staffed or supported to handle reports. Recognizing illegal, terrorist use of the Internet requires specialist knowledge on terrorism, (national) legislation and (national) cultural differences;

h. Reports of (potential) cases of terrorist use of the platform must be analyzed. Clearly illegal terrorist activity must be reported to LEAs immediately;

i. Specialized NGOs should actively flag what is (deemed) illegal terrorist content;

j. LEAs should primarily use formal ways of notifying Internet companies (notice and take action). In some countries flagging is also regarded as a formal notification;

k. Internet companies could extend a higher credibility status to trusted flagging organizations, like LEAs and specialized NGOs. Users could also be provided higher credibility status based on their (calculated) reputation in successfully reporting abuse.

i. Service/business conditions and abuse policies should not be very detailed in describing terrorist activity. A very detailed description will very likely cause gaps;


1. A European organisation, preferably Europol, should operate and host this points of contact system;

3. Persons in the points of contact system whose performance is regularly and repeatedly complained about should be removed from the system and replaced by another person from the same organisation;

4. Large Internet companies should not have to deal, if they do not want to, with too many individual LEAs and NGOs, but only a few representatives. It would be good if regular telephone calls were organised to discuss overall contacts and cooperation. It would also be desirable to channel requests to these companies via a limited number of (government or LEA) contacts;

5. Only persons committed to implementing the Clean IT 'draft document' practices should be included in the Points of Contact System. Plus some recognized and 'logical' LEA, government and academic/research specialists.

a. The Organization will provide research and advice on terrorist and other content which is recognised as dangerous throughout the EU and in each individual country;

e. The organization must include staff drawn from, not representatives of, governments, LEAs, Internet companies, NGOs and academic institutions;

f. The Organization should provide advice on:

• Material that can be researched and used for 'machine-learning';

• Known terrorist and extremist content;

• Hate speech;

End Privacy

4. Internet companies must allow only real, common names. These must be entered when registering. On request of the Internet company, a registrant must provide proof of the real or common name. Internet companies can also request proof of the real or common name after a user has been flagged, signalled or reported.

5. Social media companies must allow only real pictures of users;

7. Internet companies must know and store the real identity and contact information of users and customers, in order to provide at least this information to LEAs in case of an investigation into (potential) terrorist use of the Internet.

Virtual Police Force!

1. Virtual community policing must be used to find and connect with persons in danger of becoming radicalized.

2. To reduce terrorist use of the Internet virtual community police officers should (also) be active on those social media platforms known for terrorist or radicalizing activity.

5. Virtual police officers should use easy-to-understand ('popular') language, friendly icons and profile photographs, in order to lower the threshold for being contacted and to be effective with (often younger) users of the social medium.

7. Virtual police officers must organize ways for other users of the social media to 'follow' or 'link' (to) them, to increase awareness of terrorist use of the social medium and of what is being or can be done against it.

8. Virtual police officers should become members in extremist and terrorist fora as much as possible, subscribe to news, mailing and alerts etc. to be able to detect any terrorist content or activity.

11. Virtual police officers should act on any terrorist content or activity they encounter, not only that which is (clearly) related to their country, geographical unit or specialism.

12. Virtual police officers should also discuss with parents the dangers of radicalization of their children via social media.

Automatic Detection Systems

7. Governments and LEAs must offer assistance to Internet companies developing or using automated detection systems;

9. All terrorist use of the Internet signaled by the automated detection system, and afterwards judged as such by the abuse officer of an Internet company, must be removed from the Internet and brought to the attention of LEAs as soon as possible;

10. All terrorist use of the Internet signaled by the automated detection system, and afterwards judged as such by an NGO, must be brought to the attention of the Internet company and the competent LEA as soon as possible;

Operating System and Browser-Based Policing Buttons!

13. The browser- or operating system-based reporting button must send a signal to the Internet company involved, which will take appropriate action;

14. The system will also send a signal to the LEA, which after some time will check whether it is satisfied with the action taken by the Internet company, and could choose to start a formal notice and action procedure;

15. Governments will start drafting legislation that will make offering such a system to Internet users obligatory for browser or operating system companies, as a condition of selling their products in that country or the European Union.

CleanIT -Leaked EU Internet Censorship Legislation - LEA Clean IT