Today the European Commission has released the results of its sixth evaluation of the Code of Conduct on countering illegal hate speech online. The results show a mixed picture as IT companies reviewed 81% of the notifications within 24 hours and removed an average of 62.5% of flagged content. These results are lower than the average recorded in 2019 and 2020. While some companies have improved, results for others have clearly worsened. As in previous monitoring rounds, a main weakness remains insufficient feedback to users’ notifications. Finally, a novelty in this year’s evaluation is the information provided by IT companies about measures they have taken to counter hate speech, including actions to automatically detect such content.
Věra Jourová, Vice-President for Values and Transparency, said: “Hate speech online can lead to real harm offline. Violence often starts with words. Our unique Code has brought good results, but the platforms cannot let their guard down and need to address the gaps. And a gentlemen’s agreement alone will not suffice here. The Digital Services Act will provide strong regulatory tools to fight illegal hate speech online.”
Didier Reynders, Commissioner for Justice, added: “The results show that IT companies cannot be complacent: just because the results were very good in recent years, they cannot take their task less seriously. They have to address any downward trend without delay. It is a matter of protecting a democratic space and the fundamental rights of all users. I trust that a swift adoption of the Digital Services Act will also help solve some of the persisting gaps, such as insufficient transparency and feedback to users.”
The sixth evaluation shows that on average:
- IT companies assessed 81% of the notifications in less than 24 hours, down from the 2020 average of 90.4%.
- IT companies removed 62.5% of the content notified to them, which is lower than the average of 71% recorded in 2019 and 2020.
- Removal rates varied depending on the severity of the hateful content: 69% of content calling for murder or violence against specific groups was removed, while 55% of content using defamatory words or pictures targeting certain groups was removed. By comparison, in 2020, the respective results were 83.5% and 57.8%.
- IT companies gave feedback to 60.3% of the notifications received, which is lower than during the previous monitoring exercise (67.1%).
- In this monitoring exercise, sexual orientation was the most commonly reported ground of hate speech (18.2%), followed by xenophobia (18%) and anti-gypsyism (12.5%).
- For the first time, the IT companies reported detailed information about measures taken to counter hate speech outside the monitoring exercise, including their actions to automatically detect and remove content.
The Commission will continue monitoring the implementation of the Code of Conduct. It calls on IT companies to reinforce their dialogue with trusted flaggers and civil society organisations in order to address the gaps in reviewing notifications and taking action, and to improve their feedback to users. The Digital Services Act (DSA) proposes a comprehensive legal framework for countering illegal content, as well as a co-regulatory system that builds on initiatives such as the Code of Conduct. The Commission aims to discuss with the IT companies how the Code could evolve, also in light of the upcoming obligations and the collaborative framework in the proposal for a Digital Services Act.
The Framework Decision on Combating Racism and Xenophobia criminalises public incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin. As defined in this Framework Decision, hate speech is a criminal offence also when it occurs online.
In order to respond to the proliferation of racist and xenophobic hate speech online, the European Commission and four major IT companies (Facebook, Microsoft, Twitter and YouTube) presented a Code of Conduct on countering illegal hate speech online on 31 May 2016. Since then, Instagram, Google+, Snapchat, Dailymotion, Jeuxvideo.com and TikTok joined the Code. LinkedIn joined on 24 June 2021.
The Code of Conduct is based on close cooperation between the European Commission, IT platforms, civil society organisations (CSOs) and national authorities. All stakeholders meet regularly under the umbrella of the High Level Group on combating racism and xenophobia to discuss challenges and progress.
Each monitoring exercise was carried out following a commonly agreed methodology, which makes it possible to compare the results over time. The sixth exercise was carried out over a period of six weeks, from 1 March to 14 April 2021, by 35 organisations, which reported on the outcomes of a total sample of 4,543 notifications from 22 Member States. Notifications were submitted either through reporting channels available to all users, or via dedicated channels only accessible to trusted flaggers/reporters.
The Digital Services Act includes rules for online intermediary services, which millions of Europeans use every day. The obligations of different online players match their role, size and impact in the online ecosystem. Building on the experience from the Code and its monitoring exercise, obligations related to clear notice and action systems, priority treatment of notices from trusted flaggers, feedback on notices to users and extensive transparency obligations seek to address the identified shortcomings. Specific rules are foreseen for very large online platforms reaching more than 10% of the 450 million consumers in Europe. These platforms, given their systemic role, will have to assess the risks their systems pose and take mitigating measures to curb the dissemination of illegal content and address societal harms.
For more information