Facebook moderators said it was ‘mission impossible’ to keep the site free of extreme material. Photograph: Dominic Lipinski/PA
Theresa May’s initiative to put more pressure on tech companies over online extremism is born of a frustration that can only have been heightened by this week’s attack in Manchester.
For all the harsh words and threatened fines that have been thrown at social media companies over recent months, in the UK and across Europe, there is a feeling that they are not doing nearly enough, quickly enough, to tackle the problem.
Linked to this is an anxiety that the online war is being lost. The internet is where the battle for hearts and minds is often shaped – where calls for violence are posted, where videos of how to make bombs proliferate, and where atrocities are too often celebrated.
There is no doubt that people who are either involved with, or are supporters of, Islamic State have become experts at exploiting the internet. Within minutes of Monday evening’s attack, hashtags, memes, instructions and infographics were being sprayed around messenger sites and on to social media platforms.
“They are always very quick,” said one source involved in monitoring online extremism. “They try to capitalise on the attacks and the chaos. They barge their way into mainstream media conversations online. Isis tells its ‘fan boys’ how to do it.”
This month, the home affairs select committee said that matters were getting worse, not better, with MPs delivering an unusually withering assessment of the tech industry: “There is a great deal of evidence that these platforms are being used to spread hate, abuse and extremism. That trend continues to grow at an alarming rate but it remains unchecked and, even where it is illegal, largely unpoliced. The evidence suggests that the problem is getting worse.”
And so it appears to be. The Facebook Files revealed by the Guardian this week gave a genuinely unprecedented insight into the way Facebook is trying to tackle online extremism, and the muddle it has got itself into.
While Facebook executives seem genuinely hurt by accusations that they are not doing enough, and are offended by the idea that they do not take these matters seriously, the fact remains that the company is struggling to contain the problem.
Its moderators told the Guardian they face a “mission impossible” trying to keep the site clean. There is so much content, and it is so easy to defy the rules, that Facebook cannot stop hateful material being published.
In one month last year, moderators escalated more than 1,300 posts with potential links to terrorism. Moderators told us this was really the tip of the iceberg. Videos of how to make bombs and how to create suicide vests are out there if you want to find them.
Facebook told the Guardian it had software that stops some images from being seen, and that it is investing heavily in artificial intelligence, which is regarded as another way of disinfecting the site.
But the truth for Facebook, and for May, is that any number of moderators, and any number of algorithms, might not be enough. The terrorists, and their supporters, have been smart and they adapt quickly. One moderator told the Guardian: “It always feels like we are one step behind.”