Nearly half a million teenagers and young adults who had posted content with terms like “sharia” or “mujahideen” last fall began seeing a series of animated videos pop up on their Facebook news feeds.
In one, cartoon figures with guns appear underneath an Islamic State flag. “Do not be confused by what extremists say, that you must reject the new world. You don’t need to pick,” the narrator says. “Remember, peace up. Extremist thinking out.”
The videos are part of three experiments—funded by Google parent Alphabet Inc., with help from Facebook Inc. and Twitter Inc.—that explore how to use the machinery of online advertising to counterbalance the growing wave of extremist propaganda on the internet, both from Islamist radicals and far-right groups.
The goal: See what kinds of messages and targeting could reach potential extremists before they become radicalized—and then quickly roll the model out to content producers across the internet.
The study, detailed in a report set to be published Monday by London-based think tank Institute for Strategic Dialogue, is a step toward understanding what techniques work, said Yasmin Green, who heads the counter-radicalization efforts at Jigsaw, the Alphabet unit formerly known as Google Ideas.
“At the end of the day, it is a battle of ideas,” said Zahed Amanullah, head of the counter-narrative program at the Institute for Strategic Dialogue.
A drumbeat of violent attacks by radicalized individuals or small groups has killed hundreds in Europe, Asia and the U.S. over the past two months. In many cases, such as the attacks in Nice, France, and Orlando, Florida, officials say propaganda and violent internet material have played a role in driving attackers to action.
The government response has largely been to demand technology firms move faster in removing extremist content from their services.
But Islamic State is quick to open new accounts and spread its propaganda to new apps, leading to a game of whack-a-mole. “It’s simply impossible to remove it all,” said Susan Benesch, a faculty associate at Harvard University’s Berkman Klein Center. “Even if one platform successfully takes something down, usually that content is available somewhere else.”
Government efforts to launch counter-narrative campaigns against the remaining propaganda have often fallen flat.
“Once the message is stamped ‘government,’ for many young people, it’s tainted,” said one French official.
The Institute for Strategic Dialogue began working with Alphabet on better targeting its messages during an initial project in the U.K. in 2014. In that study, Google showed sponsored search results and videos to people in selected demographics who were searching for information about Islamic radicalism. In the new study to be published Monday, organizers expanded that work to different content on Twitter, YouTube and Facebook for users in the U.S., U.K. and Pakistan.
Alphabet contributed an undisclosed amount of money to finance videos made by three outreach groups. Facebook, Twitter and Alphabet’s YouTube donated advertising credits worth nearly £20,000, or about $30,000 at the time, for the experiments, which ran during October and November of last year.
U.S.-based nonprofit Average Mohamed made animated videos for American teens that explain Islam and criticize jihadist groups. Harakat-ut-Taleem, run by an anonymous group in Pakistan, created videos to dissuade people from joining the Taliban.
The third project, called ExitUSA, targeted white supremacists, focusing on people who had already become radicalized and either wanted to leave white-power groups or had recently done so.
“Eventually in these groups you’re going to get disillusioned,” said Tony McAleer, executive director of ExitUSA producer Life After Hate. “We want to accelerate that process to get people to that place of disillusionment way ahead of schedule.”
By the end of the experiments, each of which lasted about three weeks, internet users had been exposed to some element of the three campaigns about 1.6 million times, including 379,000 video views across Facebook, Twitter and YouTube.
The most concrete impact was that eight people approached Life After Hate for help leaving white-supremacist groups.
Mr. Amanullah concedes it is difficult to tell whether video views, likes and retweets correlate with a lower risk of radicalization.
“The classic question is ‘How many people have you prevented from becoming terrorists?’ Which you can’t answer,” he said.
Mr. Amanullah said that the most promising results came when organizers engaged in extended conversations with people who commented on videos. When the comments were negative, that opened a “window for a response,” he said. “For someone holding extreme views, to open that window is a huge opportunity.”
Source: The Wall Street Journal