Facebook policy chief: social media must step up fight against extremism
Facebook and other social media sites should use ‘counter-speech’ to fight extremism, says Monika Bickert. Photograph: Beck Diefenbach/Reuters
Social networks such as Facebook need a more proactive approach to countering extremism and hate speech than simply deleting extremist posts, the tech giant’s head of policy said at SXSW on Saturday night.
“Even if we were perfect at keeping violent extremism from ever hitting our community and other technology companies were perfect, we know that alone isn’t enough to change minds or stop the spread of violent extremism,” said Monika Bickert, who appeared on a panel titled Taking Back the Internet: Countering Extremism.
She also said networks need what she called counter-speech, a crowd-sourced response to extremism in which posts are met with disagreement or derision.
“The best remedy is good speech that gets people thinking and challenging ideologies. We focus on trying to amplify some of the voices to counter violent narratives,” Bickert said.
Facebook has partnered with the US Department of Homeland Security’s Countering Violent Extremism Task Force and EdVenture Partners, an analytics firm, to fund and scale campaigns against hate and extremism. The partnership, called Peer to Peer: Challenging Extremism, invites university students to create prototypes and digital media campaigns aimed at changing the minds of people lured towards extremist groups such as Isis and neo-Nazis. The partners used the SXSW panel to outline some of the things they have learned since the program launched in 2015.
Who delivers the counter-speech is critical. “You have to be a credible speaker,” Bickert said. Someone from government or a senior executive from a tech company is “not likely to resonate in the same way as a young person’s voice speaking to a young person’s community” would, she said.
Matthew Rice, chief digital officer of the homeland security department task force, agreed that authorities struggled to persuade people at risk of becoming radicalised. “The government isn’t the voice,” he said. “A lot of this speech is first-amendment protected, so the government isn’t the best person to act in this space. You empower folks outside.”
The winning student project in the Challenging Extremism program, “It’s Time: ExOut Extremism”, was created by a team from Rochester Institute of Technology. It produces videos, infographics and other educational resources to empower people who might otherwise stay silent to stand against extremist content.
“People who are radicalised were searching for camaraderie, community,” said Olivia Hauck, ExOut’s CEO. ExOut, she said, provides that sense of community for people on the other side of the fence.
Keeping messages positive rather than negative makes a bigger impact. “If you say you’re wrong, your ideas are stupid, it doesn’t shift opinions. If you use humour, it’s more likely to be shared and ignites the community,” Bickert said.
Hauck added: “If you make someone uncomfortable to have a seat at the table, you never get to have those conversations.”
One challenge for Facebook and the homeland security department is how extremism and hate speech are defined, particularly under an administration that includes officials who emerged from the far right. The White House chief strategist, Steve Bannon, for instance, was head of the site Breitbart News, which features antisemitic, racist and misogynistic articles, and an appointee to the Department of Energy was dismissed last week after a history of anti-Muslim remarks was exposed.
“The administration is still young and still figuring out its way,” Rice said. “But we are still combatting violent extremism regardless of ideology.” He added that although many of the student campaigns focused on the terror group Isis, others focused on rightwing and xenophobic groups.
Facebook stopped short of committing to intervene with users who appear to have been radicalised, although it has comparable tools. The social network recently revealed that it can use artificial intelligence to identify, from someone’s posts, whether they might be suicidal; if they are, a member of Facebook’s team is notified to make contact with the user at risk.
Facebook is also using AI to determine the difference between news stories about terrorism and actual terrorist propaganda, a system revealed by Mark Zuckerberg in a manifesto he published in February.
In a version of Zuckerberg’s letter sent to media outlets, the CEO described an even more specific and invasive tool: in the long term, he said, AI would be used to “identify risks that nobody would have flagged at all, including terrorists planning attacks using private channels”.
However, when the Guardian asked Bickert whether Facebook would use AI to detect and intervene among people posting extremist material, she returned to an oft-cited company line: Facebook is a neutral platform.
“We are not the creators of this content,” she said. “We facilitate people using Facebook.”
That’s not to say that the algorithmic selection of content shown to targeted individuals can’t sway people’s opinions, as demonstrated by Facebook’s own advertising sales team, which has told advertisers it can help get senators re-elected.