Facebook is introducing new restrictions on the use of its livestreaming technology in hopes of curbing violence and hate speech in response to a deadly terrorist attack in New Zealand that was broadcast live on the social media giant’s platform. The changes come a day before global leaders are set to meet in Paris and press tech giants to stop the spread of violent ideologies online.
In a blog post published late Tuesday evening, Facebook Inc. said it would temporarily suspend users’ ability to broadcast live video if they posted hate speech, support for terrorist organizations, or other harmful content anywhere on the platform. The social media firm said it was also funding US$7.5-million in research partnerships at three American universities to study ways to improve technology that analyzes photos and videos.
Facebook already restricts people from posting on the platform if they break its rules, with penalties ranging from bans of a few days for people who repeatedly post spam to a permanent ban for those who share child pornography.
But in the past, the Silicon Valley firm has only restricted people from using Facebook Live if they had broken a rule while using the livestreaming feature, and often only after a user had repeatedly violated the company’s policies. Facebook said it will now extend the ban on livestreaming to people who post harmful content anywhere on the platform, even after a first offence.
Facebook did not say how long the temporary bans on using Facebook Live would last, though it gave the example of a 30-day ban for someone who shared a link to terrorist propaganda. The company said it plans to extend the same restrictions to other parts of the platform, such as preventing people who post harmful content from buying ads.
“We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” Guy Rosen, Facebook’s vice-president of integrity, said in the blog post. “Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.”
The changes come as Prime Minister Justin Trudeau is set to travel to Paris on Wednesday, where he is expected to sign on to an agreement backed by New Zealand Prime Minister Jacinda Ardern calling on governments and social media firms to do more to stop the spread of violent extremism online. Facebook, Microsoft and Google have also said they plan to sign the non-binding pledge known as the Christchurch Call.
“Paris will only be the start,” Ms. Ardern said in a Facebook Live post on Sunday. “We will be seeking other companies, other countries, to sign up in the aftermath.”
New Zealand began an inquiry this week into the shootings that killed 51 people at two mosques in March. The attacks were livestreamed on Facebook for 12 minutes and viewed by hundreds of people before someone alerted the social media company to the broadcast.
Copies of the video have continued to resurface across the internet, sparking an angry backlash from global regulators.
Last month, a gunman allegedly inspired by the New Zealand attacks carried out a deadly shooting at a synagogue near San Diego, Calif. Social media posts in the days leading up to the attack claimed the gunman planned to livestream it on Facebook, though the social media firm said it found no evidence of a video.