Opinion

A student uses her cell phone at the Bronx High School of Science in New York, Jan. 11, 2016. Yana Paskova/The New York Times News Service

For years, social media companies have escaped liability for content posted on their platforms by invoking a legal protection known as common carriage. That is, they have been deemed neutral conveyors of information – like a postal service or a telephone line – and so can’t be held responsible for harmful things said on their platforms.

This was always a dubious argument, but, sadly, for a long time it worked. U.S. jurisprudence built on a 1990s law originally meant to shield Internet service providers, stretching its protection to cover all sorts of online activity.

A recent decision in a U.S. appeals court has for the first time seriously rocked the shaky foundations of the common carriage defence for social media firms. This is good, and should continue – both in the U.S. and globally – so social media companies are forced to take real responsibility for the social pollution they have released.

The details of the court case – Anderson v. TikTok Inc. – are horrific. A 10-year-old girl was scrolling on TikTok when the app recommended a video that depicted the “blackout challenge,” in which users are encouraged to asphyxiate themselves. The girl tried it and died. Her mother sued TikTok and its parent company, ByteDance, alleging they were negligent in allowing such dangerous videos on their platform and promoting them.

TikTok won in district court by citing Section 230 of the Communications Decency Act, which enshrines the common-carrier-style defence: the app merely disseminates the content but is not responsible for it.

At the end of August, a three-judge panel of the U.S. Court of Appeals for the Third Circuit reversed that decision in part, and eviscerated this application of the defence.

“Today, [Section] 230 rides in to rescue corporations from virtually any claim loosely related to content posted by a third party, no matter the cause of action and whatever the provider’s actions,” wrote Judge Paul Matey. “The result is a [Section] 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm.”

The issue is that social media companies are not, and have never really been, neutral distributors. They may not make the content, but they do make the algorithms that shape what their users see, surfacing certain posts and hiding others. This curation carries with it a responsibility that platforms have tried to shirk.

But the tide is turning, and other legal actions have exposed further troubling details of how platforms operate. A lawsuit filed by the Massachusetts Attorney-General last year claimed Meta – the owner of Facebook and Instagram – was harming the mental health of youth and failing to take steps to minimize that harm.

As one example, the state’s complaint describes an internal research initiative at Instagram called “Project Daisy,” which found that hiding the number of likes on an Instagram post boosted teenagers’ mental health because it removed a way for them to compare themselves to their peers. The company’s researchers recommended removing public like counts, or at least letting users opt in to hiding them. But, the complaint says, internal e-mails showed the idea was vetoed by executives, who worried it might make teens spend less time on Instagram.

Here was a concrete way to lessen harm, and executives chose not to act because it could hurt their bottom line – a familiar refrain. A factory owner might argue it would be cheaper to dump refuse into a town’s clean water supply. But the law doesn’t allow a business to pollute the environment for profit; the same should hold true for pollution of the social media environment.

The case of Anderson v. TikTok has been remanded to the district court, and no doubt it will eventually find its way to the Supreme Court. So it will be some time before the U.S. judiciary has its last word on whether this new – and sensible – reading of common carriage, as it applies to social media, will stand. But there are other cases, and other initiatives, to protect the vulnerable members of our society from social media pollution. Ottawa’s Online Harms Act contains some useful measures to combat deplorable online conduct, such as cyberbullying and revenge porn. This space has supported that goal, while noting that some parts of the bill dealing with sentencing should be excised.

It’s long past time to clean up our online environment.
