Year in Review: Content Moderation on Social Media Platforms in 2019
from Net Politics and Digital and Cyberspace Policy Program

The spread of disinformation, misinformation, hate speech, and violent and extremist content on social media platforms in 2019 prompted heated debate over how tech companies and governments should approach content moderation. These issues will continue to play out in 2020, with important social and political consequences.
Facebook and Twitter logos are seen on a shop window in Malaga, Spain. REUTERS/Jon Nazca

Lauren Dudley is a research associate in Asia Studies at the Council on Foreign Relations.

Disinformation, misinformation, hate speech, and violent and extremist content continued to spread on social media platforms in 2019, turning up the pressure on tech companies and governments to moderate content. Although there were instances of collaboration, tech companies and governments generally found themselves divided over the most effective approaches to moderating and regulating content, as well as over the appropriate mix of regulation and self-governance, with important social and political consequences for 2020.

Tech Companies Confront Content Moderation Challenges

While many in the tech community have trumpeted the potential gains of applying machine learning algorithms to enforce content policies, the results have been mixed. Facebook proudly announced this year that 98 percent of terrorist photos and videos published on the platform are automatically identified by its algorithm and removed before anyone has the chance to see them. However, an algorithm’s effectiveness depends heavily on the type of content it is tasked with detecting. For example, limitations in natural language processing often prevent algorithms from correctly parsing the meaning, nuance, and context of language, which is necessary for identifying online bullying and harassment. The scarcity of training examples for certain types of content, such as first-person shooting footage and deepfakes, likewise limits the effectiveness of machine learning algorithms.
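To make the context problem concrete, the minimal sketch below shows the kind of context-blind matching that automated systems must improve upon; the blocklist and example posts are hypothetical, not drawn from any platform’s actual system. A filter that only matches flagged terms misses harassment phrased as sarcasm and wrongly flags a victim quoting their abuser.

    # Illustrative sketch only: a naive, context-blind keyword filter.
    # The blocklist and example posts are hypothetical.

    BLOCKLIST = {"idiot", "loser"}

    def flag_post(text: str) -> bool:
        """Flag a post if it contains any blocklisted word, ignoring context."""
        words = {word.strip(".,!?").lower() for word in text.split()}
        return not BLOCKLIST.isdisjoint(words)

    posts = [
        "You absolute idiot, nobody wants you here.",     # abusive: flagged
        "Wow, great job today. Really impressive work.",  # sarcastic harassment: missed
        "He called me an idiot and it really hurt.",      # victim quoting abuse: flagged in error
    ]

    for post in posts:
        print(flag_post(post), post)

Real systems rely on far more sophisticated models, but the underlying difficulty is the same: judging intent and context, not just matching patterns.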

When machine learning algorithms fail, human content moderators are responsible for determining whether content meets the threshold for removal. This is no small task. Thirty-five thousand content moderators around the world sift through offensive and violent content for Facebook every day. This year brought increased attention to the mental health toll on moderators, who must repeatedly view trauma-inducing content and make rapid removal decisions.

Significant differences have emerged in how the companies approach content moderation. For example, in response to calls to limit misinformation ahead of the upcoming presidential election, Twitter banned political ads, and Google announced it would limit the targeting of political ads. Facebook, on the other hand, allows political ads to run on its platform without restriction, even if they contain unverified or false information. This policy has raised fears that disinformation could sway public opinion ahead of the 2020 election. However, reports suggest Facebook may label political ads that have not been fact-checked, much as Instagram demotes posts rated false by third-party fact-checkers so that they are not displayed prominently on users’ feeds.

Controlling what users see on their feeds is not without political risk for social media platforms, and policymakers across the political spectrum have accused them of censorship and political bias. In an effort to insulate itself from these attacks, Facebook announced it would create a court-style “oversight board” to review disputed content moderation decisions, though critics argue the board will do more to shield Facebook executives from criticism than to adjudicate those decisions effectively.

Governments Debate and Regulate Content

While tech companies struggle to create fair content moderation policies, governments around the world are exploring new legal approaches to limit the spread of fake news. Russia this year imposed fines of up to roughly $6,000 on individuals who spread what the government considers fake news or show “blatant disrespect” for the state online. Likewise, Singapore will require social media platforms to carry warnings on posts the government deems false and to remove comments deemed incompatible with the “public interest.” These fake news laws have sparked fears that companies could be co-opted into suppressing dissent and supporting authoritarian regimes.

The United States has not yet enacted any legislation outlining how tech companies should moderate content, but lawmakers on Capitol Hill are actively debating Section 230 of the Communications Decency Act, which shields tech companies from liability for third-party content on their platforms. The White House is also exploring the issue, having reportedly drafted an executive order that would have tasked the Federal Communications Commission with creating content moderation regulations. However, the draft did not move forward due to concerns from industry, regulators, and some legislators that it would have violated the First Amendment.

Instances of International Collaboration

The March 2019 Christchurch shooting spurred international and cross-sector collaboration on content moderation. Soon after the attack occurred, New Zealand authorities alerted Facebook that footage of the shooting was being livestreamed on its platform, prompting Facebook to remove it, though not before the video had been viewed roughly four thousand times. Content moderators around the world then worked around the clock to minimize the spread of the violent footage. In the aftermath, France and New Zealand announced the Christchurch Call, a non-binding document encouraging greater efforts to counter online extremism, apply ethical standards in reporting terrorist events online, and share information between governments and tech companies. Although most Western democracies and tech companies signed on, the United States declined, citing free speech concerns.

What to Expect in 2020

The upcoming presidential election, and the threat that deepfakes, political ads containing falsehoods, and other forms of misinformation pose to election security, will keep tech companies and lawmakers intensely focused in 2020 on how to moderate content created to divide and misinform.

But even if tech companies enact new policies to curb the spread of misinformation and other prohibited content, the odds that they can identify and remove all problematic content are slim. Content moderation technology is not an effective substitute for human moderators because of technological limitations and the lack of sufficient high-quality training data for some types of content. Although tech companies are continuously working to improve their algorithms, some “prohibited” content is likely to slip through in 2020.

The debate over the roles tech companies and governments should play in moderating online content will force the United States to reevaluate how some of its founding principles, including free speech and free enterprise, should apply to the internet in the twenty-first century. Given the complexity of the issue and current technological limitations, 2020 is unlikely to hold the answer.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.