
After the Trump ban: How Facebook will regulate its platform in the future

Last week, Facebook announced that it would ban former US President Donald J. Trump from the social network for two years, until at least January 7, 2023. He will be reinstated only "if the circumstances allow."

The announcement comes in response to recommendations made last month by Facebook's recently created, independent Oversight Board. The social network had hoped the panel would settle, once and for all, how to deal with Trump's account. But while the board upheld the company's original decision to remove Trump from the platform over his January 6 posts inciting violence, it pushed long-term responsibility back onto the company's own executives.

The news that Trump will be banned from Facebook for another 19 months leaves many questions unanswered. How should the world's largest social network regulate politicians and their speech? For the moment, almost nobody seems satisfied: neither Trump's opponents nor advocates of broad freedom of expression.


Although the announcement lays out some concrete rules for how politicians may use Facebook in the future – and offers clues as to how those rules will be enforced – the decision to ban Trump for at least two years has frustrated observers of the network.

Advocacy groups such as UltraViolet and Media Matters, which have long pushed Facebook to ban Trump, released statements saying that only a permanent ban would be appropriate. Meanwhile, those who see any enforcement against conservative politicians as evidence that Facebook is biased against content from that end of the political spectrum feel vindicated. For Trump himself, the most important point is that he has a chance to be back on Facebook in time for the 2024 election cycle.

Facebook's decision and the Oversight Board's recommendations raise five central questions, which Technology Review addresses below.

Many platforms, including Facebook, have used a "newsworthiness" exception to override enforcement of their own rules for politicians and heads of state: if such figures are deemed important to public debate, they are allowed more latitude on the platform than ordinary users. Facebook's announcement changes how this loophole can be used in the future. First, Facebook says it will now publish a notice whenever it applies the exception to an account. Second, Facebook will no longer treat content posted by politicians differently from content posted by "normal" people.

Facebook formally introduced the policy in late 2016 after censoring a famous Vietnam War photo for nudity. Over time, however, the "newsworthiness" exception became a blanket shield for politicians, including Trump, allowing rule-breaking content to stay online on the grounds that it was of public interest. The new announcement appears to end that blanket protection, but it does not abolish the exception entirely, and it offers no detail on how Facebook will decide in the future whether a post qualifies.

The announcement was written by Nick Clegg, the company's vice president of global affairs and a former UK deputy prime minister, but it uses "we" throughout. It does not specify who at Facebook was involved in the decision-making process – something that, given how controversial the decision is, would matter for transparency and credibility.

"We know today's decision will be criticized by many people on opposing sides of the political divide – but our job is to make a decision in as proportionate, fair and transparent a way as possible," Clegg wrote.

The statement also says the company will turn to "experts" to "assess whether the risk to public safety has receded" – without specifying who those experts will be, what expertise they will bring, or how Facebook (or, again, whoever at Facebook) will make decisions based on their findings. The Oversight Board, which was created in part to outsource controversial decisions, has already signaled that it does not want to take on this role.

That makes it all the more important to know whose advice matters to Facebook and who has the authority to act on it – especially considering how much is at stake. Conflict assessment and violence analysis are specialist fields, and they are ones in which Facebook's track record does not exactly inspire trust. Three years ago, for example, the United Nations accused the company of reacting "slowly and ineffectively" to the spread of online hate that fueled attacks on the Rohingya minority in Myanmar. Facebook then commissioned an independent report from the non-profit Business for Social Responsibility, which confirmed the UN's allegations.

The Rohingya report, released in 2018, also pointed to the possibility of violence around the 2020 US election and recommended steps the company could take to prepare for such "multiple contingencies". At the time, Facebook executives conceded that "we can and should do more". Yet over the course of the 2020 campaign, after Trump lost the presidency, and in the run-up to the violence of January 6, the company did little to act on those recommendations.

The limited duration of the ban guarantees that the decision will remain controversial – and it may yet become even more uncomfortable for Facebook than it already is. Unless the network concludes, by its own definition of whether "circumstances allow," that the ban should be extended, it will be lifted just in time for the primary season of the next US presidential election cycle. What could possibly go wrong, observers note drily.


(bsc)
